*.so.*
icd/common/libicd.a
icd/intel/intel_gpa.c
-loader/dispatch.c
-loader/table_ops.h
-tests/xgl_image_tests
-tests/xgl_render_tests
-tests/xglbase
-tests/xglinfo
-layers/xgl_dispatch_table_helper.h
-layers/xgl_enum_string_helper.h
-layers/xgl_generic_intercept_proc_helper.h
-layers/xgl_struct_string_helper.h
-layers/xgl_struct_wrappers.cpp
-layers/xgl_struct_wrappers.h
_out64
out32/*
out64/*
*.vcxproj
*.sdf
*.filters
+build
+dbuild
Example debug build:
```
-cd YOUR_DEV_DIRECTORY # cd to the root of the xgl git repository
+cd YOUR_DEV_DIRECTORY # cd to the root of the vk git repository
export KHRONOS_ACCOUNT_NAME=<subversion login name for svn checkout of BIL>
./update_external_sources.sh # fetches and builds glslang, llvm, LunarGLASS, and BIL
cmake -H. -Bdbuild -DCMAKE_BUILD_TYPE=Debug
cd dbuild
make
```
-To run XGL programs you must tell the icd loader where to find the libraries. Set the
-environment variable LIBXGL_DRIVERS_PATH to the driver path. For example:
+To run VK programs you must tell the icd loader where to find the libraries. Set the
+environment variable LIBVK_DRIVERS_PATH to the driver path. For example:
```
-export LIBXGL_DRIVERS_PATH=$PWD/icd/intel
+export LIBVK_DRIVERS_PATH=$PWD/icd/intel
```
-To enable debug and validation layers with your XGL programs you must tell the icd loader
-where to find the layer libraries. Set the environment variable LIBXGL_LAYERS_PATH to
-the layer folder and indicate the layers you want loaded via LIBXGL_LAYER_NAMES.
+To enable debug and validation layers with your VK programs you must tell the icd loader
+where to find the layer libraries. Set the environment variable LIBVK_LAYERS_PATH to
+the layer folder and indicate the layers you want loaded via LIBVK_LAYER_NAMES.
For example, to enable the APIDump and DrawState layers, do:
```
-export LIBXGL_LAYERS_PATH=$PWD/layers
-export LIBXGL_LAYER_NAMES=APIDump:DrawState
+export LIBVK_LAYERS_PATH=$PWD/layers
+export LIBVK_LAYER_NAMES=APIDump:DrawState
```
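Putting the driver and layer settings together, a complete debug-session setup might look like the following sketch. It assumes the build tree described above and is run from the root of the repository; the final `env` line only confirms what the loader will see.

```shell
# Combined setup (sketch): driver path plus debug/validation layers.
# Assumes the repository root as the current directory after a debug build.
export LIBVK_DRIVERS_PATH=$PWD/icd/intel      # folder containing the ICD library
export LIBVK_LAYERS_PATH=$PWD/layers          # folder containing layer libraries
export LIBVK_LAYER_NAMES=APIDump:DrawState    # colon-delimited layer names
env | grep '^LIBVK_'                          # show what the loader will see
```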
## Linux Test
The test executables can be found in the dbuild/tests directory. The tests use the Google
gtest infrastructure. Tests available so far:
-- xglinfo: Report GPU properties
-- xglbase: Test basic entry points
-- xgl_blit_tests: Test XGL Blits (copy, clear, and resolve)
-- xgl_image_tests: Test XGL image related calls needed by render_test
-- xgl_render_tests: Render a single triangle with XGL. Triangle will be in a .ppm in
+- vkinfo: Report GPU properties
+- vkbase: Test basic entry points
+- vk_blit_tests: Test VK Blits (copy, clear, and resolve)
+- vk_image_tests: Test VK image-related calls needed by render_test
+- vk_render_tests: Render a single triangle with VK. Triangle will be in a .ppm in
the current directory at the end of the test.
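The binaries above can be run in one pass with a small loop. This is only a sketch: the `dbuild/tests` location comes from the build steps earlier, and tests missing from your configuration are reported as skipped rather than failed.

```shell
# Sketch of a test-runner loop; override TESTS_DIR for a different build tree.
TESTS_DIR=${TESTS_DIR:-dbuild/tests}
for t in vkinfo vkbase vk_blit_tests vk_image_tests vk_render_tests; do
    if [ -x "$TESTS_DIR/$t" ]; then
        "$TESTS_DIR/$t" || echo "FAIL: $t"
    else
        echo "skip: $t (not built)"
    fi
done
```

Since vk_render_tests writes its .ppm into the current directory, run the loop from a writable folder.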
## Linux Demos
Example debug build:
```
-cd GL-Next # cd to the root of the xgl git repository
+cd GL-Next # cd to the root of the vk git repository
mkdir _out64
cd _out64
cmake -G "Visual Studio 12 Win64" -DCMAKE_BUILD_TYPE=Debug ..
```
-At this point, you can use Windows Explorer to launch Visual Studio by double-clicking on the "XGL.sln" file in the _out64 folder. Once Visual Studio comes up, you can select "Debug" or "Release" from a drop-down list. You can start a build with either the menu (Build->Build Solution), or a keyboard shortcut (Ctrl+Shift+B). As part of the build process, Python scripts will create additional Visual Studio files and projects, along with additional source files. All of these auto-generated files are under the "_out64" folder.
+At this point, you can use Windows Explorer to launch Visual Studio by double-clicking on the "VK.sln" file in the _out64 folder. Once Visual Studio comes up, you can select "Debug" or "Release" from a drop-down list. You can start a build with either the menu (Build->Build Solution), or a keyboard shortcut (Ctrl+Shift+B). As part of the build process, Python scripts will create additional Visual Studio files and projects, along with additional source files. All of these auto-generated files are under the "_out64" folder.
-XGL programs must be able to find and use the XGL.dll libary. Make sure it is either installed in the C:\Windows\System32 folder, or the PATH enviroment variable includes the folder that it is located in.
+VK programs must be able to find and use the VK.dll library. Make sure it is either installed in the C:\Windows\System32 folder, or the PATH environment variable includes the folder where it is located.
-To run XGL programs you must have an appropriate ICD (installable client driver) that is either installed in the C:\Windows\System32 folder, or pointed to by the registry and/or an environment variable:
+To run VK programs you must have an appropriate ICD (installable client driver) that is either installed in the C:\Windows\System32 folder, or pointed to by the registry and/or an environment variable:
- Registry:
- Root Key: HKEY_LOCAL_MACHINE
- - Key: "SOFTWARE\XGL"
- - Value: "XGL_DRIVERS_PATH" (semi-colon-delimited set of folders to look for ICDs)
-- Environment Variable: "XGL_DRIVERS_PATH" (semi-colon-delimited set of folders to look for ICDs)
+ - Key: "SOFTWARE\VK"
+ - Value: "VK_DRIVERS_PATH" (semi-colon-delimited set of folders to look for ICDs)
+- Environment Variable: "VK_DRIVERS_PATH" (semi-colon-delimited set of folders to look for ICDs)
Note: If both the registry value and environment variable are used, they are concatenated into a new semi-colon-delimited list of folders.
- Within the search box, type "environment variable" and click on "Edit the system environment variables" (or navigate there via "System and Security->System->Advanced system settings").
- This will launch a window with several tabs, one of which is "Advanced". Click on the "Environment Variables..." button.
- For either "User variables" or "System variables" click "New...".
-- Enter "XGL_DRIVERS_PATH" as the variable name, and an appropriate Windows path to where your driver DLL is (e.g. C:\Users\username\GL-Next\_out64\icd\drivername\Debug).
+- Enter "VK_DRIVERS_PATH" as the variable name, and an appropriate Windows path to where your driver DLL is (e.g. C:\Users\username\GL-Next\_out64\icd\drivername\Debug).
It is possible to specify multiple icd folders. Simply use a semi-colon (i.e. ";") to separate folders in the environment variable.
-The icd loader searches in all of the folders for files that are named "XGL_*.dll" (e.g. "XGL_foo.dll"). It attempts to dynamically load these files, and look for appropriate functions.
+The icd loader searches in all of the folders for files that are named "VK_*.dll" (e.g. "VK_foo.dll"). It attempts to dynamically load these files, and look for appropriate functions.
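The semi-colon-delimited folder list is easy to inspect before launching a program; the two folder names below are made up for illustration.

```shell
# Sketch: a two-folder ICD search list, split one folder per line.
VK_DRIVERS_PATH='C:\drivers\vendorA;C:\drivers\vendorB'
printf '%s\n' "$VK_DRIVERS_PATH" | tr ';' '\n'
```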
-To enable debug and validation layers with your XGL programs you must tell the icd loader
+To enable debug and validation layers with your VK programs you must tell the icd loader
where to find the layer libraries, and which ones you desire to use. The default folder for layers is C:\Windows\System32. Again, this can be pointed to by the registry and/or an environment variable:
- Registry:
- Root Key: HKEY_LOCAL_MACHINE
- - Key: "System\XGL"
- - Value: "XGL_LAYERS_PATH" (semi-colon-delimited set of folders to look for layers)
- - Value: "XGL_LAYER_NAMES" (semi-colon-delimited list of layer names)
+ - Key: "System\VK"
+ - Value: "VK_LAYERS_PATH" (semi-colon-delimited set of folders to look for layers)
+ - Value: "VK_LAYER_NAMES" (semi-colon-delimited list of layer names)
- Environment Variables:
- - "XGL_LAYERS_PATH" (semi-colon-delimited set of folders to look for layers)
- - "XGL_LAYER_NAMES" (semi-colon-delimited list of layer names)
+ - "VK_LAYERS_PATH" (semi-colon-delimited set of folders to look for layers)
+ - "VK_LAYER_NAMES" (semi-colon-delimited list of layer names)
Note: If both the registry value and environment variable are used, they are concatenated into a new semi-colon-delimited list.
-The icd loader searches in all of the folders for files that are named "XGLLayer*.dll" (e.g. "XGLLayerParamChecker.dll"). It attempts to dynamically load these files, and look for appropriate functions.
+The icd loader searches in all of the folders for files that are named "VKLayer*.dll" (e.g. "VKLayerParamChecker.dll"). It attempts to dynamically load these files, and look for appropriate functions.
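For reference, the two layer-discovery mechanisms above can be summarized in one place. The concrete paths shown here are illustrative, not defaults:

```
Registry (Root Key: HKEY_LOCAL_MACHINE, Key: System\VK):
    VK_LAYERS_PATH = C:\Windows\System32;C:\dev\GL-Next\_out64\layers\Debug
    VK_LAYER_NAMES = APIDump;DrawState

Environment variables (same names; concatenated with the registry values):
    VK_LAYERS_PATH, VK_LAYER_NAMES
```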
-# Explicit GL (XGL) Ecosystem Components\r
+# Explicit GL (VK) Ecosystem Components\r
*Version 0.8, 04 Feb 2015*\r
\r
-This project provides *open source* tools for XGL Developers.\r
+This project provides *open source* tools for VK Developers.\r
\r
## Introduction\r
\r
-XGL is an Explicit API, enabling direct control over how GPUs actually work. No validation, shader recompilation, memory management or synchronization is done inside an XGL driver. Applications have full control and responsibility. Any errors in how XGL is used are likely to result in a crash. This project provides layered utility libraries to ease development and help guide developers to proven safe patterns.\r
+VK is an Explicit API, enabling direct control over how GPUs actually work. No validation, shader recompilation, memory management or synchronization is done inside a VK driver. Applications have full control and responsibility. Any errors in how VK is used are likely to result in a crash. This project provides layered utility libraries to ease development and help guide developers to proven safe patterns.\r
\r
-New with XGL in an extensible layered architecture that enables significant innovation in tools:\r
+New with VK is an extensible layered architecture that enables significant innovation in tools:\r
- Cross IHV support enables tools vendors to plug into a common, extensible layer architecture\r
- Layered tools during development enable validating, debugging and profiling without production performance impact\r
- Modular validation architecture encourages many fine-grained layers--and new layers can be added easily\r
demos for GDC.\r
\r
The following components are available:\r
-- XGL Library and header files, which include:\r
+- VK Library and header files, which include:\r
- [*ICD Loader*](loader) and [*Layer Manager*](layers/README.md)\r
- - Snapshot of *XGL* and *BIL* header files from [*Khronos*](www.khronos.org)\r
+ - Snapshot of *VK* and *BIL* header files from [*Khronos*](https://www.khronos.org)\r
\r
- [*GLAVE Debugger*](tools/glave)\r
\r
\r
## New\r
\r
-- Updated loader, driver, demos, tests and many tools to use "alpha" xgl.h (~ version 47).\r
+- Updated loader, driver, demos, tests and many tools to use "alpha" vulkan.h (~ version 47).\r
Supports new resource binding model, memory allocation, pixel FORMATs and\r
other updates.\r
APIDump layer is working with these new API elements.\r
\r
## Prior updates\r
\r
-- XGL API trace and capture tools. See tools/glave/README.md for details.\r
+- VK API trace and capture tools. See tools/glave/README.md for details.\r
- Sample driver now supports multiple render targets. Added TriangleMRT to test that functionality.\r
-- Added XGL_SLOT_SHADER_TEXTURE_RESOURCE to xgl.h as a descriptor slot type to work around confusion in GLSL\r
+- Added VK_SLOT_SHADER_TEXTURE_RESOURCE to vulkan.h as a descriptor slot type to work around confusion in GLSL\r
between textures and buffers as shader resources.\r
- Misc. fixes for layers and Intel sample driver\r
- Added mutex to APIDump, APIDumpFile and DrawState to prevent apparent threading issues using printf\r
\r
## References\r
This version of the components is written based on the following preliminary specs and proposals:\r
-- [**XGL Programers Reference**, 1 Jul 2014](https://cvs.khronos.org/svn/repos/oglc/trunk/nextgen/proposals/AMD/Explicit%20GL%20Programming%20Guide%20and%20API%20Reference.pdf)\r
+- [**VK Programmers Reference**, 1 Jul 2014](https://cvs.khronos.org/svn/repos/oglc/trunk/nextgen/proposals/AMD/Explicit%20GL%20Programming%20Guide%20and%20API%20Reference.pdf)\r
- [**BIL**, revision 29](https://cvs.khronos.org/svn/repos/oglc/trunk/nextgen/proposals/BIL/Specification/BIL.html)\r
\r
## License\r
This work is intended to be released as open source under a BSD-style\r
-license once the XGL specification is public. Until that time, this work\r
-is covered by the Khronos NDA governing the details of the XGL API.\r
+license once the VK specification is public. Until that time, this work\r
+is covered by the Khronos NDA governing the details of the VK API.\r
\r
## Acknowledgements\r
While this project is being developed by LunarG, Inc., there are many other\r
companies and individuals making this possible: Valve Software, funding\r
project development; Intel Corporation, providing full hardware specifications\r
-and valuable technical feedback; AMD, providing XGL spec editor contributions;\r
+and valuable technical feedback; AMD, providing VK spec editor contributions;\r
ARM, contributing a Chairman for this working group within Khronos; Nvidia,\r
providing an initial co-editor for the spec; Qualcomm for picking up the\r
co-editor's chair; and Khronos, for providing hosting within GitHub.\r
\r
## Contact\r
If you have questions or comments about this driver, or you would like to contribute\r
-directly to this effort, please contact us at XGL@LunarG.com; or if you prefer, via\r
+directly to this effort, please contact us at VK@LunarG.com; or if you prefer, via\r
the GL Common mailing list: gl_common@khronos.org\r
)
file(COPY ${TEXTURES} DESTINATION ${CMAKE_BINARY_DIR}/demos)
-set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES")
-set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DXGL_PROTOTYPES")
+set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES")
+set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DVK_PROTOTYPES")
if(WIN32)
set (LIBRARIES "XGL")
#include <assert.h>
#include <xcb/xcb.h>
-#include <xgl.h>
-#include <xglDbg.h>
-#include <xglWsiX11Ext.h>
+#include <vulkan.h>
+#include <vkDbg.h>
+#include <vkWsiX11Ext.h>
#include "icd-spv.h"
* structure to track all objects related to a texture.
*/
struct texture_object {
- XGL_SAMPLER sampler;
+ VK_SAMPLER sampler;
- XGL_IMAGE image;
- XGL_IMAGE_LAYOUT imageLayout;
+ VK_IMAGE image;
+ VK_IMAGE_LAYOUT imageLayout;
uint32_t num_mem;
- XGL_GPU_MEMORY *mem;
- XGL_IMAGE_VIEW view;
+ VK_GPU_MEMORY *mem;
+ VK_IMAGE_VIEW view;
int32_t tex_width, tex_height;
};
"lunarg-logo-256x256-solid.png"
};
-struct xglcube_vs_uniform {
+struct vkcube_vs_uniform {
// Must start with MVP
float mvp[4][4];
float position[12*3][4];
float color[12*3][4];
};
-struct xgltexcube_vs_uniform {
+struct vktexcube_vs_uniform {
// Must start with MVP
float mvp[4][4];
float position[12*3][4];
xcb_screen_t *screen;
bool use_staging_buffer;
- XGL_INSTANCE inst;
- XGL_PHYSICAL_GPU gpu;
- XGL_DEVICE device;
- XGL_QUEUE queue;
+ VK_INSTANCE inst;
+ VK_PHYSICAL_GPU gpu;
+ VK_DEVICE device;
+ VK_QUEUE queue;
uint32_t graphics_queue_node_index;
- XGL_PHYSICAL_GPU_PROPERTIES *gpu_props;
- XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
+ VK_PHYSICAL_GPU_PROPERTIES *gpu_props;
+ VK_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
- XGL_FRAMEBUFFER framebuffer;
+ VK_FRAMEBUFFER framebuffer;
int width, height;
- XGL_FORMAT format;
+ VK_FORMAT format;
struct {
- XGL_IMAGE image;
- XGL_GPU_MEMORY mem;
- XGL_CMD_BUFFER cmd;
+ VK_IMAGE image;
+ VK_GPU_MEMORY mem;
+ VK_CMD_BUFFER cmd;
- XGL_COLOR_ATTACHMENT_VIEW view;
- XGL_FENCE fence;
+ VK_COLOR_ATTACHMENT_VIEW view;
+ VK_FENCE fence;
} buffers[DEMO_BUFFER_COUNT];
struct {
- XGL_FORMAT format;
+ VK_FORMAT format;
- XGL_IMAGE image;
+ VK_IMAGE image;
uint32_t num_mem;
- XGL_GPU_MEMORY *mem;
- XGL_DEPTH_STENCIL_VIEW view;
+ VK_GPU_MEMORY *mem;
+ VK_DEPTH_STENCIL_VIEW view;
} depth;
struct texture_object textures[DEMO_TEXTURE_COUNT];
struct {
- XGL_BUFFER buf;
+ VK_BUFFER buf;
uint32_t num_mem;
- XGL_GPU_MEMORY *mem;
- XGL_BUFFER_VIEW view;
- XGL_BUFFER_VIEW_ATTACH_INFO attach;
+ VK_GPU_MEMORY *mem;
+ VK_BUFFER_VIEW view;
+ VK_BUFFER_VIEW_ATTACH_INFO attach;
} uniform_data;
- XGL_CMD_BUFFER cmd; // Buffer for initialization commands
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN desc_layout_chain;
- XGL_DESCRIPTOR_SET_LAYOUT desc_layout;
- XGL_PIPELINE pipeline;
+ VK_CMD_BUFFER cmd; // Buffer for initialization commands
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN desc_layout_chain;
+ VK_DESCRIPTOR_SET_LAYOUT desc_layout;
+ VK_PIPELINE pipeline;
- XGL_DYNAMIC_VP_STATE_OBJECT viewport;
- XGL_DYNAMIC_RS_STATE_OBJECT raster;
- XGL_DYNAMIC_CB_STATE_OBJECT color_blend;
- XGL_DYNAMIC_DS_STATE_OBJECT depth_stencil;
+ VK_DYNAMIC_VP_STATE_OBJECT viewport;
+ VK_DYNAMIC_RS_STATE_OBJECT raster;
+ VK_DYNAMIC_CB_STATE_OBJECT color_blend;
+ VK_DYNAMIC_DS_STATE_OBJECT depth_stencil;
mat4x4 projection_matrix;
mat4x4 view_matrix;
float spin_increment;
bool pause;
- XGL_DESCRIPTOR_POOL desc_pool;
- XGL_DESCRIPTOR_SET desc_set;
+ VK_DESCRIPTOR_POOL desc_pool;
+ VK_DESCRIPTOR_SET desc_set;
xcb_window_t window;
xcb_intern_atom_reply_t *atom_wm_delete_window;
static void demo_flush_init_cmd(struct demo *demo)
{
- XGL_RESULT err;
+ VK_RESULT err;
- if (demo->cmd == XGL_NULL_HANDLE)
+ if (demo->cmd == VK_NULL_HANDLE)
return;
- err = xglEndCommandBuffer(demo->cmd);
+ err = vkEndCommandBuffer(demo->cmd);
assert(!err);
- const XGL_CMD_BUFFER cmd_bufs[] = { demo->cmd };
+ const VK_CMD_BUFFER cmd_bufs[] = { demo->cmd };
- err = xglQueueSubmit(demo->queue, 1, cmd_bufs, XGL_NULL_HANDLE);
+ err = vkQueueSubmit(demo->queue, 1, cmd_bufs, VK_NULL_HANDLE);
assert(!err);
- err = xglQueueWaitIdle(demo->queue);
+ err = vkQueueWaitIdle(demo->queue);
assert(!err);
- xglDestroyObject(demo->cmd);
- demo->cmd = XGL_NULL_HANDLE;
+ vkDestroyObject(demo->cmd);
+ demo->cmd = VK_NULL_HANDLE;
}
static void demo_add_mem_refs(
struct demo *demo,
- int num_refs, XGL_GPU_MEMORY *mem)
+ int num_refs, VK_GPU_MEMORY *mem)
{
for (int i = 0; i < num_refs; i++) {
- xglQueueAddMemReference(demo->queue, mem[i]);
+ vkQueueAddMemReference(demo->queue, mem[i]);
}
}
static void demo_remove_mem_refs(
struct demo *demo,
- int num_refs, XGL_GPU_MEMORY *mem)
+ int num_refs, VK_GPU_MEMORY *mem)
{
for (int i = 0; i < num_refs; i++) {
- xglQueueRemoveMemReference(demo->queue, mem[i]);
+ vkQueueRemoveMemReference(demo->queue, mem[i]);
}
}
static void demo_set_image_layout(
struct demo *demo,
- XGL_IMAGE image,
- XGL_IMAGE_LAYOUT old_image_layout,
- XGL_IMAGE_LAYOUT new_image_layout)
+ VK_IMAGE image,
+ VK_IMAGE_LAYOUT old_image_layout,
+ VK_IMAGE_LAYOUT new_image_layout)
{
- XGL_RESULT err;
+ VK_RESULT err;
- if (demo->cmd == XGL_NULL_HANDLE) {
- const XGL_CMD_BUFFER_CREATE_INFO cmd = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
+ if (demo->cmd == VK_NULL_HANDLE) {
+ const VK_CMD_BUFFER_CREATE_INFO cmd = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
.pNext = NULL,
.queueNodeIndex = demo->graphics_queue_node_index,
.flags = 0,
};
- err = xglCreateCommandBuffer(demo->device, &cmd, &demo->cmd);
+ err = vkCreateCommandBuffer(demo->device, &cmd, &demo->cmd);
assert(!err);
- XGL_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
+ VK_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
.pNext = NULL,
- .flags = XGL_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
- XGL_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
+ .flags = VK_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
+ VK_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
};
- err = xglBeginCommandBuffer(demo->cmd, &cmd_buf_info);
+ err = vkBeginCommandBuffer(demo->cmd, &cmd_buf_info);
}
- XGL_IMAGE_MEMORY_BARRIER image_memory_barrier = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
+ VK_IMAGE_MEMORY_BARRIER image_memory_barrier = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
.pNext = NULL,
.outputMask = 0,
.inputMask = 0,
.oldLayout = old_image_layout,
.newLayout = new_image_layout,
.image = image,
- .subresourceRange = { XGL_IMAGE_ASPECT_COLOR, 0, 1, 0, 0 }
+ .subresourceRange = { VK_IMAGE_ASPECT_COLOR, 0, 1, 0, 0 }
};
- if (new_image_layout == XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL) {
+ if (new_image_layout == VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL) {
/* Make sure anything that was copying from this image has completed */
- image_memory_barrier.inputMask = XGL_MEMORY_INPUT_COPY_BIT;
+ image_memory_barrier.inputMask = VK_MEMORY_INPUT_COPY_BIT;
}
- if (new_image_layout == XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
+ if (new_image_layout == VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
/* Make sure any Copy or CPU writes to image are flushed */
- image_memory_barrier.outputMask = XGL_MEMORY_OUTPUT_COPY_BIT | XGL_MEMORY_OUTPUT_CPU_WRITE_BIT;
+ image_memory_barrier.outputMask = VK_MEMORY_OUTPUT_COPY_BIT | VK_MEMORY_OUTPUT_CPU_WRITE_BIT;
}
- XGL_IMAGE_MEMORY_BARRIER *pmemory_barrier = &image_memory_barrier;
+ VK_IMAGE_MEMORY_BARRIER *pmemory_barrier = &image_memory_barrier;
- XGL_PIPE_EVENT set_events[] = { XGL_PIPE_EVENT_TOP_OF_PIPE };
+ VK_PIPE_EVENT set_events[] = { VK_PIPE_EVENT_TOP_OF_PIPE };
- XGL_PIPELINE_BARRIER pipeline_barrier;
- pipeline_barrier.sType = XGL_STRUCTURE_TYPE_PIPELINE_BARRIER;
+ VK_PIPELINE_BARRIER pipeline_barrier;
+ pipeline_barrier.sType = VK_STRUCTURE_TYPE_PIPELINE_BARRIER;
pipeline_barrier.pNext = NULL;
pipeline_barrier.eventCount = 1;
pipeline_barrier.pEvents = set_events;
- pipeline_barrier.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ pipeline_barrier.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
pipeline_barrier.memBarrierCount = 1;
pipeline_barrier.ppMemBarriers = (const void **)&pmemory_barrier;
- xglCmdPipelineBarrier(demo->cmd, &pipeline_barrier);
+ vkCmdPipelineBarrier(demo->cmd, &pipeline_barrier);
}
-static void demo_draw_build_cmd(struct demo *demo, XGL_CMD_BUFFER cmd_buf)
+static void demo_draw_build_cmd(struct demo *demo, VK_CMD_BUFFER cmd_buf)
{
- const XGL_COLOR_ATTACHMENT_BIND_INFO color_attachment = {
+ const VK_COLOR_ATTACHMENT_BIND_INFO color_attachment = {
.view = demo->buffers[demo->current_buffer].view,
- .layout = XGL_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
+ .layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
};
- const XGL_DEPTH_STENCIL_BIND_INFO depth_stencil = {
+ const VK_DEPTH_STENCIL_BIND_INFO depth_stencil = {
.view = demo->depth.view,
- .layout = XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
+ .layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
};
- const XGL_CLEAR_COLOR clear_color = {
+ const VK_CLEAR_COLOR clear_color = {
.color.floatColor = { 0.2f, 0.2f, 0.2f, 0.2f },
.useRawValue = false,
};
const float clear_depth = 1.0f;
- XGL_IMAGE_SUBRESOURCE_RANGE clear_range;
- XGL_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
+ VK_IMAGE_SUBRESOURCE_RANGE clear_range;
+ VK_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
.pNext = NULL,
- .flags = XGL_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
- XGL_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
+ .flags = VK_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
+ VK_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
};
- XGL_RESULT err;
- XGL_ATTACHMENT_LOAD_OP load_op = XGL_ATTACHMENT_LOAD_OP_DONT_CARE;
- XGL_ATTACHMENT_STORE_OP store_op = XGL_ATTACHMENT_STORE_OP_DONT_CARE;
- const XGL_FRAMEBUFFER_CREATE_INFO fb_info = {
- .sType = XGL_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
+ VK_RESULT err;
+ VK_ATTACHMENT_LOAD_OP load_op = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
+ VK_ATTACHMENT_STORE_OP store_op = VK_ATTACHMENT_STORE_OP_DONT_CARE;
+ const VK_FRAMEBUFFER_CREATE_INFO fb_info = {
+ .sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
.pNext = NULL,
.colorAttachmentCount = 1,
- .pColorAttachments = (XGL_COLOR_ATTACHMENT_BIND_INFO*) &color_attachment,
- .pDepthStencilAttachment = (XGL_DEPTH_STENCIL_BIND_INFO*) &depth_stencil,
+ .pColorAttachments = (VK_COLOR_ATTACHMENT_BIND_INFO*) &color_attachment,
+ .pDepthStencilAttachment = (VK_DEPTH_STENCIL_BIND_INFO*) &depth_stencil,
.sampleCount = 1,
.width = demo->width,
.height = demo->height,
.layers = 1,
};
- XGL_RENDER_PASS_CREATE_INFO rp_info;
- XGL_RENDER_PASS_BEGIN rp_begin;
+ VK_RENDER_PASS_CREATE_INFO rp_info;
+ VK_RENDER_PASS_BEGIN rp_begin;
memset(&rp_info, 0 , sizeof(rp_info));
- err = xglCreateFramebuffer(demo->device, &fb_info, &rp_begin.framebuffer);
+ err = vkCreateFramebuffer(demo->device, &fb_info, &rp_begin.framebuffer);
assert(!err);
- rp_info.sType = XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
+ rp_info.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
rp_info.renderArea.extent.width = demo->width;
rp_info.renderArea.extent.height = demo->height;
rp_info.colorAttachmentCount = fb_info.colorAttachmentCount;
rp_info.pColorLoadOps = &load_op;
rp_info.pColorStoreOps = &store_op;
rp_info.pColorLoadClearValues = &clear_color;
- rp_info.depthStencilFormat = XGL_FMT_D16_UNORM;
+ rp_info.depthStencilFormat = VK_FMT_D16_UNORM;
rp_info.depthStencilLayout = depth_stencil.layout;
- rp_info.depthLoadOp = XGL_ATTACHMENT_LOAD_OP_DONT_CARE;
+ rp_info.depthLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
rp_info.depthLoadClearValue = clear_depth;
- rp_info.depthStoreOp = XGL_ATTACHMENT_STORE_OP_DONT_CARE;
- rp_info.stencilLoadOp = XGL_ATTACHMENT_LOAD_OP_DONT_CARE;
+ rp_info.depthStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
+ rp_info.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
rp_info.stencilLoadClearValue = 0;
- rp_info.stencilStoreOp = XGL_ATTACHMENT_STORE_OP_DONT_CARE;
- err = xglCreateRenderPass(demo->device, &rp_info, &rp_begin.renderPass);
+ rp_info.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
+ err = vkCreateRenderPass(demo->device, &rp_info, &rp_begin.renderPass);
assert(!err);
- err = xglBeginCommandBuffer(cmd_buf, &cmd_buf_info);
+ err = vkBeginCommandBuffer(cmd_buf, &cmd_buf_info);
assert(!err);
- xglCmdBindPipeline(cmd_buf, XGL_PIPELINE_BIND_POINT_GRAPHICS,
+ vkCmdBindPipeline(cmd_buf, VK_PIPELINE_BIND_POINT_GRAPHICS,
demo->pipeline);
- xglCmdBindDescriptorSets(cmd_buf, XGL_PIPELINE_BIND_POINT_GRAPHICS,
+ vkCmdBindDescriptorSets(cmd_buf, VK_PIPELINE_BIND_POINT_GRAPHICS,
demo->desc_layout_chain, 0, 1, &demo->desc_set, NULL);
- xglCmdBindDynamicStateObject(cmd_buf, XGL_STATE_BIND_VIEWPORT, demo->viewport);
- xglCmdBindDynamicStateObject(cmd_buf, XGL_STATE_BIND_RASTER, demo->raster);
- xglCmdBindDynamicStateObject(cmd_buf, XGL_STATE_BIND_COLOR_BLEND,
+ vkCmdBindDynamicStateObject(cmd_buf, VK_STATE_BIND_VIEWPORT, demo->viewport);
+ vkCmdBindDynamicStateObject(cmd_buf, VK_STATE_BIND_RASTER, demo->raster);
+ vkCmdBindDynamicStateObject(cmd_buf, VK_STATE_BIND_COLOR_BLEND,
demo->color_blend);
- xglCmdBindDynamicStateObject(cmd_buf, XGL_STATE_BIND_DEPTH_STENCIL,
+ vkCmdBindDynamicStateObject(cmd_buf, VK_STATE_BIND_DEPTH_STENCIL,
demo->depth_stencil);
- xglCmdBeginRenderPass(cmd_buf, &rp_begin);
- clear_range.aspect = XGL_IMAGE_ASPECT_COLOR;
+ vkCmdBeginRenderPass(cmd_buf, &rp_begin);
+ clear_range.aspect = VK_IMAGE_ASPECT_COLOR;
clear_range.baseMipLevel = 0;
clear_range.mipLevels = 1;
clear_range.baseArraySlice = 0;
clear_range.arraySize = 1;
- xglCmdClearColorImage(cmd_buf,
+ vkCmdClearColorImage(cmd_buf,
demo->buffers[demo->current_buffer].image,
- XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
clear_color, 1, &clear_range);
- clear_range.aspect = XGL_IMAGE_ASPECT_DEPTH;
- xglCmdClearDepthStencil(cmd_buf, demo->depth.image,
- XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ clear_range.aspect = VK_IMAGE_ASPECT_DEPTH;
+ vkCmdClearDepthStencil(cmd_buf, demo->depth.image,
+ VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
clear_depth, 0, 1, &clear_range);
- xglCmdDraw(cmd_buf, 0, 12 * 3, 0, 1);
- xglCmdEndRenderPass(cmd_buf, rp_begin.renderPass);
+ vkCmdDraw(cmd_buf, 0, 12 * 3, 0, 1);
+ vkCmdEndRenderPass(cmd_buf, rp_begin.renderPass);
- err = xglEndCommandBuffer(cmd_buf);
+ err = vkEndCommandBuffer(cmd_buf);
assert(!err);
- xglDestroyObject(rp_begin.renderPass);
- xglDestroyObject(rp_begin.framebuffer);
+ vkDestroyObject(rp_begin.renderPass);
+ vkDestroyObject(rp_begin.framebuffer);
}
mat4x4 MVP, Model, VP;
int matrixSize = sizeof(MVP);
uint8_t *pData;
- XGL_RESULT err;
+ VK_RESULT err;
mat4x4_mul(VP, demo->projection_matrix, demo->view_matrix);
mat4x4_mul(MVP, VP, demo->model_matrix);
assert(demo->uniform_data.num_mem == 1);
- err = xglMapMemory(demo->uniform_data.mem[0], 0, (void **) &pData);
+ err = vkMapMemory(demo->uniform_data.mem[0], 0, (void **) &pData);
assert(!err);
memcpy(pData, (const void*) &MVP[0][0], matrixSize);
- err = xglUnmapMemory(demo->uniform_data.mem[0]);
+ err = vkUnmapMemory(demo->uniform_data.mem[0]);
assert(!err);
}
static void demo_draw(struct demo *demo)
{
- const XGL_WSI_X11_PRESENT_INFO present = {
+ const VK_WSI_X11_PRESENT_INFO present = {
.destWindow = demo->window,
.srcImage = demo->buffers[demo->current_buffer].image,
.async = true,
.flip = false,
};
- XGL_FENCE fence = demo->buffers[demo->current_buffer].fence;
- XGL_RESULT err;
+ VK_FENCE fence = demo->buffers[demo->current_buffer].fence;
+ VK_RESULT err;
- err = xglWaitForFences(demo->device, 1, &fence, XGL_TRUE, ~((uint64_t) 0));
- assert(err == XGL_SUCCESS || err == XGL_ERROR_UNAVAILABLE);
+ err = vkWaitForFences(demo->device, 1, &fence, VK_TRUE, ~((uint64_t) 0));
+ assert(err == VK_SUCCESS || err == VK_ERROR_UNAVAILABLE);
- err = xglQueueSubmit(demo->queue, 1, &demo->buffers[demo->current_buffer].cmd,
- XGL_NULL_HANDLE);
+ err = vkQueueSubmit(demo->queue, 1, &demo->buffers[demo->current_buffer].cmd,
+ VK_NULL_HANDLE);
assert(!err);
- err = xglWsiX11QueuePresent(demo->queue, &present, fence);
+ err = vkWsiX11QueuePresent(demo->queue, &present, fence);
assert(!err);
demo->current_buffer = (demo->current_buffer + 1) % DEMO_BUFFER_COUNT;
static void demo_prepare_buffers(struct demo *demo)
{
- const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO presentable_image = {
+ const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO presentable_image = {
.format = demo->format,
- .usage = XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
+ .usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
.extent = {
.width = demo->width,
.height = demo->height,
},
.flags = 0,
};
- const XGL_FENCE_CREATE_INFO fence = {
- .sType = XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO,
+ const VK_FENCE_CREATE_INFO fence = {
+ .sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,
.pNext = NULL,
.flags = 0,
};
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t i;
for (i = 0; i < DEMO_BUFFER_COUNT; i++) {
- XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO color_attachment_view = {
- .sType = XGL_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
+ VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO color_attachment_view = {
+ .sType = VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
.pNext = NULL,
.format = demo->format,
.mipLevel = 0,
.arraySize = 1,
};
- err = xglWsiX11CreatePresentableImage(demo->device, &presentable_image,
+ err = vkWsiX11CreatePresentableImage(demo->device, &presentable_image,
&demo->buffers[i].image, &demo->buffers[i].mem);
assert(!err);
demo_add_mem_refs(demo, 1, &demo->buffers[i].mem);
demo_set_image_layout(demo, demo->buffers[i].image,
- XGL_IMAGE_LAYOUT_UNDEFINED,
- XGL_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL);
+ VK_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL);
color_attachment_view.image = demo->buffers[i].image;
- err = xglCreateColorAttachmentView(demo->device,
+ err = vkCreateColorAttachmentView(demo->device,
&color_attachment_view, &demo->buffers[i].view);
assert(!err);
- err = xglCreateFence(demo->device,
+ err = vkCreateFence(demo->device,
&fence, &demo->buffers[i].fence);
assert(!err);
}
static void demo_prepare_depth(struct demo *demo)
{
- const XGL_FORMAT depth_format = XGL_FMT_D16_UNORM;
- const XGL_IMAGE_CREATE_INFO image = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
+ const VK_FORMAT depth_format = VK_FMT_D16_UNORM;
+ const VK_IMAGE_CREATE_INFO image = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
.pNext = NULL,
- .imageType = XGL_IMAGE_2D,
+ .imageType = VK_IMAGE_2D,
.format = depth_format,
.extent = { demo->width, demo->height, 1 },
.mipLevels = 1,
.arraySize = 1,
.samples = 1,
- .tiling = XGL_OPTIMAL_TILING,
- .usage = XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT,
+ .tiling = VK_OPTIMAL_TILING,
+ .usage = VK_IMAGE_USAGE_DEPTH_STENCIL_BIT,
.flags = 0,
};
- XGL_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
+ VK_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
.pNext = NULL,
};
- XGL_MEMORY_ALLOC_INFO mem_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
+ VK_MEMORY_ALLOC_INFO mem_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
.pNext = &img_alloc,
.allocationSize = 0,
- .memProps = XGL_MEMORY_PROPERTY_GPU_ONLY,
- .memType = XGL_MEMORY_TYPE_IMAGE,
- .memPriority = XGL_MEMORY_PRIORITY_NORMAL,
+ .memProps = VK_MEMORY_PROPERTY_GPU_ONLY,
+ .memType = VK_MEMORY_TYPE_IMAGE,
+ .memPriority = VK_MEMORY_PRIORITY_NORMAL,
};
- XGL_DEPTH_STENCIL_VIEW_CREATE_INFO view = {
- .sType = XGL_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO,
+ VK_DEPTH_STENCIL_VIEW_CREATE_INFO view = {
+ .sType = VK_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO,
.pNext = NULL,
- .image = XGL_NULL_HANDLE,
+ .image = VK_NULL_HANDLE,
.mipLevel = 0,
.baseArraySlice = 0,
.arraySize = 1,
.flags = 0,
};
- XGL_MEMORY_REQUIREMENTS *mem_reqs;
- size_t mem_reqs_size = sizeof(XGL_MEMORY_REQUIREMENTS);
- XGL_IMAGE_MEMORY_REQUIREMENTS img_reqs;
- size_t img_reqs_size = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
- XGL_RESULT err;
+ VK_MEMORY_REQUIREMENTS *mem_reqs;
+ size_t mem_reqs_size = sizeof(VK_MEMORY_REQUIREMENTS);
+ VK_IMAGE_MEMORY_REQUIREMENTS img_reqs;
+ size_t img_reqs_size = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
+ VK_RESULT err;
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
demo->depth.format = depth_format;
/* create image */
- err = xglCreateImage(demo->device, &image,
+ err = vkCreateImage(demo->device, &image,
&demo->depth.image);
assert(!err);
- err = xglGetObjectInfo(demo->depth.image, XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT, &num_alloc_size, &num_allocations);
+ err = vkGetObjectInfo(demo->depth.image, VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT, &num_alloc_size, &num_allocations);
assert(!err && num_alloc_size == sizeof(num_allocations));
- mem_reqs = malloc(num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- demo->depth.mem = malloc(num_allocations * sizeof(XGL_GPU_MEMORY));
+ mem_reqs = malloc(num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ demo->depth.mem = malloc(num_allocations * sizeof(VK_GPU_MEMORY));
demo->depth.num_mem = num_allocations;
- err = xglGetObjectInfo(demo->depth.image,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(demo->depth.image,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&mem_reqs_size, mem_reqs);
- assert(!err && mem_reqs_size == num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- err = xglGetObjectInfo(demo->depth.image,
- XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
+ assert(!err && mem_reqs_size == num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ err = vkGetObjectInfo(demo->depth.image,
+ VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
&img_reqs_size, &img_reqs);
- assert(!err && img_reqs_size == sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS));
+ assert(!err && img_reqs_size == sizeof(VK_IMAGE_MEMORY_REQUIREMENTS));
img_alloc.usage = img_reqs.usage;
img_alloc.formatClass = img_reqs.formatClass;
img_alloc.samples = img_reqs.samples;
for (uint32_t i = 0; i < num_allocations; i++) {
mem_alloc.allocationSize = mem_reqs[i].size;
/* allocate memory */
- err = xglAllocMemory(demo->device, &mem_alloc,
+ err = vkAllocMemory(demo->device, &mem_alloc,
&(demo->depth.mem[i]));
assert(!err);
/* bind memory */
- err = xglBindObjectMemory(demo->depth.image, i,
+ err = vkBindObjectMemory(demo->depth.image, i,
demo->depth.mem[i], 0);
assert(!err);
}
demo_set_image_layout(demo, demo->depth.image,
- XGL_IMAGE_LAYOUT_UNDEFINED,
- XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
+ VK_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
demo_add_mem_refs(demo, demo->depth.num_mem, demo->depth.mem);
/* create image view */
view.image = demo->depth.image;
- err = xglCreateDepthStencilView(demo->device, &view,
+ err = vkCreateDepthStencilView(demo->device, &view,
&demo->depth.view);
assert(!err);
}
/** loadTexture
 * loads a PNG file into a memory object using cstdio and libpng.
*
- * \param demo : Needed to access XGL calls
+ * \param demo : Needed to access VK calls
* \param filename : the png file to be loaded
* \param width : width of png, to be updated as a side effect of this function
* \param height : height of png, to be updated as a side effect of this function
*
*/
bool loadTexture(const char *filename, uint8_t *rgba_data,
- XGL_SUBRESOURCE_LAYOUT *layout,
+ VK_SUBRESOURCE_LAYOUT *layout,
int32_t *width, int32_t *height)
{
//header for testing if it is a png
static void demo_prepare_texture_image(struct demo *demo,
const char *filename,
struct texture_object *tex_obj,
- XGL_IMAGE_TILING tiling,
- XGL_FLAGS mem_props)
+ VK_IMAGE_TILING tiling,
+ VK_FLAGS mem_props)
{
- const XGL_FORMAT tex_format = XGL_FMT_B8G8R8A8_UNORM;
+ const VK_FORMAT tex_format = VK_FMT_B8G8R8A8_UNORM;
int32_t tex_width;
int32_t tex_height;
- XGL_RESULT err;
+ VK_RESULT err;
err = loadTexture(filename, NULL, NULL, &tex_width, &tex_height);
assert(err);
tex_obj->tex_width = tex_width;
tex_obj->tex_height = tex_height;
- const XGL_IMAGE_CREATE_INFO image_create_info = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
+ const VK_IMAGE_CREATE_INFO image_create_info = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
.pNext = NULL,
- .imageType = XGL_IMAGE_2D,
+ .imageType = VK_IMAGE_2D,
.format = tex_format,
.extent = { tex_width, tex_height, 1 },
.mipLevels = 1,
.arraySize = 1,
.samples = 1,
.tiling = tiling,
- .usage = XGL_IMAGE_USAGE_TRANSFER_SOURCE_BIT,
+ .usage = VK_IMAGE_USAGE_TRANSFER_SOURCE_BIT,
.flags = 0,
};
- XGL_MEMORY_ALLOC_BUFFER_INFO buf_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO,
+ VK_MEMORY_ALLOC_BUFFER_INFO buf_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO,
.pNext = NULL,
};
- XGL_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
+ VK_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
.pNext = &buf_alloc,
};
- XGL_MEMORY_ALLOC_INFO mem_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
+ VK_MEMORY_ALLOC_INFO mem_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
.pNext = &img_alloc,
.allocationSize = 0,
.memProps = mem_props,
- .memType = XGL_MEMORY_TYPE_IMAGE,
- .memPriority = XGL_MEMORY_PRIORITY_NORMAL,
+ .memType = VK_MEMORY_TYPE_IMAGE,
+ .memPriority = VK_MEMORY_PRIORITY_NORMAL,
};
- XGL_MEMORY_REQUIREMENTS *mem_reqs;
- size_t mem_reqs_size = sizeof(XGL_MEMORY_REQUIREMENTS);
- XGL_BUFFER_MEMORY_REQUIREMENTS buf_reqs;
- size_t buf_reqs_size = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
- XGL_IMAGE_MEMORY_REQUIREMENTS img_reqs;
- size_t img_reqs_size = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
+ VK_MEMORY_REQUIREMENTS *mem_reqs;
+ size_t mem_reqs_size = sizeof(VK_MEMORY_REQUIREMENTS);
+ VK_BUFFER_MEMORY_REQUIREMENTS buf_reqs;
+ size_t buf_reqs_size = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
+ VK_IMAGE_MEMORY_REQUIREMENTS img_reqs;
+ size_t img_reqs_size = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
- err = xglCreateImage(demo->device, &image_create_info,
+ err = vkCreateImage(demo->device, &image_create_info,
&tex_obj->image);
assert(!err);
- err = xglGetObjectInfo(tex_obj->image,
- XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
+ err = vkGetObjectInfo(tex_obj->image,
+ VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
&num_alloc_size, &num_allocations);
assert(!err && num_alloc_size == sizeof(num_allocations));
- mem_reqs = malloc(num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- tex_obj->mem = malloc(num_allocations * sizeof(XGL_GPU_MEMORY));
- err = xglGetObjectInfo(tex_obj->image,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ mem_reqs = malloc(num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ tex_obj->mem = malloc(num_allocations * sizeof(VK_GPU_MEMORY));
+ err = vkGetObjectInfo(tex_obj->image,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&mem_reqs_size, mem_reqs);
- assert(!err && mem_reqs_size == num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- err = xglGetObjectInfo(tex_obj->image,
- XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
+ assert(!err && mem_reqs_size == num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ err = vkGetObjectInfo(tex_obj->image,
+ VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
&img_reqs_size, &img_reqs);
- assert(!err && img_reqs_size == sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS));
+ assert(!err && img_reqs_size == sizeof(VK_IMAGE_MEMORY_REQUIREMENTS));
img_alloc.usage = img_reqs.usage;
img_alloc.formatClass = img_reqs.formatClass;
img_alloc.samples = img_reqs.samples;
- mem_alloc.memProps = XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT;
+ mem_alloc.memProps = VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT;
for (uint32_t j = 0; j < num_allocations; j ++) {
mem_alloc.memType = mem_reqs[j].memType;
mem_alloc.allocationSize = mem_reqs[j].size;
- if (mem_alloc.memType == XGL_MEMORY_TYPE_BUFFER) {
- err = xglGetObjectInfo(tex_obj->image,
- XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS,
+ if (mem_alloc.memType == VK_MEMORY_TYPE_BUFFER) {
+ err = vkGetObjectInfo(tex_obj->image,
+ VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS,
&buf_reqs_size, &buf_reqs);
- assert(!err && buf_reqs_size == sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS));
+ assert(!err && buf_reqs_size == sizeof(VK_BUFFER_MEMORY_REQUIREMENTS));
buf_alloc.usage = buf_reqs.usage;
img_alloc.pNext = &buf_alloc;
} else {
}
/* allocate memory */
- err = xglAllocMemory(demo->device, &mem_alloc,
+ err = vkAllocMemory(demo->device, &mem_alloc,
&(tex_obj->mem[j]));
assert(!err);
/* bind memory */
- err = xglBindObjectMemory(tex_obj->image, j, tex_obj->mem[j], 0);
+ err = vkBindObjectMemory(tex_obj->image, j, tex_obj->mem[j], 0);
assert(!err);
}
free(mem_reqs);
tex_obj->num_mem = num_allocations;
- if (mem_props & XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT) {
- const XGL_IMAGE_SUBRESOURCE subres = {
- .aspect = XGL_IMAGE_ASPECT_COLOR,
+ if (mem_props & VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT) {
+ const VK_IMAGE_SUBRESOURCE subres = {
+ .aspect = VK_IMAGE_ASPECT_COLOR,
.mipLevel = 0,
.arraySlice = 0,
};
- XGL_SUBRESOURCE_LAYOUT layout;
- size_t layout_size = sizeof(XGL_SUBRESOURCE_LAYOUT);
+ VK_SUBRESOURCE_LAYOUT layout;
+ size_t layout_size = sizeof(VK_SUBRESOURCE_LAYOUT);
void *data;
- err = xglGetImageSubresourceInfo(tex_obj->image, &subres,
- XGL_INFO_TYPE_SUBRESOURCE_LAYOUT,
+ err = vkGetImageSubresourceInfo(tex_obj->image, &subres,
+ VK_INFO_TYPE_SUBRESOURCE_LAYOUT,
&layout_size, &layout);
assert(!err && layout_size == sizeof(layout));
/* Linear texture must be within a single memory object */
assert(num_allocations == 1);
- err = xglMapMemory(tex_obj->mem[0], 0, &data);
+ err = vkMapMemory(tex_obj->mem[0], 0, &data);
assert(!err);
if (!loadTexture(filename, data, &layout, &tex_width, &tex_height)) {
fprintf(stderr, "Error loading texture: %s\n", filename);
}
- err = xglUnmapMemory(tex_obj->mem[0]);
+ err = vkUnmapMemory(tex_obj->mem[0]);
assert(!err);
}
- tex_obj->imageLayout = XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
+ tex_obj->imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
demo_set_image_layout(demo, tex_obj->image,
- XGL_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_UNDEFINED,
tex_obj->imageLayout);
/* setting the image layout does not reference the actual memory so no need to add a mem ref */
}
static void demo_destroy_texture_image(struct texture_object *tex_objs)
{
/* clean up staging resources */
for (uint32_t j = 0; j < tex_objs->num_mem; j ++) {
- xglBindObjectMemory(tex_objs->image, j, XGL_NULL_HANDLE, 0);
- xglFreeMemory(tex_objs->mem[j]);
+ vkBindObjectMemory(tex_objs->image, j, VK_NULL_HANDLE, 0);
+ vkFreeMemory(tex_objs->mem[j]);
}
free(tex_objs->mem);
- xglDestroyObject(tex_objs->image);
+ vkDestroyObject(tex_objs->image);
}
static void demo_prepare_textures(struct demo *demo)
{
- const XGL_FORMAT tex_format = XGL_FMT_R8G8B8A8_UNORM;
- XGL_FORMAT_PROPERTIES props;
+ const VK_FORMAT tex_format = VK_FMT_R8G8B8A8_UNORM;
+ VK_FORMAT_PROPERTIES props;
size_t size = sizeof(props);
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t i;
- err = xglGetFormatInfo(demo->device, tex_format,
- XGL_INFO_TYPE_FORMAT_PROPERTIES,
+ err = vkGetFormatInfo(demo->device, tex_format,
+ VK_INFO_TYPE_FORMAT_PROPERTIES,
&size, &props);
assert(!err);
for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
- if (props.linearTilingFeatures & XGL_FORMAT_IMAGE_SHADER_READ_BIT && !demo->use_staging_buffer) {
+ if (props.linearTilingFeatures & VK_FORMAT_IMAGE_SHADER_READ_BIT && !demo->use_staging_buffer) {
/* Device can texture using linear textures */
demo_prepare_texture_image(demo, tex_files[i], &demo->textures[i],
- XGL_LINEAR_TILING, XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
- } else if (props.optimalTilingFeatures & XGL_FORMAT_IMAGE_SHADER_READ_BIT) {
+ VK_LINEAR_TILING, VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
+ } else if (props.optimalTilingFeatures & VK_FORMAT_IMAGE_SHADER_READ_BIT) {
/* Must use staging buffer to copy linear texture to optimized */
struct texture_object staging_texture;
memset(&staging_texture, 0, sizeof(staging_texture));
demo_prepare_texture_image(demo, tex_files[i], &staging_texture,
- XGL_LINEAR_TILING, XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
+ VK_LINEAR_TILING, VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
demo_prepare_texture_image(demo, tex_files[i], &demo->textures[i],
- XGL_OPTIMAL_TILING, XGL_MEMORY_PROPERTY_GPU_ONLY);
+ VK_OPTIMAL_TILING, VK_MEMORY_PROPERTY_GPU_ONLY);
demo_set_image_layout(demo, staging_texture.image,
staging_texture.imageLayout,
- XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL);
+ VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL);
demo_set_image_layout(demo, demo->textures[i].image,
demo->textures[i].imageLayout,
- XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL);
+ VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL);
- XGL_IMAGE_COPY copy_region = {
- .srcSubresource = { XGL_IMAGE_ASPECT_COLOR, 0, 0 },
+ VK_IMAGE_COPY copy_region = {
+ .srcSubresource = { VK_IMAGE_ASPECT_COLOR, 0, 0 },
.srcOffset = { 0, 0, 0 },
- .destSubresource = { XGL_IMAGE_ASPECT_COLOR, 0, 0 },
+ .destSubresource = { VK_IMAGE_ASPECT_COLOR, 0, 0 },
.destOffset = { 0, 0, 0 },
.extent = { staging_texture.tex_width, staging_texture.tex_height, 1 },
};
- xglCmdCopyImage(demo->cmd,
- staging_texture.image, XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL,
- demo->textures[i].image, XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
+ vkCmdCopyImage(demo->cmd,
+ staging_texture.image, VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL,
+ demo->textures[i].image, VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
1, &copy_region);
demo_add_mem_refs(demo, staging_texture.num_mem, staging_texture.mem);
demo_add_mem_refs(demo, demo->textures[i].num_mem, demo->textures[i].mem);
demo_set_image_layout(demo, demo->textures[i].image,
- XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
+ VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
demo->textures[i].imageLayout);
demo_flush_init_cmd(demo);
demo_destroy_texture_image(&staging_texture);
demo_remove_mem_refs(demo, staging_texture.num_mem, staging_texture.mem);
} else {
- /* Can't support XGL_FMT_B8G8R8A8_UNORM !? */
+ /* Can't support VK_FMT_B8G8R8A8_UNORM !? */
assert(!"No support for B8G8R8A8_UNORM as texture image format");
}
- const XGL_SAMPLER_CREATE_INFO sampler = {
- .sType = XGL_STRUCTURE_TYPE_SAMPLER_CREATE_INFO,
+ const VK_SAMPLER_CREATE_INFO sampler = {
+ .sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO,
.pNext = NULL,
- .magFilter = XGL_TEX_FILTER_NEAREST,
- .minFilter = XGL_TEX_FILTER_NEAREST,
- .mipMode = XGL_TEX_MIPMAP_BASE,
- .addressU = XGL_TEX_ADDRESS_CLAMP,
- .addressV = XGL_TEX_ADDRESS_CLAMP,
- .addressW = XGL_TEX_ADDRESS_CLAMP,
+ .magFilter = VK_TEX_FILTER_NEAREST,
+ .minFilter = VK_TEX_FILTER_NEAREST,
+ .mipMode = VK_TEX_MIPMAP_BASE,
+ .addressU = VK_TEX_ADDRESS_CLAMP,
+ .addressV = VK_TEX_ADDRESS_CLAMP,
+ .addressW = VK_TEX_ADDRESS_CLAMP,
.mipLodBias = 0.0f,
.maxAnisotropy = 1,
- .compareFunc = XGL_COMPARE_NEVER,
+ .compareFunc = VK_COMPARE_NEVER,
.minLod = 0.0f,
.maxLod = 0.0f,
- .borderColorType = XGL_BORDER_COLOR_OPAQUE_WHITE,
+ .borderColorType = VK_BORDER_COLOR_OPAQUE_WHITE,
};
- XGL_IMAGE_VIEW_CREATE_INFO view = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
+ VK_IMAGE_VIEW_CREATE_INFO view = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
.pNext = NULL,
- .image = XGL_NULL_HANDLE,
- .viewType = XGL_IMAGE_VIEW_2D,
+ .image = VK_NULL_HANDLE,
+ .viewType = VK_IMAGE_VIEW_2D,
.format = tex_format,
- .channels = { XGL_CHANNEL_SWIZZLE_R,
- XGL_CHANNEL_SWIZZLE_G,
- XGL_CHANNEL_SWIZZLE_B,
- XGL_CHANNEL_SWIZZLE_A, },
- .subresourceRange = { XGL_IMAGE_ASPECT_COLOR, 0, 1, 0, 1 },
+ .channels = { VK_CHANNEL_SWIZZLE_R,
+ VK_CHANNEL_SWIZZLE_G,
+ VK_CHANNEL_SWIZZLE_B,
+ VK_CHANNEL_SWIZZLE_A, },
+ .subresourceRange = { VK_IMAGE_ASPECT_COLOR, 0, 1, 0, 1 },
.minLod = 0.0f,
};
/* create sampler */
- err = xglCreateSampler(demo->device, &sampler,
+ err = vkCreateSampler(demo->device, &sampler,
&demo->textures[i].sampler);
assert(!err);
/* create image view */
view.image = demo->textures[i].image;
- err = xglCreateImageView(demo->device, &view,
+ err = vkCreateImageView(demo->device, &view,
&demo->textures[i].view);
assert(!err);
}
void demo_prepare_cube_data_buffer(struct demo *demo)
{
- XGL_BUFFER_CREATE_INFO buf_info;
- XGL_BUFFER_VIEW_CREATE_INFO view_info;
- XGL_MEMORY_ALLOC_BUFFER_INFO buf_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO,
+ VK_BUFFER_CREATE_INFO buf_info;
+ VK_BUFFER_VIEW_CREATE_INFO view_info;
+ VK_MEMORY_ALLOC_BUFFER_INFO buf_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO,
.pNext = NULL,
};
- XGL_MEMORY_ALLOC_INFO alloc_info = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
+ VK_MEMORY_ALLOC_INFO alloc_info = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
.pNext = &buf_alloc,
.allocationSize = 0,
- .memProps = XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT,
- .memType = XGL_MEMORY_TYPE_BUFFER,
- .memPriority = XGL_MEMORY_PRIORITY_NORMAL,
+ .memProps = VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT,
+ .memType = VK_MEMORY_TYPE_BUFFER,
+ .memPriority = VK_MEMORY_PRIORITY_NORMAL,
};
- XGL_MEMORY_REQUIREMENTS *mem_reqs;
- size_t mem_reqs_size = sizeof(XGL_MEMORY_REQUIREMENTS);
- XGL_BUFFER_MEMORY_REQUIREMENTS buf_reqs;
- size_t buf_reqs_size = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
+ VK_MEMORY_REQUIREMENTS *mem_reqs;
+ size_t mem_reqs_size = sizeof(VK_MEMORY_REQUIREMENTS);
+ VK_BUFFER_MEMORY_REQUIREMENTS buf_reqs;
+ size_t buf_reqs_size = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
uint8_t *pData;
int i;
mat4x4 MVP, VP;
- XGL_RESULT err;
- struct xgltexcube_vs_uniform data;
+ VK_RESULT err;
+ struct vktexcube_vs_uniform data;
mat4x4_mul(VP, demo->projection_matrix, demo->view_matrix);
mat4x4_mul(MVP, VP, demo->model_matrix);
}
memset(&buf_info, 0, sizeof(buf_info));
- buf_info.sType = XGL_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
+ buf_info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
buf_info.size = sizeof(data);
- buf_info.usage = XGL_BUFFER_USAGE_UNIFORM_READ_BIT;
- err = xglCreateBuffer(demo->device, &buf_info, &demo->uniform_data.buf);
+ buf_info.usage = VK_BUFFER_USAGE_UNIFORM_READ_BIT;
+ err = vkCreateBuffer(demo->device, &buf_info, &demo->uniform_data.buf);
assert(!err);
- err = xglGetObjectInfo(demo->uniform_data.buf,
- XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
+ err = vkGetObjectInfo(demo->uniform_data.buf,
+ VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
&num_alloc_size, &num_allocations);
assert(!err && num_alloc_size == sizeof(num_allocations));
- mem_reqs = malloc(num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- demo->uniform_data.mem = malloc(num_allocations * sizeof(XGL_GPU_MEMORY));
+ mem_reqs = malloc(num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ demo->uniform_data.mem = malloc(num_allocations * sizeof(VK_GPU_MEMORY));
demo->uniform_data.num_mem = num_allocations;
- err = xglGetObjectInfo(demo->uniform_data.buf,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(demo->uniform_data.buf,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&mem_reqs_size, mem_reqs);
assert(!err && mem_reqs_size == num_allocations * sizeof(*mem_reqs));
- err = xglGetObjectInfo(demo->uniform_data.buf,
- XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(demo->uniform_data.buf,
+ VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS,
&buf_reqs_size, &buf_reqs);
- assert(!err && buf_reqs_size == sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS));
+ assert(!err && buf_reqs_size == sizeof(VK_BUFFER_MEMORY_REQUIREMENTS));
buf_alloc.usage = buf_reqs.usage;
for (uint32_t i = 0; i < num_allocations; i ++) {
alloc_info.allocationSize = mem_reqs[i].size;
- err = xglAllocMemory(demo->device, &alloc_info, &(demo->uniform_data.mem[i]));
+ err = vkAllocMemory(demo->device, &alloc_info, &(demo->uniform_data.mem[i]));
assert(!err);
- err = xglMapMemory(demo->uniform_data.mem[i], 0, (void **) &pData);
+ err = vkMapMemory(demo->uniform_data.mem[i], 0, (void **) &pData);
assert(!err);
memcpy(pData, &data, (size_t)alloc_info.allocationSize);
- err = xglUnmapMemory(demo->uniform_data.mem[i]);
+ err = vkUnmapMemory(demo->uniform_data.mem[i]);
assert(!err);
- err = xglBindObjectMemory(demo->uniform_data.buf, i,
+ err = vkBindObjectMemory(demo->uniform_data.buf, i,
demo->uniform_data.mem[i], 0);
assert(!err);
}
demo_add_mem_refs(demo, demo->uniform_data.num_mem, demo->uniform_data.mem);
memset(&view_info, 0, sizeof(view_info));
- view_info.sType = XGL_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO;
+ view_info.sType = VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO;
view_info.buffer = demo->uniform_data.buf;
- view_info.viewType = XGL_BUFFER_VIEW_RAW;
+ view_info.viewType = VK_BUFFER_VIEW_RAW;
view_info.offset = 0;
view_info.range = sizeof(data);
- err = xglCreateBufferView(demo->device, &view_info, &demo->uniform_data.view);
+ err = vkCreateBufferView(demo->device, &view_info, &demo->uniform_data.view);
assert(!err);
- demo->uniform_data.attach.sType = XGL_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO;
+ demo->uniform_data.attach.sType = VK_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO;
demo->uniform_data.attach.view = demo->uniform_data.view;
}
static void demo_prepare_descriptor_layout(struct demo *demo)
{
- const XGL_DESCRIPTOR_SET_LAYOUT_BINDING layout_bindings[2] = {
+ const VK_DESCRIPTOR_SET_LAYOUT_BINDING layout_bindings[2] = {
[0] = {
- .descriptorType = XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
+ .descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
.count = 1,
- .stageFlags = XGL_SHADER_STAGE_FLAGS_VERTEX_BIT,
+ .stageFlags = VK_SHADER_STAGE_FLAGS_VERTEX_BIT,
.pImmutableSamplers = NULL,
},
[1] = {
- .descriptorType = XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
+ .descriptorType = VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
.count = DEMO_TEXTURE_COUNT,
- .stageFlags = XGL_SHADER_STAGE_FLAGS_FRAGMENT_BIT,
+ .stageFlags = VK_SHADER_STAGE_FLAGS_FRAGMENT_BIT,
.pImmutableSamplers = NULL,
},
};
- const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO descriptor_layout = {
- .sType = XGL_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
+ const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO descriptor_layout = {
+ .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
.pNext = NULL,
.count = 2,
.pBinding = layout_bindings,
};
- XGL_RESULT err;
+ VK_RESULT err;
- err = xglCreateDescriptorSetLayout(demo->device,
+ err = vkCreateDescriptorSetLayout(demo->device,
&descriptor_layout, &demo->desc_layout);
assert(!err);
- err = xglCreateDescriptorSetLayoutChain(demo->device,
+ err = vkCreateDescriptorSetLayoutChain(demo->device,
1, &demo->desc_layout, &demo->desc_layout_chain);
assert(!err);
}
-static XGL_SHADER demo_prepare_shader(struct demo *demo,
- XGL_PIPELINE_SHADER_STAGE stage,
+static VK_SHADER demo_prepare_shader(struct demo *demo,
+ VK_PIPELINE_SHADER_STAGE stage,
const void *code,
size_t size)
{
- XGL_SHADER_CREATE_INFO createInfo;
- XGL_SHADER shader;
- XGL_RESULT err;
+ VK_SHADER_CREATE_INFO createInfo;
+ VK_SHADER shader;
+ VK_RESULT err;
- createInfo.sType = XGL_STRUCTURE_TYPE_SHADER_CREATE_INFO;
+ createInfo.sType = VK_STRUCTURE_TYPE_SHADER_CREATE_INFO;
createInfo.pNext = NULL;
#ifdef EXTERNAL_SPV
createInfo.pCode = code;
createInfo.flags = 0;
- err = xglCreateShader(demo->device, &createInfo, &shader);
+ err = vkCreateShader(demo->device, &createInfo, &shader);
if (err) {
free((void *) createInfo.pCode);
}
createInfo.pCode = malloc(createInfo.codeSize);
createInfo.flags = 0;
- /* try version 0 first: XGL_PIPELINE_SHADER_STAGE followed by GLSL */
+ /* try version 0 first: VK_PIPELINE_SHADER_STAGE followed by GLSL */
((uint32_t *) createInfo.pCode)[0] = ICD_SPV_MAGIC;
((uint32_t *) createInfo.pCode)[1] = 0;
((uint32_t *) createInfo.pCode)[2] = stage;
memcpy(((uint32_t *) createInfo.pCode + 3), code, size + 1);
- err = xglCreateShader(demo->device, &createInfo, &shader);
+ err = vkCreateShader(demo->device, &createInfo, &shader);
if (err) {
free((void *) createInfo.pCode);
return NULL;
return shader_code;
}
-static XGL_SHADER demo_prepare_vs(struct demo *demo)
+static VK_SHADER demo_prepare_vs(struct demo *demo)
{
#ifdef EXTERNAL_SPV
void *vertShaderCode;
vertShaderCode = demo_read_spv("cube-vert.spv", &size);
- return demo_prepare_shader(demo, XGL_SHADER_STAGE_VERTEX,
+ return demo_prepare_shader(demo, VK_SHADER_STAGE_VERTEX,
vertShaderCode, size);
#else
static const char *vertShaderText =
" gl_Position = ubuf.MVP * ubuf.position[gl_VertexID];\n"
"}\n";
- return demo_prepare_shader(demo, XGL_SHADER_STAGE_VERTEX,
+ return demo_prepare_shader(demo, VK_SHADER_STAGE_VERTEX,
(const void *) vertShaderText,
strlen(vertShaderText));
#endif
}
-static XGL_SHADER demo_prepare_fs(struct demo *demo)
+static VK_SHADER demo_prepare_fs(struct demo *demo)
{
#ifdef EXTERNAL_SPV
void *fragShaderCode;
fragShaderCode = demo_read_spv("cube-frag.spv", &size);
- return demo_prepare_shader(demo, XGL_SHADER_STAGE_FRAGMENT,
+ return demo_prepare_shader(demo, VK_SHADER_STAGE_FRAGMENT,
fragShaderCode, size);
#else
static const char *fragShaderText =
" gl_FragColor = texture(tex, texcoord.xy);\n"
"}\n";
- return demo_prepare_shader(demo, XGL_SHADER_STAGE_FRAGMENT,
+ return demo_prepare_shader(demo, VK_SHADER_STAGE_FRAGMENT,
(const void *) fragShaderText,
strlen(fragShaderText));
#endif
static void demo_prepare_pipeline(struct demo *demo)
{
- XGL_GRAPHICS_PIPELINE_CREATE_INFO pipeline;
- XGL_PIPELINE_IA_STATE_CREATE_INFO ia;
- XGL_PIPELINE_RS_STATE_CREATE_INFO rs;
- XGL_PIPELINE_CB_STATE_CREATE_INFO cb;
- XGL_PIPELINE_DS_STATE_CREATE_INFO ds;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO vs;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO fs;
- XGL_PIPELINE_VP_STATE_CREATE_INFO vp;
- XGL_PIPELINE_MS_STATE_CREATE_INFO ms;
- XGL_RESULT err;
+ VK_GRAPHICS_PIPELINE_CREATE_INFO pipeline;
+ VK_PIPELINE_IA_STATE_CREATE_INFO ia;
+ VK_PIPELINE_RS_STATE_CREATE_INFO rs;
+ VK_PIPELINE_CB_STATE_CREATE_INFO cb;
+ VK_PIPELINE_DS_STATE_CREATE_INFO ds;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO vs;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO fs;
+ VK_PIPELINE_VP_STATE_CREATE_INFO vp;
+ VK_PIPELINE_MS_STATE_CREATE_INFO ms;
+ VK_RESULT err;
memset(&pipeline, 0, sizeof(pipeline));
- pipeline.sType = XGL_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
+ pipeline.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
pipeline.pSetLayoutChain = demo->desc_layout_chain;
memset(&ia, 0, sizeof(ia));
- ia.sType = XGL_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO;
- ia.topology = XGL_TOPOLOGY_TRIANGLE_LIST;
+ ia.sType = VK_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO;
+ ia.topology = VK_TOPOLOGY_TRIANGLE_LIST;
memset(&rs, 0, sizeof(rs));
- rs.sType = XGL_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO;
- rs.fillMode = XGL_FILL_SOLID;
- rs.cullMode = XGL_CULL_BACK;
- rs.frontFace = XGL_FRONT_FACE_CCW;
+ rs.sType = VK_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO;
+ rs.fillMode = VK_FILL_SOLID;
+ rs.cullMode = VK_CULL_BACK;
+ rs.frontFace = VK_FRONT_FACE_CCW;
memset(&cb, 0, sizeof(cb));
- cb.sType = XGL_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO;
- XGL_PIPELINE_CB_ATTACHMENT_STATE att_state[1];
+ cb.sType = VK_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO;
+ VK_PIPELINE_CB_ATTACHMENT_STATE att_state[1];
memset(att_state, 0, sizeof(att_state));
att_state[0].format = demo->format;
att_state[0].channelWriteMask = 0xf;
- att_state[0].blendEnable = XGL_FALSE;
+ att_state[0].blendEnable = VK_FALSE;
cb.attachmentCount = 1;
cb.pAttachments = att_state;
memset(&vp, 0, sizeof(vp));
- vp.sType = XGL_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO;
+ vp.sType = VK_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO;
vp.numViewports = 1;
- vp.clipOrigin = XGL_COORDINATE_ORIGIN_LOWER_LEFT;
+ vp.clipOrigin = VK_COORDINATE_ORIGIN_LOWER_LEFT;
memset(&ds, 0, sizeof(ds));
- ds.sType = XGL_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO;
+ ds.sType = VK_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO;
ds.format = demo->depth.format;
- ds.depthTestEnable = XGL_TRUE;
- ds.depthWriteEnable = XGL_TRUE;
- ds.depthFunc = XGL_COMPARE_LESS_EQUAL;
- ds.depthBoundsEnable = XGL_FALSE;
- ds.back.stencilFailOp = XGL_STENCIL_OP_KEEP;
- ds.back.stencilPassOp = XGL_STENCIL_OP_KEEP;
- ds.back.stencilFunc = XGL_COMPARE_ALWAYS;
- ds.stencilTestEnable = XGL_FALSE;
+ ds.depthTestEnable = VK_TRUE;
+ ds.depthWriteEnable = VK_TRUE;
+ ds.depthFunc = VK_COMPARE_LESS_EQUAL;
+ ds.depthBoundsEnable = VK_FALSE;
+ ds.back.stencilFailOp = VK_STENCIL_OP_KEEP;
+ ds.back.stencilPassOp = VK_STENCIL_OP_KEEP;
+ ds.back.stencilFunc = VK_COMPARE_ALWAYS;
+ ds.stencilTestEnable = VK_FALSE;
ds.front = ds.back;
memset(&vs, 0, sizeof(vs));
- vs.sType = XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
- vs.shader.stage = XGL_SHADER_STAGE_VERTEX;
+ vs.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
+ vs.shader.stage = VK_SHADER_STAGE_VERTEX;
vs.shader.shader = demo_prepare_vs(demo);
assert(vs.shader.shader != NULL);
memset(&fs, 0, sizeof(fs));
- fs.sType = XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
- fs.shader.stage = XGL_SHADER_STAGE_FRAGMENT;
+ fs.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
+ fs.shader.stage = VK_SHADER_STAGE_FRAGMENT;
fs.shader.shader = demo_prepare_fs(demo);
assert(fs.shader.shader != NULL);
memset(&ms, 0, sizeof(ms));
- ms.sType = XGL_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO;
+ ms.sType = VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO;
ms.sampleMask = 1;
- ms.multisampleEnable = XGL_FALSE;
+ ms.multisampleEnable = VK_FALSE;
ms.samples = 1;
pipeline.pNext = (const void *) &ia;
ds.pNext = (const void *) &vs;
vs.pNext = (const void *) &fs;
- err = xglCreateGraphicsPipeline(demo->device, &pipeline, &demo->pipeline);
+ err = vkCreateGraphicsPipeline(demo->device, &pipeline, &demo->pipeline);
assert(!err);
- xglDestroyObject(vs.shader.shader);
- xglDestroyObject(fs.shader.shader);
+ vkDestroyObject(vs.shader.shader);
+ vkDestroyObject(fs.shader.shader);
}
static void demo_prepare_dynamic_states(struct demo *demo)
{
- XGL_DYNAMIC_VP_STATE_CREATE_INFO viewport_create;
- XGL_DYNAMIC_RS_STATE_CREATE_INFO raster;
- XGL_DYNAMIC_CB_STATE_CREATE_INFO color_blend;
- XGL_DYNAMIC_DS_STATE_CREATE_INFO depth_stencil;
- XGL_RESULT err;
+ VK_DYNAMIC_VP_STATE_CREATE_INFO viewport_create;
+ VK_DYNAMIC_RS_STATE_CREATE_INFO raster;
+ VK_DYNAMIC_CB_STATE_CREATE_INFO color_blend;
+ VK_DYNAMIC_DS_STATE_CREATE_INFO depth_stencil;
+ VK_RESULT err;
memset(&viewport_create, 0, sizeof(viewport_create));
- viewport_create.sType = XGL_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO;
+ viewport_create.sType = VK_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO;
viewport_create.viewportAndScissorCount = 1;
- XGL_VIEWPORT viewport;
+ VK_VIEWPORT viewport;
memset(&viewport, 0, sizeof(viewport));
viewport.height = (float) demo->height;
viewport.width = (float) demo->width;
viewport.minDepth = (float) 0.0f;
viewport.maxDepth = (float) 1.0f;
viewport_create.pViewports = &viewport;
- XGL_RECT scissor;
+ VK_RECT scissor;
memset(&scissor, 0, sizeof(scissor));
scissor.extent.width = demo->width;
scissor.extent.height = demo->height;
viewport_create.pScissors = &scissor;
memset(&raster, 0, sizeof(raster));
- raster.sType = XGL_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO;
+ raster.sType = VK_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO;
raster.pointSize = 1.0;
raster.lineWidth = 1.0;
memset(&color_blend, 0, sizeof(color_blend));
- color_blend.sType = XGL_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO;
+ color_blend.sType = VK_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO;
color_blend.blendConst[0] = 1.0f;
color_blend.blendConst[1] = 1.0f;
color_blend.blendConst[2] = 1.0f;
color_blend.blendConst[3] = 1.0f;
memset(&depth_stencil, 0, sizeof(depth_stencil));
- depth_stencil.sType = XGL_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO;
+ depth_stencil.sType = VK_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO;
depth_stencil.minDepth = 0.0f;
depth_stencil.maxDepth = 1.0f;
depth_stencil.stencilBackRef = 0;
depth_stencil.stencilReadMask = 0xff;
depth_stencil.stencilWriteMask = 0xff;
- err = xglCreateDynamicViewportState(demo->device, &viewport_create, &demo->viewport);
+ err = vkCreateDynamicViewportState(demo->device, &viewport_create, &demo->viewport);
assert(!err);
- err = xglCreateDynamicRasterState(demo->device, &raster, &demo->raster);
+ err = vkCreateDynamicRasterState(demo->device, &raster, &demo->raster);
assert(!err);
- err = xglCreateDynamicColorBlendState(demo->device,
+ err = vkCreateDynamicColorBlendState(demo->device,
&color_blend, &demo->color_blend);
assert(!err);
- err = xglCreateDynamicDepthStencilState(demo->device,
+ err = vkCreateDynamicDepthStencilState(demo->device,
&depth_stencil, &demo->depth_stencil);
assert(!err);
}
static void demo_prepare_descriptor_pool(struct demo *demo)
{
- const XGL_DESCRIPTOR_TYPE_COUNT type_counts[2] = {
+ const VK_DESCRIPTOR_TYPE_COUNT type_counts[2] = {
[0] = {
- .type = XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
+ .type = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
.count = 1,
},
[1] = {
- .type = XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
+ .type = VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
.count = DEMO_TEXTURE_COUNT,
},
};
- const XGL_DESCRIPTOR_POOL_CREATE_INFO descriptor_pool = {
- .sType = XGL_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO,
+ const VK_DESCRIPTOR_POOL_CREATE_INFO descriptor_pool = {
+ .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO,
.pNext = NULL,
.count = 2,
.pTypeCount = type_counts,
};
- XGL_RESULT err;
+ VK_RESULT err;
- err = xglCreateDescriptorPool(demo->device,
- XGL_DESCRIPTOR_POOL_USAGE_ONE_SHOT, 1,
+ err = vkCreateDescriptorPool(demo->device,
+ VK_DESCRIPTOR_POOL_USAGE_ONE_SHOT, 1,
&descriptor_pool, &demo->desc_pool);
assert(!err);
}
static void demo_prepare_descriptor_set(struct demo *demo)
{
- XGL_IMAGE_VIEW_ATTACH_INFO view_info[DEMO_TEXTURE_COUNT];
- XGL_SAMPLER_IMAGE_VIEW_INFO combined_info[DEMO_TEXTURE_COUNT];
- XGL_UPDATE_SAMPLER_TEXTURES update_fs;
- XGL_UPDATE_BUFFERS update_vs;
+ VK_IMAGE_VIEW_ATTACH_INFO view_info[DEMO_TEXTURE_COUNT];
+ VK_SAMPLER_IMAGE_VIEW_INFO combined_info[DEMO_TEXTURE_COUNT];
+ VK_UPDATE_SAMPLER_TEXTURES update_fs;
+ VK_UPDATE_BUFFERS update_vs;
const void *update_array[2] = { &update_vs, &update_fs };
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t count;
uint32_t i;
for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
- view_info[i].sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
+ view_info[i].sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
view_info[i].pNext = NULL;
view_info[i].view = demo->textures[i].view;
- view_info[i].layout = XGL_IMAGE_LAYOUT_GENERAL;
+ view_info[i].layout = VK_IMAGE_LAYOUT_GENERAL;
combined_info[i].sampler = demo->textures[i].sampler;
combined_info[i].pImageView = &view_info[i];
}
memset(&update_vs, 0, sizeof(update_vs));
- update_vs.sType = XGL_STRUCTURE_TYPE_UPDATE_BUFFERS;
+ update_vs.sType = VK_STRUCTURE_TYPE_UPDATE_BUFFERS;
update_vs.pNext = &update_fs;
- update_vs.descriptorType = XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
+ update_vs.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
update_vs.count = 1;
update_vs.pBufferViews = &demo->uniform_data.attach;
memset(&update_fs, 0, sizeof(update_fs));
- update_fs.sType = XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES;
+ update_fs.sType = VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES;
update_fs.binding = 1;
update_fs.count = DEMO_TEXTURE_COUNT;
update_fs.pSamplerImageViews = combined_info;
- err = xglAllocDescriptorSets(demo->desc_pool,
- XGL_DESCRIPTOR_SET_USAGE_STATIC,
+ err = vkAllocDescriptorSets(demo->desc_pool,
+ VK_DESCRIPTOR_SET_USAGE_STATIC,
1, &demo->desc_layout,
&demo->desc_set, &count);
assert(!err && count == 1);
- xglBeginDescriptorPoolUpdate(demo->device,
- XGL_DESCRIPTOR_UPDATE_MODE_FASTEST);
+ vkBeginDescriptorPoolUpdate(demo->device,
+ VK_DESCRIPTOR_UPDATE_MODE_FASTEST);
- xglClearDescriptorSets(demo->desc_pool, 1, &demo->desc_set);
- xglUpdateDescriptors(demo->desc_set, 2, update_array);
+ vkClearDescriptorSets(demo->desc_pool, 1, &demo->desc_set);
+ vkUpdateDescriptors(demo->desc_set, 2, update_array);
- xglEndDescriptorPoolUpdate(demo->device, demo->buffers[demo->current_buffer].cmd);
+ vkEndDescriptorPoolUpdate(demo->device, demo->buffers[demo->current_buffer].cmd);
}
static void demo_prepare(struct demo *demo)
{
- const XGL_CMD_BUFFER_CREATE_INFO cmd = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
+ const VK_CMD_BUFFER_CREATE_INFO cmd = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
.pNext = NULL,
.queueNodeIndex = demo->graphics_queue_node_index,
.flags = 0,
};
- XGL_RESULT err;
+ VK_RESULT err;
demo_prepare_buffers(demo);
demo_prepare_depth(demo);
demo_prepare_dynamic_states(demo);
for (int i = 0; i < DEMO_BUFFER_COUNT; i++) {
- err = xglCreateCommandBuffer(demo->device, &cmd, &demo->buffers[i].cmd);
+ err = vkCreateCommandBuffer(demo->device, &cmd, &demo->buffers[i].cmd);
assert(!err);
}
}
// Wait for work to finish before updating MVP.
- xglDeviceWaitIdle(demo->device);
+ vkDeviceWaitIdle(demo->device);
demo_update_data_buffer(demo);
demo_draw(demo);
// Wait for work to finish before updating MVP.
- xglDeviceWaitIdle(demo->device);
+ vkDeviceWaitIdle(demo->device);
}
}
xcb_map_window(demo->connection, demo->window);
}
-static void demo_init_xgl(struct demo *demo)
+static void demo_init_vk(struct demo *demo)
{
- const XGL_APPLICATION_INFO app = {
- .sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO,
+ const VK_APPLICATION_INFO app = {
+ .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
.pNext = NULL,
.pAppName = "cube",
.appVersion = 0,
.pEngineName = "cube",
.engineVersion = 0,
- .apiVersion = XGL_API_VERSION,
+ .apiVersion = VK_API_VERSION,
};
- const XGL_INSTANCE_CREATE_INFO inst_info = {
- .sType = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
+ const VK_INSTANCE_CREATE_INFO inst_info = {
+ .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
.pNext = NULL,
.pAppInfo = &app,
.pAllocCb = NULL,
.extensionCount = 0,
.ppEnabledExtensionNames = NULL,
};
- const XGL_WSI_X11_CONNECTION_INFO connection = {
+ const VK_WSI_X11_CONNECTION_INFO connection = {
.pConnection = demo->connection,
.root = demo->screen->root,
.provider = 0,
};
- const XGL_DEVICE_QUEUE_CREATE_INFO queue = {
+ const VK_DEVICE_QUEUE_CREATE_INFO queue = {
.queueNodeIndex = 0,
.queueCount = 1,
};
const char *ext_names[] = {
- "XGL_WSI_X11",
+ "VK_WSI_X11",
};
- const XGL_DEVICE_CREATE_INFO device = {
- .sType = XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
+ const VK_DEVICE_CREATE_INFO device = {
+ .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
.pNext = NULL,
.queueRecordCount = 1,
.pRequestedQueues = &queue,
.extensionCount = 1,
.ppEnabledExtensionNames = ext_names,
- .maxValidationLevel = XGL_VALIDATION_LEVEL_END_RANGE,
- .flags = XGL_DEVICE_CREATE_VALIDATION_BIT,
+ .maxValidationLevel = VK_VALIDATION_LEVEL_END_RANGE,
+ .flags = VK_DEVICE_CREATE_VALIDATION_BIT,
};
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t gpu_count;
uint32_t i;
size_t data_size;
uint32_t queue_count;
- err = xglCreateInstance(&inst_info, &demo->inst);
- if (err == XGL_ERROR_INCOMPATIBLE_DRIVER) {
+ err = vkCreateInstance(&inst_info, &demo->inst);
+ if (err == VK_ERROR_INCOMPATIBLE_DRIVER) {
printf("Cannot find a compatible Vulkan installable client driver "
"(ICD).\nExiting ...\n");
fflush(stdout);
assert(!err);
}
- err = xglEnumerateGpus(demo->inst, 1, &gpu_count, &demo->gpu);
+ err = vkEnumerateGpus(demo->inst, 1, &gpu_count, &demo->gpu);
assert(!err && gpu_count == 1);
for (i = 0; i < device.extensionCount; i++) {
- err = xglGetExtensionSupport(demo->gpu, ext_names[i]);
+ err = vkGetExtensionSupport(demo->gpu, ext_names[i]);
assert(!err);
}
- err = xglWsiX11AssociateConnection(demo->gpu, &connection);
+ err = vkWsiX11AssociateConnection(demo->gpu, &connection);
assert(!err);
- err = xglCreateDevice(demo->gpu, &device, &demo->device);
+ err = vkCreateDevice(demo->gpu, &device, &demo->device);
assert(!err);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
&data_size, NULL);
assert(!err);
- demo->gpu_props = (XGL_PHYSICAL_GPU_PROPERTIES *) malloc(data_size);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
+ demo->gpu_props = (VK_PHYSICAL_GPU_PROPERTIES *) malloc(data_size);
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
&data_size, demo->gpu_props);
assert(!err);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
&data_size, NULL);
assert(!err);
- demo->queue_props = (XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *) malloc(data_size);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
+ demo->queue_props = (VK_PHYSICAL_GPU_QUEUE_PROPERTIES *) malloc(data_size);
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
&data_size, demo->queue_props);
assert(!err);
- queue_count = (uint32_t)(data_size / sizeof(XGL_PHYSICAL_GPU_QUEUE_PROPERTIES));
+ queue_count = (uint32_t)(data_size / sizeof(VK_PHYSICAL_GPU_QUEUE_PROPERTIES));
assert(queue_count >= 1);
for (i = 0; i < queue_count; i++) {
- if (demo->queue_props[i].queueFlags & XGL_QUEUE_GRAPHICS_BIT)
+ if (demo->queue_props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT)
break;
}
assert(i < queue_count);
demo->graphics_queue_node_index = i;
- err = xglGetDeviceQueue(demo->device, demo->graphics_queue_node_index,
+ err = vkGetDeviceQueue(demo->device, demo->graphics_queue_node_index,
0, &demo->queue);
assert(!err);
}
}
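`demo_init_vk` above selects the first queue family whose `queueFlags` advertise graphics support. That scan is easy to isolate; the sketch below is a stand-alone illustration with an invented flag value and function name, not the real API's:

```c
#include <stdint.h>

/* Illustrative flag value -- stands in for the graphics queue bit. */
enum { QUEUE_GRAPHICS_BIT = 0x1 };

/* Mirrors the loop in demo_init_vk: return the index of the first
 * queue family that advertises graphics support, or -1 if none does. */
int find_graphics_queue(const uint32_t *queue_flags, uint32_t count)
{
    for (uint32_t i = 0; i < count; i++) {
        if (queue_flags[i] & QUEUE_GRAPHICS_BIT)
            return (int)i;
    }
    return -1;
}
```

The demo instead asserts that a match exists (`assert(i < queue_count)`); returning -1 lets a caller fail more gracefully.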
demo_init_connection(demo);
- demo_init_xgl(demo);
+ demo_init_vk(demo);
demo->width = 500;
demo->height = 500;
- demo->format = XGL_FMT_B8G8R8A8_UNORM;
+ demo->format = VK_FMT_B8G8R8A8_UNORM;
demo->spin_angle = 0.01f;
demo->spin_increment = 0.01f;
{
uint32_t i, j;
- xglDestroyObject(demo->desc_set);
- xglDestroyObject(demo->desc_pool);
+ vkDestroyObject(demo->desc_set);
+ vkDestroyObject(demo->desc_pool);
- xglDestroyObject(demo->viewport);
- xglDestroyObject(demo->raster);
- xglDestroyObject(demo->color_blend);
- xglDestroyObject(demo->depth_stencil);
+ vkDestroyObject(demo->viewport);
+ vkDestroyObject(demo->raster);
+ vkDestroyObject(demo->color_blend);
+ vkDestroyObject(demo->depth_stencil);
- xglDestroyObject(demo->pipeline);
- xglDestroyObject(demo->desc_layout_chain);
- xglDestroyObject(demo->desc_layout);
+ vkDestroyObject(demo->pipeline);
+ vkDestroyObject(demo->desc_layout_chain);
+ vkDestroyObject(demo->desc_layout);
for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
- xglDestroyObject(demo->textures[i].view);
- xglBindObjectMemory(demo->textures[i].image, 0, XGL_NULL_HANDLE, 0);
- xglDestroyObject(demo->textures[i].image);
+ vkDestroyObject(demo->textures[i].view);
+ vkBindObjectMemory(demo->textures[i].image, 0, VK_NULL_HANDLE, 0);
+ vkDestroyObject(demo->textures[i].image);
demo_remove_mem_refs(demo, demo->textures[i].num_mem, demo->textures[i].mem);
for (j = 0; j < demo->textures[i].num_mem; j++)
- xglFreeMemory(demo->textures[i].mem[j]);
- xglDestroyObject(demo->textures[i].sampler);
+ vkFreeMemory(demo->textures[i].mem[j]);
+ vkDestroyObject(demo->textures[i].sampler);
}
- xglDestroyObject(demo->depth.view);
- xglBindObjectMemory(demo->depth.image, 0, XGL_NULL_HANDLE, 0);
- xglDestroyObject(demo->depth.image);
+ vkDestroyObject(demo->depth.view);
+ vkBindObjectMemory(demo->depth.image, 0, VK_NULL_HANDLE, 0);
+ vkDestroyObject(demo->depth.image);
demo_remove_mem_refs(demo, demo->depth.num_mem, demo->depth.mem);
for (j = 0; j < demo->depth.num_mem; j++) {
- xglFreeMemory(demo->depth.mem[j]);
+ vkFreeMemory(demo->depth.mem[j]);
}
- xglDestroyObject(demo->uniform_data.view);
- xglBindObjectMemory(demo->uniform_data.buf, 0, XGL_NULL_HANDLE, 0);
- xglDestroyObject(demo->uniform_data.buf);
+ vkDestroyObject(demo->uniform_data.view);
+ vkBindObjectMemory(demo->uniform_data.buf, 0, VK_NULL_HANDLE, 0);
+ vkDestroyObject(demo->uniform_data.buf);
demo_remove_mem_refs(demo, demo->uniform_data.num_mem, demo->uniform_data.mem);
for (j = 0; j < demo->uniform_data.num_mem; j++)
- xglFreeMemory(demo->uniform_data.mem[j]);
+ vkFreeMemory(demo->uniform_data.mem[j]);
for (i = 0; i < DEMO_BUFFER_COUNT; i++) {
- xglDestroyObject(demo->buffers[i].fence);
- xglDestroyObject(demo->buffers[i].view);
- xglDestroyObject(demo->buffers[i].image);
- xglDestroyObject(demo->buffers[i].cmd);
+ vkDestroyObject(demo->buffers[i].fence);
+ vkDestroyObject(demo->buffers[i].view);
+ vkDestroyObject(demo->buffers[i].image);
+ vkDestroyObject(demo->buffers[i].cmd);
demo_remove_mem_refs(demo, 1, &demo->buffers[i].mem);
}
- xglDestroyDevice(demo->device);
- xglDestroyInstance(demo->inst);
+ vkDestroyDevice(demo->device);
+ vkDestroyInstance(demo->inst);
xcb_destroy_window(demo->connection, demo->window);
xcb_disconnect(demo->connection);
#include <assert.h>
#include <xcb/xcb.h>
-#include <xgl.h>
-#include <xglDbg.h>
-#include <xglWsiX11Ext.h>
+#include <vulkan.h>
+#include <vkDbg.h>
+#include <vkWsiX11Ext.h>
#include "icd-spv.h"
#define VERTEX_BUFFER_BIND_ID 0
struct texture_object {
- XGL_SAMPLER sampler;
+ VK_SAMPLER sampler;
- XGL_IMAGE image;
- XGL_IMAGE_LAYOUT imageLayout;
+ VK_IMAGE image;
+ VK_IMAGE_LAYOUT imageLayout;
uint32_t num_mem;
- XGL_GPU_MEMORY *mem;
- XGL_IMAGE_VIEW view;
+ VK_GPU_MEMORY *mem;
+ VK_IMAGE_VIEW view;
int32_t tex_width, tex_height;
};
xcb_connection_t *connection;
xcb_screen_t *screen;
- XGL_INSTANCE inst;
- XGL_PHYSICAL_GPU gpu;
- XGL_DEVICE device;
- XGL_QUEUE queue;
- XGL_PHYSICAL_GPU_PROPERTIES *gpu_props;
- XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
+ VK_INSTANCE inst;
+ VK_PHYSICAL_GPU gpu;
+ VK_DEVICE device;
+ VK_QUEUE queue;
+ VK_PHYSICAL_GPU_PROPERTIES *gpu_props;
+ VK_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
uint32_t graphics_queue_node_index;
int width, height;
- XGL_FORMAT format;
+ VK_FORMAT format;
struct {
- XGL_IMAGE image;
- XGL_GPU_MEMORY mem;
+ VK_IMAGE image;
+ VK_GPU_MEMORY mem;
- XGL_COLOR_ATTACHMENT_VIEW view;
- XGL_FENCE fence;
+ VK_COLOR_ATTACHMENT_VIEW view;
+ VK_FENCE fence;
} buffers[DEMO_BUFFER_COUNT];
struct {
- XGL_FORMAT format;
+ VK_FORMAT format;
- XGL_IMAGE image;
+ VK_IMAGE image;
uint32_t num_mem;
- XGL_GPU_MEMORY *mem;
- XGL_DEPTH_STENCIL_VIEW view;
+ VK_GPU_MEMORY *mem;
+ VK_DEPTH_STENCIL_VIEW view;
} depth;
struct texture_object textures[DEMO_TEXTURE_COUNT];
struct {
- XGL_BUFFER buf;
+ VK_BUFFER buf;
uint32_t num_mem;
- XGL_GPU_MEMORY *mem;
+ VK_GPU_MEMORY *mem;
- XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO vi;
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vi_bindings[1];
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attrs[2];
+ VK_PIPELINE_VERTEX_INPUT_CREATE_INFO vi;
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vi_bindings[1];
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attrs[2];
} vertices;
- XGL_CMD_BUFFER cmd; // Buffer for initialization commands
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN desc_layout_chain;
- XGL_DESCRIPTOR_SET_LAYOUT desc_layout;
- XGL_PIPELINE pipeline;
+ VK_CMD_BUFFER cmd; // Buffer for initialization commands
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN desc_layout_chain;
+ VK_DESCRIPTOR_SET_LAYOUT desc_layout;
+ VK_PIPELINE pipeline;
- XGL_DYNAMIC_VP_STATE_OBJECT viewport;
- XGL_DYNAMIC_RS_STATE_OBJECT raster;
- XGL_DYNAMIC_CB_STATE_OBJECT color_blend;
- XGL_DYNAMIC_DS_STATE_OBJECT depth_stencil;
+ VK_DYNAMIC_VP_STATE_OBJECT viewport;
+ VK_DYNAMIC_RS_STATE_OBJECT raster;
+ VK_DYNAMIC_CB_STATE_OBJECT color_blend;
+ VK_DYNAMIC_DS_STATE_OBJECT depth_stencil;
- XGL_DESCRIPTOR_POOL desc_pool;
- XGL_DESCRIPTOR_SET desc_set;
+ VK_DESCRIPTOR_POOL desc_pool;
+ VK_DESCRIPTOR_SET desc_set;
xcb_window_t window;
xcb_intern_atom_reply_t *atom_wm_delete_window;
static void demo_flush_init_cmd(struct demo *demo)
{
- XGL_RESULT err;
+ VK_RESULT err;
- if (demo->cmd == XGL_NULL_HANDLE)
+ if (demo->cmd == VK_NULL_HANDLE)
return;
- err = xglEndCommandBuffer(demo->cmd);
+ err = vkEndCommandBuffer(demo->cmd);
assert(!err);
- const XGL_CMD_BUFFER cmd_bufs[] = { demo->cmd };
+ const VK_CMD_BUFFER cmd_bufs[] = { demo->cmd };
- err = xglQueueSubmit(demo->queue, 1, cmd_bufs, XGL_NULL_HANDLE);
+ err = vkQueueSubmit(demo->queue, 1, cmd_bufs, VK_NULL_HANDLE);
assert(!err);
- err = xglQueueWaitIdle(demo->queue);
+ err = vkQueueWaitIdle(demo->queue);
assert(!err);
- xglDestroyObject(demo->cmd);
- demo->cmd = XGL_NULL_HANDLE;
+ vkDestroyObject(demo->cmd);
+ demo->cmd = VK_NULL_HANDLE;
}
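`demo_flush_init_cmd` implements a common one-shot pattern: end recording, submit, wait for the queue to drain, destroy the buffer, and null the handle so a second flush is a no-op. A self-contained sketch of that state machine — every type and function here is a stub invented for illustration:

```c
#include <stddef.h>

typedef struct { int recorded; } CmdBuffer;  /* stand-in for a command buffer handle */

CmdBuffer *g_cmd = NULL;     /* mirrors demo->cmd */
int g_submits = 0;           /* counts queue submissions, for inspection */

static void end_buffer(CmdBuffer *cb) { cb->recorded = 1; }
static void submit(CmdBuffer *cb)     { if (cb->recorded) g_submits++; }
static void wait_idle(void)           { /* would block until the GPU finishes */ }
static void destroy_buf(CmdBuffer *cb){ cb->recorded = 0; }

/* Same shape as demo_flush_init_cmd: safe to call with nothing pending. */
void flush_init_cmd(void)
{
    if (g_cmd == NULL)
        return;              /* nothing recorded since the last flush */
    end_buffer(g_cmd);
    submit(g_cmd);
    wait_idle();             /* the demo waits so the buffer can be destroyed */
    destroy_buf(g_cmd);
    g_cmd = NULL;            /* later calls become no-ops */
}
```

Waiting for idle before destroying is what makes the immediate `vkDestroyObject` safe; the handle reset is what makes repeated flushes harmless.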
static void demo_add_mem_refs(
struct demo *demo,
- int num_refs, XGL_GPU_MEMORY *mem)
+ int num_refs, VK_GPU_MEMORY *mem)
{
for (int i = 0; i < num_refs; i++) {
- xglQueueAddMemReference(demo->queue, mem[i]);
+ vkQueueAddMemReference(demo->queue, mem[i]);
}
}
static void demo_remove_mem_refs(
struct demo *demo,
- int num_refs, XGL_GPU_MEMORY *mem)
+ int num_refs, VK_GPU_MEMORY *mem)
{
for (int i = 0; i < num_refs; i++) {
- xglQueueRemoveMemReference(demo->queue, mem[i]);
+ vkQueueRemoveMemReference(demo->queue, mem[i]);
}
}
static void demo_set_image_layout(
struct demo *demo,
- XGL_IMAGE image,
- XGL_IMAGE_LAYOUT old_image_layout,
- XGL_IMAGE_LAYOUT new_image_layout)
+ VK_IMAGE image,
+ VK_IMAGE_LAYOUT old_image_layout,
+ VK_IMAGE_LAYOUT new_image_layout)
{
- XGL_RESULT err;
+ VK_RESULT err;
- if (demo->cmd == XGL_NULL_HANDLE) {
- const XGL_CMD_BUFFER_CREATE_INFO cmd = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
+ if (demo->cmd == VK_NULL_HANDLE) {
+ const VK_CMD_BUFFER_CREATE_INFO cmd = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
.pNext = NULL,
.queueNodeIndex = demo->graphics_queue_node_index,
.flags = 0,
};
- err = xglCreateCommandBuffer(demo->device, &cmd, &demo->cmd);
+ err = vkCreateCommandBuffer(demo->device, &cmd, &demo->cmd);
assert(!err);
- XGL_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
+ VK_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
.pNext = NULL,
- .flags = XGL_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
- XGL_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
+ .flags = VK_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
+ VK_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
};
- err = xglBeginCommandBuffer(demo->cmd, &cmd_buf_info);
+ err = vkBeginCommandBuffer(demo->cmd, &cmd_buf_info);
assert(!err);
}
- XGL_IMAGE_MEMORY_BARRIER image_memory_barrier = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
+ VK_IMAGE_MEMORY_BARRIER image_memory_barrier = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
.pNext = NULL,
.outputMask = 0,
.inputMask = 0,
.oldLayout = old_image_layout,
.newLayout = new_image_layout,
.image = image,
- .subresourceRange = { XGL_IMAGE_ASPECT_COLOR, 0, 1, 0, 0 }
+ .subresourceRange = { VK_IMAGE_ASPECT_COLOR, 0, 1, 0, 0 }
};
- if (new_image_layout == XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL) {
+ if (new_image_layout == VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL) {
/* Make sure anything that was copying from this image has completed */
- image_memory_barrier.inputMask = XGL_MEMORY_INPUT_COPY_BIT;
+ image_memory_barrier.inputMask = VK_MEMORY_INPUT_COPY_BIT;
}
- if (new_image_layout == XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
+ if (new_image_layout == VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
/* Make sure any Copy or CPU writes to image are flushed */
- image_memory_barrier.outputMask = XGL_MEMORY_OUTPUT_COPY_BIT | XGL_MEMORY_OUTPUT_CPU_WRITE_BIT;
+ image_memory_barrier.outputMask = VK_MEMORY_OUTPUT_COPY_BIT | VK_MEMORY_OUTPUT_CPU_WRITE_BIT;
}
- XGL_IMAGE_MEMORY_BARRIER *pmemory_barrier = &image_memory_barrier;
+ VK_IMAGE_MEMORY_BARRIER *pmemory_barrier = &image_memory_barrier;
- XGL_PIPE_EVENT set_events[] = { XGL_PIPE_EVENT_TOP_OF_PIPE };
+ VK_PIPE_EVENT set_events[] = { VK_PIPE_EVENT_TOP_OF_PIPE };
- XGL_PIPELINE_BARRIER pipeline_barrier;
- pipeline_barrier.sType = XGL_STRUCTURE_TYPE_PIPELINE_BARRIER;
+ VK_PIPELINE_BARRIER pipeline_barrier;
+ pipeline_barrier.sType = VK_STRUCTURE_TYPE_PIPELINE_BARRIER;
pipeline_barrier.pNext = NULL;
pipeline_barrier.eventCount = 1;
pipeline_barrier.pEvents = set_events;
- pipeline_barrier.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ pipeline_barrier.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
pipeline_barrier.memBarrierCount = 1;
pipeline_barrier.ppMemBarriers = (const void **)&pmemory_barrier;
- xglCmdPipelineBarrier(demo->cmd, &pipeline_barrier);
+ vkCmdPipelineBarrier(demo->cmd, &pipeline_barrier);
}
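`demo_set_image_layout` picks the barrier's masks from the destination layout: a transition into a copy destination waits on prior copies from the image (inputMask), while a transition into shader-read flushes pending copy and CPU writes (outputMask). The decision table can be sketched on its own; the enum and bit values below are invented for the example, not the real API's:

```c
/* Invented stand-ins for the image layouts used by the demo. */
typedef enum {
    LAYOUT_GENERAL,
    LAYOUT_TRANSFER_DST,       /* ~ TRANSFER_DESTINATION_OPTIMAL */
    LAYOUT_SHADER_READ         /* ~ SHADER_READ_ONLY_OPTIMAL */
} Layout;

enum {
    MEM_INPUT_COPY       = 0x1,  /* reads by the copy engine */
    MEM_OUTPUT_COPY      = 0x1,  /* writes by the copy engine */
    MEM_OUTPUT_CPU_WRITE = 0x2   /* host writes to the image */
};

typedef struct { unsigned inputMask, outputMask; } BarrierMasks;

/* Mirrors the two if-blocks in demo_set_image_layout. */
BarrierMasks masks_for(Layout new_layout)
{
    BarrierMasks b = { 0, 0 };
    if (new_layout == LAYOUT_TRANSFER_DST)
        b.inputMask = MEM_INPUT_COPY;     /* finish prior copies from the image */
    if (new_layout == LAYOUT_SHADER_READ)
        b.outputMask = MEM_OUTPUT_COPY | MEM_OUTPUT_CPU_WRITE; /* flush writes before sampling */
    return b;
}
```

The rest of the function (command buffer creation, the barrier struct, `vkCmdPipelineBarrier`) is just plumbing around this small decision.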
static void demo_draw_build_cmd(struct demo *demo)
{
- const XGL_COLOR_ATTACHMENT_BIND_INFO color_attachment = {
+ const VK_COLOR_ATTACHMENT_BIND_INFO color_attachment = {
.view = demo->buffers[demo->current_buffer].view,
- .layout = XGL_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
+ .layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
};
- const XGL_DEPTH_STENCIL_BIND_INFO depth_stencil = {
+ const VK_DEPTH_STENCIL_BIND_INFO depth_stencil = {
.view = demo->depth.view,
- .layout = XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
+ .layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
};
- const XGL_CLEAR_COLOR clear_color = {
+ const VK_CLEAR_COLOR clear_color = {
.color.floatColor = { 0.2f, 0.2f, 0.2f, 0.2f },
.useRawValue = false,
};
const float clear_depth = 0.9f;
- XGL_IMAGE_SUBRESOURCE_RANGE clear_range;
- XGL_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
+ VK_IMAGE_SUBRESOURCE_RANGE clear_range;
+ VK_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
.pNext = NULL,
- .flags = XGL_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
- XGL_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
+ .flags = VK_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
+ VK_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
};
- XGL_RESULT err;
- XGL_ATTACHMENT_LOAD_OP load_op = XGL_ATTACHMENT_LOAD_OP_DONT_CARE;
- XGL_ATTACHMENT_STORE_OP store_op = XGL_ATTACHMENT_STORE_OP_DONT_CARE;
- const XGL_FRAMEBUFFER_CREATE_INFO fb_info = {
- .sType = XGL_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
+ VK_RESULT err;
+ VK_ATTACHMENT_LOAD_OP load_op = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
+ VK_ATTACHMENT_STORE_OP store_op = VK_ATTACHMENT_STORE_OP_DONT_CARE;
+ const VK_FRAMEBUFFER_CREATE_INFO fb_info = {
+ .sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
.pNext = NULL,
.colorAttachmentCount = 1,
- .pColorAttachments = (XGL_COLOR_ATTACHMENT_BIND_INFO*) &color_attachment,
- .pDepthStencilAttachment = (XGL_DEPTH_STENCIL_BIND_INFO*) &depth_stencil,
+ .pColorAttachments = (VK_COLOR_ATTACHMENT_BIND_INFO*) &color_attachment,
+ .pDepthStencilAttachment = (VK_DEPTH_STENCIL_BIND_INFO*) &depth_stencil,
.sampleCount = 1,
.width = demo->width,
.height = demo->height,
.layers = 1,
};
- XGL_RENDER_PASS_CREATE_INFO rp_info;
- XGL_RENDER_PASS_BEGIN rp_begin;
+ VK_RENDER_PASS_CREATE_INFO rp_info;
+ VK_RENDER_PASS_BEGIN rp_begin;
memset(&rp_info, 0 , sizeof(rp_info));
- err = xglCreateFramebuffer(demo->device, &fb_info, &rp_begin.framebuffer);
+ err = vkCreateFramebuffer(demo->device, &fb_info, &rp_begin.framebuffer);
assert(!err);
- rp_info.sType = XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
+ rp_info.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
rp_info.renderArea.extent.width = demo->width;
rp_info.renderArea.extent.height = demo->height;
rp_info.colorAttachmentCount = fb_info.colorAttachmentCount;
rp_info.pColorLoadOps = &load_op;
rp_info.pColorStoreOps = &store_op;
rp_info.pColorLoadClearValues = &clear_color;
- rp_info.depthStencilFormat = XGL_FMT_D16_UNORM;
+ rp_info.depthStencilFormat = VK_FMT_D16_UNORM;
rp_info.depthStencilLayout = depth_stencil.layout;
- rp_info.depthLoadOp = XGL_ATTACHMENT_LOAD_OP_DONT_CARE;
+ rp_info.depthLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
rp_info.depthLoadClearValue = clear_depth;
- rp_info.depthStoreOp = XGL_ATTACHMENT_STORE_OP_DONT_CARE;
- rp_info.stencilLoadOp = XGL_ATTACHMENT_LOAD_OP_DONT_CARE;
+ rp_info.depthStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
+ rp_info.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
rp_info.stencilLoadClearValue = 0;
- rp_info.stencilStoreOp = XGL_ATTACHMENT_STORE_OP_DONT_CARE;
- err = xglCreateRenderPass(demo->device, &rp_info, &(rp_begin.renderPass));
+ rp_info.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
+ err = vkCreateRenderPass(demo->device, &rp_info, &(rp_begin.renderPass));
assert(!err);
- err = xglBeginCommandBuffer(demo->cmd, &cmd_buf_info);
+ err = vkBeginCommandBuffer(demo->cmd, &cmd_buf_info);
assert(!err);
- xglCmdBindPipeline(demo->cmd, XGL_PIPELINE_BIND_POINT_GRAPHICS,
+ vkCmdBindPipeline(demo->cmd, VK_PIPELINE_BIND_POINT_GRAPHICS,
demo->pipeline);
- xglCmdBindDescriptorSets(demo->cmd, XGL_PIPELINE_BIND_POINT_GRAPHICS,
+ vkCmdBindDescriptorSets(demo->cmd, VK_PIPELINE_BIND_POINT_GRAPHICS,
demo->desc_layout_chain, 0, 1, & demo->desc_set, NULL);
- xglCmdBindDynamicStateObject(demo->cmd, XGL_STATE_BIND_VIEWPORT, demo->viewport);
- xglCmdBindDynamicStateObject(demo->cmd, XGL_STATE_BIND_RASTER, demo->raster);
- xglCmdBindDynamicStateObject(demo->cmd, XGL_STATE_BIND_COLOR_BLEND,
+ vkCmdBindDynamicStateObject(demo->cmd, VK_STATE_BIND_VIEWPORT, demo->viewport);
+ vkCmdBindDynamicStateObject(demo->cmd, VK_STATE_BIND_RASTER, demo->raster);
+ vkCmdBindDynamicStateObject(demo->cmd, VK_STATE_BIND_COLOR_BLEND,
demo->color_blend);
- xglCmdBindDynamicStateObject(demo->cmd, XGL_STATE_BIND_DEPTH_STENCIL,
+ vkCmdBindDynamicStateObject(demo->cmd, VK_STATE_BIND_DEPTH_STENCIL,
demo->depth_stencil);
- xglCmdBindVertexBuffer(demo->cmd, demo->vertices.buf, 0, VERTEX_BUFFER_BIND_ID);
+ vkCmdBindVertexBuffer(demo->cmd, demo->vertices.buf, 0, VERTEX_BUFFER_BIND_ID);
- xglCmdBeginRenderPass(demo->cmd, &rp_begin);
- clear_range.aspect = XGL_IMAGE_ASPECT_COLOR;
+ vkCmdBeginRenderPass(demo->cmd, &rp_begin);
+ clear_range.aspect = VK_IMAGE_ASPECT_COLOR;
clear_range.baseMipLevel = 0;
clear_range.mipLevels = 1;
clear_range.baseArraySlice = 0;
clear_range.arraySize = 1;
- xglCmdClearColorImage(demo->cmd,
+ vkCmdClearColorImage(demo->cmd,
demo->buffers[demo->current_buffer].image,
- XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
clear_color, 1, &clear_range);
- clear_range.aspect = XGL_IMAGE_ASPECT_DEPTH;
- xglCmdClearDepthStencil(demo->cmd,
- demo->depth.image, XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ clear_range.aspect = VK_IMAGE_ASPECT_DEPTH;
+ vkCmdClearDepthStencil(demo->cmd,
+ demo->depth.image, VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
clear_depth, 0, 1, &clear_range);
- xglCmdDraw(demo->cmd, 0, 3, 0, 1);
- xglCmdEndRenderPass(demo->cmd, rp_begin.renderPass);
+ vkCmdDraw(demo->cmd, 0, 3, 0, 1);
+ vkCmdEndRenderPass(demo->cmd, rp_begin.renderPass);
- err = xglEndCommandBuffer(demo->cmd);
+ err = vkEndCommandBuffer(demo->cmd);
assert(!err);
- xglDestroyObject(rp_begin.renderPass);
- xglDestroyObject(rp_begin.framebuffer);
+ vkDestroyObject(rp_begin.renderPass);
+ vkDestroyObject(rp_begin.framebuffer);
}
static void demo_draw(struct demo *demo)
{
- const XGL_WSI_X11_PRESENT_INFO present = {
+ const VK_WSI_X11_PRESENT_INFO present = {
.destWindow = demo->window,
.srcImage = demo->buffers[demo->current_buffer].image,
};
- XGL_FENCE fence = demo->buffers[demo->current_buffer].fence;
- XGL_RESULT err;
+ VK_FENCE fence = demo->buffers[demo->current_buffer].fence;
+ VK_RESULT err;
demo_draw_build_cmd(demo);
- err = xglWaitForFences(demo->device, 1, &fence, XGL_TRUE, ~((uint64_t) 0));
- assert(err == XGL_SUCCESS || err == XGL_ERROR_UNAVAILABLE);
+ err = vkWaitForFences(demo->device, 1, &fence, VK_TRUE, ~((uint64_t) 0));
+ assert(err == VK_SUCCESS || err == VK_ERROR_UNAVAILABLE);
- err = xglQueueSubmit(demo->queue, 1, &demo->cmd, XGL_NULL_HANDLE);
+ err = vkQueueSubmit(demo->queue, 1, &demo->cmd, VK_NULL_HANDLE);
assert(!err);
- err = xglWsiX11QueuePresent(demo->queue, &present, fence);
+ err = vkWsiX11QueuePresent(demo->queue, &present, fence);
assert(!err);
demo->current_buffer = (demo->current_buffer + 1) % DEMO_BUFFER_COUNT;
static void demo_prepare_buffers(struct demo *demo)
{
- const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO presentable_image = {
+ const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO presentable_image = {
.format = demo->format,
- .usage = XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
+ .usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
.extent = {
.width = demo->width,
.height = demo->height,
},
.flags = 0,
};
- const XGL_FENCE_CREATE_INFO fence = {
- .sType = XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO,
+ const VK_FENCE_CREATE_INFO fence = {
+ .sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,
.pNext = NULL,
.flags = 0,
};
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t i;
for (i = 0; i < DEMO_BUFFER_COUNT; i++) {
- XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO color_attachment_view = {
- .sType = XGL_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
+ VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO color_attachment_view = {
+ .sType = VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
.pNext = NULL,
.format = demo->format,
.mipLevel = 0,
.arraySize = 1,
};
- err = xglWsiX11CreatePresentableImage(demo->device, &presentable_image,
+ err = vkWsiX11CreatePresentableImage(demo->device, &presentable_image,
&demo->buffers[i].image, &demo->buffers[i].mem);
assert(!err);
-
demo_add_mem_refs(demo, 1, &demo->buffers[i].mem);
demo_set_image_layout(demo, demo->buffers[i].image,
- XGL_IMAGE_LAYOUT_UNDEFINED,
- XGL_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL);
+ VK_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL);
color_attachment_view.image = demo->buffers[i].image;
- err = xglCreateColorAttachmentView(demo->device,
+ err = vkCreateColorAttachmentView(demo->device,
&color_attachment_view, &demo->buffers[i].view);
assert(!err);
- err = xglCreateFence(demo->device,
+ err = vkCreateFence(demo->device,
&fence, &demo->buffers[i].fence);
assert(!err);
}
static void demo_prepare_depth(struct demo *demo)
{
- const XGL_FORMAT depth_format = XGL_FMT_D16_UNORM;
- const XGL_IMAGE_CREATE_INFO image = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
+ const VK_FORMAT depth_format = VK_FMT_D16_UNORM;
+ const VK_IMAGE_CREATE_INFO image = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
.pNext = NULL,
- .imageType = XGL_IMAGE_2D,
+ .imageType = VK_IMAGE_2D,
.format = depth_format,
.extent = { demo->width, demo->height, 1 },
.mipLevels = 1,
.arraySize = 1,
.samples = 1,
- .tiling = XGL_OPTIMAL_TILING,
- .usage = XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT,
+ .tiling = VK_OPTIMAL_TILING,
+ .usage = VK_IMAGE_USAGE_DEPTH_STENCIL_BIT,
.flags = 0,
};
- XGL_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
+ VK_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
.pNext = NULL,
};
- XGL_MEMORY_ALLOC_INFO mem_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
+ VK_MEMORY_ALLOC_INFO mem_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
.pNext = &img_alloc,
.allocationSize = 0,
- .memProps = XGL_MEMORY_PROPERTY_GPU_ONLY,
- .memType = XGL_MEMORY_TYPE_IMAGE,
- .memPriority = XGL_MEMORY_PRIORITY_NORMAL,
+ .memProps = VK_MEMORY_PROPERTY_GPU_ONLY,
+ .memType = VK_MEMORY_TYPE_IMAGE,
+ .memPriority = VK_MEMORY_PRIORITY_NORMAL,
};
- XGL_DEPTH_STENCIL_VIEW_CREATE_INFO view = {
- .sType = XGL_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO,
+ VK_DEPTH_STENCIL_VIEW_CREATE_INFO view = {
+ .sType = VK_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO,
.pNext = NULL,
- .image = XGL_NULL_HANDLE,
+ .image = VK_NULL_HANDLE,
.mipLevel = 0,
.baseArraySlice = 0,
.arraySize = 1,
.flags = 0,
};
- XGL_MEMORY_REQUIREMENTS *mem_reqs;
- size_t mem_reqs_size = sizeof(XGL_MEMORY_REQUIREMENTS);
- XGL_IMAGE_MEMORY_REQUIREMENTS img_reqs;
- size_t img_reqs_size = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
- XGL_RESULT err;
+ VK_MEMORY_REQUIREMENTS *mem_reqs;
+ size_t mem_reqs_size = sizeof(VK_MEMORY_REQUIREMENTS);
+ VK_IMAGE_MEMORY_REQUIREMENTS img_reqs;
+ size_t img_reqs_size = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
+ VK_RESULT err;
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
demo->depth.format = depth_format;
/* create image */
- err = xglCreateImage(demo->device, &image,
+ err = vkCreateImage(demo->device, &image,
&demo->depth.image);
assert(!err);
- err = xglGetObjectInfo(demo->depth.image, XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT, &num_alloc_size, &num_allocations);
+ err = vkGetObjectInfo(demo->depth.image, VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT, &num_alloc_size, &num_allocations);
assert(!err && num_alloc_size == sizeof(num_allocations));
- mem_reqs = malloc(num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- demo->depth.mem = malloc(num_allocations * sizeof(XGL_GPU_MEMORY));
+ mem_reqs = malloc(num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ demo->depth.mem = malloc(num_allocations * sizeof(VK_GPU_MEMORY));
demo->depth.num_mem = num_allocations;
- err = xglGetObjectInfo(demo->depth.image,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(demo->depth.image,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&mem_reqs_size, mem_reqs);
- assert(!err && mem_reqs_size == num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- err = xglGetObjectInfo(demo->depth.image,
- XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
+ assert(!err && mem_reqs_size == num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ err = vkGetObjectInfo(demo->depth.image,
+ VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
&img_reqs_size, &img_reqs);
- assert(!err && img_reqs_size == sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS));
+ assert(!err && img_reqs_size == sizeof(VK_IMAGE_MEMORY_REQUIREMENTS));
img_alloc.usage = img_reqs.usage;
img_alloc.formatClass = img_reqs.formatClass;
img_alloc.samples = img_reqs.samples;
for (uint32_t i = 0; i < num_allocations; i ++) {
mem_alloc.allocationSize = mem_reqs[i].size;
/* allocate memory */
- err = xglAllocMemory(demo->device, &mem_alloc,
+ err = vkAllocMemory(demo->device, &mem_alloc,
&(demo->depth.mem[i]));
assert(!err);
/* bind memory */
- err = xglBindObjectMemory(demo->depth.image, i,
+ err = vkBindObjectMemory(demo->depth.image, i,
demo->depth.mem[i], 0);
assert(!err);
}
demo_set_image_layout(demo, demo->depth.image,
- XGL_IMAGE_LAYOUT_UNDEFINED,
- XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
+ VK_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
demo_add_mem_refs(demo, demo->depth.num_mem, demo->depth.mem);
/* create image view */
view.image = demo->depth.image;
- err = xglCreateDepthStencilView(demo->device, &view,
+ err = vkCreateDepthStencilView(demo->device, &view,
&demo->depth.view);
assert(!err);
}
static void demo_prepare_texture_image(struct demo *demo,
const uint32_t *tex_colors,
struct texture_object *tex_obj,
- XGL_IMAGE_TILING tiling,
- XGL_FLAGS mem_props)
+ VK_IMAGE_TILING tiling,
+ VK_FLAGS mem_props)
{
- const XGL_FORMAT tex_format = XGL_FMT_B8G8R8A8_UNORM;
+ const VK_FORMAT tex_format = VK_FMT_B8G8R8A8_UNORM;
const int32_t tex_width = 2;
const int32_t tex_height = 2;
- XGL_RESULT err;
+ VK_RESULT err;
tex_obj->tex_width = tex_width;
tex_obj->tex_height = tex_height;
- const XGL_IMAGE_CREATE_INFO image_create_info = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
+ const VK_IMAGE_CREATE_INFO image_create_info = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
.pNext = NULL,
- .imageType = XGL_IMAGE_2D,
+ .imageType = VK_IMAGE_2D,
.format = tex_format,
.extent = { tex_width, tex_height, 1 },
.mipLevels = 1,
.arraySize = 1,
.samples = 1,
.tiling = tiling,
- .usage = XGL_IMAGE_USAGE_TRANSFER_SOURCE_BIT,
+ .usage = VK_IMAGE_USAGE_TRANSFER_SOURCE_BIT,
.flags = 0,
};
- XGL_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
+ VK_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
.pNext = NULL,
};
- XGL_MEMORY_ALLOC_INFO mem_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
+ VK_MEMORY_ALLOC_INFO mem_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
.pNext = &img_alloc,
.allocationSize = 0,
.memProps = mem_props,
- .memType = XGL_MEMORY_TYPE_IMAGE,
- .memPriority = XGL_MEMORY_PRIORITY_NORMAL,
+ .memType = VK_MEMORY_TYPE_IMAGE,
+ .memPriority = VK_MEMORY_PRIORITY_NORMAL,
};
- XGL_MEMORY_REQUIREMENTS *mem_reqs;
- size_t mem_reqs_size = sizeof(XGL_MEMORY_REQUIREMENTS);
- XGL_IMAGE_MEMORY_REQUIREMENTS img_reqs;
- size_t img_reqs_size = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
+ VK_MEMORY_REQUIREMENTS *mem_reqs;
+ size_t mem_reqs_size = sizeof(VK_MEMORY_REQUIREMENTS);
+ VK_IMAGE_MEMORY_REQUIREMENTS img_reqs;
+ size_t img_reqs_size = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
- err = xglCreateImage(demo->device, &image_create_info,
+ err = vkCreateImage(demo->device, &image_create_info,
&tex_obj->image);
assert(!err);
- err = xglGetObjectInfo(tex_obj->image,
- XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
+ err = vkGetObjectInfo(tex_obj->image,
+ VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
&num_alloc_size, &num_allocations);
assert(!err && num_alloc_size == sizeof(num_allocations));
- mem_reqs = malloc(num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- tex_obj->mem = malloc(num_allocations * sizeof(XGL_GPU_MEMORY));
- err = xglGetObjectInfo(tex_obj->image,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ mem_reqs = malloc(num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ tex_obj->mem = malloc(num_allocations * sizeof(VK_GPU_MEMORY));
+ err = vkGetObjectInfo(tex_obj->image,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&mem_reqs_size, mem_reqs);
- assert(!err && mem_reqs_size == num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- err = xglGetObjectInfo(tex_obj->image,
- XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
+ assert(!err && mem_reqs_size == num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ err = vkGetObjectInfo(tex_obj->image,
+ VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
&img_reqs_size, &img_reqs);
- assert(!err && img_reqs_size == sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS));
+ assert(!err && img_reqs_size == sizeof(VK_IMAGE_MEMORY_REQUIREMENTS));
img_alloc.usage = img_reqs.usage;
img_alloc.formatClass = img_reqs.formatClass;
img_alloc.samples = img_reqs.samples;
- mem_alloc.memProps = XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT;
+ mem_alloc.memProps = VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT;
for (uint32_t j = 0; j < num_allocations; j ++) {
mem_alloc.allocationSize = mem_reqs[j].size;
mem_alloc.memType = mem_reqs[j].memType;
/* allocate memory */
- err = xglAllocMemory(demo->device, &mem_alloc,
+ err = vkAllocMemory(demo->device, &mem_alloc,
&(tex_obj->mem[j]));
assert(!err);
/* bind memory */
- err = xglBindObjectMemory(tex_obj->image, j, tex_obj->mem[j], 0);
+ err = vkBindObjectMemory(tex_obj->image, j, tex_obj->mem[j], 0);
assert(!err);
}
free(mem_reqs);
tex_obj->num_mem = num_allocations;
- if (mem_props & XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT) {
- const XGL_IMAGE_SUBRESOURCE subres = {
- .aspect = XGL_IMAGE_ASPECT_COLOR,
+ if (mem_props & VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT) {
+ const VK_IMAGE_SUBRESOURCE subres = {
+ .aspect = VK_IMAGE_ASPECT_COLOR,
.mipLevel = 0,
.arraySlice = 0,
};
- XGL_SUBRESOURCE_LAYOUT layout;
- size_t layout_size = sizeof(XGL_SUBRESOURCE_LAYOUT);
+ VK_SUBRESOURCE_LAYOUT layout;
+ size_t layout_size = sizeof(VK_SUBRESOURCE_LAYOUT);
void *data;
int32_t x, y;
- err = xglGetImageSubresourceInfo(tex_obj->image, &subres,
- XGL_INFO_TYPE_SUBRESOURCE_LAYOUT,
+ err = vkGetImageSubresourceInfo(tex_obj->image, &subres,
+ VK_INFO_TYPE_SUBRESOURCE_LAYOUT,
&layout_size, &layout);
assert(!err && layout_size == sizeof(layout));
/* Linear texture must be within a single memory object */
assert(num_allocations == 1);
- err = xglMapMemory(tex_obj->mem[0], 0, &data);
+ err = vkMapMemory(tex_obj->mem[0], 0, &data);
assert(!err);
for (y = 0; y < tex_height; y++) {
uint32_t *row = (uint32_t *) ((char *) data + layout.rowPitch * y);
for (x = 0; x < tex_width; x++)
row[x] = tex_colors[(x & 1) ^ (y & 1)];
}
- err = xglUnmapMemory(tex_obj->mem[0]);
+ err = vkUnmapMemory(tex_obj->mem[0]);
assert(!err);
}
- tex_obj->imageLayout = XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
+ tex_obj->imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
demo_set_image_layout(demo, tex_obj->image,
- XGL_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_UNDEFINED,
tex_obj->imageLayout);
/* setting the image layout does not reference the actual memory so no need to add a mem ref */
}
static void demo_destroy_texture_image(struct texture_object *tex_obj)
{
/* clean up staging resources */
for (uint32_t j = 0; j < tex_obj->num_mem; j ++) {
- xglBindObjectMemory(tex_obj->image, j, XGL_NULL_HANDLE, 0);
- xglFreeMemory(tex_obj->mem[j]);
+ vkBindObjectMemory(tex_obj->image, j, VK_NULL_HANDLE, 0);
+ vkFreeMemory(tex_obj->mem[j]);
}
free(tex_obj->mem);
- xglDestroyObject(tex_obj->image);
+ vkDestroyObject(tex_obj->image);
}
static void demo_prepare_textures(struct demo *demo)
{
- const XGL_FORMAT tex_format = XGL_FMT_B8G8R8A8_UNORM;
- XGL_FORMAT_PROPERTIES props;
+ const VK_FORMAT tex_format = VK_FMT_B8G8R8A8_UNORM;
+ VK_FORMAT_PROPERTIES props;
size_t size = sizeof(props);
const uint32_t tex_colors[DEMO_TEXTURE_COUNT][2] = {
{ 0xffff0000, 0xff00ff00 },
};
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t i;
- err = xglGetFormatInfo(demo->device, tex_format,
- XGL_INFO_TYPE_FORMAT_PROPERTIES,
+ err = vkGetFormatInfo(demo->device, tex_format,
+ VK_INFO_TYPE_FORMAT_PROPERTIES,
&size, &props);
assert(!err);
for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
- if ((props.linearTilingFeatures & XGL_FORMAT_IMAGE_SHADER_READ_BIT) && !demo->use_staging_buffer) {
+ if ((props.linearTilingFeatures & VK_FORMAT_IMAGE_SHADER_READ_BIT) && !demo->use_staging_buffer) {
/* Device can texture using linear textures */
demo_prepare_texture_image(demo, tex_colors[i], &demo->textures[i],
- XGL_LINEAR_TILING, XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
- } else if (props.optimalTilingFeatures & XGL_FORMAT_IMAGE_SHADER_READ_BIT){
+ VK_LINEAR_TILING, VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
+ } else if (props.optimalTilingFeatures & VK_FORMAT_IMAGE_SHADER_READ_BIT){
/* Must use staging buffer to copy linear texture to optimized */
struct texture_object staging_texture;
memset(&staging_texture, 0, sizeof(staging_texture));
demo_prepare_texture_image(demo, tex_colors[i], &staging_texture,
- XGL_LINEAR_TILING, XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
+ VK_LINEAR_TILING, VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
demo_prepare_texture_image(demo, tex_colors[i], &demo->textures[i],
- XGL_OPTIMAL_TILING, XGL_MEMORY_PROPERTY_GPU_ONLY);
+ VK_OPTIMAL_TILING, VK_MEMORY_PROPERTY_GPU_ONLY);
demo_set_image_layout(demo, staging_texture.image,
staging_texture.imageLayout,
- XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL);
+ VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL);
demo_set_image_layout(demo, demo->textures[i].image,
demo->textures[i].imageLayout,
- XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL);
+ VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL);
- XGL_IMAGE_COPY copy_region = {
- .srcSubresource = { XGL_IMAGE_ASPECT_COLOR, 0, 0 },
+ VK_IMAGE_COPY copy_region = {
+ .srcSubresource = { VK_IMAGE_ASPECT_COLOR, 0, 0 },
.srcOffset = { 0, 0, 0 },
- .destSubresource = { XGL_IMAGE_ASPECT_COLOR, 0, 0 },
+ .destSubresource = { VK_IMAGE_ASPECT_COLOR, 0, 0 },
.destOffset = { 0, 0, 0 },
.extent = { staging_texture.tex_width, staging_texture.tex_height, 1 },
};
- xglCmdCopyImage(demo->cmd,
- staging_texture.image, XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL,
- demo->textures[i].image, XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
+ vkCmdCopyImage(demo->cmd,
+ staging_texture.image, VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL,
+ demo->textures[i].image, VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
1, &copy_region);
demo_add_mem_refs(demo, staging_texture.num_mem, staging_texture.mem);
demo_add_mem_refs(demo, demo->textures[i].num_mem, demo->textures[i].mem);
demo_set_image_layout(demo, demo->textures[i].image,
- XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
+ VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
demo->textures[i].imageLayout);
demo_flush_init_cmd(demo);
demo_destroy_texture_image(&staging_texture);
demo_remove_mem_refs(demo, staging_texture.num_mem, staging_texture.mem);
} else {
- /* Can't support XGL_FMT_B8G8R8A8_UNORM !? */
+ /* Can't support VK_FMT_B8G8R8A8_UNORM !? */
assert(!"No support for B8G8R8A8_UNORM as texture image format");
}
- const XGL_SAMPLER_CREATE_INFO sampler = {
- .sType = XGL_STRUCTURE_TYPE_SAMPLER_CREATE_INFO,
+ const VK_SAMPLER_CREATE_INFO sampler = {
+ .sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO,
.pNext = NULL,
- .magFilter = XGL_TEX_FILTER_NEAREST,
- .minFilter = XGL_TEX_FILTER_NEAREST,
- .mipMode = XGL_TEX_MIPMAP_BASE,
- .addressU = XGL_TEX_ADDRESS_WRAP,
- .addressV = XGL_TEX_ADDRESS_WRAP,
- .addressW = XGL_TEX_ADDRESS_WRAP,
+ .magFilter = VK_TEX_FILTER_NEAREST,
+ .minFilter = VK_TEX_FILTER_NEAREST,
+ .mipMode = VK_TEX_MIPMAP_BASE,
+ .addressU = VK_TEX_ADDRESS_WRAP,
+ .addressV = VK_TEX_ADDRESS_WRAP,
+ .addressW = VK_TEX_ADDRESS_WRAP,
.mipLodBias = 0.0f,
.maxAnisotropy = 1,
- .compareFunc = XGL_COMPARE_NEVER,
+ .compareFunc = VK_COMPARE_NEVER,
.minLod = 0.0f,
.maxLod = 0.0f,
- .borderColorType = XGL_BORDER_COLOR_OPAQUE_WHITE,
+ .borderColorType = VK_BORDER_COLOR_OPAQUE_WHITE,
};
- XGL_IMAGE_VIEW_CREATE_INFO view = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
+ VK_IMAGE_VIEW_CREATE_INFO view = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
.pNext = NULL,
- .image = XGL_NULL_HANDLE,
- .viewType = XGL_IMAGE_VIEW_2D,
+ .image = VK_NULL_HANDLE,
+ .viewType = VK_IMAGE_VIEW_2D,
.format = tex_format,
- .channels = { XGL_CHANNEL_SWIZZLE_R,
- XGL_CHANNEL_SWIZZLE_G,
- XGL_CHANNEL_SWIZZLE_B,
- XGL_CHANNEL_SWIZZLE_A, },
- .subresourceRange = { XGL_IMAGE_ASPECT_COLOR, 0, 1, 0, 1 },
+ .channels = { VK_CHANNEL_SWIZZLE_R,
+ VK_CHANNEL_SWIZZLE_G,
+ VK_CHANNEL_SWIZZLE_B,
+ VK_CHANNEL_SWIZZLE_A, },
+ .subresourceRange = { VK_IMAGE_ASPECT_COLOR, 0, 1, 0, 1 },
.minLod = 0.0f,
};
/* create sampler */
- err = xglCreateSampler(demo->device, &sampler,
+ err = vkCreateSampler(demo->device, &sampler,
&demo->textures[i].sampler);
assert(!err);
/* create image view */
view.image = demo->textures[i].image;
- err = xglCreateImageView(demo->device, &view,
+ err = vkCreateImageView(demo->device, &view,
&demo->textures[i].view);
assert(!err);
}
{ 1.0f, -1.0f, -0.5f, 1.0f, 0.0f },
{ 0.0f, 1.0f, 1.0f, 0.5f, 1.0f },
};
- const XGL_BUFFER_CREATE_INFO buf_info = {
- .sType = XGL_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
+ const VK_BUFFER_CREATE_INFO buf_info = {
+ .sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
.pNext = NULL,
.size = sizeof(vb),
- .usage = XGL_BUFFER_USAGE_VERTEX_FETCH_BIT,
+ .usage = VK_BUFFER_USAGE_VERTEX_FETCH_BIT,
.flags = 0,
};
- XGL_MEMORY_ALLOC_BUFFER_INFO buf_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO,
+ VK_MEMORY_ALLOC_BUFFER_INFO buf_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO,
.pNext = NULL,
};
- XGL_MEMORY_ALLOC_INFO mem_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
+ VK_MEMORY_ALLOC_INFO mem_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
.pNext = &buf_alloc,
.allocationSize = 0,
- .memProps = XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT,
- .memType = XGL_MEMORY_TYPE_BUFFER,
- .memPriority = XGL_MEMORY_PRIORITY_NORMAL,
+ .memProps = VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT,
+ .memType = VK_MEMORY_TYPE_BUFFER,
+ .memPriority = VK_MEMORY_PRIORITY_NORMAL,
};
- XGL_MEMORY_REQUIREMENTS *mem_reqs;
- size_t mem_reqs_size = sizeof(XGL_MEMORY_REQUIREMENTS);
- XGL_BUFFER_MEMORY_REQUIREMENTS buf_reqs;
- size_t buf_reqs_size = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
+ VK_MEMORY_REQUIREMENTS *mem_reqs;
+ size_t mem_reqs_size = sizeof(VK_MEMORY_REQUIREMENTS);
+ VK_BUFFER_MEMORY_REQUIREMENTS buf_reqs;
+ size_t buf_reqs_size = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
- XGL_RESULT err;
+ VK_RESULT err;
void *data;
memset(&demo->vertices, 0, sizeof(demo->vertices));
- err = xglCreateBuffer(demo->device, &buf_info, &demo->vertices.buf);
+ err = vkCreateBuffer(demo->device, &buf_info, &demo->vertices.buf);
assert(!err);
- err = xglGetObjectInfo(demo->vertices.buf,
- XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
+ err = vkGetObjectInfo(demo->vertices.buf,
+ VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
&num_alloc_size, &num_allocations);
assert(!err && num_alloc_size == sizeof(num_allocations));
- mem_reqs = malloc(num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- demo->vertices.mem = malloc(num_allocations * sizeof(XGL_GPU_MEMORY));
+ mem_reqs = malloc(num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ demo->vertices.mem = malloc(num_allocations * sizeof(VK_GPU_MEMORY));
demo->vertices.num_mem = num_allocations;
- err = xglGetObjectInfo(demo->vertices.buf,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(demo->vertices.buf,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&mem_reqs_size, mem_reqs);
assert(!err && mem_reqs_size == sizeof(*mem_reqs));
- err = xglGetObjectInfo(demo->vertices.buf,
- XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(demo->vertices.buf,
+ VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS,
&buf_reqs_size, &buf_reqs);
- assert(!err && buf_reqs_size == sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS));
+ assert(!err && buf_reqs_size == sizeof(VK_BUFFER_MEMORY_REQUIREMENTS));
buf_alloc.usage = buf_reqs.usage;
for (uint32_t i = 0; i < num_allocations; i ++) {
mem_alloc.allocationSize = mem_reqs[i].size;
- err = xglAllocMemory(demo->device, &mem_alloc, &demo->vertices.mem[i]);
+ err = vkAllocMemory(demo->device, &mem_alloc, &demo->vertices.mem[i]);
assert(!err);
- err = xglMapMemory(demo->vertices.mem[i], 0, &data);
+ err = vkMapMemory(demo->vertices.mem[i], 0, &data);
assert(!err);
memcpy(data, vb, sizeof(vb));
- err = xglUnmapMemory(demo->vertices.mem[i]);
+ err = vkUnmapMemory(demo->vertices.mem[i]);
assert(!err);
- err = xglBindObjectMemory(demo->vertices.buf, i, demo->vertices.mem[i], 0);
+ err = vkBindObjectMemory(demo->vertices.buf, i, demo->vertices.mem[i], 0);
assert(!err);
}
demo_add_mem_refs(demo, demo->vertices.num_mem, demo->vertices.mem);
- demo->vertices.vi.sType = XGL_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO;
+ demo->vertices.vi.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO;
demo->vertices.vi.pNext = NULL;
demo->vertices.vi.bindingCount = 1;
demo->vertices.vi.pVertexBindingDescriptions = demo->vertices.vi_bindings;
demo->vertices.vi_bindings[0].binding = VERTEX_BUFFER_BIND_ID;
demo->vertices.vi_bindings[0].strideInBytes = sizeof(vb[0]);
- demo->vertices.vi_bindings[0].stepRate = XGL_VERTEX_INPUT_STEP_RATE_VERTEX;
+ demo->vertices.vi_bindings[0].stepRate = VK_VERTEX_INPUT_STEP_RATE_VERTEX;
demo->vertices.vi_attrs[0].binding = VERTEX_BUFFER_BIND_ID;
demo->vertices.vi_attrs[0].location = 0;
- demo->vertices.vi_attrs[0].format = XGL_FMT_R32G32B32_SFLOAT;
+ demo->vertices.vi_attrs[0].format = VK_FMT_R32G32B32_SFLOAT;
demo->vertices.vi_attrs[0].offsetInBytes = 0;
demo->vertices.vi_attrs[1].binding = VERTEX_BUFFER_BIND_ID;
demo->vertices.vi_attrs[1].location = 1;
- demo->vertices.vi_attrs[1].format = XGL_FMT_R32G32_SFLOAT;
+ demo->vertices.vi_attrs[1].format = VK_FMT_R32G32_SFLOAT;
demo->vertices.vi_attrs[1].offsetInBytes = sizeof(float) * 3;
}
static void demo_prepare_descriptor_layout(struct demo *demo)
{
- const XGL_DESCRIPTOR_SET_LAYOUT_BINDING layout_binding = {
- .descriptorType = XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
+ const VK_DESCRIPTOR_SET_LAYOUT_BINDING layout_binding = {
+ .descriptorType = VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
.count = DEMO_TEXTURE_COUNT,
- .stageFlags = XGL_SHADER_STAGE_FLAGS_FRAGMENT_BIT,
+ .stageFlags = VK_SHADER_STAGE_FLAGS_FRAGMENT_BIT,
.pImmutableSamplers = NULL,
};
- const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO descriptor_layout = {
- .sType = XGL_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
+ const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO descriptor_layout = {
+ .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
.pNext = NULL,
.count = 1,
.pBinding = &layout_binding,
};
- XGL_RESULT err;
+ VK_RESULT err;
- err = xglCreateDescriptorSetLayout(demo->device,
+ err = vkCreateDescriptorSetLayout(demo->device,
&descriptor_layout, &demo->desc_layout);
assert(!err);
- err = xglCreateDescriptorSetLayoutChain(demo->device,
+ err = vkCreateDescriptorSetLayoutChain(demo->device,
1, &demo->desc_layout, &demo->desc_layout_chain);
assert(!err);
}
-static XGL_SHADER demo_prepare_shader(struct demo *demo,
- XGL_PIPELINE_SHADER_STAGE stage,
+static VK_SHADER demo_prepare_shader(struct demo *demo,
+ VK_PIPELINE_SHADER_STAGE stage,
const void *code,
size_t size)
{
- XGL_SHADER_CREATE_INFO createInfo;
- XGL_SHADER shader;
- XGL_RESULT err;
+ VK_SHADER_CREATE_INFO createInfo;
+ VK_SHADER shader;
+ VK_RESULT err;
- createInfo.sType = XGL_STRUCTURE_TYPE_SHADER_CREATE_INFO;
+ createInfo.sType = VK_STRUCTURE_TYPE_SHADER_CREATE_INFO;
createInfo.pNext = NULL;
// Create fake SPV structure to feed GLSL
createInfo.codeSize = 3 * sizeof(uint32_t) + size + 1;
createInfo.pCode = malloc(createInfo.codeSize);
createInfo.flags = 0;
- /* try version 0 first: XGL_PIPELINE_SHADER_STAGE followed by GLSL */
+ /* try version 0 first: VK_PIPELINE_SHADER_STAGE followed by GLSL */
((uint32_t *) createInfo.pCode)[0] = ICD_SPV_MAGIC;
((uint32_t *) createInfo.pCode)[1] = 0;
((uint32_t *) createInfo.pCode)[2] = stage;
memcpy(((uint32_t *) createInfo.pCode + 3), code, size + 1);
- err = xglCreateShader(demo->device, &createInfo, &shader);
+ err = vkCreateShader(demo->device, &createInfo, &shader);
if (err) {
free((void *) createInfo.pCode);
return NULL;
}
free((void *) createInfo.pCode);
return shader;
}
-static XGL_SHADER demo_prepare_vs(struct demo *demo)
+static VK_SHADER demo_prepare_vs(struct demo *demo)
{
static const char *vertShaderText =
"#version 140\n"
" gl_Position = pos;\n"
"}\n";
- return demo_prepare_shader(demo, XGL_SHADER_STAGE_VERTEX,
+ return demo_prepare_shader(demo, VK_SHADER_STAGE_VERTEX,
(const void *) vertShaderText,
strlen(vertShaderText));
}
-static XGL_SHADER demo_prepare_fs(struct demo *demo)
+static VK_SHADER demo_prepare_fs(struct demo *demo)
{
static const char *fragShaderText =
"#version 140\n"
" gl_FragColor = texture(tex, texcoord);\n"
"}\n";
- return demo_prepare_shader(demo, XGL_SHADER_STAGE_FRAGMENT,
+ return demo_prepare_shader(demo, VK_SHADER_STAGE_FRAGMENT,
(const void *) fragShaderText,
strlen(fragShaderText));
}
static void demo_prepare_pipeline(struct demo *demo)
{
- XGL_GRAPHICS_PIPELINE_CREATE_INFO pipeline;
- XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO vi;
- XGL_PIPELINE_IA_STATE_CREATE_INFO ia;
- XGL_PIPELINE_RS_STATE_CREATE_INFO rs;
- XGL_PIPELINE_CB_STATE_CREATE_INFO cb;
- XGL_PIPELINE_DS_STATE_CREATE_INFO ds;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO vs;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO fs;
- XGL_PIPELINE_VP_STATE_CREATE_INFO vp;
- XGL_PIPELINE_MS_STATE_CREATE_INFO ms;
- XGL_RESULT err;
+ VK_GRAPHICS_PIPELINE_CREATE_INFO pipeline;
+ VK_PIPELINE_VERTEX_INPUT_CREATE_INFO vi;
+ VK_PIPELINE_IA_STATE_CREATE_INFO ia;
+ VK_PIPELINE_RS_STATE_CREATE_INFO rs;
+ VK_PIPELINE_CB_STATE_CREATE_INFO cb;
+ VK_PIPELINE_DS_STATE_CREATE_INFO ds;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO vs;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO fs;
+ VK_PIPELINE_VP_STATE_CREATE_INFO vp;
+ VK_PIPELINE_MS_STATE_CREATE_INFO ms;
+ VK_RESULT err;
memset(&pipeline, 0, sizeof(pipeline));
- pipeline.sType = XGL_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
+ pipeline.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
pipeline.pSetLayoutChain = demo->desc_layout_chain;
vi = demo->vertices.vi;
memset(&ia, 0, sizeof(ia));
- ia.sType = XGL_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO;
- ia.topology = XGL_TOPOLOGY_TRIANGLE_LIST;
+ ia.sType = VK_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO;
+ ia.topology = VK_TOPOLOGY_TRIANGLE_LIST;
memset(&rs, 0, sizeof(rs));
- rs.sType = XGL_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO;
- rs.fillMode = XGL_FILL_SOLID;
- rs.cullMode = XGL_CULL_NONE;
- rs.frontFace = XGL_FRONT_FACE_CCW;
+ rs.sType = VK_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO;
+ rs.fillMode = VK_FILL_SOLID;
+ rs.cullMode = VK_CULL_NONE;
+ rs.frontFace = VK_FRONT_FACE_CCW;
memset(&cb, 0, sizeof(cb));
- cb.sType = XGL_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO;
- XGL_PIPELINE_CB_ATTACHMENT_STATE att_state[1];
+ cb.sType = VK_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO;
+ VK_PIPELINE_CB_ATTACHMENT_STATE att_state[1];
memset(att_state, 0, sizeof(att_state));
att_state[0].format = demo->format;
att_state[0].channelWriteMask = 0xf;
- att_state[0].blendEnable = XGL_FALSE;
+ att_state[0].blendEnable = VK_FALSE;
cb.attachmentCount = 1;
cb.pAttachments = att_state;
memset(&vp, 0, sizeof(vp));
- vp.sType = XGL_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO;
+ vp.sType = VK_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO;
vp.numViewports = 1;
- vp.clipOrigin = XGL_COORDINATE_ORIGIN_UPPER_LEFT;
+ vp.clipOrigin = VK_COORDINATE_ORIGIN_UPPER_LEFT;
memset(&ds, 0, sizeof(ds));
- ds.sType = XGL_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO;
+ ds.sType = VK_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO;
ds.format = demo->depth.format;
- ds.depthTestEnable = XGL_TRUE;
- ds.depthWriteEnable = XGL_TRUE;
- ds.depthFunc = XGL_COMPARE_LESS_EQUAL;
- ds.depthBoundsEnable = XGL_FALSE;
- ds.back.stencilFailOp = XGL_STENCIL_OP_KEEP;
- ds.back.stencilPassOp = XGL_STENCIL_OP_KEEP;
- ds.back.stencilFunc = XGL_COMPARE_ALWAYS;
- ds.stencilTestEnable = XGL_FALSE;
+ ds.depthTestEnable = VK_TRUE;
+ ds.depthWriteEnable = VK_TRUE;
+ ds.depthFunc = VK_COMPARE_LESS_EQUAL;
+ ds.depthBoundsEnable = VK_FALSE;
+ ds.back.stencilFailOp = VK_STENCIL_OP_KEEP;
+ ds.back.stencilPassOp = VK_STENCIL_OP_KEEP;
+ ds.back.stencilFunc = VK_COMPARE_ALWAYS;
+ ds.stencilTestEnable = VK_FALSE;
ds.front = ds.back;
memset(&vs, 0, sizeof(vs));
- vs.sType = XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
- vs.shader.stage = XGL_SHADER_STAGE_VERTEX;
+ vs.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
+ vs.shader.stage = VK_SHADER_STAGE_VERTEX;
vs.shader.shader = demo_prepare_vs(demo);
vs.shader.linkConstBufferCount = 0;
memset(&fs, 0, sizeof(fs));
- fs.sType = XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
- fs.shader.stage = XGL_SHADER_STAGE_FRAGMENT;
+ fs.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
+ fs.shader.stage = VK_SHADER_STAGE_FRAGMENT;
fs.shader.shader = demo_prepare_fs(demo);
memset(&ms, 0, sizeof(ms));
- ms.sType = XGL_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO;
+ ms.sType = VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO;
ms.sampleMask = 1;
- ms.multisampleEnable = XGL_FALSE;
+ ms.multisampleEnable = VK_FALSE;
ms.samples = 1;
pipeline.pNext = (const void *) &vi;
ds.pNext = (const void *) &vs;
vs.pNext = (const void *) &fs;
- err = xglCreateGraphicsPipeline(demo->device, &pipeline, &demo->pipeline);
+ err = vkCreateGraphicsPipeline(demo->device, &pipeline, &demo->pipeline);
assert(!err);
- xglDestroyObject(vs.shader.shader);
- xglDestroyObject(fs.shader.shader);
+ vkDestroyObject(vs.shader.shader);
+ vkDestroyObject(fs.shader.shader);
}
static void demo_prepare_dynamic_states(struct demo *demo)
{
- XGL_DYNAMIC_VP_STATE_CREATE_INFO viewport_create;
- XGL_DYNAMIC_RS_STATE_CREATE_INFO raster;
- XGL_DYNAMIC_CB_STATE_CREATE_INFO color_blend;
- XGL_DYNAMIC_DS_STATE_CREATE_INFO depth_stencil;
- XGL_RESULT err;
+ VK_DYNAMIC_VP_STATE_CREATE_INFO viewport_create;
+ VK_DYNAMIC_RS_STATE_CREATE_INFO raster;
+ VK_DYNAMIC_CB_STATE_CREATE_INFO color_blend;
+ VK_DYNAMIC_DS_STATE_CREATE_INFO depth_stencil;
+ VK_RESULT err;
memset(&viewport_create, 0, sizeof(viewport_create));
- viewport_create.sType = XGL_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO;
+ viewport_create.sType = VK_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO;
viewport_create.viewportAndScissorCount = 1;
- XGL_VIEWPORT viewport;
+ VK_VIEWPORT viewport;
memset(&viewport, 0, sizeof(viewport));
viewport.height = (float) demo->height;
viewport.width = (float) demo->width;
viewport.minDepth = (float) 0.0f;
viewport.maxDepth = (float) 1.0f;
viewport_create.pViewports = &viewport;
- XGL_RECT scissor;
+ VK_RECT scissor;
memset(&scissor, 0, sizeof(scissor));
scissor.extent.width = demo->width;
scissor.extent.height = demo->height;
viewport_create.pScissors = &scissor;
memset(&raster, 0, sizeof(raster));
- raster.sType = XGL_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO;
+ raster.sType = VK_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO;
raster.pointSize = 1.0;
raster.lineWidth = 1.0;
memset(&color_blend, 0, sizeof(color_blend));
- color_blend.sType = XGL_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO;
+ color_blend.sType = VK_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO;
color_blend.blendConst[0] = 1.0f;
color_blend.blendConst[1] = 1.0f;
color_blend.blendConst[2] = 1.0f;
color_blend.blendConst[3] = 1.0f;
memset(&depth_stencil, 0, sizeof(depth_stencil));
- depth_stencil.sType = XGL_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO;
+ depth_stencil.sType = VK_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO;
depth_stencil.minDepth = 0.0f;
depth_stencil.maxDepth = 1.0f;
depth_stencil.stencilBackRef = 0;
depth_stencil.stencilReadMask = 0xff;
depth_stencil.stencilWriteMask = 0xff;
- err = xglCreateDynamicViewportState(demo->device, &viewport_create, &demo->viewport);
+ err = vkCreateDynamicViewportState(demo->device, &viewport_create, &demo->viewport);
assert(!err);
- err = xglCreateDynamicRasterState(demo->device, &raster, &demo->raster);
+ err = vkCreateDynamicRasterState(demo->device, &raster, &demo->raster);
assert(!err);
- err = xglCreateDynamicColorBlendState(demo->device,
+ err = vkCreateDynamicColorBlendState(demo->device,
&color_blend, &demo->color_blend);
assert(!err);
- err = xglCreateDynamicDepthStencilState(demo->device,
+ err = vkCreateDynamicDepthStencilState(demo->device,
&depth_stencil, &demo->depth_stencil);
assert(!err);
}
static void demo_prepare_descriptor_pool(struct demo *demo)
{
- const XGL_DESCRIPTOR_TYPE_COUNT type_count = {
- .type = XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
+ const VK_DESCRIPTOR_TYPE_COUNT type_count = {
+ .type = VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
.count = DEMO_TEXTURE_COUNT,
};
- const XGL_DESCRIPTOR_POOL_CREATE_INFO descriptor_pool = {
- .sType = XGL_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO,
+ const VK_DESCRIPTOR_POOL_CREATE_INFO descriptor_pool = {
+ .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO,
.pNext = NULL,
.count = 1,
.pTypeCount = &type_count,
};
- XGL_RESULT err;
+ VK_RESULT err;
- err = xglCreateDescriptorPool(demo->device,
- XGL_DESCRIPTOR_POOL_USAGE_ONE_SHOT, 1,
+ err = vkCreateDescriptorPool(demo->device,
+ VK_DESCRIPTOR_POOL_USAGE_ONE_SHOT, 1,
&descriptor_pool, &demo->desc_pool);
assert(!err);
}
static void demo_prepare_descriptor_set(struct demo *demo)
{
- XGL_IMAGE_VIEW_ATTACH_INFO view_info[DEMO_TEXTURE_COUNT];
- XGL_SAMPLER_IMAGE_VIEW_INFO combined_info[DEMO_TEXTURE_COUNT];
- XGL_UPDATE_SAMPLER_TEXTURES update;
+ VK_IMAGE_VIEW_ATTACH_INFO view_info[DEMO_TEXTURE_COUNT];
+ VK_SAMPLER_IMAGE_VIEW_INFO combined_info[DEMO_TEXTURE_COUNT];
+ VK_UPDATE_SAMPLER_TEXTURES update;
const void *update_array[1] = { &update };
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t count;
uint32_t i;
for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
- view_info[i].sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
+ view_info[i].sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
view_info[i].pNext = NULL;
view_info[i].view = demo->textures[i].view,
- view_info[i].layout = XGL_IMAGE_LAYOUT_GENERAL;
+ view_info[i].layout = VK_IMAGE_LAYOUT_GENERAL;
combined_info[i].sampler = demo->textures[i].sampler;
combined_info[i].pImageView = &view_info[i];
}
memset(&update, 0, sizeof(update));
- update.sType = XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES;
+ update.sType = VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES;
update.count = DEMO_TEXTURE_COUNT;
update.pSamplerImageViews = combined_info;
- err = xglAllocDescriptorSets(demo->desc_pool,
- XGL_DESCRIPTOR_SET_USAGE_STATIC,
+ err = vkAllocDescriptorSets(demo->desc_pool,
+ VK_DESCRIPTOR_SET_USAGE_STATIC,
1, &demo->desc_layout,
&demo->desc_set, &count);
assert(!err && count == 1);
- xglBeginDescriptorPoolUpdate(demo->device,
- XGL_DESCRIPTOR_UPDATE_MODE_FASTEST);
+ vkBeginDescriptorPoolUpdate(demo->device,
+ VK_DESCRIPTOR_UPDATE_MODE_FASTEST);
- xglClearDescriptorSets(demo->desc_pool, 1, &demo->desc_set);
- xglUpdateDescriptors(demo->desc_set, 1, update_array);
+ vkClearDescriptorSets(demo->desc_pool, 1, &demo->desc_set);
+ vkUpdateDescriptors(demo->desc_set, 1, update_array);
- xglEndDescriptorPoolUpdate(demo->device, demo->cmd);
+ vkEndDescriptorPoolUpdate(demo->device, demo->cmd);
}
static void demo_prepare(struct demo *demo)
{
- const XGL_CMD_BUFFER_CREATE_INFO cmd = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
+ const VK_CMD_BUFFER_CREATE_INFO cmd = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
.pNext = NULL,
.queueNodeIndex = demo->graphics_queue_node_index,
.flags = 0,
};
- XGL_RESULT err;
+ VK_RESULT err;
demo_prepare_buffers(demo);
demo_prepare_depth(demo);
demo_prepare_pipeline(demo);
demo_prepare_dynamic_states(demo);
- err = xglCreateCommandBuffer(demo->device, &cmd, &demo->cmd);
+ err = vkCreateCommandBuffer(demo->device, &cmd, &demo->cmd);
assert(!err);
demo_prepare_descriptor_pool(demo);
xcb_map_window(demo->connection, demo->window);
}
-static void demo_init_xgl(struct demo *demo)
+static void demo_init_vk(struct demo *demo)
{
- const XGL_APPLICATION_INFO app = {
- .sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO,
+ const VK_APPLICATION_INFO app = {
+ .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
.pNext = NULL,
.pAppName = "tri",
.appVersion = 0,
.pEngineName = "tri",
.engineVersion = 0,
- .apiVersion = XGL_API_VERSION,
+ .apiVersion = VK_API_VERSION,
};
- const XGL_INSTANCE_CREATE_INFO inst_info = {
- .sType = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
+ const VK_INSTANCE_CREATE_INFO inst_info = {
+ .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
.pNext = NULL,
.pAppInfo = &app,
.pAllocCb = NULL,
.extensionCount = 0,
.ppEnabledExtensionNames = NULL,
};
- const XGL_WSI_X11_CONNECTION_INFO connection = {
+ const VK_WSI_X11_CONNECTION_INFO connection = {
.pConnection = demo->connection,
.root = demo->screen->root,
.provider = 0,
};
- const XGL_DEVICE_QUEUE_CREATE_INFO queue = {
+ const VK_DEVICE_QUEUE_CREATE_INFO queue = {
.queueNodeIndex = 0,
.queueCount = 1,
};
const char *ext_names[] = {
- "XGL_WSI_X11",
+ "VK_WSI_X11",
};
- const XGL_DEVICE_CREATE_INFO device = {
- .sType = XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
+ const VK_DEVICE_CREATE_INFO device = {
+ .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
.pNext = NULL,
.queueRecordCount = 1,
.pRequestedQueues = &queue,
.extensionCount = 1,
.ppEnabledExtensionNames = ext_names,
- .maxValidationLevel = XGL_VALIDATION_LEVEL_END_RANGE,
- .flags = XGL_DEVICE_CREATE_VALIDATION_BIT,
+ .maxValidationLevel = VK_VALIDATION_LEVEL_END_RANGE,
+ .flags = VK_DEVICE_CREATE_VALIDATION_BIT,
};
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t gpu_count;
uint32_t i;
size_t data_size;
uint32_t queue_count;
- err = xglCreateInstance(&inst_info, &demo->inst);
- if (err == XGL_ERROR_INCOMPATIBLE_DRIVER) {
+ err = vkCreateInstance(&inst_info, &demo->inst);
+ if (err == VK_ERROR_INCOMPATIBLE_DRIVER) {
printf("Cannot find a compatible Vulkan installable client driver "
"(ICD).\nExiting ...\n");
fflush(stdout);
assert(!err);
}
- err = xglEnumerateGpus(demo->inst, 1, &gpu_count, &demo->gpu);
+ err = vkEnumerateGpus(demo->inst, 1, &gpu_count, &demo->gpu);
assert(!err && gpu_count == 1);
for (i = 0; i < device.extensionCount; i++) {
- err = xglGetExtensionSupport(demo->gpu, ext_names[i]);
+ err = vkGetExtensionSupport(demo->gpu, ext_names[i]);
assert(!err);
}
- err = xglWsiX11AssociateConnection(demo->gpu, &connection);
+ err = vkWsiX11AssociateConnection(demo->gpu, &connection);
assert(!err);
- err = xglCreateDevice(demo->gpu, &device, &demo->device);
+ err = vkCreateDevice(demo->gpu, &device, &demo->device);
assert(!err);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
&data_size, NULL);
assert(!err);
- demo->gpu_props = (XGL_PHYSICAL_GPU_PROPERTIES *) malloc(data_size);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
+ demo->gpu_props = (VK_PHYSICAL_GPU_PROPERTIES *) malloc(data_size);
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
&data_size, demo->gpu_props);
assert(!err);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
&data_size, NULL);
assert(!err);
- demo->queue_props = (XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *) malloc(data_size);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
+ demo->queue_props = (VK_PHYSICAL_GPU_QUEUE_PROPERTIES *) malloc(data_size);
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
&data_size, demo->queue_props);
assert(!err);
- queue_count = (uint32_t) (data_size / sizeof(XGL_PHYSICAL_GPU_QUEUE_PROPERTIES));
+ queue_count = (uint32_t) (data_size / sizeof(VK_PHYSICAL_GPU_QUEUE_PROPERTIES));
assert(queue_count >= 1);
for (i = 0; i < queue_count; i++) {
- if (demo->queue_props[i].queueFlags & XGL_QUEUE_GRAPHICS_BIT)
+ if (demo->queue_props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT)
break;
}
assert(i < queue_count);
demo->graphics_queue_node_index = i;
- err = xglGetDeviceQueue(demo->device, demo->graphics_queue_node_index,
+ err = vkGetDeviceQueue(demo->device, demo->graphics_queue_node_index,
0, &demo->queue);
assert(!err);
}
}
demo_init_connection(demo);
- demo_init_xgl(demo);
+ demo_init_vk(demo);
demo->width = 300;
demo->height = 300;
- demo->format = XGL_FMT_B8G8R8A8_UNORM;
+ demo->format = VK_FMT_B8G8R8A8_UNORM;
}
static void demo_cleanup(struct demo *demo)
{
uint32_t i, j;
- xglDestroyObject(demo->desc_set);
- xglDestroyObject(demo->desc_pool);
+ vkDestroyObject(demo->desc_set);
+ vkDestroyObject(demo->desc_pool);
- xglDestroyObject(demo->cmd);
+ vkDestroyObject(demo->cmd);
- xglDestroyObject(demo->viewport);
- xglDestroyObject(demo->raster);
- xglDestroyObject(demo->color_blend);
- xglDestroyObject(demo->depth_stencil);
+ vkDestroyObject(demo->viewport);
+ vkDestroyObject(demo->raster);
+ vkDestroyObject(demo->color_blend);
+ vkDestroyObject(demo->depth_stencil);
- xglDestroyObject(demo->pipeline);
- xglDestroyObject(demo->desc_layout_chain);
- xglDestroyObject(demo->desc_layout);
+ vkDestroyObject(demo->pipeline);
+ vkDestroyObject(demo->desc_layout_chain);
+ vkDestroyObject(demo->desc_layout);
- xglBindObjectMemory(demo->vertices.buf, 0, XGL_NULL_HANDLE, 0);
- xglDestroyObject(demo->vertices.buf);
+ vkBindObjectMemory(demo->vertices.buf, 0, VK_NULL_HANDLE, 0);
+ vkDestroyObject(demo->vertices.buf);
demo_remove_mem_refs(demo, demo->vertices.num_mem, demo->vertices.mem);
for (j = 0; j < demo->vertices.num_mem; j++)
- xglFreeMemory(demo->vertices.mem[j]);
+ vkFreeMemory(demo->vertices.mem[j]);
for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
- xglDestroyObject(demo->textures[i].view);
- xglBindObjectMemory(demo->textures[i].image, 0, XGL_NULL_HANDLE, 0);
- xglDestroyObject(demo->textures[i].image);
+ vkDestroyObject(demo->textures[i].view);
+ vkBindObjectMemory(demo->textures[i].image, 0, VK_NULL_HANDLE, 0);
+ vkDestroyObject(demo->textures[i].image);
demo_remove_mem_refs(demo, demo->textures[i].num_mem, demo->textures[i].mem);
for (j = 0; j < demo->textures[i].num_mem; j++)
- xglFreeMemory(demo->textures[i].mem[j]);
+ vkFreeMemory(demo->textures[i].mem[j]);
free(demo->textures[i].mem);
- xglDestroyObject(demo->textures[i].sampler);
+ vkDestroyObject(demo->textures[i].sampler);
}
- xglDestroyObject(demo->depth.view);
- xglBindObjectMemory(demo->depth.image, 0, XGL_NULL_HANDLE, 0);
+ vkDestroyObject(demo->depth.view);
+ vkBindObjectMemory(demo->depth.image, 0, VK_NULL_HANDLE, 0);
demo_remove_mem_refs(demo, demo->depth.num_mem, demo->depth.mem);
- xglDestroyObject(demo->depth.image);
+ vkDestroyObject(demo->depth.image);
for (j = 0; j < demo->depth.num_mem; j++)
- xglFreeMemory(demo->depth.mem[j]);
+ vkFreeMemory(demo->depth.mem[j]);
for (i = 0; i < DEMO_BUFFER_COUNT; i++) {
- xglDestroyObject(demo->buffers[i].fence);
- xglDestroyObject(demo->buffers[i].view);
- xglDestroyObject(demo->buffers[i].image);
+ vkDestroyObject(demo->buffers[i].fence);
+ vkDestroyObject(demo->buffers[i].view);
+ vkDestroyObject(demo->buffers[i].image);
demo_remove_mem_refs(demo, 1, &demo->buffers[i].mem);
}
- xglDestroyDevice(demo->device);
- xglDestroyInstance(demo->inst);
+ vkDestroyDevice(demo->device);
+ vkDestroyInstance(demo->inst);
xcb_destroy_window(demo->connection, demo->window);
xcb_disconnect(demo->connection);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <string.h>
#include <assert.h>
-#include <xgl.h>
+#include <vulkan.h>
#define ERR(err) printf("%s:%d: failed with %s\n", \
- __FILE__, __LINE__, xgl_result_string(err));
+ __FILE__, __LINE__, vk_result_string(err));
#define ERR_EXIT(err) do { ERR(err); exit(-1); } while (0)
struct app_dev {
struct app_gpu *gpu; /* point back to the GPU */
- XGL_DEVICE obj;
+ VK_DEVICE obj;
- XGL_FORMAT_PROPERTIES format_props[XGL_NUM_FMT];
+ VK_FORMAT_PROPERTIES format_props[VK_NUM_FMT];
};
struct app_gpu {
uint32_t id;
- XGL_PHYSICAL_GPU obj;
+ VK_PHYSICAL_GPU obj;
- XGL_PHYSICAL_GPU_PROPERTIES props;
- XGL_PHYSICAL_GPU_PERFORMANCE perf;
+ VK_PHYSICAL_GPU_PROPERTIES props;
+ VK_PHYSICAL_GPU_PERFORMANCE perf;
uint32_t queue_count;
- XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
- XGL_DEVICE_QUEUE_CREATE_INFO *queue_reqs;
+ VK_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
+ VK_DEVICE_QUEUE_CREATE_INFO *queue_reqs;
- XGL_PHYSICAL_GPU_MEMORY_PROPERTIES memory_props;
+ VK_PHYSICAL_GPU_MEMORY_PROPERTIES memory_props;
uint32_t extension_count;
char **extensions;
struct app_dev dev;
};
-static const char *xgl_result_string(XGL_RESULT err)
+static const char *vk_result_string(VK_RESULT err)
{
switch (err) {
#define STR(r) case r: return #r
- STR(XGL_SUCCESS);
- STR(XGL_UNSUPPORTED);
- STR(XGL_NOT_READY);
- STR(XGL_TIMEOUT);
- STR(XGL_EVENT_SET);
- STR(XGL_EVENT_RESET);
- STR(XGL_ERROR_UNKNOWN);
- STR(XGL_ERROR_UNAVAILABLE);
- STR(XGL_ERROR_INITIALIZATION_FAILED);
- STR(XGL_ERROR_OUT_OF_MEMORY);
- STR(XGL_ERROR_OUT_OF_GPU_MEMORY);
- STR(XGL_ERROR_DEVICE_ALREADY_CREATED);
- STR(XGL_ERROR_DEVICE_LOST);
- STR(XGL_ERROR_INVALID_POINTER);
- STR(XGL_ERROR_INVALID_VALUE);
- STR(XGL_ERROR_INVALID_HANDLE);
- STR(XGL_ERROR_INVALID_ORDINAL);
- STR(XGL_ERROR_INVALID_MEMORY_SIZE);
- STR(XGL_ERROR_INVALID_EXTENSION);
- STR(XGL_ERROR_INVALID_FLAGS);
- STR(XGL_ERROR_INVALID_ALIGNMENT);
- STR(XGL_ERROR_INVALID_FORMAT);
- STR(XGL_ERROR_INVALID_IMAGE);
- STR(XGL_ERROR_INVALID_DESCRIPTOR_SET_DATA);
- STR(XGL_ERROR_INVALID_QUEUE_TYPE);
- STR(XGL_ERROR_INVALID_OBJECT_TYPE);
- STR(XGL_ERROR_UNSUPPORTED_SHADER_IL_VERSION);
- STR(XGL_ERROR_BAD_SHADER_CODE);
- STR(XGL_ERROR_BAD_PIPELINE_DATA);
- STR(XGL_ERROR_TOO_MANY_MEMORY_REFERENCES);
- STR(XGL_ERROR_NOT_MAPPABLE);
- STR(XGL_ERROR_MEMORY_MAP_FAILED);
- STR(XGL_ERROR_MEMORY_UNMAP_FAILED);
- STR(XGL_ERROR_INCOMPATIBLE_DEVICE);
- STR(XGL_ERROR_INCOMPATIBLE_DRIVER);
- STR(XGL_ERROR_INCOMPLETE_COMMAND_BUFFER);
- STR(XGL_ERROR_BUILDING_COMMAND_BUFFER);
- STR(XGL_ERROR_MEMORY_NOT_BOUND);
- STR(XGL_ERROR_INCOMPATIBLE_QUEUE);
- STR(XGL_ERROR_NOT_SHAREABLE);
+ STR(VK_SUCCESS);
+ STR(VK_UNSUPPORTED);
+ STR(VK_NOT_READY);
+ STR(VK_TIMEOUT);
+ STR(VK_EVENT_SET);
+ STR(VK_EVENT_RESET);
+ STR(VK_ERROR_UNKNOWN);
+ STR(VK_ERROR_UNAVAILABLE);
+ STR(VK_ERROR_INITIALIZATION_FAILED);
+ STR(VK_ERROR_OUT_OF_MEMORY);
+ STR(VK_ERROR_OUT_OF_GPU_MEMORY);
+ STR(VK_ERROR_DEVICE_ALREADY_CREATED);
+ STR(VK_ERROR_DEVICE_LOST);
+ STR(VK_ERROR_INVALID_POINTER);
+ STR(VK_ERROR_INVALID_VALUE);
+ STR(VK_ERROR_INVALID_HANDLE);
+ STR(VK_ERROR_INVALID_ORDINAL);
+ STR(VK_ERROR_INVALID_MEMORY_SIZE);
+ STR(VK_ERROR_INVALID_EXTENSION);
+ STR(VK_ERROR_INVALID_FLAGS);
+ STR(VK_ERROR_INVALID_ALIGNMENT);
+ STR(VK_ERROR_INVALID_FORMAT);
+ STR(VK_ERROR_INVALID_IMAGE);
+ STR(VK_ERROR_INVALID_DESCRIPTOR_SET_DATA);
+ STR(VK_ERROR_INVALID_QUEUE_TYPE);
+ STR(VK_ERROR_INVALID_OBJECT_TYPE);
+ STR(VK_ERROR_UNSUPPORTED_SHADER_IL_VERSION);
+ STR(VK_ERROR_BAD_SHADER_CODE);
+ STR(VK_ERROR_BAD_PIPELINE_DATA);
+ STR(VK_ERROR_TOO_MANY_MEMORY_REFERENCES);
+ STR(VK_ERROR_NOT_MAPPABLE);
+ STR(VK_ERROR_MEMORY_MAP_FAILED);
+ STR(VK_ERROR_MEMORY_UNMAP_FAILED);
+ STR(VK_ERROR_INCOMPATIBLE_DEVICE);
+ STR(VK_ERROR_INCOMPATIBLE_DRIVER);
+ STR(VK_ERROR_INCOMPLETE_COMMAND_BUFFER);
+ STR(VK_ERROR_BUILDING_COMMAND_BUFFER);
+ STR(VK_ERROR_MEMORY_NOT_BOUND);
+ STR(VK_ERROR_INCOMPATIBLE_QUEUE);
+ STR(VK_ERROR_NOT_SHAREABLE);
#undef STR
default: return "UNKNOWN_RESULT";
}
}
-static const char *xgl_gpu_type_string(XGL_PHYSICAL_GPU_TYPE type)
+static const char *vk_gpu_type_string(VK_PHYSICAL_GPU_TYPE type)
{
switch (type) {
-#define STR(r) case XGL_GPU_TYPE_ ##r: return #r
+#define STR(r) case VK_GPU_TYPE_ ##r: return #r
STR(OTHER);
STR(INTEGRATED);
STR(DISCRETE);
+ default: return "UNKNOWN_GPU_TYPE";
}
}
-static const char *xgl_format_string(XGL_FORMAT fmt)
+static const char *vk_format_string(VK_FORMAT fmt)
{
switch (fmt) {
-#define STR(r) case XGL_FMT_ ##r: return #r
+#define STR(r) case VK_FMT_ ##r: return #r
STR(UNDEFINED);
STR(R4G4_UNORM);
STR(R4G4_USCALED);
static void app_dev_init_formats(struct app_dev *dev)
{
- XGL_FORMAT f;
+ VK_FORMAT f;
- for (f = 0; f < XGL_NUM_FMT; f++) {
- const XGL_FORMAT fmt = f;
- XGL_RESULT err;
+ for (f = 0; f < VK_NUM_FMT; f++) {
+ const VK_FORMAT fmt = f;
+ VK_RESULT err;
size_t size = sizeof(dev->format_props[f]);
- err = xglGetFormatInfo(dev->obj, fmt,
- XGL_INFO_TYPE_FORMAT_PROPERTIES,
+ err = vkGetFormatInfo(dev->obj, fmt,
+ VK_INFO_TYPE_FORMAT_PROPERTIES,
&size, &dev->format_props[f]);
if (err) {
memset(&dev->format_props[f], 0,
sizeof(dev->format_props[f]));
}
else if (size != sizeof(dev->format_props[f])) {
- ERR_EXIT(XGL_ERROR_UNKNOWN);
+ ERR_EXIT(VK_ERROR_UNKNOWN);
}
}
}
static void app_dev_init(struct app_dev *dev, struct app_gpu *gpu)
{
- XGL_DEVICE_CREATE_INFO info = {
- .sType = XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
+ VK_DEVICE_CREATE_INFO info = {
+ .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
.pNext = NULL,
.queueRecordCount = 0,
.pRequestedQueues = NULL,
.extensionCount = 0,
.ppEnabledExtensionNames = NULL,
- .maxValidationLevel = XGL_VALIDATION_LEVEL_END_RANGE,
- .flags = XGL_DEVICE_CREATE_VALIDATION_BIT,
+ .maxValidationLevel = VK_VALIDATION_LEVEL_END_RANGE,
+ .flags = VK_DEVICE_CREATE_VALIDATION_BIT,
};
- XGL_RESULT err;
+ VK_RESULT err;
/* request all queues */
info.queueRecordCount = gpu->queue_count;
info.extensionCount = gpu->extension_count;
info.ppEnabledExtensionNames = (const char*const*) gpu->extensions;
dev->gpu = gpu;
- err = xglCreateDevice(gpu->obj, &info, &dev->obj);
+ err = vkCreateDevice(gpu->obj, &info, &dev->obj);
if (err)
ERR_EXIT(err);
static void app_dev_destroy(struct app_dev *dev)
{
- xglDestroyDevice(dev->obj);
+ vkDestroyDevice(dev->obj);
}
static void app_gpu_init_extensions(struct app_gpu *gpu)
{
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t i;
static char *known_extensions[] = {
- "XGL_WSI_X11",
+ "VK_WSI_X11",
};
for (i = 0; i < ARRAY_SIZE(known_extensions); i++) {
- err = xglGetExtensionSupport(gpu->obj, known_extensions[i]);
+ err = vkGetExtensionSupport(gpu->obj, known_extensions[i]);
if (!err)
gpu->extension_count++;
}
gpu->extensions =
malloc(sizeof(gpu->extensions[0]) * gpu->extension_count);
if (!gpu->extensions)
- ERR_EXIT(XGL_ERROR_OUT_OF_MEMORY);
+ ERR_EXIT(VK_ERROR_OUT_OF_MEMORY);
gpu->extension_count = 0;
for (i = 0; i < ARRAY_SIZE(known_extensions); i++) {
- err = xglGetExtensionSupport(gpu->obj, known_extensions[i]);
+ err = vkGetExtensionSupport(gpu->obj, known_extensions[i]);
if (!err)
gpu->extensions[gpu->extension_count++] = known_extensions[i];
}
}
-static void app_gpu_init(struct app_gpu *gpu, uint32_t id, XGL_PHYSICAL_GPU obj)
+static void app_gpu_init(struct app_gpu *gpu, uint32_t id, VK_PHYSICAL_GPU obj)
{
size_t size;
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t i;
memset(gpu, 0, sizeof(*gpu));
gpu->id = id;
gpu->obj = obj;
size = sizeof(gpu->props);
- err = xglGetGpuInfo(gpu->obj,
- XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
+ err = vkGetGpuInfo(gpu->obj,
+ VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
&size, &gpu->props);
if (err || size != sizeof(gpu->props))
ERR_EXIT(err);
size = sizeof(gpu->perf);
- err = xglGetGpuInfo(gpu->obj,
- XGL_INFO_TYPE_PHYSICAL_GPU_PERFORMANCE,
+ err = vkGetGpuInfo(gpu->obj,
+ VK_INFO_TYPE_PHYSICAL_GPU_PERFORMANCE,
&size, &gpu->perf);
if (err || size != sizeof(gpu->perf))
ERR_EXIT(err);
/* get queue count */
- err = xglGetGpuInfo(gpu->obj,
- XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
+ err = vkGetGpuInfo(gpu->obj,
+ VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
&size, NULL);
if (err || size % sizeof(gpu->queue_props[0]))
ERR_EXIT(err);
gpu->queue_count = (uint32_t) (size / sizeof(gpu->queue_props[0]));
gpu->queue_props = malloc(sizeof(gpu->queue_props[0]) * gpu->queue_count);
size = sizeof(gpu->queue_props[0]) * gpu->queue_count;
if (!gpu->queue_props)
- ERR_EXIT(XGL_ERROR_OUT_OF_MEMORY);
- err = xglGetGpuInfo(gpu->obj,
- XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
+ ERR_EXIT(VK_ERROR_OUT_OF_MEMORY);
+ err = vkGetGpuInfo(gpu->obj,
+ VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
&size, gpu->queue_props);
if (err || size != sizeof(gpu->queue_props[0]) * gpu->queue_count)
ERR_EXIT(err);
size = sizeof(*gpu->queue_reqs) * gpu->queue_count;
gpu->queue_reqs = malloc(sizeof(*gpu->queue_reqs) * gpu->queue_count);
if (!gpu->queue_reqs)
- ERR_EXIT(XGL_ERROR_OUT_OF_MEMORY);
+ ERR_EXIT(VK_ERROR_OUT_OF_MEMORY);
for (i = 0; i < gpu->queue_count; i++) {
gpu->queue_reqs[i].queueNodeIndex = i;
gpu->queue_reqs[i].queueCount = gpu->queue_props[i].queueCount;
}
size = sizeof(gpu->memory_props);
- err = xglGetGpuInfo(gpu->obj,
- XGL_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES,
+ err = vkGetGpuInfo(gpu->obj,
+ VK_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES,
&size, &gpu->memory_props);
if (err || size != sizeof(gpu->memory_props))
ERR_EXIT(err);
free(gpu->queue_props);
}
-static void app_dev_dump_format_props(const struct app_dev *dev, XGL_FORMAT fmt)
+static void app_dev_dump_format_props(const struct app_dev *dev, VK_FORMAT fmt)
{
- const XGL_FORMAT_PROPERTIES *props = &dev->format_props[fmt];
+ const VK_FORMAT_PROPERTIES *props = &dev->format_props[fmt];
struct {
const char *name;
- XGL_FLAGS flags;
+ VK_FLAGS flags;
} tilings[2];
uint32_t i;
tilings[1].name = "optimal";
tilings[1].flags = props->optimalTilingFeatures;
- printf("FORMAT_%s\n", xgl_format_string(fmt));
+ printf("FORMAT_%s\n", vk_format_string(fmt));
for (i = 0; i < ARRAY_SIZE(tilings); i++) {
if (!tilings[i].flags)
continue;
printf("\t%s tiling image =%s%s%s\n", tilings[i].name,
- (tilings[i].flags & XGL_FORMAT_IMAGE_SHADER_READ_BIT) ? " read" : "",
- (tilings[i].flags & XGL_FORMAT_IMAGE_SHADER_WRITE_BIT) ? " write" : "",
- (tilings[i].flags & XGL_FORMAT_IMAGE_COPY_BIT) ? " copy" : "");
+ (tilings[i].flags & VK_FORMAT_IMAGE_SHADER_READ_BIT) ? " read" : "",
+ (tilings[i].flags & VK_FORMAT_IMAGE_SHADER_WRITE_BIT) ? " write" : "",
+ (tilings[i].flags & VK_FORMAT_IMAGE_COPY_BIT) ? " copy" : "");
printf("\t%s tiling memory =%s\n", tilings[i].name,
- (tilings[i].flags & XGL_FORMAT_MEMORY_SHADER_ACCESS_BIT) ? " access" : "");
+ (tilings[i].flags & VK_FORMAT_MEMORY_SHADER_ACCESS_BIT) ? " access" : "");
printf("\t%s tiling attachment =%s%s%s%s%s\n", tilings[i].name,
- (tilings[i].flags & XGL_FORMAT_COLOR_ATTACHMENT_WRITE_BIT) ? " color" : "",
- (tilings[i].flags & XGL_FORMAT_COLOR_ATTACHMENT_BLEND_BIT) ? " blend" : "",
- (tilings[i].flags & XGL_FORMAT_DEPTH_ATTACHMENT_BIT) ? " depth" : "",
- (tilings[i].flags & XGL_FORMAT_STENCIL_ATTACHMENT_BIT) ? " stencil" : "",
- (tilings[i].flags & XGL_FORMAT_MSAA_ATTACHMENT_BIT) ? " msaa" : "");
+ (tilings[i].flags & VK_FORMAT_COLOR_ATTACHMENT_WRITE_BIT) ? " color" : "",
+ (tilings[i].flags & VK_FORMAT_COLOR_ATTACHMENT_BLEND_BIT) ? " blend" : "",
+ (tilings[i].flags & VK_FORMAT_DEPTH_ATTACHMENT_BIT) ? " depth" : "",
+ (tilings[i].flags & VK_FORMAT_STENCIL_ATTACHMENT_BIT) ? " stencil" : "",
+ (tilings[i].flags & VK_FORMAT_MSAA_ATTACHMENT_BIT) ? " msaa" : "");
printf("\t%s tiling conversion = %u\n", tilings[i].name,
- (bool) (tilings[i].flags & XGL_FORMAT_CONVERSION_BIT));
+ (bool) (tilings[i].flags & VK_FORMAT_CONVERSION_BIT));
}
}
static void
app_dev_dump(const struct app_dev *dev)
{
- XGL_FORMAT fmt;
+ VK_FORMAT fmt;
- for (fmt = 0; fmt < XGL_NUM_FMT; fmt++) {
+ for (fmt = 0; fmt < VK_NUM_FMT; fmt++) {
app_dev_dump_format_props(dev, fmt);
}
}
static void app_gpu_dump_multi_compat(const struct app_gpu *gpu, const struct app_gpu *other,
- const XGL_GPU_COMPATIBILITY_INFO *info)
+ const VK_GPU_COMPATIBILITY_INFO *info)
{
- printf("XGL_GPU_COMPATIBILITY_INFO[GPU%d]\n", other->id);
+ printf("VK_GPU_COMPATIBILITY_INFO[GPU%d]\n", other->id);
-#define TEST(info, b) printf(#b " = %u\n", (bool) (info->compatibilityFlags & XGL_GPU_COMPAT_ ##b## _BIT))
+#define TEST(info, b) printf(#b " = %u\n", (bool) (info->compatibilityFlags & VK_GPU_COMPAT_ ##b## _BIT))
TEST(info, ASIC_FEATURES);
TEST(info, IQ_MATCH);
TEST(info, PEER_TRANSFER);
static void app_gpu_multi_compat(struct app_gpu *gpus, uint32_t gpu_count)
{
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t i, j;
for (i = 0; i < gpu_count; i++) {
for (j = 0; j < gpu_count; j++) {
- XGL_GPU_COMPATIBILITY_INFO info;
+ VK_GPU_COMPATIBILITY_INFO info;
if (i == j)
continue;
- err = xglGetMultiGpuCompatibility(gpus[i].obj,
+ err = vkGetMultiGpuCompatibility(gpus[i].obj,
gpus[j].obj, &info);
if (err)
ERR_EXIT(err);
static void app_gpu_dump_props(const struct app_gpu *gpu)
{
- const XGL_PHYSICAL_GPU_PROPERTIES *props = &gpu->props;
+ const VK_PHYSICAL_GPU_PROPERTIES *props = &gpu->props;
- printf("XGL_PHYSICAL_GPU_PROPERTIES\n");
+ printf("VK_PHYSICAL_GPU_PROPERTIES\n");
printf("\tapiVersion = %u\n", props->apiVersion);
printf("\tdriverVersion = %u\n", props->driverVersion);
printf("\tvendorId = 0x%04x\n", props->vendorId);
printf("\tdeviceId = 0x%04x\n", props->deviceId);
- printf("\tgpuType = %s\n", xgl_gpu_type_string(props->gpuType));
+ printf("\tgpuType = %s\n", vk_gpu_type_string(props->gpuType));
printf("\tgpuName = %s\n", props->gpuName);
printf("\tmaxInlineMemoryUpdateSize = %zu\n", props->maxInlineMemoryUpdateSize);
printf("\tmaxBoundDescriptorSets = %u\n", props->maxBoundDescriptorSets);
static void app_gpu_dump_perf(const struct app_gpu *gpu)
{
- const XGL_PHYSICAL_GPU_PERFORMANCE *perf = &gpu->perf;
+ const VK_PHYSICAL_GPU_PERFORMANCE *perf = &gpu->perf;
- printf("XGL_PHYSICAL_GPU_PERFORMANCE\n");
+ printf("VK_PHYSICAL_GPU_PERFORMANCE\n");
printf("\tmaxGpuClock = %f\n", perf->maxGpuClock);
printf("\taluPerClock = %f\n", perf->aluPerClock);
printf("\ttexPerClock = %f\n", perf->texPerClock);
static void app_gpu_dump_queue_props(const struct app_gpu *gpu, uint32_t id)
{
- const XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *props = &gpu->queue_props[id];
+ const VK_PHYSICAL_GPU_QUEUE_PROPERTIES *props = &gpu->queue_props[id];
- printf("XGL_PHYSICAL_GPU_QUEUE_PROPERTIES[%d]\n", id);
+ printf("VK_PHYSICAL_GPU_QUEUE_PROPERTIES[%d]\n", id);
printf("\tqueueFlags = %c%c%c%c\n",
- (props->queueFlags & XGL_QUEUE_GRAPHICS_BIT) ? 'G' : '.',
- (props->queueFlags & XGL_QUEUE_COMPUTE_BIT) ? 'C' : '.',
- (props->queueFlags & XGL_QUEUE_DMA_BIT) ? 'D' : '.',
- (props->queueFlags & XGL_QUEUE_EXTENDED_BIT) ? 'X' : '.');
+ (props->queueFlags & VK_QUEUE_GRAPHICS_BIT) ? 'G' : '.',
+ (props->queueFlags & VK_QUEUE_COMPUTE_BIT) ? 'C' : '.',
+ (props->queueFlags & VK_QUEUE_DMA_BIT) ? 'D' : '.',
+ (props->queueFlags & VK_QUEUE_EXTENDED_BIT) ? 'X' : '.');
printf("\tqueueCount = %u\n", props->queueCount);
printf("\tmaxAtomicCounters = %u\n", props->maxAtomicCounters);
printf("\tsupportsTimestamps = %u\n", props->supportsTimestamps);
static void app_gpu_dump_memory_props(const struct app_gpu *gpu)
{
- const XGL_PHYSICAL_GPU_MEMORY_PROPERTIES *props = &gpu->memory_props;
+ const VK_PHYSICAL_GPU_MEMORY_PROPERTIES *props = &gpu->memory_props;
- printf("XGL_PHYSICAL_GPU_MEMORY_PROPERTIES\n");
+ printf("VK_PHYSICAL_GPU_MEMORY_PROPERTIES\n");
printf("\tsupportsMigration = %u\n", props->supportsMigration);
printf("\tsupportsPinning = %u\n", props->supportsPinning);
}
int main(int argc, char **argv)
{
- static const XGL_APPLICATION_INFO app_info = {
- .sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO,
+ static const VK_APPLICATION_INFO app_info = {
+ .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
.pNext = NULL,
- .pAppName = "xglinfo",
+ .pAppName = "vkinfo",
.appVersion = 1,
- .pEngineName = "xglinfo",
+ .pEngineName = "vkinfo",
.engineVersion = 1,
- .apiVersion = XGL_API_VERSION,
+ .apiVersion = VK_API_VERSION,
};
- static const XGL_INSTANCE_CREATE_INFO inst_info = {
- .sType = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
+ static const VK_INSTANCE_CREATE_INFO inst_info = {
+ .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
.pNext = NULL,
.pAppInfo = &app_info,
.pAllocCb = NULL,
.ppEnabledExtensionNames = NULL,
};
struct app_gpu gpus[MAX_GPUS];
- XGL_PHYSICAL_GPU objs[MAX_GPUS];
- XGL_INSTANCE inst;
+ VK_PHYSICAL_GPU objs[MAX_GPUS];
+ VK_INSTANCE inst;
uint32_t gpu_count, i;
- XGL_RESULT err;
+ VK_RESULT err;
- err = xglCreateInstance(&inst_info, &inst);
- if (err == XGL_ERROR_INCOMPATIBLE_DRIVER) {
+ err = vkCreateInstance(&inst_info, &inst);
+ if (err == VK_ERROR_INCOMPATIBLE_DRIVER) {
printf("Cannot find a compatible Vulkan installable client driver "
"(ICD).\nExiting ...\n");
fflush(stdout);
ERR_EXIT(err);
}
- err = xglEnumerateGpus(inst, MAX_GPUS, &gpu_count, objs);
+ err = vkEnumerateGpus(inst, MAX_GPUS, &gpu_count, objs);
if (err)
ERR_EXIT(err);
for (i = 0; i < gpu_count; i++)
app_gpu_destroy(&gpus[i]);
- xglDestroyInstance(inst);
+ vkDestroyInstance(inst);
return 0;
}
- [Implementation for Intel GPUs](intel)
- [Null driver](nulldrv)
- [*Sample Driver Tests*](../tests)
- - Now includes Golden images to verify xgl_render_tests rendering.
+ - Now includes Golden images to verify vk_render_tests rendering.
-common/ provides helper and utility functions, as well as all XGL entry points
-except xglInitAndEnumerateGpus. Hardware drivers are required to provide that
-function, and to embed a "XGL_LAYER_DISPATCH_TABLE *" as the first member of
-XGL_PHYSICAL_GPU and all XGL_BASE_OBJECT.
+common/ provides helper and utility functions, as well as all VK entry points
+except vkInitAndEnumerateGpus. Hardware drivers are required to provide that
+function, and to embed a "VK_LAYER_DISPATCH_TABLE *" as the first member of
+VK_PHYSICAL_GPU and all VK_BASE_OBJECT.
Thread safety
The following functions require that no other thread be calling the ICD
while they execute:
- - xglInitAndEnumerateGpus
- - xglDbgRegisterMsgCallback
- - xglDbgUnregisterMsgCallback
- - xglDbgSetGlobalOption
+ - vkInitAndEnumerateGpus
+ - vkDbgRegisterMsgCallback
+ - vkDbgUnregisterMsgCallback
+ - vkDbgSetGlobalOption
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
return devices;
} else {
dev = icd_instance_alloc(instance, sizeof(*dev), 0,
- XGL_SYSTEM_ALLOC_INTERNAL_TEMP);
+ VK_SYSTEM_ALLOC_INTERNAL_TEMP);
if (!dev)
return devices;
udev = udev_new();
if (udev == NULL) {
- icd_instance_log(instance, XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0,
- XGL_NULL_HANDLE, 0, 0, "failed to initialize udev context");
+ icd_instance_log(instance, VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0,
+ VK_NULL_HANDLE, 0, 0, "failed to initialize udev context");
return NULL;
}
e = udev_enumerate_new(udev);
if (e == NULL) {
- icd_instance_log(instance, XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0,
- XGL_NULL_HANDLE, 0, 0,
+ icd_instance_log(instance, VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0,
+ VK_NULL_HANDLE, 0, 0,
"failed to initialize udev enumerate context");
udev_unref(udev);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
static const struct icd_format_info {
size_t size;
uint32_t channel_count;
-} icd_format_table[XGL_NUM_FMT] = {
- [XGL_FMT_UNDEFINED] = { 0, 0 },
- [XGL_FMT_R4G4_UNORM] = { 1, 2 },
- [XGL_FMT_R4G4_USCALED] = { 1, 2 },
- [XGL_FMT_R4G4B4A4_UNORM] = { 2, 4 },
- [XGL_FMT_R4G4B4A4_USCALED] = { 2, 4 },
- [XGL_FMT_R5G6B5_UNORM] = { 2, 3 },
- [XGL_FMT_R5G6B5_USCALED] = { 2, 3 },
- [XGL_FMT_R5G5B5A1_UNORM] = { 2, 4 },
- [XGL_FMT_R5G5B5A1_USCALED] = { 2, 4 },
- [XGL_FMT_R8_UNORM] = { 1, 1 },
- [XGL_FMT_R8_SNORM] = { 1, 1 },
- [XGL_FMT_R8_USCALED] = { 1, 1 },
- [XGL_FMT_R8_SSCALED] = { 1, 1 },
- [XGL_FMT_R8_UINT] = { 1, 1 },
- [XGL_FMT_R8_SINT] = { 1, 1 },
- [XGL_FMT_R8_SRGB] = { 1, 1 },
- [XGL_FMT_R8G8_UNORM] = { 2, 2 },
- [XGL_FMT_R8G8_SNORM] = { 2, 2 },
- [XGL_FMT_R8G8_USCALED] = { 2, 2 },
- [XGL_FMT_R8G8_SSCALED] = { 2, 2 },
- [XGL_FMT_R8G8_UINT] = { 2, 2 },
- [XGL_FMT_R8G8_SINT] = { 2, 2 },
- [XGL_FMT_R8G8_SRGB] = { 2, 2 },
- [XGL_FMT_R8G8B8_UNORM] = { 3, 3 },
- [XGL_FMT_R8G8B8_SNORM] = { 3, 3 },
- [XGL_FMT_R8G8B8_USCALED] = { 3, 3 },
- [XGL_FMT_R8G8B8_SSCALED] = { 3, 3 },
- [XGL_FMT_R8G8B8_UINT] = { 3, 3 },
- [XGL_FMT_R8G8B8_SINT] = { 3, 3 },
- [XGL_FMT_R8G8B8_SRGB] = { 3, 3 },
- [XGL_FMT_R8G8B8A8_UNORM] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_SNORM] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_USCALED] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_SSCALED] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_UINT] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_SINT] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_SRGB] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_UNORM] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_SNORM] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_USCALED] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_SSCALED] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_UINT] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_SINT] = { 4, 4 },
- [XGL_FMT_R16_UNORM] = { 2, 1 },
- [XGL_FMT_R16_SNORM] = { 2, 1 },
- [XGL_FMT_R16_USCALED] = { 2, 1 },
- [XGL_FMT_R16_SSCALED] = { 2, 1 },
- [XGL_FMT_R16_UINT] = { 2, 1 },
- [XGL_FMT_R16_SINT] = { 2, 1 },
- [XGL_FMT_R16_SFLOAT] = { 2, 1 },
- [XGL_FMT_R16G16_UNORM] = { 4, 2 },
- [XGL_FMT_R16G16_SNORM] = { 4, 2 },
- [XGL_FMT_R16G16_USCALED] = { 4, 2 },
- [XGL_FMT_R16G16_SSCALED] = { 4, 2 },
- [XGL_FMT_R16G16_UINT] = { 4, 2 },
- [XGL_FMT_R16G16_SINT] = { 4, 2 },
- [XGL_FMT_R16G16_SFLOAT] = { 4, 2 },
- [XGL_FMT_R16G16B16_UNORM] = { 6, 3 },
- [XGL_FMT_R16G16B16_SNORM] = { 6, 3 },
- [XGL_FMT_R16G16B16_USCALED] = { 6, 3 },
- [XGL_FMT_R16G16B16_SSCALED] = { 6, 3 },
- [XGL_FMT_R16G16B16_UINT] = { 6, 3 },
- [XGL_FMT_R16G16B16_SINT] = { 6, 3 },
- [XGL_FMT_R16G16B16_SFLOAT] = { 6, 3 },
- [XGL_FMT_R16G16B16A16_UNORM] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_SNORM] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_USCALED] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_SSCALED] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_UINT] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_SINT] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_SFLOAT] = { 8, 4 },
- [XGL_FMT_R32_UINT] = { 4, 1 },
- [XGL_FMT_R32_SINT] = { 4, 1 },
- [XGL_FMT_R32_SFLOAT] = { 4, 1 },
- [XGL_FMT_R32G32_UINT] = { 8, 2 },
- [XGL_FMT_R32G32_SINT] = { 8, 2 },
- [XGL_FMT_R32G32_SFLOAT] = { 8, 2 },
- [XGL_FMT_R32G32B32_UINT] = { 12, 3 },
- [XGL_FMT_R32G32B32_SINT] = { 12, 3 },
- [XGL_FMT_R32G32B32_SFLOAT] = { 12, 3 },
- [XGL_FMT_R32G32B32A32_UINT] = { 16, 4 },
- [XGL_FMT_R32G32B32A32_SINT] = { 16, 4 },
- [XGL_FMT_R32G32B32A32_SFLOAT] = { 16, 4 },
- [XGL_FMT_R64_SFLOAT] = { 8, 1 },
- [XGL_FMT_R64G64_SFLOAT] = { 16, 2 },
- [XGL_FMT_R64G64B64_SFLOAT] = { 24, 3 },
- [XGL_FMT_R64G64B64A64_SFLOAT] = { 32, 4 },
- [XGL_FMT_R11G11B10_UFLOAT] = { 4, 3 },
- [XGL_FMT_R9G9B9E5_UFLOAT] = { 4, 3 },
- [XGL_FMT_D16_UNORM] = { 2, 1 },
- [XGL_FMT_D24_UNORM] = { 3, 1 },
- [XGL_FMT_D32_SFLOAT] = { 4, 1 },
- [XGL_FMT_S8_UINT] = { 1, 1 },
- [XGL_FMT_D16_UNORM_S8_UINT] = { 3, 2 },
- [XGL_FMT_D24_UNORM_S8_UINT] = { 4, 2 },
- [XGL_FMT_D32_SFLOAT_S8_UINT] = { 4, 2 },
- [XGL_FMT_BC1_RGB_UNORM] = { 8, 4 },
- [XGL_FMT_BC1_RGB_SRGB] = { 8, 4 },
- [XGL_FMT_BC1_RGBA_UNORM] = { 8, 4 },
- [XGL_FMT_BC1_RGBA_SRGB] = { 8, 4 },
- [XGL_FMT_BC2_UNORM] = { 16, 4 },
- [XGL_FMT_BC2_SRGB] = { 16, 4 },
- [XGL_FMT_BC3_UNORM] = { 16, 4 },
- [XGL_FMT_BC3_SRGB] = { 16, 4 },
- [XGL_FMT_BC4_UNORM] = { 8, 4 },
- [XGL_FMT_BC4_SNORM] = { 8, 4 },
- [XGL_FMT_BC5_UNORM] = { 16, 4 },
- [XGL_FMT_BC5_SNORM] = { 16, 4 },
- [XGL_FMT_BC6H_UFLOAT] = { 16, 4 },
- [XGL_FMT_BC6H_SFLOAT] = { 16, 4 },
- [XGL_FMT_BC7_UNORM] = { 16, 4 },
- [XGL_FMT_BC7_SRGB] = { 16, 4 },
+} icd_format_table[VK_NUM_FMT] = {
+ [VK_FMT_UNDEFINED] = { 0, 0 },
+ [VK_FMT_R4G4_UNORM] = { 1, 2 },
+ [VK_FMT_R4G4_USCALED] = { 1, 2 },
+ [VK_FMT_R4G4B4A4_UNORM] = { 2, 4 },
+ [VK_FMT_R4G4B4A4_USCALED] = { 2, 4 },
+ [VK_FMT_R5G6B5_UNORM] = { 2, 3 },
+ [VK_FMT_R5G6B5_USCALED] = { 2, 3 },
+ [VK_FMT_R5G5B5A1_UNORM] = { 2, 4 },
+ [VK_FMT_R5G5B5A1_USCALED] = { 2, 4 },
+ [VK_FMT_R8_UNORM] = { 1, 1 },
+ [VK_FMT_R8_SNORM] = { 1, 1 },
+ [VK_FMT_R8_USCALED] = { 1, 1 },
+ [VK_FMT_R8_SSCALED] = { 1, 1 },
+ [VK_FMT_R8_UINT] = { 1, 1 },
+ [VK_FMT_R8_SINT] = { 1, 1 },
+ [VK_FMT_R8_SRGB] = { 1, 1 },
+ [VK_FMT_R8G8_UNORM] = { 2, 2 },
+ [VK_FMT_R8G8_SNORM] = { 2, 2 },
+ [VK_FMT_R8G8_USCALED] = { 2, 2 },
+ [VK_FMT_R8G8_SSCALED] = { 2, 2 },
+ [VK_FMT_R8G8_UINT] = { 2, 2 },
+ [VK_FMT_R8G8_SINT] = { 2, 2 },
+ [VK_FMT_R8G8_SRGB] = { 2, 2 },
+ [VK_FMT_R8G8B8_UNORM] = { 3, 3 },
+ [VK_FMT_R8G8B8_SNORM] = { 3, 3 },
+ [VK_FMT_R8G8B8_USCALED] = { 3, 3 },
+ [VK_FMT_R8G8B8_SSCALED] = { 3, 3 },
+ [VK_FMT_R8G8B8_UINT] = { 3, 3 },
+ [VK_FMT_R8G8B8_SINT] = { 3, 3 },
+ [VK_FMT_R8G8B8_SRGB] = { 3, 3 },
+ [VK_FMT_R8G8B8A8_UNORM] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_SNORM] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_USCALED] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_SSCALED] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_UINT] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_SINT] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_SRGB] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_UNORM] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_SNORM] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_USCALED] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_SSCALED] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_UINT] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_SINT] = { 4, 4 },
+ [VK_FMT_R16_UNORM] = { 2, 1 },
+ [VK_FMT_R16_SNORM] = { 2, 1 },
+ [VK_FMT_R16_USCALED] = { 2, 1 },
+ [VK_FMT_R16_SSCALED] = { 2, 1 },
+ [VK_FMT_R16_UINT] = { 2, 1 },
+ [VK_FMT_R16_SINT] = { 2, 1 },
+ [VK_FMT_R16_SFLOAT] = { 2, 1 },
+ [VK_FMT_R16G16_UNORM] = { 4, 2 },
+ [VK_FMT_R16G16_SNORM] = { 4, 2 },
+ [VK_FMT_R16G16_USCALED] = { 4, 2 },
+ [VK_FMT_R16G16_SSCALED] = { 4, 2 },
+ [VK_FMT_R16G16_UINT] = { 4, 2 },
+ [VK_FMT_R16G16_SINT] = { 4, 2 },
+ [VK_FMT_R16G16_SFLOAT] = { 4, 2 },
+ [VK_FMT_R16G16B16_UNORM] = { 6, 3 },
+ [VK_FMT_R16G16B16_SNORM] = { 6, 3 },
+ [VK_FMT_R16G16B16_USCALED] = { 6, 3 },
+ [VK_FMT_R16G16B16_SSCALED] = { 6, 3 },
+ [VK_FMT_R16G16B16_UINT] = { 6, 3 },
+ [VK_FMT_R16G16B16_SINT] = { 6, 3 },
+ [VK_FMT_R16G16B16_SFLOAT] = { 6, 3 },
+ [VK_FMT_R16G16B16A16_UNORM] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_SNORM] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_USCALED] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_SSCALED] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_UINT] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_SINT] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_SFLOAT] = { 8, 4 },
+ [VK_FMT_R32_UINT] = { 4, 1 },
+ [VK_FMT_R32_SINT] = { 4, 1 },
+ [VK_FMT_R32_SFLOAT] = { 4, 1 },
+ [VK_FMT_R32G32_UINT] = { 8, 2 },
+ [VK_FMT_R32G32_SINT] = { 8, 2 },
+ [VK_FMT_R32G32_SFLOAT] = { 8, 2 },
+ [VK_FMT_R32G32B32_UINT] = { 12, 3 },
+ [VK_FMT_R32G32B32_SINT] = { 12, 3 },
+ [VK_FMT_R32G32B32_SFLOAT] = { 12, 3 },
+ [VK_FMT_R32G32B32A32_UINT] = { 16, 4 },
+ [VK_FMT_R32G32B32A32_SINT] = { 16, 4 },
+ [VK_FMT_R32G32B32A32_SFLOAT] = { 16, 4 },
+ [VK_FMT_R64_SFLOAT] = { 8, 1 },
+ [VK_FMT_R64G64_SFLOAT] = { 16, 2 },
+ [VK_FMT_R64G64B64_SFLOAT] = { 24, 3 },
+ [VK_FMT_R64G64B64A64_SFLOAT] = { 32, 4 },
+ [VK_FMT_R11G11B10_UFLOAT] = { 4, 3 },
+ [VK_FMT_R9G9B9E5_UFLOAT] = { 4, 3 },
+ [VK_FMT_D16_UNORM] = { 2, 1 },
+ [VK_FMT_D24_UNORM] = { 3, 1 },
+ [VK_FMT_D32_SFLOAT] = { 4, 1 },
+ [VK_FMT_S8_UINT] = { 1, 1 },
+ [VK_FMT_D16_UNORM_S8_UINT] = { 3, 2 },
+ [VK_FMT_D24_UNORM_S8_UINT] = { 4, 2 },
+ [VK_FMT_D32_SFLOAT_S8_UINT] = { 5, 2 },
+ [VK_FMT_BC1_RGB_UNORM] = { 8, 4 },
+ [VK_FMT_BC1_RGB_SRGB] = { 8, 4 },
+ [VK_FMT_BC1_RGBA_UNORM] = { 8, 4 },
+ [VK_FMT_BC1_RGBA_SRGB] = { 8, 4 },
+ [VK_FMT_BC2_UNORM] = { 16, 4 },
+ [VK_FMT_BC2_SRGB] = { 16, 4 },
+ [VK_FMT_BC3_UNORM] = { 16, 4 },
+ [VK_FMT_BC3_SRGB] = { 16, 4 },
+ [VK_FMT_BC4_UNORM] = { 8, 4 },
+ [VK_FMT_BC4_SNORM] = { 8, 4 },
+ [VK_FMT_BC5_UNORM] = { 16, 4 },
+ [VK_FMT_BC5_SNORM] = { 16, 4 },
+ [VK_FMT_BC6H_UFLOAT] = { 16, 4 },
+ [VK_FMT_BC6H_SFLOAT] = { 16, 4 },
+ [VK_FMT_BC7_UNORM] = { 16, 4 },
+ [VK_FMT_BC7_SRGB] = { 16, 4 },
/* TODO: Initialize remaining compressed formats. */
- [XGL_FMT_ETC2_R8G8B8_UNORM] = { 0, 0 },
- [XGL_FMT_ETC2_R8G8B8A1_UNORM] = { 0, 0 },
- [XGL_FMT_ETC2_R8G8B8A8_UNORM] = { 0, 0 },
- [XGL_FMT_EAC_R11_UNORM] = { 0, 0 },
- [XGL_FMT_EAC_R11_SNORM] = { 0, 0 },
- [XGL_FMT_EAC_R11G11_UNORM] = { 0, 0 },
- [XGL_FMT_EAC_R11G11_SNORM] = { 0, 0 },
- [XGL_FMT_ASTC_4x4_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_4x4_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_5x4_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_5x4_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_5x5_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_5x5_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_6x5_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_6x5_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_6x6_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_6x6_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_8x5_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_8x5_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_8x6_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_8x6_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_8x8_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_8x8_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_10x5_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_10x5_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_10x6_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_10x6_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_10x8_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_10x8_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_10x10_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_10x10_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_12x10_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_12x10_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_12x12_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_12x12_SRGB] = { 0, 0 },
- [XGL_FMT_B5G6R5_UNORM] = { 2, 3 },
- [XGL_FMT_B5G6R5_USCALED] = { 2, 3 },
- [XGL_FMT_B8G8R8_UNORM] = { 3, 3 },
- [XGL_FMT_B8G8R8_SNORM] = { 3, 3 },
- [XGL_FMT_B8G8R8_USCALED] = { 3, 3 },
- [XGL_FMT_B8G8R8_SSCALED] = { 3, 3 },
- [XGL_FMT_B8G8R8_UINT] = { 3, 3 },
- [XGL_FMT_B8G8R8_SINT] = { 3, 3 },
- [XGL_FMT_B8G8R8_SRGB] = { 3, 3 },
- [XGL_FMT_B8G8R8A8_UNORM] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_SNORM] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_USCALED] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_SSCALED] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_UINT] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_SINT] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_SRGB] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_UNORM] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_SNORM] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_USCALED] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_SSCALED] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_UINT] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_SINT] = { 4, 4 },
+ [VK_FMT_ETC2_R8G8B8_UNORM] = { 0, 0 },
+ [VK_FMT_ETC2_R8G8B8A1_UNORM] = { 0, 0 },
+ [VK_FMT_ETC2_R8G8B8A8_UNORM] = { 0, 0 },
+ [VK_FMT_EAC_R11_UNORM] = { 0, 0 },
+ [VK_FMT_EAC_R11_SNORM] = { 0, 0 },
+ [VK_FMT_EAC_R11G11_UNORM] = { 0, 0 },
+ [VK_FMT_EAC_R11G11_SNORM] = { 0, 0 },
+ [VK_FMT_ASTC_4x4_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_4x4_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_5x4_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_5x4_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_5x5_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_5x5_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_6x5_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_6x5_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_6x6_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_6x6_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_8x5_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_8x5_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_8x6_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_8x6_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_8x8_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_8x8_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_10x5_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_10x5_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_10x6_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_10x6_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_10x8_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_10x8_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_10x10_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_10x10_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_12x10_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_12x10_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_12x12_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_12x12_SRGB] = { 0, 0 },
+ [VK_FMT_B5G6R5_UNORM] = { 2, 3 },
+ [VK_FMT_B5G6R5_USCALED] = { 2, 3 },
+ [VK_FMT_B8G8R8_UNORM] = { 3, 3 },
+ [VK_FMT_B8G8R8_SNORM] = { 3, 3 },
+ [VK_FMT_B8G8R8_USCALED] = { 3, 3 },
+ [VK_FMT_B8G8R8_SSCALED] = { 3, 3 },
+ [VK_FMT_B8G8R8_UINT] = { 3, 3 },
+ [VK_FMT_B8G8R8_SINT] = { 3, 3 },
+ [VK_FMT_B8G8R8_SRGB] = { 3, 3 },
+ [VK_FMT_B8G8R8A8_UNORM] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_SNORM] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_USCALED] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_SSCALED] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_UINT] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_SINT] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_SRGB] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_UNORM] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_SNORM] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_USCALED] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_SSCALED] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_UINT] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_SINT] = { 4, 4 },
};
-bool icd_format_is_ds(XGL_FORMAT format)
+bool icd_format_is_ds(VK_FORMAT format)
{
bool is_ds = false;
switch (format) {
- case XGL_FMT_D16_UNORM:
- case XGL_FMT_D24_UNORM:
- case XGL_FMT_D32_SFLOAT:
- case XGL_FMT_S8_UINT:
- case XGL_FMT_D16_UNORM_S8_UINT:
- case XGL_FMT_D24_UNORM_S8_UINT:
- case XGL_FMT_D32_SFLOAT_S8_UINT:
+ case VK_FMT_D16_UNORM:
+ case VK_FMT_D24_UNORM:
+ case VK_FMT_D32_SFLOAT:
+ case VK_FMT_S8_UINT:
+ case VK_FMT_D16_UNORM_S8_UINT:
+ case VK_FMT_D24_UNORM_S8_UINT:
+ case VK_FMT_D32_SFLOAT_S8_UINT:
is_ds = true;
break;
default:
return is_ds;
}
-bool icd_format_is_norm(XGL_FORMAT format)
+bool icd_format_is_norm(VK_FORMAT format)
{
bool is_norm = false;
switch (format) {
- case XGL_FMT_R4G4_UNORM:
- case XGL_FMT_R4G4B4A4_UNORM:
- case XGL_FMT_R5G6B5_UNORM:
- case XGL_FMT_R5G5B5A1_UNORM:
- case XGL_FMT_R8_UNORM:
- case XGL_FMT_R8_SNORM:
- case XGL_FMT_R8G8_UNORM:
- case XGL_FMT_R8G8_SNORM:
- case XGL_FMT_R8G8B8_UNORM:
- case XGL_FMT_R8G8B8_SNORM:
- case XGL_FMT_R8G8B8A8_UNORM:
- case XGL_FMT_R8G8B8A8_SNORM:
- case XGL_FMT_R10G10B10A2_UNORM:
- case XGL_FMT_R10G10B10A2_SNORM:
- case XGL_FMT_R16_UNORM:
- case XGL_FMT_R16_SNORM:
- case XGL_FMT_R16G16_UNORM:
- case XGL_FMT_R16G16_SNORM:
- case XGL_FMT_R16G16B16_UNORM:
- case XGL_FMT_R16G16B16_SNORM:
- case XGL_FMT_R16G16B16A16_UNORM:
- case XGL_FMT_R16G16B16A16_SNORM:
- case XGL_FMT_BC1_RGB_UNORM:
- case XGL_FMT_BC2_UNORM:
- case XGL_FMT_BC3_UNORM:
- case XGL_FMT_BC4_UNORM:
- case XGL_FMT_BC4_SNORM:
- case XGL_FMT_BC5_UNORM:
- case XGL_FMT_BC5_SNORM:
- case XGL_FMT_BC7_UNORM:
- case XGL_FMT_ETC2_R8G8B8_UNORM:
- case XGL_FMT_ETC2_R8G8B8A1_UNORM:
- case XGL_FMT_ETC2_R8G8B8A8_UNORM:
- case XGL_FMT_EAC_R11_UNORM:
- case XGL_FMT_EAC_R11_SNORM:
- case XGL_FMT_EAC_R11G11_UNORM:
- case XGL_FMT_EAC_R11G11_SNORM:
- case XGL_FMT_ASTC_4x4_UNORM:
- case XGL_FMT_ASTC_5x4_UNORM:
- case XGL_FMT_ASTC_5x5_UNORM:
- case XGL_FMT_ASTC_6x5_UNORM:
- case XGL_FMT_ASTC_6x6_UNORM:
- case XGL_FMT_ASTC_8x5_UNORM:
- case XGL_FMT_ASTC_8x6_UNORM:
- case XGL_FMT_ASTC_8x8_UNORM:
- case XGL_FMT_ASTC_10x5_UNORM:
- case XGL_FMT_ASTC_10x6_UNORM:
- case XGL_FMT_ASTC_10x8_UNORM:
- case XGL_FMT_ASTC_10x10_UNORM:
- case XGL_FMT_ASTC_12x10_UNORM:
- case XGL_FMT_ASTC_12x12_UNORM:
- case XGL_FMT_B5G6R5_UNORM:
- case XGL_FMT_B8G8R8_UNORM:
- case XGL_FMT_B8G8R8_SNORM:
- case XGL_FMT_B8G8R8A8_UNORM:
- case XGL_FMT_B8G8R8A8_SNORM:
- case XGL_FMT_B10G10R10A2_UNORM:
- case XGL_FMT_B10G10R10A2_SNORM:
+ case VK_FMT_R4G4_UNORM:
+ case VK_FMT_R4G4B4A4_UNORM:
+ case VK_FMT_R5G6B5_UNORM:
+ case VK_FMT_R5G5B5A1_UNORM:
+ case VK_FMT_R8_UNORM:
+ case VK_FMT_R8_SNORM:
+ case VK_FMT_R8G8_UNORM:
+ case VK_FMT_R8G8_SNORM:
+ case VK_FMT_R8G8B8_UNORM:
+ case VK_FMT_R8G8B8_SNORM:
+ case VK_FMT_R8G8B8A8_UNORM:
+ case VK_FMT_R8G8B8A8_SNORM:
+ case VK_FMT_R10G10B10A2_UNORM:
+ case VK_FMT_R10G10B10A2_SNORM:
+ case VK_FMT_R16_UNORM:
+ case VK_FMT_R16_SNORM:
+ case VK_FMT_R16G16_UNORM:
+ case VK_FMT_R16G16_SNORM:
+ case VK_FMT_R16G16B16_UNORM:
+ case VK_FMT_R16G16B16_SNORM:
+ case VK_FMT_R16G16B16A16_UNORM:
+ case VK_FMT_R16G16B16A16_SNORM:
+ case VK_FMT_BC1_RGB_UNORM:
+ case VK_FMT_BC2_UNORM:
+ case VK_FMT_BC3_UNORM:
+ case VK_FMT_BC4_UNORM:
+ case VK_FMT_BC4_SNORM:
+ case VK_FMT_BC5_UNORM:
+ case VK_FMT_BC5_SNORM:
+ case VK_FMT_BC7_UNORM:
+ case VK_FMT_ETC2_R8G8B8_UNORM:
+ case VK_FMT_ETC2_R8G8B8A1_UNORM:
+ case VK_FMT_ETC2_R8G8B8A8_UNORM:
+ case VK_FMT_EAC_R11_UNORM:
+ case VK_FMT_EAC_R11_SNORM:
+ case VK_FMT_EAC_R11G11_UNORM:
+ case VK_FMT_EAC_R11G11_SNORM:
+ case VK_FMT_ASTC_4x4_UNORM:
+ case VK_FMT_ASTC_5x4_UNORM:
+ case VK_FMT_ASTC_5x5_UNORM:
+ case VK_FMT_ASTC_6x5_UNORM:
+ case VK_FMT_ASTC_6x6_UNORM:
+ case VK_FMT_ASTC_8x5_UNORM:
+ case VK_FMT_ASTC_8x6_UNORM:
+ case VK_FMT_ASTC_8x8_UNORM:
+ case VK_FMT_ASTC_10x5_UNORM:
+ case VK_FMT_ASTC_10x6_UNORM:
+ case VK_FMT_ASTC_10x8_UNORM:
+ case VK_FMT_ASTC_10x10_UNORM:
+ case VK_FMT_ASTC_12x10_UNORM:
+ case VK_FMT_ASTC_12x12_UNORM:
+ case VK_FMT_B5G6R5_UNORM:
+ case VK_FMT_B8G8R8_UNORM:
+ case VK_FMT_B8G8R8_SNORM:
+ case VK_FMT_B8G8R8A8_UNORM:
+ case VK_FMT_B8G8R8A8_SNORM:
+ case VK_FMT_B10G10R10A2_UNORM:
+ case VK_FMT_B10G10R10A2_SNORM:
is_norm = true;
break;
default:
return is_norm;
};
-bool icd_format_is_int(XGL_FORMAT format)
+bool icd_format_is_int(VK_FORMAT format)
{
bool is_int = false;
switch (format) {
- case XGL_FMT_R8_UINT:
- case XGL_FMT_R8_SINT:
- case XGL_FMT_R8G8_UINT:
- case XGL_FMT_R8G8_SINT:
- case XGL_FMT_R8G8B8_UINT:
- case XGL_FMT_R8G8B8_SINT:
- case XGL_FMT_R8G8B8A8_UINT:
- case XGL_FMT_R8G8B8A8_SINT:
- case XGL_FMT_R10G10B10A2_UINT:
- case XGL_FMT_R10G10B10A2_SINT:
- case XGL_FMT_R16_UINT:
- case XGL_FMT_R16_SINT:
- case XGL_FMT_R16G16_UINT:
- case XGL_FMT_R16G16_SINT:
- case XGL_FMT_R16G16B16_UINT:
- case XGL_FMT_R16G16B16_SINT:
- case XGL_FMT_R16G16B16A16_UINT:
- case XGL_FMT_R16G16B16A16_SINT:
- case XGL_FMT_R32_UINT:
- case XGL_FMT_R32_SINT:
- case XGL_FMT_R32G32_UINT:
- case XGL_FMT_R32G32_SINT:
- case XGL_FMT_R32G32B32_UINT:
- case XGL_FMT_R32G32B32_SINT:
- case XGL_FMT_R32G32B32A32_UINT:
- case XGL_FMT_R32G32B32A32_SINT:
- case XGL_FMT_B8G8R8_UINT:
- case XGL_FMT_B8G8R8_SINT:
- case XGL_FMT_B8G8R8A8_UINT:
- case XGL_FMT_B8G8R8A8_SINT:
- case XGL_FMT_B10G10R10A2_UINT:
- case XGL_FMT_B10G10R10A2_SINT:
+ case VK_FMT_R8_UINT:
+ case VK_FMT_R8_SINT:
+ case VK_FMT_R8G8_UINT:
+ case VK_FMT_R8G8_SINT:
+ case VK_FMT_R8G8B8_UINT:
+ case VK_FMT_R8G8B8_SINT:
+ case VK_FMT_R8G8B8A8_UINT:
+ case VK_FMT_R8G8B8A8_SINT:
+ case VK_FMT_R10G10B10A2_UINT:
+ case VK_FMT_R10G10B10A2_SINT:
+ case VK_FMT_R16_UINT:
+ case VK_FMT_R16_SINT:
+ case VK_FMT_R16G16_UINT:
+ case VK_FMT_R16G16_SINT:
+ case VK_FMT_R16G16B16_UINT:
+ case VK_FMT_R16G16B16_SINT:
+ case VK_FMT_R16G16B16A16_UINT:
+ case VK_FMT_R16G16B16A16_SINT:
+ case VK_FMT_R32_UINT:
+ case VK_FMT_R32_SINT:
+ case VK_FMT_R32G32_UINT:
+ case VK_FMT_R32G32_SINT:
+ case VK_FMT_R32G32B32_UINT:
+ case VK_FMT_R32G32B32_SINT:
+ case VK_FMT_R32G32B32A32_UINT:
+ case VK_FMT_R32G32B32A32_SINT:
+ case VK_FMT_B8G8R8_UINT:
+ case VK_FMT_B8G8R8_SINT:
+ case VK_FMT_B8G8R8A8_UINT:
+ case VK_FMT_B8G8R8A8_SINT:
+ case VK_FMT_B10G10R10A2_UINT:
+ case VK_FMT_B10G10R10A2_SINT:
is_int = true;
break;
default:
return is_int;
}
-bool icd_format_is_float(XGL_FORMAT format)
+bool icd_format_is_float(VK_FORMAT format)
{
bool is_float = false;
switch (format) {
- case XGL_FMT_R16_SFLOAT:
- case XGL_FMT_R16G16_SFLOAT:
- case XGL_FMT_R16G16B16_SFLOAT:
- case XGL_FMT_R16G16B16A16_SFLOAT:
- case XGL_FMT_R32_SFLOAT:
- case XGL_FMT_R32G32_SFLOAT:
- case XGL_FMT_R32G32B32_SFLOAT:
- case XGL_FMT_R32G32B32A32_SFLOAT:
- case XGL_FMT_R64_SFLOAT:
- case XGL_FMT_R64G64_SFLOAT:
- case XGL_FMT_R64G64B64_SFLOAT:
- case XGL_FMT_R64G64B64A64_SFLOAT:
- case XGL_FMT_R11G11B10_UFLOAT:
- case XGL_FMT_R9G9B9E5_UFLOAT:
- case XGL_FMT_BC6H_UFLOAT:
- case XGL_FMT_BC6H_SFLOAT:
+ case VK_FMT_R16_SFLOAT:
+ case VK_FMT_R16G16_SFLOAT:
+ case VK_FMT_R16G16B16_SFLOAT:
+ case VK_FMT_R16G16B16A16_SFLOAT:
+ case VK_FMT_R32_SFLOAT:
+ case VK_FMT_R32G32_SFLOAT:
+ case VK_FMT_R32G32B32_SFLOAT:
+ case VK_FMT_R32G32B32A32_SFLOAT:
+ case VK_FMT_R64_SFLOAT:
+ case VK_FMT_R64G64_SFLOAT:
+ case VK_FMT_R64G64B64_SFLOAT:
+ case VK_FMT_R64G64B64A64_SFLOAT:
+ case VK_FMT_R11G11B10_UFLOAT:
+ case VK_FMT_R9G9B9E5_UFLOAT:
+ case VK_FMT_BC6H_UFLOAT:
+ case VK_FMT_BC6H_SFLOAT:
is_float = true;
break;
default:
return is_float;
}
-bool icd_format_is_srgb(XGL_FORMAT format)
+bool icd_format_is_srgb(VK_FORMAT format)
{
bool is_srgb = false;
switch (format) {
- case XGL_FMT_R8_SRGB:
- case XGL_FMT_R8G8_SRGB:
- case XGL_FMT_R8G8B8_SRGB:
- case XGL_FMT_R8G8B8A8_SRGB:
- case XGL_FMT_BC1_RGB_SRGB:
- case XGL_FMT_BC2_SRGB:
- case XGL_FMT_BC3_SRGB:
- case XGL_FMT_BC7_SRGB:
- case XGL_FMT_ASTC_4x4_SRGB:
- case XGL_FMT_ASTC_5x4_SRGB:
- case XGL_FMT_ASTC_5x5_SRGB:
- case XGL_FMT_ASTC_6x5_SRGB:
- case XGL_FMT_ASTC_6x6_SRGB:
- case XGL_FMT_ASTC_8x5_SRGB:
- case XGL_FMT_ASTC_8x6_SRGB:
- case XGL_FMT_ASTC_8x8_SRGB:
- case XGL_FMT_ASTC_10x5_SRGB:
- case XGL_FMT_ASTC_10x6_SRGB:
- case XGL_FMT_ASTC_10x8_SRGB:
- case XGL_FMT_ASTC_10x10_SRGB:
- case XGL_FMT_ASTC_12x10_SRGB:
- case XGL_FMT_ASTC_12x12_SRGB:
- case XGL_FMT_B8G8R8_SRGB:
- case XGL_FMT_B8G8R8A8_SRGB:
+ case VK_FMT_R8_SRGB:
+ case VK_FMT_R8G8_SRGB:
+ case VK_FMT_R8G8B8_SRGB:
+ case VK_FMT_R8G8B8A8_SRGB:
+ case VK_FMT_BC1_RGB_SRGB:
+ case VK_FMT_BC2_SRGB:
+ case VK_FMT_BC3_SRGB:
+ case VK_FMT_BC7_SRGB:
+ case VK_FMT_ASTC_4x4_SRGB:
+ case VK_FMT_ASTC_5x4_SRGB:
+ case VK_FMT_ASTC_5x5_SRGB:
+ case VK_FMT_ASTC_6x5_SRGB:
+ case VK_FMT_ASTC_6x6_SRGB:
+ case VK_FMT_ASTC_8x5_SRGB:
+ case VK_FMT_ASTC_8x6_SRGB:
+ case VK_FMT_ASTC_8x8_SRGB:
+ case VK_FMT_ASTC_10x5_SRGB:
+ case VK_FMT_ASTC_10x6_SRGB:
+ case VK_FMT_ASTC_10x8_SRGB:
+ case VK_FMT_ASTC_10x10_SRGB:
+ case VK_FMT_ASTC_12x10_SRGB:
+ case VK_FMT_ASTC_12x12_SRGB:
+ case VK_FMT_B8G8R8_SRGB:
+ case VK_FMT_B8G8R8A8_SRGB:
is_srgb = true;
break;
default:
return is_srgb;
}
-bool icd_format_is_compressed(XGL_FORMAT format)
+bool icd_format_is_compressed(VK_FORMAT format)
{
switch (format) {
- case XGL_FMT_BC1_RGB_UNORM:
- case XGL_FMT_BC1_RGB_SRGB:
- case XGL_FMT_BC2_UNORM:
- case XGL_FMT_BC2_SRGB:
- case XGL_FMT_BC3_UNORM:
- case XGL_FMT_BC3_SRGB:
- case XGL_FMT_BC4_UNORM:
- case XGL_FMT_BC4_SNORM:
- case XGL_FMT_BC5_UNORM:
- case XGL_FMT_BC5_SNORM:
- case XGL_FMT_BC6H_UFLOAT:
- case XGL_FMT_BC6H_SFLOAT:
- case XGL_FMT_BC7_UNORM:
- case XGL_FMT_BC7_SRGB:
- case XGL_FMT_ETC2_R8G8B8_UNORM:
- case XGL_FMT_ETC2_R8G8B8A1_UNORM:
- case XGL_FMT_ETC2_R8G8B8A8_UNORM:
- case XGL_FMT_EAC_R11_UNORM:
- case XGL_FMT_EAC_R11_SNORM:
- case XGL_FMT_EAC_R11G11_UNORM:
- case XGL_FMT_EAC_R11G11_SNORM:
- case XGL_FMT_ASTC_4x4_UNORM:
- case XGL_FMT_ASTC_4x4_SRGB:
- case XGL_FMT_ASTC_5x4_UNORM:
- case XGL_FMT_ASTC_5x4_SRGB:
- case XGL_FMT_ASTC_5x5_UNORM:
- case XGL_FMT_ASTC_5x5_SRGB:
- case XGL_FMT_ASTC_6x5_UNORM:
- case XGL_FMT_ASTC_6x5_SRGB:
- case XGL_FMT_ASTC_6x6_UNORM:
- case XGL_FMT_ASTC_6x6_SRGB:
- case XGL_FMT_ASTC_8x5_UNORM:
- case XGL_FMT_ASTC_8x5_SRGB:
- case XGL_FMT_ASTC_8x6_UNORM:
- case XGL_FMT_ASTC_8x6_SRGB:
- case XGL_FMT_ASTC_8x8_UNORM:
- case XGL_FMT_ASTC_8x8_SRGB:
- case XGL_FMT_ASTC_10x5_UNORM:
- case XGL_FMT_ASTC_10x5_SRGB:
- case XGL_FMT_ASTC_10x6_UNORM:
- case XGL_FMT_ASTC_10x6_SRGB:
- case XGL_FMT_ASTC_10x8_UNORM:
- case XGL_FMT_ASTC_10x8_SRGB:
- case XGL_FMT_ASTC_10x10_UNORM:
- case XGL_FMT_ASTC_10x10_SRGB:
- case XGL_FMT_ASTC_12x10_UNORM:
- case XGL_FMT_ASTC_12x10_SRGB:
- case XGL_FMT_ASTC_12x12_UNORM:
- case XGL_FMT_ASTC_12x12_SRGB:
+ case VK_FMT_BC1_RGB_UNORM:
+ case VK_FMT_BC1_RGB_SRGB:
+ case VK_FMT_BC2_UNORM:
+ case VK_FMT_BC2_SRGB:
+ case VK_FMT_BC3_UNORM:
+ case VK_FMT_BC3_SRGB:
+ case VK_FMT_BC4_UNORM:
+ case VK_FMT_BC4_SNORM:
+ case VK_FMT_BC5_UNORM:
+ case VK_FMT_BC5_SNORM:
+ case VK_FMT_BC6H_UFLOAT:
+ case VK_FMT_BC6H_SFLOAT:
+ case VK_FMT_BC7_UNORM:
+ case VK_FMT_BC7_SRGB:
+ case VK_FMT_ETC2_R8G8B8_UNORM:
+ case VK_FMT_ETC2_R8G8B8A1_UNORM:
+ case VK_FMT_ETC2_R8G8B8A8_UNORM:
+ case VK_FMT_EAC_R11_UNORM:
+ case VK_FMT_EAC_R11_SNORM:
+ case VK_FMT_EAC_R11G11_UNORM:
+ case VK_FMT_EAC_R11G11_SNORM:
+ case VK_FMT_ASTC_4x4_UNORM:
+ case VK_FMT_ASTC_4x4_SRGB:
+ case VK_FMT_ASTC_5x4_UNORM:
+ case VK_FMT_ASTC_5x4_SRGB:
+ case VK_FMT_ASTC_5x5_UNORM:
+ case VK_FMT_ASTC_5x5_SRGB:
+ case VK_FMT_ASTC_6x5_UNORM:
+ case VK_FMT_ASTC_6x5_SRGB:
+ case VK_FMT_ASTC_6x6_UNORM:
+ case VK_FMT_ASTC_6x6_SRGB:
+ case VK_FMT_ASTC_8x5_UNORM:
+ case VK_FMT_ASTC_8x5_SRGB:
+ case VK_FMT_ASTC_8x6_UNORM:
+ case VK_FMT_ASTC_8x6_SRGB:
+ case VK_FMT_ASTC_8x8_UNORM:
+ case VK_FMT_ASTC_8x8_SRGB:
+ case VK_FMT_ASTC_10x5_UNORM:
+ case VK_FMT_ASTC_10x5_SRGB:
+ case VK_FMT_ASTC_10x6_UNORM:
+ case VK_FMT_ASTC_10x6_SRGB:
+ case VK_FMT_ASTC_10x8_UNORM:
+ case VK_FMT_ASTC_10x8_SRGB:
+ case VK_FMT_ASTC_10x10_UNORM:
+ case VK_FMT_ASTC_10x10_SRGB:
+ case VK_FMT_ASTC_12x10_UNORM:
+ case VK_FMT_ASTC_12x10_SRGB:
+ case VK_FMT_ASTC_12x12_UNORM:
+ case VK_FMT_ASTC_12x12_SRGB:
return true;
default:
return false;
}
}
-size_t icd_format_get_size(XGL_FORMAT format)
+size_t icd_format_get_size(VK_FORMAT format)
{
return icd_format_table[format].size;
}
-XGL_IMAGE_FORMAT_CLASS icd_format_get_class(XGL_FORMAT format)
+VK_IMAGE_FORMAT_CLASS icd_format_get_class(VK_FORMAT format)
{
if (icd_format_is_undef(format))
assert(!"undefined format");
if (icd_format_is_compressed(format)) {
switch (icd_format_get_size(format)) {
case 8:
- return XGL_IMAGE_FORMAT_CLASS_64_BIT_BLOCK;
+ return VK_IMAGE_FORMAT_CLASS_64_BIT_BLOCK;
case 16:
- return XGL_IMAGE_FORMAT_CLASS_128_BIT_BLOCK;
+ return VK_IMAGE_FORMAT_CLASS_128_BIT_BLOCK;
default:
assert(!"illegal compressed format");
}
} else if (icd_format_is_ds(format)) {
switch (icd_format_get_size(format)) {
case 1:
- return XGL_IMAGE_FORMAT_CLASS_S8;
+ return VK_IMAGE_FORMAT_CLASS_S8;
case 2:
- return XGL_IMAGE_FORMAT_CLASS_D16;
+ return VK_IMAGE_FORMAT_CLASS_D16;
case 3:
switch (icd_format_get_channel_count(format)) {
case 1:
- return XGL_IMAGE_FORMAT_CLASS_D24;
+ return VK_IMAGE_FORMAT_CLASS_D24;
case 2:
- return XGL_IMAGE_FORMAT_CLASS_D16S8;
+ return VK_IMAGE_FORMAT_CLASS_D16S8;
default:
assert(!"illegal depth stencil format channels");
}
case 4:
switch (icd_format_get_channel_count(format)) {
case 1:
- return XGL_IMAGE_FORMAT_CLASS_D32;
+ return VK_IMAGE_FORMAT_CLASS_D32;
case 2:
- return XGL_IMAGE_FORMAT_CLASS_D24S8;
+ return VK_IMAGE_FORMAT_CLASS_D24S8;
default:
assert(!"illegal depth stencil format channels");
}
case 5:
- return XGL_IMAGE_FORMAT_CLASS_D32S8;
+ return VK_IMAGE_FORMAT_CLASS_D32S8;
default:
assert(!"illegal depth stencil format");
}
} else { /* uncompressed color format */
switch (icd_format_get_size(format)) {
case 1:
- return XGL_IMAGE_FORMAT_CLASS_8_BITS;
+ return VK_IMAGE_FORMAT_CLASS_8_BITS;
case 2:
- return XGL_IMAGE_FORMAT_CLASS_16_BITS;
+ return VK_IMAGE_FORMAT_CLASS_16_BITS;
case 3:
- return XGL_IMAGE_FORMAT_CLASS_24_BITS;
+ return VK_IMAGE_FORMAT_CLASS_24_BITS;
case 4:
- return XGL_IMAGE_FORMAT_CLASS_32_BITS;
+ return VK_IMAGE_FORMAT_CLASS_32_BITS;
case 6:
- return XGL_IMAGE_FORMAT_CLASS_48_BITS;
+ return VK_IMAGE_FORMAT_CLASS_48_BITS;
case 8:
- return XGL_IMAGE_FORMAT_CLASS_64_BITS;
+ return VK_IMAGE_FORMAT_CLASS_64_BITS;
case 12:
- return XGL_IMAGE_FORMAT_CLASS_96_BITS;
+ return VK_IMAGE_FORMAT_CLASS_96_BITS;
case 16:
- return XGL_IMAGE_FORMAT_CLASS_128_BITS;
+ return VK_IMAGE_FORMAT_CLASS_128_BITS;
default:
assert(!"illegal uncompressed color format");
}
}
}
-unsigned int icd_format_get_channel_count(XGL_FORMAT format)
+unsigned int icd_format_get_channel_count(VK_FORMAT format)
{
return icd_format_table[format].channel_count;
}
* Convert a raw RGBA color to a raw value. \p value must have at least
* icd_format_get_size(format) bytes.
*/
-void icd_format_get_raw_value(XGL_FORMAT format,
+void icd_format_get_raw_value(VK_FORMAT format,
const uint32_t color[4],
void *value)
{
/* assume little-endian */
switch (format) {
- case XGL_FMT_UNDEFINED:
+ case VK_FMT_UNDEFINED:
break;
- case XGL_FMT_R4G4_UNORM:
- case XGL_FMT_R4G4_USCALED:
+ case VK_FMT_R4G4_UNORM:
+ case VK_FMT_R4G4_USCALED:
((uint8_t *) value)[0] = (color[0] & 0xf) << 0 |
(color[1] & 0xf) << 4;
break;
- case XGL_FMT_R4G4B4A4_UNORM:
- case XGL_FMT_R4G4B4A4_USCALED:
+ case VK_FMT_R4G4B4A4_UNORM:
+ case VK_FMT_R4G4B4A4_USCALED:
((uint16_t *) value)[0] = (color[0] & 0xf) << 0 |
(color[1] & 0xf) << 4 |
(color[2] & 0xf) << 8 |
(color[3] & 0xf) << 12;
break;
- case XGL_FMT_R5G6B5_UNORM:
- case XGL_FMT_R5G6B5_USCALED:
+ case VK_FMT_R5G6B5_UNORM:
+ case VK_FMT_R5G6B5_USCALED:
((uint16_t *) value)[0] = (color[0] & 0x1f) << 0 |
(color[1] & 0x3f) << 5 |
(color[2] & 0x1f) << 11;
break;
- case XGL_FMT_B5G6R5_UNORM:
+ case VK_FMT_B5G6R5_UNORM:
((uint16_t *) value)[0] = (color[2] & 0x1f) << 0 |
(color[1] & 0x3f) << 5 |
(color[0] & 0x1f) << 11;
break;
- case XGL_FMT_R5G5B5A1_UNORM:
- case XGL_FMT_R5G5B5A1_USCALED:
+ case VK_FMT_R5G5B5A1_UNORM:
+ case VK_FMT_R5G5B5A1_USCALED:
((uint16_t *) value)[0] = (color[0] & 0x1f) << 0 |
(color[1] & 0x1f) << 5 |
(color[2] & 0x1f) << 10 |
(color[3] & 0x1) << 15;
break;
- case XGL_FMT_R8_UNORM:
- case XGL_FMT_R8_SNORM:
- case XGL_FMT_R8_USCALED:
- case XGL_FMT_R8_SSCALED:
- case XGL_FMT_R8_UINT:
- case XGL_FMT_R8_SINT:
- case XGL_FMT_R8_SRGB:
+ case VK_FMT_R8_UNORM:
+ case VK_FMT_R8_SNORM:
+ case VK_FMT_R8_USCALED:
+ case VK_FMT_R8_SSCALED:
+ case VK_FMT_R8_UINT:
+ case VK_FMT_R8_SINT:
+ case VK_FMT_R8_SRGB:
((uint8_t *) value)[0] = (uint8_t) color[0];
break;
- case XGL_FMT_R8G8_UNORM:
- case XGL_FMT_R8G8_SNORM:
- case XGL_FMT_R8G8_USCALED:
- case XGL_FMT_R8G8_SSCALED:
- case XGL_FMT_R8G8_UINT:
- case XGL_FMT_R8G8_SINT:
- case XGL_FMT_R8G8_SRGB:
+ case VK_FMT_R8G8_UNORM:
+ case VK_FMT_R8G8_SNORM:
+ case VK_FMT_R8G8_USCALED:
+ case VK_FMT_R8G8_SSCALED:
+ case VK_FMT_R8G8_UINT:
+ case VK_FMT_R8G8_SINT:
+ case VK_FMT_R8G8_SRGB:
((uint8_t *) value)[0] = (uint8_t) color[0];
((uint8_t *) value)[1] = (uint8_t) color[1];
break;
- case XGL_FMT_R8G8B8A8_UNORM:
- case XGL_FMT_R8G8B8A8_SNORM:
- case XGL_FMT_R8G8B8A8_USCALED:
- case XGL_FMT_R8G8B8A8_SSCALED:
- case XGL_FMT_R8G8B8A8_UINT:
- case XGL_FMT_R8G8B8A8_SINT:
- case XGL_FMT_R8G8B8A8_SRGB:
+ case VK_FMT_R8G8B8A8_UNORM:
+ case VK_FMT_R8G8B8A8_SNORM:
+ case VK_FMT_R8G8B8A8_USCALED:
+ case VK_FMT_R8G8B8A8_SSCALED:
+ case VK_FMT_R8G8B8A8_UINT:
+ case VK_FMT_R8G8B8A8_SINT:
+ case VK_FMT_R8G8B8A8_SRGB:
((uint8_t *) value)[0] = (uint8_t) color[0];
((uint8_t *) value)[1] = (uint8_t) color[1];
((uint8_t *) value)[2] = (uint8_t) color[2];
((uint8_t *) value)[3] = (uint8_t) color[3];
break;
- case XGL_FMT_B8G8R8A8_UNORM:
- case XGL_FMT_B8G8R8A8_SRGB:
+ case VK_FMT_B8G8R8A8_UNORM:
+ case VK_FMT_B8G8R8A8_SRGB:
((uint8_t *) value)[0] = (uint8_t) color[2];
((uint8_t *) value)[1] = (uint8_t) color[1];
((uint8_t *) value)[2] = (uint8_t) color[0];
((uint8_t *) value)[3] = (uint8_t) color[3];
break;
- case XGL_FMT_R11G11B10_UFLOAT:
+ case VK_FMT_R11G11B10_UFLOAT:
((uint32_t *) value)[0] = (color[0] & 0x7ff) << 0 |
(color[1] & 0x7ff) << 11 |
(color[2] & 0x3ff) << 22;
break;
- case XGL_FMT_R10G10B10A2_UNORM:
- case XGL_FMT_R10G10B10A2_SNORM:
- case XGL_FMT_R10G10B10A2_USCALED:
- case XGL_FMT_R10G10B10A2_SSCALED:
- case XGL_FMT_R10G10B10A2_UINT:
- case XGL_FMT_R10G10B10A2_SINT:
+ case VK_FMT_R10G10B10A2_UNORM:
+ case VK_FMT_R10G10B10A2_SNORM:
+ case VK_FMT_R10G10B10A2_USCALED:
+ case VK_FMT_R10G10B10A2_SSCALED:
+ case VK_FMT_R10G10B10A2_UINT:
+ case VK_FMT_R10G10B10A2_SINT:
((uint32_t *) value)[0] = (color[0] & 0x3ff) << 0 |
(color[1] & 0x3ff) << 10 |
(color[2] & 0x3ff) << 20 |
(color[3] & 0x3) << 30;
break;
- case XGL_FMT_R16_UNORM:
- case XGL_FMT_R16_SNORM:
- case XGL_FMT_R16_USCALED:
- case XGL_FMT_R16_SSCALED:
- case XGL_FMT_R16_UINT:
- case XGL_FMT_R16_SINT:
- case XGL_FMT_R16_SFLOAT:
+ case VK_FMT_R16_UNORM:
+ case VK_FMT_R16_SNORM:
+ case VK_FMT_R16_USCALED:
+ case VK_FMT_R16_SSCALED:
+ case VK_FMT_R16_UINT:
+ case VK_FMT_R16_SINT:
+ case VK_FMT_R16_SFLOAT:
((uint16_t *) value)[0] = (uint16_t) color[0];
break;
- case XGL_FMT_R16G16_UNORM:
- case XGL_FMT_R16G16_SNORM:
- case XGL_FMT_R16G16_USCALED:
- case XGL_FMT_R16G16_SSCALED:
- case XGL_FMT_R16G16_UINT:
- case XGL_FMT_R16G16_SINT:
- case XGL_FMT_R16G16_SFLOAT:
+ case VK_FMT_R16G16_UNORM:
+ case VK_FMT_R16G16_SNORM:
+ case VK_FMT_R16G16_USCALED:
+ case VK_FMT_R16G16_SSCALED:
+ case VK_FMT_R16G16_UINT:
+ case VK_FMT_R16G16_SINT:
+ case VK_FMT_R16G16_SFLOAT:
((uint16_t *) value)[0] = (uint16_t) color[0];
((uint16_t *) value)[1] = (uint16_t) color[1];
break;
- case XGL_FMT_R16G16B16A16_UNORM:
- case XGL_FMT_R16G16B16A16_SNORM:
- case XGL_FMT_R16G16B16A16_USCALED:
- case XGL_FMT_R16G16B16A16_SSCALED:
- case XGL_FMT_R16G16B16A16_UINT:
- case XGL_FMT_R16G16B16A16_SINT:
- case XGL_FMT_R16G16B16A16_SFLOAT:
+ case VK_FMT_R16G16B16A16_UNORM:
+ case VK_FMT_R16G16B16A16_SNORM:
+ case VK_FMT_R16G16B16A16_USCALED:
+ case VK_FMT_R16G16B16A16_SSCALED:
+ case VK_FMT_R16G16B16A16_UINT:
+ case VK_FMT_R16G16B16A16_SINT:
+ case VK_FMT_R16G16B16A16_SFLOAT:
((uint16_t *) value)[0] = (uint16_t) color[0];
((uint16_t *) value)[1] = (uint16_t) color[1];
((uint16_t *) value)[2] = (uint16_t) color[2];
((uint16_t *) value)[3] = (uint16_t) color[3];
break;
- case XGL_FMT_R32_UINT:
- case XGL_FMT_R32_SINT:
- case XGL_FMT_R32_SFLOAT:
+ case VK_FMT_R32_UINT:
+ case VK_FMT_R32_SINT:
+ case VK_FMT_R32_SFLOAT:
((uint32_t *) value)[0] = color[0];
break;
- case XGL_FMT_R32G32_UINT:
- case XGL_FMT_R32G32_SINT:
- case XGL_FMT_R32G32_SFLOAT:
+ case VK_FMT_R32G32_UINT:
+ case VK_FMT_R32G32_SINT:
+ case VK_FMT_R32G32_SFLOAT:
((uint32_t *) value)[0] = color[0];
((uint32_t *) value)[1] = color[1];
break;
- case XGL_FMT_R32G32B32_UINT:
- case XGL_FMT_R32G32B32_SINT:
- case XGL_FMT_R32G32B32_SFLOAT:
+ case VK_FMT_R32G32B32_UINT:
+ case VK_FMT_R32G32B32_SINT:
+ case VK_FMT_R32G32B32_SFLOAT:
((uint32_t *) value)[0] = color[0];
((uint32_t *) value)[1] = color[1];
((uint32_t *) value)[2] = color[2];
break;
- case XGL_FMT_R32G32B32A32_UINT:
- case XGL_FMT_R32G32B32A32_SINT:
- case XGL_FMT_R32G32B32A32_SFLOAT:
+ case VK_FMT_R32G32B32A32_UINT:
+ case VK_FMT_R32G32B32A32_SINT:
+ case VK_FMT_R32G32B32A32_SFLOAT:
((uint32_t *) value)[0] = color[0];
((uint32_t *) value)[1] = color[1];
((uint32_t *) value)[2] = color[2];
((uint32_t *) value)[3] = color[3];
break;
- case XGL_FMT_D16_UNORM_S8_UINT:
+ case VK_FMT_D16_UNORM_S8_UINT:
((uint16_t *) value)[0] = (uint16_t) color[0];
((char *) value)[2] = (uint8_t) color[1];
break;
- case XGL_FMT_D32_SFLOAT_S8_UINT:
+ case VK_FMT_D32_SFLOAT_S8_UINT:
((uint32_t *) value)[0] = (uint32_t) color[0];
((char *) value)[4] = (uint8_t) color[1];
break;
- case XGL_FMT_R9G9B9E5_UFLOAT:
+ case VK_FMT_R9G9B9E5_UFLOAT:
((uint32_t *) value)[0] = (color[0] & 0x1ff) << 0 |
(color[1] & 0x1ff) << 9 |
(color[2] & 0x1ff) << 18 |
(color[3] & 0x1f) << 27;
break;
- case XGL_FMT_BC1_RGB_UNORM:
- case XGL_FMT_BC1_RGB_SRGB:
- case XGL_FMT_BC4_UNORM:
- case XGL_FMT_BC4_SNORM:
+ case VK_FMT_BC1_RGB_UNORM:
+ case VK_FMT_BC1_RGB_SRGB:
+ case VK_FMT_BC4_UNORM:
+ case VK_FMT_BC4_SNORM:
memcpy(value, color, 8);
break;
- case XGL_FMT_BC2_UNORM:
- case XGL_FMT_BC2_SRGB:
- case XGL_FMT_BC3_UNORM:
- case XGL_FMT_BC3_SRGB:
- case XGL_FMT_BC5_UNORM:
- case XGL_FMT_BC5_SNORM:
- case XGL_FMT_BC6H_UFLOAT:
- case XGL_FMT_BC6H_SFLOAT:
- case XGL_FMT_BC7_UNORM:
- case XGL_FMT_BC7_SRGB:
+ case VK_FMT_BC2_UNORM:
+ case VK_FMT_BC2_SRGB:
+ case VK_FMT_BC3_UNORM:
+ case VK_FMT_BC3_SRGB:
+ case VK_FMT_BC5_UNORM:
+ case VK_FMT_BC5_SNORM:
+ case VK_FMT_BC6H_UFLOAT:
+ case VK_FMT_BC6H_SFLOAT:
+ case VK_FMT_BC7_UNORM:
+ case VK_FMT_BC7_SRGB:
memcpy(value, color, 16);
break;
- case XGL_FMT_R8G8B8_UNORM:
- case XGL_FMT_R8G8B8_SNORM:
- case XGL_FMT_R8G8B8_USCALED:
- case XGL_FMT_R8G8B8_SSCALED:
- case XGL_FMT_R8G8B8_UINT:
- case XGL_FMT_R8G8B8_SINT:
- case XGL_FMT_R8G8B8_SRGB:
+ case VK_FMT_R8G8B8_UNORM:
+ case VK_FMT_R8G8B8_SNORM:
+ case VK_FMT_R8G8B8_USCALED:
+ case VK_FMT_R8G8B8_SSCALED:
+ case VK_FMT_R8G8B8_UINT:
+ case VK_FMT_R8G8B8_SINT:
+ case VK_FMT_R8G8B8_SRGB:
((uint8_t *) value)[0] = (uint8_t) color[0];
((uint8_t *) value)[1] = (uint8_t) color[1];
((uint8_t *) value)[2] = (uint8_t) color[2];
break;
- case XGL_FMT_R16G16B16_UNORM:
- case XGL_FMT_R16G16B16_SNORM:
- case XGL_FMT_R16G16B16_USCALED:
- case XGL_FMT_R16G16B16_SSCALED:
- case XGL_FMT_R16G16B16_UINT:
- case XGL_FMT_R16G16B16_SINT:
- case XGL_FMT_R16G16B16_SFLOAT:
+ case VK_FMT_R16G16B16_UNORM:
+ case VK_FMT_R16G16B16_SNORM:
+ case VK_FMT_R16G16B16_USCALED:
+ case VK_FMT_R16G16B16_SSCALED:
+ case VK_FMT_R16G16B16_UINT:
+ case VK_FMT_R16G16B16_SINT:
+ case VK_FMT_R16G16B16_SFLOAT:
((uint16_t *) value)[0] = (uint16_t) color[0];
((uint16_t *) value)[1] = (uint16_t) color[1];
((uint16_t *) value)[2] = (uint16_t) color[2];
break;
- case XGL_FMT_B10G10R10A2_UNORM:
- case XGL_FMT_B10G10R10A2_SNORM:
- case XGL_FMT_B10G10R10A2_USCALED:
- case XGL_FMT_B10G10R10A2_SSCALED:
- case XGL_FMT_B10G10R10A2_UINT:
- case XGL_FMT_B10G10R10A2_SINT:
+ case VK_FMT_B10G10R10A2_UNORM:
+ case VK_FMT_B10G10R10A2_SNORM:
+ case VK_FMT_B10G10R10A2_USCALED:
+ case VK_FMT_B10G10R10A2_SSCALED:
+ case VK_FMT_B10G10R10A2_UINT:
+ case VK_FMT_B10G10R10A2_SINT:
((uint32_t *) value)[0] = (color[2] & 0x3ff) << 0 |
(color[1] & 0x3ff) << 10 |
(color[0] & 0x3ff) << 20 |
(color[3] & 0x3) << 30;
break;
- case XGL_FMT_R64_SFLOAT:
+ case VK_FMT_R64_SFLOAT:
/* higher 32 bits always 0 */
((uint64_t *) value)[0] = color[0];
break;
- case XGL_FMT_R64G64_SFLOAT:
+ case VK_FMT_R64G64_SFLOAT:
((uint64_t *) value)[0] = color[0];
((uint64_t *) value)[1] = color[1];
break;
- case XGL_FMT_R64G64B64_SFLOAT:
+ case VK_FMT_R64G64B64_SFLOAT:
((uint64_t *) value)[0] = color[0];
((uint64_t *) value)[1] = color[1];
((uint64_t *) value)[2] = color[2];
break;
- case XGL_FMT_R64G64B64A64_SFLOAT:
+ case VK_FMT_R64G64B64A64_SFLOAT:
((uint64_t *) value)[0] = color[0];
((uint64_t *) value)[1] = color[1];
((uint64_t *) value)[2] = color[2];
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <stdbool.h>
#include "icd.h"
-static inline bool icd_format_is_undef(XGL_FORMAT format)
+static inline bool icd_format_is_undef(VK_FORMAT format)
{
- return (format == XGL_FMT_UNDEFINED);
+ return (format == VK_FMT_UNDEFINED);
}
-bool icd_format_is_ds(XGL_FORMAT format);
+bool icd_format_is_ds(VK_FORMAT format);
-static inline bool icd_format_is_color(XGL_FORMAT format)
+static inline bool icd_format_is_color(VK_FORMAT format)
{
return !(icd_format_is_undef(format) || icd_format_is_ds(format));
}
-bool icd_format_is_norm(XGL_FORMAT format);
+bool icd_format_is_norm(VK_FORMAT format);
-bool icd_format_is_int(XGL_FORMAT format);
+bool icd_format_is_int(VK_FORMAT format);
-bool icd_format_is_float(XGL_FORMAT format);
+bool icd_format_is_float(VK_FORMAT format);
-bool icd_format_is_srgb(XGL_FORMAT format);
+bool icd_format_is_srgb(VK_FORMAT format);
-bool icd_format_is_compressed(XGL_FORMAT format);
+bool icd_format_is_compressed(VK_FORMAT format);
-static inline int icd_format_get_block_width(XGL_FORMAT format)
+static inline int icd_format_get_block_width(VK_FORMAT format)
{
/* all compressed formats use 4x4 blocks */
return (icd_format_is_compressed(format)) ? 4 : 1;
}
-static inline bool icd_blend_mode_is_dual_src(XGL_BLEND mode)
+static inline bool icd_blend_mode_is_dual_src(VK_BLEND mode)
{
- return (mode == XGL_BLEND_SRC1_COLOR) ||
- (mode == XGL_BLEND_SRC1_ALPHA) ||
- (mode == XGL_BLEND_ONE_MINUS_SRC1_COLOR) ||
- (mode == XGL_BLEND_ONE_MINUS_SRC1_ALPHA);
+ return (mode == VK_BLEND_SRC1_COLOR) ||
+ (mode == VK_BLEND_SRC1_ALPHA) ||
+ (mode == VK_BLEND_ONE_MINUS_SRC1_COLOR) ||
+ (mode == VK_BLEND_ONE_MINUS_SRC1_ALPHA);
}
-static inline bool icd_pipeline_cb_att_needs_dual_source_blending(const XGL_PIPELINE_CB_ATTACHMENT_STATE *att)
+static inline bool icd_pipeline_cb_att_needs_dual_source_blending(const VK_PIPELINE_CB_ATTACHMENT_STATE *att)
{
if (icd_blend_mode_is_dual_src(att->srcBlendColor) ||
icd_blend_mode_is_dual_src(att->srcBlendAlpha) ||
return false;
}
-size_t icd_format_get_size(XGL_FORMAT format);
+size_t icd_format_get_size(VK_FORMAT format);
-XGL_IMAGE_FORMAT_CLASS icd_format_get_class(XGL_FORMAT format);
+VK_IMAGE_FORMAT_CLASS icd_format_get_class(VK_FORMAT format);
-unsigned int icd_format_get_channel_count(XGL_FORMAT format);
+unsigned int icd_format_get_channel_count(VK_FORMAT format);
-void icd_format_get_raw_value(XGL_FORMAT format,
+void icd_format_get_raw_value(VK_FORMAT format,
const uint32_t color[4],
void *value);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014-2015 LunarG, Inc.
*
#include <string.h>
#include "icd-instance.h"
-static void * XGLAPI default_alloc(void *user_data, size_t size,
+static void * VKAPI default_alloc(void *user_data, size_t size,
size_t alignment,
- XGL_SYSTEM_ALLOC_TYPE allocType)
+ VK_SYSTEM_ALLOC_TYPE allocType)
{
if (alignment <= 1) {
return malloc(size);
}
}
-static void XGLAPI default_free(void *user_data, void *ptr)
+static void VKAPI default_free(void *user_data, void *ptr)
{
free(ptr);
}
-struct icd_instance *icd_instance_create(const XGL_APPLICATION_INFO *app_info,
- const XGL_ALLOC_CALLBACKS *alloc_cb)
+struct icd_instance *icd_instance_create(const VK_APPLICATION_INFO *app_info,
+ const VK_ALLOC_CALLBACKS *alloc_cb)
{
- static const XGL_ALLOC_CALLBACKS default_alloc_cb = {
+ static const VK_ALLOC_CALLBACKS default_alloc_cb = {
.pfnAlloc = default_alloc,
.pfnFree = default_free,
};
alloc_cb = &default_alloc_cb;
instance = alloc_cb->pfnAlloc(alloc_cb->pUserData, sizeof(*instance), 0,
- XGL_SYSTEM_ALLOC_API_OBJECT);
+ VK_SYSTEM_ALLOC_API_OBJECT);
if (!instance)
return NULL;
name = (app_info->pAppName) ? app_info->pAppName : "unnamed";
len = strlen(name);
instance->name = alloc_cb->pfnAlloc(alloc_cb->pUserData, len + 1, 0,
- XGL_SYSTEM_ALLOC_INTERNAL);
+ VK_SYSTEM_ALLOC_INTERNAL);
if (!instance->name) {
alloc_cb->pfnFree(alloc_cb->pUserData, instance);
return NULL;
icd_instance_free(instance, instance);
}
-XGL_RESULT icd_instance_set_bool(struct icd_instance *instance,
- XGL_DBG_GLOBAL_OPTION option, bool yes)
+VK_RESULT icd_instance_set_bool(struct icd_instance *instance,
+ VK_DBG_GLOBAL_OPTION option, bool yes)
{
- XGL_RESULT res = XGL_SUCCESS;
+ VK_RESULT res = VK_SUCCESS;
switch (option) {
- case XGL_DBG_OPTION_DEBUG_ECHO_ENABLE:
+ case VK_DBG_OPTION_DEBUG_ECHO_ENABLE:
instance->debug_echo_enable = yes;
break;
- case XGL_DBG_OPTION_BREAK_ON_ERROR:
+ case VK_DBG_OPTION_BREAK_ON_ERROR:
instance->break_on_error = yes;
break;
- case XGL_DBG_OPTION_BREAK_ON_WARNING:
+ case VK_DBG_OPTION_BREAK_ON_WARNING:
instance->break_on_warning = yes;
break;
default:
- res = XGL_ERROR_INVALID_VALUE;
+ res = VK_ERROR_INVALID_VALUE;
break;
}
return res;
}
-XGL_RESULT icd_instance_add_logger(struct icd_instance *instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION func,
+VK_RESULT icd_instance_add_logger(struct icd_instance *instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION func,
void *user_data)
{
struct icd_instance_logger *logger;
if (!logger) {
logger = icd_instance_alloc(instance, sizeof(*logger), 0,
- XGL_SYSTEM_ALLOC_DEBUG);
+ VK_SYSTEM_ALLOC_DEBUG);
if (!logger)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
logger->func = func;
logger->next = instance->loggers;
logger->user_data = user_data;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-XGL_RESULT icd_instance_remove_logger(struct icd_instance *instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION func)
+VK_RESULT icd_instance_remove_logger(struct icd_instance *instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION func)
{
struct icd_instance_logger *logger, *prev;
}
if (!logger)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
if (prev)
prev->next = logger->next;
icd_instance_free(instance, logger);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void icd_instance_log(const struct icd_instance *instance,
- XGL_DBG_MSG_TYPE msg_type,
- XGL_VALIDATION_LEVEL validation_level,
- XGL_BASE_OBJECT src_object,
+ VK_DBG_MSG_TYPE msg_type,
+ VK_VALIDATION_LEVEL validation_level,
+ VK_BASE_OBJECT src_object,
size_t location, int32_t msg_code,
const char *msg)
{
}
switch (msg_type) {
- case XGL_DBG_MSG_ERROR:
+ case VK_DBG_MSG_ERROR:
if (instance->break_on_error)
abort();
/* fall through */
- case XGL_DBG_MSG_WARNING:
+ case VK_DBG_MSG_WARNING:
if (instance->break_on_warning)
abort();
break;
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014-2015 LunarG, Inc.
*
#endif
struct icd_instance_logger {
- XGL_DBG_MSG_CALLBACK_FUNCTION func;
+ VK_DBG_MSG_CALLBACK_FUNCTION func;
void *user_data;
struct icd_instance_logger *next;
bool break_on_error;
bool break_on_warning;
- XGL_ALLOC_CALLBACKS alloc_cb;
+ VK_ALLOC_CALLBACKS alloc_cb;
struct icd_instance_logger *loggers;
};
-struct icd_instance *icd_instance_create(const XGL_APPLICATION_INFO *app_info,
- const XGL_ALLOC_CALLBACKS *alloc_cb);
+struct icd_instance *icd_instance_create(const VK_APPLICATION_INFO *app_info,
+ const VK_ALLOC_CALLBACKS *alloc_cb);
void icd_instance_destroy(struct icd_instance *instance);
-XGL_RESULT icd_instance_set_bool(struct icd_instance *instance,
- XGL_DBG_GLOBAL_OPTION option, bool yes);
+VK_RESULT icd_instance_set_bool(struct icd_instance *instance,
+ VK_DBG_GLOBAL_OPTION option, bool yes);
static inline void *icd_instance_alloc(const struct icd_instance *instance,
size_t size, size_t alignment,
- XGL_SYSTEM_ALLOC_TYPE type)
+ VK_SYSTEM_ALLOC_TYPE type)
{
return instance->alloc_cb.pfnAlloc(instance->alloc_cb.pUserData,
size, alignment, type);
instance->alloc_cb.pfnFree(instance->alloc_cb.pUserData, ptr);
}
-XGL_RESULT icd_instance_add_logger(struct icd_instance *instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION func,
+VK_RESULT icd_instance_add_logger(struct icd_instance *instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION func,
void *user_data);
-XGL_RESULT icd_instance_remove_logger(struct icd_instance *instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION func);
+VK_RESULT icd_instance_remove_logger(struct icd_instance *instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION func);
void icd_instance_log(const struct icd_instance *instance,
- XGL_DBG_MSG_TYPE msg_type,
- XGL_VALIDATION_LEVEL validation_level,
- XGL_BASE_OBJECT src_object,
+ VK_DBG_MSG_TYPE msg_type,
+ VK_VALIDATION_LEVEL validation_level,
+ VK_BASE_OBJECT src_object,
size_t location, int32_t msg_code,
const char *msg);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#ifndef ICD_H
#define ICD_H
-#include <xgl.h>
-#include <xglPlatform.h>
-#include <xglDbg.h>
+#include <vulkan.h>
+#include <vkPlatform.h>
+#include <vkDbg.h>
#if defined(PLATFORM_LINUX)
-#include <xglWsiX11Ext.h>
+#include <vkWsiX11Ext.h>
#else
-#include <xglWsiWinExt.h>
+#include <vkWsiWinExt.h>
#endif
-# Create the i965 XGL DRI library
+# Create the i965 VK DRI library
-set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES -Wno-sign-compare")
+set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES -Wno-sign-compare")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-sign-compare")
add_subdirectory(kmd)
list(APPEND sources wsi_null.c)
endif()
-add_library(XGL_i965 SHARED ${sources})
-target_compile_definitions(XGL_i965 PRIVATE ${definitions})
-target_include_directories(XGL_i965 PRIVATE ${include_dirs})
-target_link_libraries(XGL_i965 ${libraries})
+add_library(VK_i965 SHARED ${sources})
+target_compile_definitions(VK_i965 PRIVATE ${definitions})
+target_include_directories(VK_i965 PRIVATE ${include_dirs})
+target_link_libraries(VK_i965 ${libraries})
-# set -Bsymbolic for xglGetProcAddr()
+# set -Bsymbolic for vkGetProcAddr()
-set_target_properties(XGL_i965 PROPERTIES
+set_target_properties(VK_i965 PROPERTIES
COMPILE_FLAGS "-Wmissing-declarations"
LINK_FLAGS "-Wl,-Bsymbolic -Wl,-no-undefined -Wl,--exclude-libs,ALL")
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
intel_buf_destroy(buf);
}
-static XGL_RESULT buf_get_info(struct intel_base *base, int type,
+static VK_RESULT buf_get_info(struct intel_base *base, int type,
size_t *size, void *data)
{
struct intel_buf *buf = intel_buf_from_base(base);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
switch (type) {
- case XGL_INFO_TYPE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_MEMORY_REQUIREMENTS:
{
- XGL_MEMORY_REQUIREMENTS *mem_req = data;
+ VK_MEMORY_REQUIREMENTS *mem_req = data;
- *size = sizeof(XGL_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
* bytes added beyond that to account for the L1 cache line."
*/
mem_req->size = buf->size;
- if (buf->usage & XGL_BUFFER_USAGE_SHADER_ACCESS_READ_BIT)
+ if (buf->usage & VK_BUFFER_USAGE_SHADER_ACCESS_READ_BIT)
mem_req->size = u_align(mem_req->size, 256) + 16;
mem_req->alignment = 4096;
- mem_req->memType = XGL_MEMORY_TYPE_BUFFER;
+ mem_req->memType = VK_MEMORY_TYPE_BUFFER;
}
break;
- case XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
{
- XGL_BUFFER_MEMORY_REQUIREMENTS *buf_req = data;
+ VK_BUFFER_MEMORY_REQUIREMENTS *buf_req = data;
- *size = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
buf_req->usage = buf->usage;
return ret;
}
-XGL_RESULT intel_buf_create(struct intel_dev *dev,
- const XGL_BUFFER_CREATE_INFO *info,
+VK_RESULT intel_buf_create(struct intel_dev *dev,
+ const VK_BUFFER_CREATE_INFO *info,
struct intel_buf **buf_ret)
{
struct intel_buf *buf;
buf = (struct intel_buf *) intel_base_create(&dev->base.handle,
- sizeof(*buf), dev->base.dbg, XGL_DBG_OBJECT_BUFFER, info, 0);
+ sizeof(*buf), dev->base.dbg, VK_DBG_OBJECT_BUFFER, info, 0);
if (!buf)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
buf->size = info->size;
buf->usage = info->usage;
*buf_ret = buf;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_buf_destroy(struct intel_buf *buf)
intel_base_destroy(&buf->obj.base);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateBuffer(
- XGL_DEVICE device,
- const XGL_BUFFER_CREATE_INFO* pCreateInfo,
- XGL_BUFFER* pBuffer)
+ICD_EXPORT VK_RESULT VKAPI vkCreateBuffer(
+ VK_DEVICE device,
+ const VK_BUFFER_CREATE_INFO* pCreateInfo,
+ VK_BUFFER* pBuffer)
{
struct intel_dev *dev = intel_dev(device);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct intel_buf {
struct intel_obj obj;
- XGL_GPU_SIZE size;
- XGL_FLAGS usage;
+ VK_GPU_SIZE size;
+ VK_FLAGS usage;
};
-static inline struct intel_buf *intel_buf(XGL_BUFFER buf)
+static inline struct intel_buf *intel_buf(VK_BUFFER buf)
{
return (struct intel_buf *) buf;
}
return intel_buf_from_base(&obj->base);
}
-XGL_RESULT intel_buf_create(struct intel_dev *dev,
- const XGL_BUFFER_CREATE_INFO *info,
+VK_RESULT intel_buf_create(struct intel_dev *dev,
+ const VK_BUFFER_CREATE_INFO *info,
struct intel_buf **buf_ret);
void intel_buf_destroy(struct intel_buf *buf);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
/**
* Allocate and map the buffer for writing.
*/
-static XGL_RESULT cmd_writer_alloc_and_map(struct intel_cmd *cmd,
+static VK_RESULT cmd_writer_alloc_and_map(struct intel_cmd *cmd,
enum intel_cmd_writer_type which)
{
struct intel_cmd_writer *writer = &cmd->writers[which];
/* reuse the old bo */
cmd_writer_discard(cmd, which);
} else {
- return XGL_ERROR_OUT_OF_GPU_MEMORY;
+ return VK_ERROR_OUT_OF_GPU_MEMORY;
}
writer->used = 0;
writer->ptr = intel_bo_map(writer->bo, true);
if (!writer->ptr)
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
/**
new_bo = alloc_writer_bo(cmd->dev->winsys, which, new_size);
if (!new_bo) {
cmd_writer_discard(cmd, which);
- cmd_fail(cmd, XGL_ERROR_OUT_OF_GPU_MEMORY);
+ cmd_fail(cmd, VK_ERROR_OUT_OF_GPU_MEMORY);
return;
}
if (!new_ptr) {
intel_bo_unref(new_bo);
cmd_writer_discard(cmd, which);
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
return;
}
struct intel_cmd_item *items;
items = intel_alloc(cmd, sizeof(writer->items[0]) * new_alloc,
- 0, XGL_SYSTEM_ALLOC_DEBUG);
+ 0, VK_SYSTEM_ALLOC_DEBUG);
if (!items) {
writer->item_used = 0;
- cmd_fail(cmd, XGL_ERROR_OUT_OF_MEMORY);
+ cmd_fail(cmd, VK_ERROR_OUT_OF_MEMORY);
return;
}
memset(&cmd->bind, 0, sizeof(cmd->bind));
cmd->reloc_used = 0;
- cmd->result = XGL_SUCCESS;
+ cmd->result = VK_SUCCESS;
}
static void cmd_destroy(struct intel_obj *obj)
intel_cmd_destroy(cmd);
}
-XGL_RESULT intel_cmd_create(struct intel_dev *dev,
- const XGL_CMD_BUFFER_CREATE_INFO *info,
+VK_RESULT intel_cmd_create(struct intel_dev *dev,
+ const VK_CMD_BUFFER_CREATE_INFO *info,
struct intel_cmd **cmd_ret)
{
int pipeline_select;
pipeline_select = GEN6_PIPELINE_SELECT_DW0_SELECT_3D;
break;
default:
- return XGL_ERROR_INVALID_VALUE;
+ return VK_ERROR_INVALID_VALUE;
break;
}
cmd = (struct intel_cmd *) intel_base_create(&dev->base.handle,
- sizeof(*cmd), dev->base.dbg, XGL_DBG_OBJECT_CMD_BUFFER, info, 0);
+ sizeof(*cmd), dev->base.dbg, VK_DBG_OBJECT_CMD_BUFFER, info, 0);
if (!cmd)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
cmd->obj.destroy = cmd_destroy;
*/
cmd->reloc_count = dev->gpu->batch_buffer_reloc_count;
cmd->relocs = intel_alloc(cmd, sizeof(cmd->relocs[0]) * cmd->reloc_count,
- 4096, XGL_SYSTEM_ALLOC_INTERNAL);
+ 4096, VK_SYSTEM_ALLOC_INTERNAL);
if (!cmd->relocs) {
intel_cmd_destroy(cmd);
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
}
*cmd_ret = cmd;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_cmd_destroy(struct intel_cmd *cmd)
intel_base_destroy(&cmd->obj.base);
}
-XGL_RESULT intel_cmd_begin(struct intel_cmd *cmd, const XGL_CMD_BUFFER_BEGIN_INFO *info)
+VK_RESULT intel_cmd_begin(struct intel_cmd *cmd, const VK_CMD_BUFFER_BEGIN_INFO *info)
{
- const XGL_CMD_BUFFER_GRAPHICS_BEGIN_INFO *ginfo;
- XGL_RESULT ret;
+ const VK_CMD_BUFFER_GRAPHICS_BEGIN_INFO *ginfo;
+ VK_RESULT ret;
uint32_t i;
- XGL_FLAGS flags = 0;
+ VK_FLAGS flags = 0;
cmd_reset(cmd);
while (info != NULL) {
switch (info->sType) {
- case XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO:
+ case VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO:
flags = info->flags;
break;
- case XGL_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO:
- ginfo = (const XGL_CMD_BUFFER_GRAPHICS_BEGIN_INFO *) info;
+ case VK_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO:
+ ginfo = (const VK_CMD_BUFFER_GRAPHICS_BEGIN_INFO *) info;
cmd_begin_render_pass(cmd, intel_render_pass(ginfo->renderPassContinue.renderPass),
intel_fb(ginfo->renderPassContinue.framebuffer));
break;
default:
- return XGL_ERROR_INVALID_VALUE;
+ return VK_ERROR_INVALID_VALUE;
break;
}
- info = (const XGL_CMD_BUFFER_BEGIN_INFO*) info->pNext;
+ info = (const VK_CMD_BUFFER_BEGIN_INFO*) info->pNext;
}
if (cmd->flags != flags) {
const uint32_t size = cmd->dev->gpu->max_batch_buffer_size / 2;
uint32_t divider = 1;
- if (flags & XGL_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT)
+ if (flags & VK_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT)
divider *= 4;
cmd->writers[INTEL_CMD_WRITER_BATCH].size = size / divider;
for (i = 0; i < INTEL_CMD_WRITER_COUNT; i++) {
ret = cmd_writer_alloc_and_map(cmd, i);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
cmd_reset(cmd);
return ret;
}
cmd_batch_begin(cmd);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-XGL_RESULT intel_cmd_end(struct intel_cmd *cmd)
+VK_RESULT intel_cmd_end(struct intel_cmd *cmd)
{
struct intel_winsys *winsys = cmd->dev->winsys;
uint32_t i;
/* no matching intel_cmd_begin() */
if (!cmd->writers[INTEL_CMD_WRITER_BATCH].ptr)
- return XGL_ERROR_INCOMPLETE_COMMAND_BUFFER;
+ return VK_ERROR_INCOMPLETE_COMMAND_BUFFER;
cmd_batch_end(cmd);
(struct intel_bo *) reloc->target, reloc->target_offset,
reloc->flags, &presumed_offset);
if (err) {
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
break;
}
reloc->flags & ~INTEL_CMD_RELOC_TARGET_IS_WRITER,
&presumed_offset);
if (err) {
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
break;
}
for (i = 0; i < INTEL_CMD_WRITER_COUNT; i++)
cmd_writer_unmap(cmd, i);
- if (cmd->result != XGL_SUCCESS)
+ if (cmd->result != VK_SUCCESS)
return cmd->result;
if (intel_winsys_can_submit_bo(winsys,
&cmd->writers[INTEL_CMD_WRITER_BATCH].bo, 1))
- return XGL_SUCCESS;
+ return VK_SUCCESS;
else
- return XGL_ERROR_TOO_MANY_MEMORY_REFERENCES;
+ return VK_ERROR_TOO_MANY_MEMORY_REFERENCES;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateCommandBuffer(
- XGL_DEVICE device,
- const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo,
- XGL_CMD_BUFFER* pCmdBuffer)
+ICD_EXPORT VK_RESULT VKAPI vkCreateCommandBuffer(
+ VK_DEVICE device,
+ const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo,
+ VK_CMD_BUFFER* pCmdBuffer)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_cmd **) pCmdBuffer);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBeginCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_CMD_BUFFER_BEGIN_INFO *info)
+ICD_EXPORT VK_RESULT VKAPI vkBeginCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_CMD_BUFFER_BEGIN_INFO *info)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
return intel_cmd_begin(cmd, info);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglEndCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer)
+ICD_EXPORT VK_RESULT VKAPI vkEndCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
return intel_cmd_end(cmd);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglResetCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer)
+ICD_EXPORT VK_RESULT VKAPI vkResetCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
cmd_reset(cmd);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT void XGLAPI xglCmdInitAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+ICD_EXPORT void VKAPI vkCmdInitAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
const uint32_t* pData)
{
}
-ICD_EXPORT void XGLAPI xglCmdLoadAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+ICD_EXPORT void VKAPI vkCmdLoadAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
- XGL_BUFFER srcBuffer,
- XGL_GPU_SIZE srcOffset)
+ VK_BUFFER srcBuffer,
+ VK_GPU_SIZE srcOffset)
{
}
-ICD_EXPORT void XGLAPI xglCmdSaveAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+ICD_EXPORT void VKAPI vkCmdSaveAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset)
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset)
{
}
-ICD_EXPORT void XGLAPI xglCmdDbgMarkerBegin(
- XGL_CMD_BUFFER cmdBuffer,
+ICD_EXPORT void VKAPI vkCmdDbgMarkerBegin(
+ VK_CMD_BUFFER cmdBuffer,
const char* pMarker)
{
}
-ICD_EXPORT void XGLAPI xglCmdDbgMarkerEnd(
- XGL_CMD_BUFFER cmdBuffer)
+ICD_EXPORT void VKAPI vkCmdDbgMarkerEnd(
+ VK_CMD_BUFFER cmdBuffer)
{
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct {
const struct intel_buf *buf[INTEL_MAX_VERTEX_BINDING_COUNT];
- XGL_GPU_SIZE offset[INTEL_MAX_VERTEX_BINDING_COUNT];
+ VK_GPU_SIZE offset[INTEL_MAX_VERTEX_BINDING_COUNT];
} vertex;
struct {
const struct intel_buf *buf;
- XGL_GPU_SIZE offset;
- XGL_INDEX_TYPE type;
+ VK_GPU_SIZE offset;
+ VK_INDEX_TYPE type;
} index;
struct intel_cmd_reloc *relocs;
uint32_t reloc_count;
- XGL_FLAGS flags;
+ VK_FLAGS flags;
struct intel_cmd_writer writers[INTEL_CMD_WRITER_COUNT];
uint32_t reloc_used;
- XGL_RESULT result;
+ VK_RESULT result;
struct intel_cmd_bind bind;
};
-static inline struct intel_cmd *intel_cmd(XGL_CMD_BUFFER cmd)
+static inline struct intel_cmd *intel_cmd(VK_CMD_BUFFER cmd)
{
return (struct intel_cmd *) cmd;
}
return (struct intel_cmd *) obj;
}
-XGL_RESULT intel_cmd_create(struct intel_dev *dev,
- const XGL_CMD_BUFFER_CREATE_INFO *info,
+VK_RESULT intel_cmd_create(struct intel_dev *dev,
+ const VK_CMD_BUFFER_CREATE_INFO *info,
struct intel_cmd **cmd_ret);
void intel_cmd_destroy(struct intel_cmd *cmd);
-XGL_RESULT intel_cmd_begin(struct intel_cmd *cmd, const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo);
-XGL_RESULT intel_cmd_end(struct intel_cmd *cmd);
+VK_RESULT intel_cmd_begin(struct intel_cmd *cmd, const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo);
+VK_RESULT intel_cmd_end(struct intel_cmd *cmd);
void intel_cmd_decode(struct intel_cmd *cmd, bool decode_inst_writer);
static inline struct intel_bo *intel_cmd_get_batch(const struct intel_cmd *cmd,
- XGL_GPU_SIZE *used)
+ VK_GPU_SIZE *used)
{
const struct intel_cmd_writer *writer =
&cmd->writers[INTEL_CMD_WRITER_BATCH];
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
};
static uint32_t img_get_layout_ops(const struct intel_img *img,
- XGL_IMAGE_LAYOUT layout)
+ VK_IMAGE_LAYOUT layout)
{
uint32_t ops;
switch (layout) {
- case XGL_IMAGE_LAYOUT_GENERAL:
+ case VK_IMAGE_LAYOUT_GENERAL:
ops = READ_OP | WRITE_OP;
break;
- case XGL_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL:
+ case VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL:
ops = READ_OP | WRITE_OP;
break;
- case XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL:
+ case VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL:
ops = READ_OP | WRITE_OP | HIZ_OP;
break;
- case XGL_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL:
+ case VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL:
ops = READ_OP | HIZ_OP;
break;
- case XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL:
+ case VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL:
ops = READ_OP;
break;
- case XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL:
+ case VK_IMAGE_LAYOUT_CLEAR_OPTIMAL:
ops = WRITE_OP | HIZ_OP;
break;
- case XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL:
+ case VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL:
ops = READ_OP;
break;
- case XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL:
+ case VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL:
ops = WRITE_OP;
break;
- case XGL_IMAGE_LAYOUT_UNDEFINED:
+ case VK_IMAGE_LAYOUT_UNDEFINED:
default:
ops = 0;
break;
}
static uint32_t img_get_layout_caches(const struct intel_img *img,
- XGL_IMAGE_LAYOUT layout)
+ VK_IMAGE_LAYOUT layout)
{
uint32_t caches;
switch (layout) {
- case XGL_IMAGE_LAYOUT_GENERAL:
+ case VK_IMAGE_LAYOUT_GENERAL:
// General layout when image can be used for any kind of access
caches = MEM_CACHE | DATA_READ_CACHE | DATA_WRITE_CACHE | RENDER_CACHE | SAMPLER_CACHE;
break;
- case XGL_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL:
+ case VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL:
// Optimal layout when image is only used for color attachment read/write
caches = DATA_WRITE_CACHE | RENDER_CACHE;
break;
- case XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL:
+ case VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL:
// Optimal layout when image is only used for depth/stencil attachment read/write
caches = DATA_WRITE_CACHE | RENDER_CACHE;
break;
- case XGL_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL:
+ case VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL:
// Optimal layout when image is used for read only depth/stencil attachment and shader access
caches = RENDER_CACHE;
break;
- case XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL:
+ case VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL:
// Optimal layout when image is used for read only shader access
caches = DATA_READ_CACHE | SAMPLER_CACHE;
break;
- case XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL:
+ case VK_IMAGE_LAYOUT_CLEAR_OPTIMAL:
// Optimal layout when image is used only for clear operations
caches = RENDER_CACHE;
break;
- case XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL:
+ case VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL:
// Optimal layout when image is used only as source of transfer operations
caches = MEM_CACHE | DATA_READ_CACHE | RENDER_CACHE | SAMPLER_CACHE;
break;
- case XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL:
+ case VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL:
// Optimal layout when image is used only as destination of transfer operations
caches = MEM_CACHE | DATA_WRITE_CACHE | RENDER_CACHE;
break;
static void cmd_resolve_depth(struct intel_cmd *cmd,
struct intel_img *img,
- XGL_IMAGE_LAYOUT old_layout,
- XGL_IMAGE_LAYOUT new_layout,
- const XGL_IMAGE_SUBRESOURCE_RANGE *range)
+ VK_IMAGE_LAYOUT old_layout,
+ VK_IMAGE_LAYOUT new_layout,
+ const VK_IMAGE_SUBRESOURCE_RANGE *range)
{
const uint32_t old_ops = img_get_layout_ops(img, old_layout);
const uint32_t new_ops = img_get_layout_ops(img, new_layout);
const void** memory_barriers)
{
uint32_t i;
- XGL_FLAGS input_mask = 0;
- XGL_FLAGS output_mask = 0;
+ VK_FLAGS input_mask = 0;
+ VK_FLAGS output_mask = 0;
for (i = 0; i < memory_barrier_count; i++) {
const union {
- XGL_STRUCTURE_TYPE type;
+ VK_STRUCTURE_TYPE type;
- XGL_MEMORY_BARRIER mem;
- XGL_BUFFER_MEMORY_BARRIER buf;
- XGL_IMAGE_MEMORY_BARRIER img;
+ VK_MEMORY_BARRIER mem;
+ VK_BUFFER_MEMORY_BARRIER buf;
+ VK_IMAGE_MEMORY_BARRIER img;
} *u = memory_barriers[i];
switch(u->type)
{
- case XGL_STRUCTURE_TYPE_MEMORY_BARRIER:
+ case VK_STRUCTURE_TYPE_MEMORY_BARRIER:
output_mask |= u->mem.outputMask;
input_mask |= u->mem.inputMask;
break;
- case XGL_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER:
+ case VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER:
output_mask |= u->buf.outputMask;
input_mask |= u->buf.inputMask;
break;
- case XGL_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER:
+ case VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER:
output_mask |= u->img.outputMask;
input_mask |= u->img.inputMask;
{
}
}
- if (output_mask & XGL_MEMORY_OUTPUT_SHADER_WRITE_BIT) {
+ if (output_mask & VK_MEMORY_OUTPUT_SHADER_WRITE_BIT) {
flush_flags |= GEN7_PIPE_CONTROL_DC_FLUSH;
}
- if (output_mask & XGL_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT) {
+ if (output_mask & VK_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT) {
flush_flags |= GEN6_PIPE_CONTROL_RENDER_CACHE_FLUSH;
}
- if (output_mask & XGL_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT) {
+ if (output_mask & VK_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT) {
flush_flags |= GEN6_PIPE_CONTROL_DEPTH_CACHE_FLUSH;
}
- /* CPU write is cache coherent, so XGL_MEMORY_OUTPUT_CPU_WRITE_BIT needs no flush. */
- /* Meta handles flushes, so XGL_MEMORY_OUTPUT_COPY_BIT needs no flush. */
+ /* CPU write is cache coherent, so VK_MEMORY_OUTPUT_CPU_WRITE_BIT needs no flush. */
+ /* Meta handles flushes, so VK_MEMORY_OUTPUT_COPY_BIT needs no flush. */
- if (input_mask & (XGL_MEMORY_INPUT_SHADER_READ_BIT | XGL_MEMORY_INPUT_UNIFORM_READ_BIT)) {
+ if (input_mask & (VK_MEMORY_INPUT_SHADER_READ_BIT | VK_MEMORY_INPUT_UNIFORM_READ_BIT)) {
flush_flags |= GEN6_PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE;
}
- if (input_mask & XGL_MEMORY_INPUT_UNIFORM_READ_BIT) {
+ if (input_mask & VK_MEMORY_INPUT_UNIFORM_READ_BIT) {
flush_flags |= GEN6_PIPE_CONTROL_CONSTANT_CACHE_INVALIDATE;
}
- if (input_mask & XGL_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT) {
+ if (input_mask & VK_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT) {
flush_flags |= GEN6_PIPE_CONTROL_VF_CACHE_INVALIDATE;
}
/* These bits have no corresponding cache invalidate operation.
- * XGL_MEMORY_INPUT_CPU_READ_BIT
- * XGL_MEMORY_INPUT_INDIRECT_COMMAND_BIT
- * XGL_MEMORY_INPUT_INDEX_FETCH_BIT
- * XGL_MEMORY_INPUT_COLOR_ATTACHMENT_BIT
- * XGL_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT
- * XGL_MEMORY_INPUT_COPY_BIT
+ * VK_MEMORY_INPUT_CPU_READ_BIT
+ * VK_MEMORY_INPUT_INDIRECT_COMMAND_BIT
+ * VK_MEMORY_INPUT_INDEX_FETCH_BIT
+ * VK_MEMORY_INPUT_COLOR_ATTACHMENT_BIT
+ * VK_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT
+ * VK_MEMORY_INPUT_COPY_BIT
*/
cmd_batch_flush(cmd, flush_flags);
}
-ICD_EXPORT void XGLAPI xglCmdWaitEvents(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_EVENT_WAIT_INFO* pWaitInfo)
+ICD_EXPORT void VKAPI vkCmdWaitEvents(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_EVENT_WAIT_INFO* pWaitInfo)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
- /* This hardware will always wait at XGL_WAIT_EVENT_TOP_OF_PIPE.
- * Passing a pWaitInfo->waitEvent of XGL_WAIT_EVENT_BEFORE_FRAGMENT_PROCESSING
+ /* This hardware will always wait at VK_WAIT_EVENT_TOP_OF_PIPE.
+ * Passing a pWaitInfo->waitEvent of VK_WAIT_EVENT_BEFORE_FRAGMENT_PROCESSING
* does not change that.
*/
/* Because the command buffer is serialized, reaching
* a pipelined wait is always after completion of prior events.
* pWaitInfo->pEvents need not be examined.
- * xglCmdWaitEvents is equivalent to memory barrier part of xglCmdPipelineBarrier.
+ * vkCmdWaitEvents is equivalent to memory barrier part of vkCmdPipelineBarrier.
* cmd_memory_barriers will wait for GEN6_PIPE_CONTROL_CS_STALL and perform
* appropriate cache control.
*/
pWaitInfo->memBarrierCount, pWaitInfo->ppMemBarriers);
}
-ICD_EXPORT void XGLAPI xglCmdPipelineBarrier(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_PIPELINE_BARRIER* pBarrier)
+ICD_EXPORT void VKAPI vkCmdPipelineBarrier(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_PIPELINE_BARRIER* pBarrier)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
uint32_t pipe_control_flags = 0;
uint32_t i;
- /* This hardware will always wait at XGL_WAIT_EVENT_TOP_OF_PIPE.
- * Passing a pBarrier->waitEvent of XGL_WAIT_EVENT_BEFORE_FRAGMENT_PROCESSING
+ /* This hardware will always wait at VK_WAIT_EVENT_TOP_OF_PIPE.
+ * Passing a pBarrier->waitEvent of VK_WAIT_EVENT_BEFORE_FRAGMENT_PROCESSING
* does not change that.
*/
/* Cache control is done with PIPE_CONTROL flags.
- * With no GEN6_PIPE_CONTROL_CS_STALL flag set, it behaves as XGL_PIPE_EVENT_TOP_OF_PIPE.
- * All other pEvents values will behave as XGL_PIPE_EVENT_GPU_COMMANDS_COMPLETE.
+ * With no GEN6_PIPE_CONTROL_CS_STALL flag set, it behaves as VK_PIPE_EVENT_TOP_OF_PIPE.
+ * All other pEvents values will behave as VK_PIPE_EVENT_GPU_COMMANDS_COMPLETE.
*/
for (i = 0; i < pBarrier->eventCount; i++) {
switch(pBarrier->pEvents[i])
{
- case XGL_PIPE_EVENT_TOP_OF_PIPE:
+ case VK_PIPE_EVENT_TOP_OF_PIPE:
break;
- case XGL_PIPE_EVENT_VERTEX_PROCESSING_COMPLETE:
- case XGL_PIPE_EVENT_LOCAL_FRAGMENT_PROCESSING_COMPLETE:
- case XGL_PIPE_EVENT_FRAGMENT_PROCESSING_COMPLETE:
- case XGL_PIPE_EVENT_GRAPHICS_PIPELINE_COMPLETE:
- case XGL_PIPE_EVENT_COMPUTE_PIPELINE_COMPLETE:
- case XGL_PIPE_EVENT_TRANSFER_COMPLETE:
- case XGL_PIPE_EVENT_GPU_COMMANDS_COMPLETE:
+ case VK_PIPE_EVENT_VERTEX_PROCESSING_COMPLETE:
+ case VK_PIPE_EVENT_LOCAL_FRAGMENT_PROCESSING_COMPLETE:
+ case VK_PIPE_EVENT_FRAGMENT_PROCESSING_COMPLETE:
+ case VK_PIPE_EVENT_GRAPHICS_PIPELINE_COMPLETE:
+ case VK_PIPE_EVENT_COMPUTE_PIPELINE_COMPLETE:
+ case VK_PIPE_EVENT_TRANSFER_COMPLETE:
+ case VK_PIPE_EVENT_GPU_COMMANDS_COMPLETE:
pipe_control_flags |= GEN6_PIPE_CONTROL_CS_STALL;
break;
default:
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
return;
break;
}
{
int i;
- assert(cmd->result == XGL_SUCCESS);
+ assert(cmd->result == VK_SUCCESS);
for (i = 0; i < INTEL_CMD_WRITER_COUNT; i++)
cmd_writer_decode(cmd, i, decode_inst_writer);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include "state.h"
#include "cmd_priv.h"
-static XGL_RESULT cmd_meta_create_buf_view(struct intel_cmd *cmd,
- XGL_BUFFER buf,
- XGL_GPU_SIZE range,
- XGL_FORMAT format,
+static VK_RESULT cmd_meta_create_buf_view(struct intel_cmd *cmd,
+ VK_BUFFER buf,
+ VK_GPU_SIZE range,
+ VK_FORMAT format,
struct intel_buf_view **view)
{
- XGL_BUFFER_VIEW_CREATE_INFO info;
- XGL_GPU_SIZE stride;
+ VK_BUFFER_VIEW_CREATE_INFO info;
+ VK_GPU_SIZE stride;
memset(&info, 0, sizeof(info));
- info.sType = XGL_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO;
+ info.sType = VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO;
info.buffer = buf;
- info.viewType = XGL_BUFFER_VIEW_TYPED;
+ info.viewType = VK_BUFFER_VIEW_TYPED;
info.format = format;
info.range = range;
static void cmd_meta_set_src_for_buf(struct intel_cmd *cmd,
const struct intel_buf *buf,
- XGL_FORMAT format,
+ VK_FORMAT format,
struct intel_cmd_meta *meta)
{
struct intel_buf_view *view;
- XGL_RESULT res;
+ VK_RESULT res;
- res = cmd_meta_create_buf_view(cmd, (XGL_BUFFER) buf,
+ res = cmd_meta_create_buf_view(cmd, (VK_BUFFER) buf,
buf->size, format, &view);
- if (res != XGL_SUCCESS) {
+ if (res != VK_SUCCESS) {
cmd_fail(cmd, res);
return;
}
static void cmd_meta_set_dst_for_buf(struct intel_cmd *cmd,
const struct intel_buf *buf,
- XGL_FORMAT format,
+ VK_FORMAT format,
struct intel_cmd_meta *meta)
{
struct intel_buf_view *view;
- XGL_RESULT res;
+ VK_RESULT res;
- res = cmd_meta_create_buf_view(cmd, (XGL_BUFFER) buf,
+ res = cmd_meta_create_buf_view(cmd, (VK_BUFFER) buf,
buf->size, format, &view);
- if (res != XGL_SUCCESS) {
+ if (res != VK_SUCCESS) {
cmd_fail(cmd, res);
return;
}
static void cmd_meta_set_src_for_img(struct intel_cmd *cmd,
const struct intel_img *img,
- XGL_FORMAT format,
- XGL_IMAGE_ASPECT aspect,
+ VK_FORMAT format,
+ VK_IMAGE_ASPECT aspect,
struct intel_cmd_meta *meta)
{
- XGL_IMAGE_VIEW_CREATE_INFO info;
+ VK_IMAGE_VIEW_CREATE_INFO info;
struct intel_img_view *view;
- XGL_RESULT ret;
+ VK_RESULT ret;
memset(&info, 0, sizeof(info));
- info.sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
- info.image = (XGL_IMAGE) img;
+ info.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
+ info.image = (VK_IMAGE) img;
switch (img->type) {
- case XGL_IMAGE_1D:
- info.viewType = XGL_IMAGE_VIEW_1D;
+ case VK_IMAGE_1D:
+ info.viewType = VK_IMAGE_VIEW_1D;
break;
- case XGL_IMAGE_2D:
- info.viewType = XGL_IMAGE_VIEW_2D;
+ case VK_IMAGE_2D:
+ info.viewType = VK_IMAGE_VIEW_2D;
break;
- case XGL_IMAGE_3D:
- info.viewType = XGL_IMAGE_VIEW_3D;
+ case VK_IMAGE_3D:
+ info.viewType = VK_IMAGE_VIEW_3D;
break;
default:
break;
}
info.format = format;
- info.channels.r = XGL_CHANNEL_SWIZZLE_R;
- info.channels.g = XGL_CHANNEL_SWIZZLE_G;
- info.channels.b = XGL_CHANNEL_SWIZZLE_B;
- info.channels.a = XGL_CHANNEL_SWIZZLE_A;
+ info.channels.r = VK_CHANNEL_SWIZZLE_R;
+ info.channels.g = VK_CHANNEL_SWIZZLE_G;
+ info.channels.b = VK_CHANNEL_SWIZZLE_B;
+ info.channels.a = VK_CHANNEL_SWIZZLE_A;
info.subresourceRange.aspect = aspect;
info.subresourceRange.baseMipLevel = 0;
- info.subresourceRange.mipLevels = XGL_LAST_MIP_OR_SLICE;
+ info.subresourceRange.mipLevels = VK_LAST_MIP_OR_SLICE;
info.subresourceRange.baseArraySlice = 0;
- info.subresourceRange.arraySize = XGL_LAST_MIP_OR_SLICE;
+ info.subresourceRange.arraySize = VK_LAST_MIP_OR_SLICE;
ret = intel_img_view_create(cmd->dev, &info, &view);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
cmd_fail(cmd, ret);
return;
}
static void cmd_meta_set_dst_for_img(struct intel_cmd *cmd,
const struct intel_img *img,
- XGL_FORMAT format,
+ VK_FORMAT format,
uint32_t lod, uint32_t layer,
struct intel_cmd_meta *meta)
{
- XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO info;
+ VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO info;
struct intel_rt_view *rt;
- XGL_RESULT ret;
+ VK_RESULT ret;
memset(&info, 0, sizeof(info));
- info.sType = XGL_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO;
- info.image = (XGL_IMAGE) img;
+ info.sType = VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO;
+ info.image = (VK_IMAGE) img;
info.format = format;
info.mipLevel = lod;
info.baseArraySlice = layer;
info.arraySize = 1;
ret = intel_rt_view_create(cmd->dev, &info, &rt);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
cmd_fail(cmd, ret);
return;
}
static void cmd_meta_set_src_for_writer(struct intel_cmd *cmd,
enum intel_cmd_writer_type writer,
- XGL_GPU_SIZE size,
- XGL_FORMAT format,
+ VK_GPU_SIZE size,
+ VK_FORMAT format,
struct intel_cmd_meta *meta)
{
struct intel_buf_view *view;
- XGL_RESULT res;
+ VK_RESULT res;
- res = cmd_meta_create_buf_view(cmd, (XGL_BUFFER) XGL_NULL_HANDLE,
+ res = cmd_meta_create_buf_view(cmd, (VK_BUFFER) VK_NULL_HANDLE,
size, format, &view);
- if (res != XGL_SUCCESS) {
+ if (res != VK_SUCCESS) {
cmd_fail(cmd, res);
return;
}
uint32_t lod, uint32_t layer,
struct intel_cmd_meta *meta)
{
- XGL_DEPTH_STENCIL_VIEW_CREATE_INFO info;
+ VK_DEPTH_STENCIL_VIEW_CREATE_INFO info;
struct intel_ds_view *ds;
- XGL_RESULT ret;
+ VK_RESULT ret;
memset(&info, 0, sizeof(info));
- info.sType = XGL_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO;
- info.image = (XGL_IMAGE) img;
+ info.sType = VK_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO;
+ info.image = (VK_IMAGE) img;
info.mipLevel = lod;
info.baseArraySlice = layer;
info.arraySize = 1;
ret = intel_ds_view_create(cmd->dev, &info, &ds);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
cmd_fail(cmd, ret);
return;
}
}
static void cmd_meta_set_ds_state(struct intel_cmd *cmd,
- XGL_IMAGE_ASPECT aspect,
+ VK_IMAGE_ASPECT aspect,
uint32_t stencil_ref,
struct intel_cmd_meta *meta)
{
enum intel_dev_meta_shader shader_id;
switch (img->type) {
- case XGL_IMAGE_1D:
+ case VK_IMAGE_1D:
shader_id = (copy_array) ?
INTEL_DEV_META_FS_COPY_1D_ARRAY : INTEL_DEV_META_FS_COPY_1D;
break;
- case XGL_IMAGE_2D:
+ case VK_IMAGE_2D:
shader_id = (img->samples > 1) ? INTEL_DEV_META_FS_COPY_2D_MS :
(copy_array) ? INTEL_DEV_META_FS_COPY_2D_ARRAY :
INTEL_DEV_META_FS_COPY_2D;
break;
- case XGL_IMAGE_3D:
+ case VK_IMAGE_3D:
default:
shader_id = INTEL_DEV_META_FS_COPY_2D_ARRAY;
break;
}
static bool cmd_meta_mem_dword_aligned(const struct intel_cmd *cmd,
- XGL_GPU_SIZE src_offset,
- XGL_GPU_SIZE dst_offset,
- XGL_GPU_SIZE size)
+ VK_GPU_SIZE src_offset,
+ VK_GPU_SIZE dst_offset,
+ VK_GPU_SIZE size)
{
return !((src_offset | dst_offset | size) & 0x3);
}
-static XGL_FORMAT cmd_meta_img_raw_format(const struct intel_cmd *cmd,
- XGL_FORMAT format)
+static VK_FORMAT cmd_meta_img_raw_format(const struct intel_cmd *cmd,
+ VK_FORMAT format)
{
switch (icd_format_get_size(format)) {
case 1:
- format = XGL_FMT_R8_UINT;
+ format = VK_FMT_R8_UINT;
break;
case 2:
- format = XGL_FMT_R16_UINT;
+ format = VK_FMT_R16_UINT;
break;
case 4:
- format = XGL_FMT_R32_UINT;
+ format = VK_FMT_R32_UINT;
break;
case 8:
- format = XGL_FMT_R32G32_UINT;
+ format = VK_FMT_R32G32_UINT;
break;
case 16:
- format = XGL_FMT_R32G32B32A32_UINT;
+ format = VK_FMT_R32G32B32A32_UINT;
break;
default:
assert(!"unsupported image format for raw blit op");
- format = XGL_FMT_UNDEFINED;
+ format = VK_FMT_UNDEFINED;
break;
}
return format;
}
-ICD_EXPORT void XGLAPI xglCmdCopyBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER srcBuffer,
- XGL_BUFFER destBuffer,
+ICD_EXPORT void VKAPI vkCmdCopyBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER srcBuffer,
+ VK_BUFFER destBuffer,
uint32_t regionCount,
- const XGL_BUFFER_COPY* pRegions)
+ const VK_BUFFER_COPY* pRegions)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_buf *src = intel_buf(srcBuffer);
struct intel_buf *dst = intel_buf(destBuffer);
struct intel_cmd_meta meta;
- XGL_FORMAT format;
+ VK_FORMAT format;
uint32_t i;
memset(&meta, 0, sizeof(meta));
meta.height = 1;
meta.samples = 1;
- format = XGL_FMT_UNDEFINED;
+ format = VK_FMT_UNDEFINED;
for (i = 0; i < regionCount; i++) {
- const XGL_BUFFER_COPY *region = &pRegions[i];
- XGL_FORMAT fmt;
+ const VK_BUFFER_COPY *region = &pRegions[i];
+ VK_FORMAT fmt;
meta.src.x = region->srcOffset;
meta.dst.x = region->destOffset;
* INTEL_DEV_META_VS_COPY_MEM is untyped but expects the stride to
* be 16
*/
- fmt = XGL_FMT_R32G32B32A32_UINT;
+ fmt = VK_FMT_R32G32B32A32_UINT;
} else {
if (cmd_gen(cmd) == INTEL_GEN(6)) {
- intel_dev_log(cmd->dev, XGL_DBG_MSG_ERROR,
- XGL_VALIDATION_LEVEL_0, XGL_NULL_HANDLE, 0, 0,
- "unaligned xglCmdCopyBuffer unsupported");
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ intel_dev_log(cmd->dev, VK_DBG_MSG_ERROR,
+ VK_VALIDATION_LEVEL_0, VK_NULL_HANDLE, 0, 0,
+ "unaligned vkCmdCopyBuffer unsupported");
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
continue;
}
* INTEL_DEV_META_VS_COPY_MEM_UNALIGNED is untyped but expects the
* stride to be 4
*/
- fmt = XGL_FMT_R8G8B8A8_UINT;
+ fmt = VK_FMT_R8G8B8A8_UINT;
}
if (format != fmt) {
}
}
-ICD_EXPORT void XGLAPI xglCmdCopyImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ICD_EXPORT void VKAPI vkCmdCopyImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_IMAGE_COPY* pRegions)
+ const VK_IMAGE_COPY* pRegions)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_img *src = intel_img(srcImage);
struct intel_img *dst = intel_img(destImage);
struct intel_cmd_meta meta;
- XGL_FORMAT raw_format;
+ VK_FORMAT raw_format;
bool raw_copy = false;
uint32_t i;
if (src->type != dst->type) {
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
return;
}
raw_format = cmd_meta_img_raw_format(cmd, src->layout.format);
} else if (icd_format_is_compressed(src->layout.format) ||
icd_format_is_compressed(dst->layout.format)) {
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
return;
}
cmd_meta_set_src_for_img(cmd, src,
(raw_copy) ? raw_format : src->layout.format,
- XGL_IMAGE_ASPECT_COLOR, &meta);
+ VK_IMAGE_ASPECT_COLOR, &meta);
meta.samples = dst->samples;
for (i = 0; i < regionCount; i++) {
- const XGL_IMAGE_COPY *region = &pRegions[i];
+ const VK_IMAGE_COPY *region = &pRegions[i];
uint32_t j;
meta.shader_id = get_shader_id(cmd->dev, src,
}
}
-ICD_EXPORT void XGLAPI xglCmdBlitImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ICD_EXPORT void VKAPI vkCmdBlitImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_IMAGE_BLIT* pRegions)
+ const VK_IMAGE_BLIT* pRegions)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
/*
* TODO: Implement actual blit function.
*/
- cmd_fail(cmd, XGL_ERROR_UNAVAILABLE);
+ cmd_fail(cmd, VK_ERROR_UNAVAILABLE);
}
-ICD_EXPORT void XGLAPI xglCmdCopyBufferToImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER srcBuffer,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ICD_EXPORT void VKAPI vkCmdCopyBufferToImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER srcBuffer,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_BUFFER_IMAGE_COPY* pRegions)
+ const VK_BUFFER_IMAGE_COPY* pRegions)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_buf *buf = intel_buf(srcBuffer);
struct intel_img *img = intel_img(destImage);
struct intel_cmd_meta meta;
- XGL_FORMAT format;
+ VK_FORMAT format;
uint32_t block_width, i;
memset(&meta, 0, sizeof(meta));
cmd_meta_set_src_for_buf(cmd, buf, format, &meta);
for (i = 0; i < regionCount; i++) {
- const XGL_BUFFER_IMAGE_COPY *region = &pRegions[i];
+ const VK_BUFFER_IMAGE_COPY *region = &pRegions[i];
uint32_t j;
meta.src.x = region->bufferOffset / icd_format_get_size(format);
}
}
-ICD_EXPORT void XGLAPI xglCmdCopyImageToBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_BUFFER destBuffer,
+ICD_EXPORT void VKAPI vkCmdCopyImageToBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_BUFFER destBuffer,
uint32_t regionCount,
- const XGL_BUFFER_IMAGE_COPY* pRegions)
+ const VK_BUFFER_IMAGE_COPY* pRegions)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_img *img = intel_img(srcImage);
struct intel_buf *buf = intel_buf(destBuffer);
struct intel_cmd_meta meta;
- XGL_FORMAT img_format, buf_format;
+ VK_FORMAT img_format, buf_format;
uint32_t block_width, i;
memset(&meta, 0, sizeof(meta));
/* buf_format is ignored by hw, but we derive stride from it */
switch (img_format) {
- case XGL_FMT_R8_UINT:
+ case VK_FMT_R8_UINT:
meta.shader_id = INTEL_DEV_META_VS_COPY_R8_TO_MEM;
- buf_format = XGL_FMT_R8G8B8A8_UINT;
+ buf_format = VK_FMT_R8G8B8A8_UINT;
break;
- case XGL_FMT_R16_UINT:
+ case VK_FMT_R16_UINT:
meta.shader_id = INTEL_DEV_META_VS_COPY_R16_TO_MEM;
- buf_format = XGL_FMT_R8G8B8A8_UINT;
+ buf_format = VK_FMT_R8G8B8A8_UINT;
break;
- case XGL_FMT_R32_UINT:
+ case VK_FMT_R32_UINT:
meta.shader_id = INTEL_DEV_META_VS_COPY_R32_TO_MEM;
- buf_format = XGL_FMT_R32G32B32A32_UINT;
+ buf_format = VK_FMT_R32G32B32A32_UINT;
break;
- case XGL_FMT_R32G32_UINT:
+ case VK_FMT_R32G32_UINT:
meta.shader_id = INTEL_DEV_META_VS_COPY_R32G32_TO_MEM;
- buf_format = XGL_FMT_R32G32B32A32_UINT;
+ buf_format = VK_FMT_R32G32B32A32_UINT;
break;
- case XGL_FMT_R32G32B32A32_UINT:
+ case VK_FMT_R32G32B32A32_UINT:
meta.shader_id = INTEL_DEV_META_VS_COPY_R32G32B32A32_TO_MEM;
- buf_format = XGL_FMT_R32G32B32A32_UINT;
+ buf_format = VK_FMT_R32G32B32A32_UINT;
break;
default:
- img_format = XGL_FMT_UNDEFINED;
- buf_format = XGL_FMT_UNDEFINED;
+ img_format = VK_FMT_UNDEFINED;
+ buf_format = VK_FMT_UNDEFINED;
break;
}
- if (img_format == XGL_FMT_UNDEFINED ||
+ if (img_format == VK_FMT_UNDEFINED ||
(cmd_gen(cmd) == INTEL_GEN(6) &&
icd_format_get_size(img_format) < 4)) {
- intel_dev_log(cmd->dev, XGL_DBG_MSG_ERROR,
- XGL_VALIDATION_LEVEL_0, XGL_NULL_HANDLE, 0, 0,
- "xglCmdCopyImageToBuffer with bpp %d unsupported",
+ intel_dev_log(cmd->dev, VK_DBG_MSG_ERROR,
+ VK_VALIDATION_LEVEL_0, VK_NULL_HANDLE, 0, 0,
+ "vkCmdCopyImageToBuffer with bpp %d unsupported",
icd_format_get_size(img->layout.format));
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
return;
}
cmd_meta_set_src_for_img(cmd, img, img_format,
- XGL_IMAGE_ASPECT_COLOR, &meta);
+ VK_IMAGE_ASPECT_COLOR, &meta);
cmd_meta_set_dst_for_buf(cmd, buf, buf_format, &meta);
meta.samples = 1;
for (i = 0; i < regionCount; i++) {
- const XGL_BUFFER_IMAGE_COPY *region = &pRegions[i];
+ const VK_BUFFER_IMAGE_COPY *region = &pRegions[i];
uint32_t j;
meta.src.lod = region->imageSubresource.mipLevel;
}
}
-ICD_EXPORT void XGLAPI xglCmdCloneImageData(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout)
+ICD_EXPORT void VKAPI vkCmdCloneImageData(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_img *src = intel_img(srcImage);
struct intel_img *dst = intel_img(destImage);
struct intel_buf *src_buf, *dst_buf;
- XGL_BUFFER_CREATE_INFO buf_info;
- XGL_BUFFER_COPY buf_region;
- XGL_RESULT res;
+ VK_BUFFER_CREATE_INFO buf_info;
+ VK_BUFFER_COPY buf_region;
+ VK_RESULT res;
memset(&buf_info, 0, sizeof(buf_info));
- buf_info.sType = XGL_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
+ buf_info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
buf_info.size = src->obj.mem->size;
memset(&buf_region, 0, sizeof(buf_region));
buf_region.copySize = src->obj.mem->size;
res = intel_buf_create(cmd->dev, &buf_info, &src_buf);
- if (res != XGL_SUCCESS) {
+ if (res != VK_SUCCESS) {
cmd_fail(cmd, res);
return;
}
res = intel_buf_create(cmd->dev, &buf_info, &dst_buf);
- if (res != XGL_SUCCESS) {
+ if (res != VK_SUCCESS) {
intel_buf_destroy(src_buf);
cmd_fail(cmd, res);
return;
intel_obj_bind_mem(&dst_buf->obj, dst->obj.mem, 0);
cmd_batch_flush(cmd, GEN6_PIPE_CONTROL_RENDER_CACHE_FLUSH);
- xglCmdCopyBuffer(cmdBuffer, (XGL_BUFFER) src_buf,
- (XGL_BUFFER) dst_buf, 1, &buf_region);
+ vkCmdCopyBuffer(cmdBuffer, (VK_BUFFER) src_buf,
+ (VK_BUFFER) dst_buf, 1, &buf_region);
intel_buf_destroy(src_buf);
intel_buf_destroy(dst_buf);
}
-ICD_EXPORT void XGLAPI xglCmdUpdateBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset,
- XGL_GPU_SIZE dataSize,
+ICD_EXPORT void VKAPI vkCmdUpdateBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset,
+ VK_GPU_SIZE dataSize,
const uint32_t* pData)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_buf *dst = intel_buf(destBuffer);
struct intel_cmd_meta meta;
- XGL_FORMAT format;
+ VK_FORMAT format;
uint32_t *ptr;
uint32_t offset;
/* must be 4-byte aligned */
if ((destOffset | dataSize) & 3) {
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
return;
}
/*
* INTEL_DEV_META_VS_COPY_MEM is untyped but expects the stride to be 16
*/
- format = XGL_FMT_R32G32B32A32_UINT;
+ format = VK_FMT_R32G32B32A32_UINT;
cmd_meta_set_src_for_writer(cmd, INTEL_CMD_WRITER_STATE,
offset + dataSize, format, &meta);
cmd_draw_meta(cmd, &meta);
}
-ICD_EXPORT void XGLAPI xglCmdFillBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset,
- XGL_GPU_SIZE fillSize,
+ICD_EXPORT void VKAPI vkCmdFillBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset,
+ VK_GPU_SIZE fillSize,
uint32_t data)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_buf *dst = intel_buf(destBuffer);
struct intel_cmd_meta meta;
- XGL_FORMAT format;
+ VK_FORMAT format;
/* must be 4-byte aligned */
if ((destOffset | fillSize) & 3) {
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
return;
}
/*
* INTEL_DEV_META_VS_FILL_MEM is untyped but expects the stride to be 16
*/
- format = XGL_FMT_R32G32B32A32_UINT;
+ format = VK_FMT_R32G32B32A32_UINT;
cmd_meta_set_dst_for_buf(cmd, dst, format, &meta);
static void cmd_meta_clear_image(struct intel_cmd *cmd,
struct intel_img *img,
- XGL_FORMAT format,
+ VK_FORMAT format,
struct intel_cmd_meta *meta,
- const XGL_IMAGE_SUBRESOURCE_RANGE *range)
+ const VK_IMAGE_SUBRESOURCE_RANGE *range)
{
uint32_t mip_levels, array_size;
uint32_t i, j;
continue;
for (j = 0; j < array_size; j++) {
- if (range->aspect == XGL_IMAGE_ASPECT_COLOR) {
+ if (range->aspect == VK_IMAGE_ASPECT_COLOR) {
cmd_meta_set_dst_for_img(cmd, img, format,
meta->dst.lod, meta->dst.layer, meta);
void cmd_meta_ds_op(struct intel_cmd *cmd,
enum intel_cmd_meta_ds_op op,
struct intel_img *img,
- const XGL_IMAGE_SUBRESOURCE_RANGE *range)
+ const VK_IMAGE_SUBRESOURCE_RANGE *range)
{
struct intel_cmd_meta meta;
if (img->layout.aux != INTEL_LAYOUT_AUX_HIZ)
return;
- if (range->aspect != XGL_IMAGE_ASPECT_DEPTH)
+ if (range->aspect != VK_IMAGE_ASPECT_DEPTH)
return;
memset(&meta, 0, sizeof(meta));
meta.mode = INTEL_CMD_META_DEPTH_STENCIL_RECT;
meta.samples = img->samples;
- meta.ds.aspect = XGL_IMAGE_ASPECT_DEPTH;
+ meta.ds.aspect = VK_IMAGE_ASPECT_DEPTH;
meta.ds.op = op;
meta.ds.optimal = true;
cmd_meta_clear_image(cmd, img, img->layout.format, &meta, range);
}
-ICD_EXPORT void XGLAPI xglCmdClearColorImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image,
- XGL_IMAGE_LAYOUT imageLayout,
- XGL_CLEAR_COLOR clearColor,
+ICD_EXPORT void VKAPI vkCmdClearColorImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image,
+ VK_IMAGE_LAYOUT imageLayout,
+ VK_CLEAR_COLOR clearColor,
uint32_t rangeCount,
- const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+ const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_img *img = intel_img(image);
struct intel_cmd_meta meta;
- XGL_FORMAT format;
+ VK_FORMAT format;
uint32_t i;
memset(&meta, 0, sizeof(meta));
}
}
-ICD_EXPORT void XGLAPI xglCmdClearDepthStencil(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image,
- XGL_IMAGE_LAYOUT imageLayout,
+ICD_EXPORT void VKAPI vkCmdClearDepthStencil(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image,
+ VK_IMAGE_LAYOUT imageLayout,
float depth,
uint32_t stencil,
uint32_t rangeCount,
- const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+ const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_img *img = intel_img(image);
meta.clear_val[0] = u_fui(depth);
meta.clear_val[1] = stencil;
- if (imageLayout == XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL ||
- imageLayout == XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL ||
- imageLayout == XGL_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL) {
+ if (imageLayout == VK_IMAGE_LAYOUT_CLEAR_OPTIMAL ||
+ imageLayout == VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL ||
+ imageLayout == VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL) {
meta.ds.optimal = true;
}
for (i = 0; i < rangeCount; i++) {
- const XGL_IMAGE_SUBRESOURCE_RANGE *range = &pRanges[i];
+ const VK_IMAGE_SUBRESOURCE_RANGE *range = &pRanges[i];
cmd_meta_clear_image(cmd, img, img->layout.format,
&meta, range);
}
}
-ICD_EXPORT void XGLAPI xglCmdResolveImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ICD_EXPORT void VKAPI vkCmdResolveImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t rectCount,
- const XGL_IMAGE_RESOLVE* pRects)
+ const VK_IMAGE_RESOLVE* pRects)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_img *src = intel_img(srcImage);
struct intel_img *dst = intel_img(destImage);
struct intel_cmd_meta meta;
- XGL_FORMAT format;
+ VK_FORMAT format;
uint32_t i;
if (src->samples <= 1 || dst->samples > 1 ||
src->layout.format != dst->layout.format) {
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
return;
}
meta.samples = 1;
format = cmd_meta_img_raw_format(cmd, src->layout.format);
- cmd_meta_set_src_for_img(cmd, src, format, XGL_IMAGE_ASPECT_COLOR, &meta);
+ cmd_meta_set_src_for_img(cmd, src, format, VK_IMAGE_ASPECT_COLOR, &meta);
for (i = 0; i < rectCount; i++) {
- const XGL_IMAGE_RESOLVE *rect = &pRects[i];
+ const VK_IMAGE_RESOLVE *rect = &pRects[i];
meta.src.lod = rect->srcSubresource.mipLevel;
meta.src.layer = rect->srcSubresource.arraySlice;
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
static void cmd_query_pipeline_statistics(struct intel_cmd *cmd,
struct intel_bo *bo,
- XGL_GPU_SIZE offset)
+ VK_GPU_SIZE offset)
{
const uint32_t regs[] = {
GEN6_REG_PS_INVOCATION_COUNT,
}
}
-ICD_EXPORT void XGLAPI xglCmdBeginQuery(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+ICD_EXPORT void VKAPI vkCmdBeginQuery(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t slot,
- XGL_FLAGS flags)
+ VK_FLAGS flags)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_query *query = intel_query(queryPool);
struct intel_bo *bo = query->obj.mem->bo;
- const XGL_GPU_SIZE offset = query->slot_stride * slot;
+ const VK_GPU_SIZE offset = query->slot_stride * slot;
switch (query->type) {
- case XGL_QUERY_OCCLUSION:
+ case VK_QUERY_OCCLUSION:
cmd_batch_depth_count(cmd, bo, offset);
break;
- case XGL_QUERY_PIPELINE_STATISTICS:
+ case VK_QUERY_PIPELINE_STATISTICS:
cmd_query_pipeline_statistics(cmd, bo, offset);
break;
default:
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
break;
}
}
-ICD_EXPORT void XGLAPI xglCmdEndQuery(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+ICD_EXPORT void VKAPI vkCmdEndQuery(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t slot)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_query *query = intel_query(queryPool);
struct intel_bo *bo = query->obj.mem->bo;
- const XGL_GPU_SIZE offset = query->slot_stride * slot;
+ const VK_GPU_SIZE offset = query->slot_stride * slot;
switch (query->type) {
- case XGL_QUERY_OCCLUSION:
+ case VK_QUERY_OCCLUSION:
cmd_batch_depth_count(cmd, bo, offset + sizeof(uint64_t));
break;
- case XGL_QUERY_PIPELINE_STATISTICS:
+ case VK_QUERY_PIPELINE_STATISTICS:
cmd_query_pipeline_statistics(cmd, bo,
- offset + sizeof(XGL_PIPELINE_STATISTICS_DATA));
+ offset + sizeof(VK_PIPELINE_STATISTICS_DATA));
break;
default:
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
break;
}
}
-ICD_EXPORT void XGLAPI xglCmdResetQueryPool(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+ICD_EXPORT void VKAPI vkCmdResetQueryPool(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t startQuery,
uint32_t queryCount)
{
}
static void cmd_write_event_value(struct intel_cmd *cmd, struct intel_event *event,
- XGL_PIPE_EVENT pipeEvent, uint32_t value)
+ VK_PIPE_EVENT pipeEvent, uint32_t value)
{
uint32_t pipe_control_flags;
/* Event setting is done with PIPE_CONTROL post-sync write immediate.
- * With no other PIPE_CONTROL flags set, it behaves as XGL_PIPE_EVENT_TOP_OF_PIPE.
- * All other pipeEvent values will behave as XGL_PIPE_EVENT_GPU_COMMANDS_COMPLETE.
+ * With no other PIPE_CONTROL flags set, it behaves as VK_PIPE_EVENT_TOP_OF_PIPE.
+ * All other pipeEvent values will behave as VK_PIPE_EVENT_GPU_COMMANDS_COMPLETE.
*/
switch(pipeEvent)
{
- case XGL_PIPE_EVENT_TOP_OF_PIPE:
+ case VK_PIPE_EVENT_TOP_OF_PIPE:
pipe_control_flags = 0;
break;
- case XGL_PIPE_EVENT_VERTEX_PROCESSING_COMPLETE:
- case XGL_PIPE_EVENT_LOCAL_FRAGMENT_PROCESSING_COMPLETE:
- case XGL_PIPE_EVENT_FRAGMENT_PROCESSING_COMPLETE:
- case XGL_PIPE_EVENT_GRAPHICS_PIPELINE_COMPLETE:
- case XGL_PIPE_EVENT_COMPUTE_PIPELINE_COMPLETE:
- case XGL_PIPE_EVENT_TRANSFER_COMPLETE:
- case XGL_PIPE_EVENT_GPU_COMMANDS_COMPLETE:
+ case VK_PIPE_EVENT_VERTEX_PROCESSING_COMPLETE:
+ case VK_PIPE_EVENT_LOCAL_FRAGMENT_PROCESSING_COMPLETE:
+ case VK_PIPE_EVENT_FRAGMENT_PROCESSING_COMPLETE:
+ case VK_PIPE_EVENT_GRAPHICS_PIPELINE_COMPLETE:
+ case VK_PIPE_EVENT_COMPUTE_PIPELINE_COMPLETE:
+ case VK_PIPE_EVENT_TRANSFER_COMPLETE:
+ case VK_PIPE_EVENT_GPU_COMMANDS_COMPLETE:
pipe_control_flags = GEN6_PIPE_CONTROL_CS_STALL;
break;
default:
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
return;
break;
}
cmd_batch_immediate(cmd, pipe_control_flags, event->obj.mem->bo, 0, value);
}
-ICD_EXPORT void XGLAPI xglCmdSetEvent(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_EVENT event_,
- XGL_PIPE_EVENT pipeEvent)
+ICD_EXPORT void VKAPI vkCmdSetEvent(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_EVENT event_,
+ VK_PIPE_EVENT pipeEvent)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_event *event = intel_event(event_);
cmd_write_event_value(cmd, event, pipeEvent, 1);
}
-ICD_EXPORT void XGLAPI xglCmdResetEvent(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_EVENT event_,
- XGL_PIPE_EVENT pipeEvent)
+ICD_EXPORT void VKAPI vkCmdResetEvent(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_EVENT event_,
+ VK_PIPE_EVENT pipeEvent)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_event *event = intel_event(event_);
cmd_write_event_value(cmd, event, pipeEvent, 0);
}
-ICD_EXPORT void XGLAPI xglCmdWriteTimestamp(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_TIMESTAMP_TYPE timestampType,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset)
+ICD_EXPORT void VKAPI vkCmdWriteTimestamp(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_TIMESTAMP_TYPE timestampType,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_buf *buf = intel_buf(destBuffer);
switch (timestampType) {
- case XGL_TIMESTAMP_TOP:
+ case VK_TIMESTAMP_TOP:
/* XXX we are not supposed to use two commands... */
gen6_MI_STORE_REGISTER_MEM(cmd, buf->obj.mem->bo,
destOffset, GEN6_REG_TIMESTAMP);
gen6_MI_STORE_REGISTER_MEM(cmd, buf->obj.mem->bo,
destOffset + 4, GEN6_REG_TIMESTAMP + 4);
break;
- case XGL_TIMESTAMP_BOTTOM:
+ case VK_TIMESTAMP_BOTTOM:
cmd_batch_timestamp(cmd, buf->obj.mem->bo, destOffset);
break;
default:
- cmd_fail(cmd, XGL_ERROR_INVALID_VALUE);
+ cmd_fail(cmd, VK_ERROR_INVALID_VALUE);
break;
}
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
return false;
switch (cmd->bind.index.type) {
- case XGL_INDEX_8:
+ case VK_INDEX_8:
supported = (p->primitive_restart_index != 0xffu);
break;
- case XGL_INDEX_16:
+ case VK_INDEX_16:
supported = (p->primitive_restart_index != 0xffffu);
break;
- case XGL_INDEX_32:
+ case VK_INDEX_32:
supported = (p->primitive_restart_index != 0xffffffffu);
break;
default:
static void gen6_3DSTATE_INDEX_BUFFER(struct intel_cmd *cmd,
const struct intel_buf *buf,
- XGL_GPU_SIZE offset,
- XGL_INDEX_TYPE type,
+ VK_GPU_SIZE offset,
+ VK_INDEX_TYPE type,
bool enable_cut_index)
{
const uint8_t cmd_len = 3;
dw0 |= GEN6_IB_DW0_CUT_INDEX_ENABLE;
switch (type) {
- case XGL_INDEX_8:
+ case VK_INDEX_8:
dw0 |= GEN6_IB_DW0_FORMAT_BYTE;
offset_align = 1;
break;
- case XGL_INDEX_16:
+ case VK_INDEX_16:
dw0 |= GEN6_IB_DW0_FORMAT_WORD;
offset_align = 2;
break;
- case XGL_INDEX_32:
+ case VK_INDEX_32:
dw0 |= GEN6_IB_DW0_FORMAT_DWORD;
offset_align = 4;
break;
default:
- cmd_fail(cmd, XGL_ERROR_INVALID_VALUE);
+ cmd_fail(cmd, VK_ERROR_INVALID_VALUE);
return;
break;
}
if (offset % offset_align) {
- cmd_fail(cmd, XGL_ERROR_INVALID_VALUE);
+ cmd_fail(cmd, VK_ERROR_INVALID_VALUE);
return;
}
int format;
switch (pipeline->db_format) {
- case XGL_FMT_D16_UNORM:
+ case VK_FMT_D16_UNORM:
format = GEN6_ZFORMAT_D16_UNORM;
break;
- case XGL_FMT_D32_SFLOAT:
- case XGL_FMT_D32_SFLOAT_S8_UINT:
+ case VK_FMT_D32_SFLOAT:
+ case VK_FMT_D32_SFLOAT_S8_UINT:
format = GEN6_ZFORMAT_D32_FLOAT;
break;
default:
void cmd_batch_depth_count(struct intel_cmd *cmd,
struct intel_bo *bo,
- XGL_GPU_SIZE offset)
+ VK_GPU_SIZE offset)
{
cmd_wa_gen6_pre_depth_stall_write(cmd);
void cmd_batch_timestamp(struct intel_cmd *cmd,
struct intel_bo *bo,
- XGL_GPU_SIZE offset)
+ VK_GPU_SIZE offset)
{
/* need any WA or stall? */
gen6_PIPE_CONTROL(cmd, GEN6_PIPE_CONTROL_WRITE_TIMESTAMP, bo, offset, 0);
void cmd_batch_immediate(struct intel_cmd *cmd,
uint32_t pipe_control_flags,
struct intel_bo *bo,
- XGL_GPU_SIZE offset,
+ VK_GPU_SIZE offset,
uint64_t val)
{
/* need any WA or stall? */
static uint32_t emit_binding_table(struct intel_cmd *cmd,
const struct intel_pipeline_rmap *rmap,
- const XGL_PIPELINE_SHADER_STAGE stage)
+ const VK_PIPELINE_SHADER_STAGE stage)
{
const struct intel_desc_region *region = cmd->dev->desc_region;
const struct intel_cmd_dset_data *data = &cmd->bind.dset.graphics_data;
}
switch (pipeline->vb[i].stepRate) {
- case XGL_VERTEX_INPUT_STEP_RATE_VERTEX:
+ case VK_VERTEX_INPUT_STEP_RATE_VERTEX:
dw[0] |= GEN6_VB_DW0_ACCESS_VERTEXDATA;
dw[3] = 0;
break;
- case XGL_VERTEX_INPUT_STEP_RATE_INSTANCE:
+ case VK_VERTEX_INPUT_STEP_RATE_INSTANCE:
dw[0] |= GEN6_VB_DW0_ACCESS_INSTANCEDATA;
dw[3] = 1;
break;
- case XGL_VERTEX_INPUT_STEP_RATE_DRAW:
+ case VK_VERTEX_INPUT_STEP_RATE_DRAW:
dw[0] |= GEN6_VB_DW0_ACCESS_INSTANCEDATA;
dw[3] = 0;
break;
if (cmd->bind.vertex.buf[i]) {
const struct intel_buf *buf = cmd->bind.vertex.buf[i];
- const XGL_GPU_SIZE offset = cmd->bind.vertex.offset[i];
+ const VK_GPU_SIZE offset = cmd->bind.vertex.offset[i];
cmd_reserve_reloc(cmd, 2);
cmd_batch_reloc(cmd, pos + 1, buf->obj.mem->bo, offset, 0);
binding_tables[0] = emit_binding_table(cmd,
cmd->bind.pipeline.graphics->vs.rmap,
- XGL_SHADER_STAGE_VERTEX);
+ VK_SHADER_STAGE_VERTEX);
binding_tables[1] = emit_binding_table(cmd,
cmd->bind.pipeline.graphics->tcs.rmap,
- XGL_SHADER_STAGE_TESS_CONTROL);
+ VK_SHADER_STAGE_TESS_CONTROL);
binding_tables[2] = emit_binding_table(cmd,
cmd->bind.pipeline.graphics->tes.rmap,
- XGL_SHADER_STAGE_TESS_EVALUATION);
+ VK_SHADER_STAGE_TESS_EVALUATION);
binding_tables[3] = emit_binding_table(cmd,
cmd->bind.pipeline.graphics->gs.rmap,
- XGL_SHADER_STAGE_GEOMETRY);
+ VK_SHADER_STAGE_GEOMETRY);
binding_tables[4] = emit_binding_table(cmd,
cmd->bind.pipeline.graphics->fs.rmap,
- XGL_SHADER_STAGE_FRAGMENT);
+ VK_SHADER_STAGE_FRAGMENT);
samplers[0] = emit_samplers(cmd, cmd->bind.pipeline.graphics->vs.rmap);
samplers[1] = emit_samplers(cmd, cmd->bind.pipeline.graphics->tcs.rmap);
return;
if (fb->sample_count != cmd->bind.pipeline.graphics->sample_count)
- cmd->result = XGL_ERROR_UNKNOWN;
+ cmd->result = VK_ERROR_UNKNOWN;
cmd_wa_gen6_pre_multisample_depth_flush(cmd);
gen6_3DSTATE_MULTISAMPLE(cmd, fb->sample_count);
void *entries;
entries = intel_alloc(cmd, sizeof(cache->entries[0]) * count, 0,
- XGL_SYSTEM_ALLOC_INTERNAL);
+ VK_SYSTEM_ALLOC_INTERNAL);
if (entries) {
if (cache->entries) {
memcpy(entries, cache->entries,
CMD_ASSERT(cmd, 6, 7.5);
- if (meta->ds.aspect == XGL_IMAGE_ASPECT_DEPTH) {
+ if (meta->ds.aspect == VK_IMAGE_ASPECT_DEPTH) {
dw[0] = 0;
dw[1] = 0;
dw[2] = GEN6_COMPAREFUNCTION_ALWAYS << 27 |
GEN6_ZS_DW2_DEPTH_WRITE_ENABLE;
}
- } else if (meta->ds.aspect == XGL_IMAGE_ASPECT_STENCIL) {
+ } else if (meta->ds.aspect == VK_IMAGE_ASPECT_STENCIL) {
dw[0] = GEN6_ZS_DW0_STENCIL_TEST_ENABLE |
(GEN6_COMPAREFUNCTION_ALWAYS) << 28 |
(GEN6_STENCILOP_KEEP) << 25 |
}
if (meta->mode != INTEL_CMD_META_VS_POINTS) {
- if (meta->ds.aspect != XGL_IMAGE_ASPECT_COLOR) {
+ if (meta->ds.aspect != VK_IMAGE_ASPECT_COLOR) {
const uint32_t blend_color[4] = { 0, 0, 0, 0 };
uint32_t stencil_ref = (meta->ds.stencil_ref & 0xff) << 24 |
(meta->ds.stencil_ref & 0xff) << 16;
data->set_offsets = intel_alloc(cmd,
sizeof(data->set_offsets[0]) * chain->layout_count,
- sizeof(data->set_offsets[0]), XGL_SYSTEM_ALLOC_INTERNAL);
+ sizeof(data->set_offsets[0]), VK_SYSTEM_ALLOC_INTERNAL);
if (!data->set_offsets) {
- cmd_fail(cmd, XGL_ERROR_OUT_OF_MEMORY);
+ cmd_fail(cmd, VK_ERROR_OUT_OF_MEMORY);
data->set_offset_count = 0;
return false;
}
data->dynamic_offsets = intel_alloc(cmd,
sizeof(data->dynamic_offsets[0]) * chain->total_dynamic_desc_count,
- sizeof(data->dynamic_offsets[0]), XGL_SYSTEM_ALLOC_INTERNAL);
+ sizeof(data->dynamic_offsets[0]), VK_SYSTEM_ALLOC_INTERNAL);
if (!data->dynamic_offsets) {
- cmd_fail(cmd, XGL_ERROR_OUT_OF_MEMORY);
+ cmd_fail(cmd, VK_ERROR_OUT_OF_MEMORY);
data->dynamic_offset_count = 0;
return false;
}
static void cmd_bind_vertex_data(struct intel_cmd *cmd,
const struct intel_buf *buf,
- XGL_GPU_SIZE offset, uint32_t binding)
+ VK_GPU_SIZE offset, uint32_t binding)
{
if (binding >= ARRAY_SIZE(cmd->bind.vertex.buf)) {
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
return;
}
static void cmd_bind_index_data(struct intel_cmd *cmd,
const struct intel_buf *buf,
- XGL_GPU_SIZE offset, XGL_INDEX_TYPE type)
+ VK_GPU_SIZE offset, VK_INDEX_TYPE type)
{
cmd->bind.index.buf = buf;
cmd->bind.index.offset = offset;
if (indexed) {
if (p->primitive_restart && !gen6_can_primitive_restart(cmd))
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
if (cmd_gen(cmd) >= INTEL_GEN(7.5)) {
gen75_3DSTATE_VF(cmd, p->primitive_restart,
cmd_batch_flush_all(cmd);
}
-ICD_EXPORT void XGLAPI xglCmdBindPipeline(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
- XGL_PIPELINE pipeline)
+ICD_EXPORT void VKAPI vkCmdBindPipeline(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
+ VK_PIPELINE pipeline)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
switch (pipelineBindPoint) {
- case XGL_PIPELINE_BIND_POINT_COMPUTE:
+ case VK_PIPELINE_BIND_POINT_COMPUTE:
cmd_bind_compute_pipeline(cmd, intel_pipeline(pipeline));
break;
- case XGL_PIPELINE_BIND_POINT_GRAPHICS:
+ case VK_PIPELINE_BIND_POINT_GRAPHICS:
cmd_bind_graphics_pipeline(cmd, intel_pipeline(pipeline));
break;
default:
- cmd_fail(cmd, XGL_ERROR_INVALID_VALUE);
+ cmd_fail(cmd, VK_ERROR_INVALID_VALUE);
break;
}
}
-ICD_EXPORT void XGLAPI xglCmdBindDynamicStateObject(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_STATE_BIND_POINT stateBindPoint,
- XGL_DYNAMIC_STATE_OBJECT state)
+ICD_EXPORT void VKAPI vkCmdBindDynamicStateObject(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_STATE_BIND_POINT stateBindPoint,
+ VK_DYNAMIC_STATE_OBJECT state)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
switch (stateBindPoint) {
- case XGL_STATE_BIND_VIEWPORT:
+ case VK_STATE_BIND_VIEWPORT:
cmd_bind_viewport_state(cmd,
- intel_dynamic_vp((XGL_DYNAMIC_VP_STATE_OBJECT) state));
+ intel_dynamic_vp((VK_DYNAMIC_VP_STATE_OBJECT) state));
break;
- case XGL_STATE_BIND_RASTER:
+ case VK_STATE_BIND_RASTER:
cmd_bind_raster_state(cmd,
- intel_dynamic_rs((XGL_DYNAMIC_RS_STATE_OBJECT) state));
+ intel_dynamic_rs((VK_DYNAMIC_RS_STATE_OBJECT) state));
break;
- case XGL_STATE_BIND_DEPTH_STENCIL:
+ case VK_STATE_BIND_DEPTH_STENCIL:
cmd_bind_ds_state(cmd,
- intel_dynamic_ds((XGL_DYNAMIC_DS_STATE_OBJECT) state));
+ intel_dynamic_ds((VK_DYNAMIC_DS_STATE_OBJECT) state));
break;
- case XGL_STATE_BIND_COLOR_BLEND:
+ case VK_STATE_BIND_COLOR_BLEND:
cmd_bind_blend_state(cmd,
- intel_dynamic_cb((XGL_DYNAMIC_CB_STATE_OBJECT) state));
+ intel_dynamic_cb((VK_DYNAMIC_CB_STATE_OBJECT) state));
break;
default:
- cmd_fail(cmd, XGL_ERROR_INVALID_VALUE);
+ cmd_fail(cmd, VK_ERROR_INVALID_VALUE);
break;
}
}
-ICD_EXPORT void XGLAPI xglCmdBindDescriptorSets(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain,
+ICD_EXPORT void VKAPI vkCmdBindDescriptorSets(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain,
uint32_t layoutChainSlot,
uint32_t count,
- const XGL_DESCRIPTOR_SET* pDescriptorSets,
+ const VK_DESCRIPTOR_SET* pDescriptorSets,
const uint32_t* pUserData)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
uint32_t i;
switch (pipelineBindPoint) {
- case XGL_PIPELINE_BIND_POINT_COMPUTE:
+ case VK_PIPELINE_BIND_POINT_COMPUTE:
cmd->bind.dset.compute = chain;
data = &cmd->bind.dset.compute_data;
break;
- case XGL_PIPELINE_BIND_POINT_GRAPHICS:
+ case VK_PIPELINE_BIND_POINT_GRAPHICS:
cmd->bind.dset.graphics = chain;
data = &cmd->bind.dset.graphics_data;
break;
default:
- cmd_fail(cmd, XGL_ERROR_INVALID_VALUE);
+ cmd_fail(cmd, VK_ERROR_INVALID_VALUE);
return;
break;
}
}
}
-ICD_EXPORT void XGLAPI xglCmdBindVertexBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+ICD_EXPORT void VKAPI vkCmdBindVertexBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t binding)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
cmd_bind_vertex_data(cmd, buf, offset, binding);
}
-ICD_EXPORT void XGLAPI xglCmdBindIndexBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
- XGL_INDEX_TYPE indexType)
+ICD_EXPORT void VKAPI vkCmdBindIndexBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
+ VK_INDEX_TYPE indexType)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
struct intel_buf *buf = intel_buf(buffer);
cmd_bind_index_data(cmd, buf, offset, indexType);
}
-ICD_EXPORT void XGLAPI xglCmdDraw(
- XGL_CMD_BUFFER cmdBuffer,
+ICD_EXPORT void VKAPI vkCmdDraw(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t firstVertex,
uint32_t vertexCount,
uint32_t firstInstance,
firstInstance, instanceCount, false, 0);
}
-ICD_EXPORT void XGLAPI xglCmdDrawIndexed(
- XGL_CMD_BUFFER cmdBuffer,
+ICD_EXPORT void VKAPI vkCmdDrawIndexed(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t firstIndex,
uint32_t indexCount,
int32_t vertexOffset,
firstInstance, instanceCount, true, vertexOffset);
}
-ICD_EXPORT void XGLAPI xglCmdDrawIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+ICD_EXPORT void VKAPI vkCmdDrawIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t count,
uint32_t stride)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
}
-ICD_EXPORT void XGLAPI xglCmdDrawIndexedIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+ICD_EXPORT void VKAPI vkCmdDrawIndexedIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t count,
uint32_t stride)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
}
-ICD_EXPORT void XGLAPI xglCmdDispatch(
- XGL_CMD_BUFFER cmdBuffer,
+ICD_EXPORT void VKAPI vkCmdDispatch(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t x,
uint32_t y,
uint32_t z)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
}
-ICD_EXPORT void XGLAPI xglCmdDispatchIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset)
+ICD_EXPORT void VKAPI vkCmdDispatchIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
- cmd_fail(cmd, XGL_ERROR_UNKNOWN);
+ cmd_fail(cmd, VK_ERROR_UNKNOWN);
}
-ICD_EXPORT void XGLAPI xglCmdBeginRenderPass(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_RENDER_PASS_BEGIN* pRenderPassBegin)
+ICD_EXPORT void VKAPI vkCmdBeginRenderPass(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_RENDER_PASS_BEGIN* pRenderPassBegin)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
cmd_begin_render_pass(cmd, (struct intel_render_pass *) pRenderPassBegin->renderPass, pRenderPassBegin->framebuffer);
}
-ICD_EXPORT void XGLAPI xglCmdEndRenderPass(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_RENDER_PASS renderPass)
+ICD_EXPORT void VKAPI vkCmdEndRenderPass(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_RENDER_PASS renderPass)
{
struct intel_cmd *cmd = intel_cmd(cmdBuffer);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct {
struct intel_ds_view *view;
uint32_t stencil_ref;
- XGL_IMAGE_ASPECT aspect;
+ VK_IMAGE_ASPECT aspect;
enum intel_cmd_meta_ds_op op;
bool optimal;
return intel_gpu_gen(cmd->dev->gpu);
}
-static inline void cmd_fail(struct intel_cmd *cmd, XGL_RESULT result)
+static inline void cmd_fail(struct intel_cmd *cmd, VK_RESULT result)
{
- intel_dev_log(cmd->dev, XGL_DBG_MSG_ERROR,
- XGL_VALIDATION_LEVEL_0, XGL_NULL_HANDLE, 0, 0,
+ intel_dev_log(cmd->dev, VK_DBG_MSG_ERROR,
+ VK_VALIDATION_LEVEL_0, VK_NULL_HANDLE, 0, 0,
"command building error");
cmd->result = result;
/* fail silently */
if (cmd->reloc_used + reloc_len > cmd->reloc_count) {
cmd->reloc_used = 0;
- cmd_fail(cmd, XGL_ERROR_TOO_MANY_MEMORY_REFERENCES);
+ cmd_fail(cmd, VK_ERROR_TOO_MANY_MEMORY_REFERENCES);
}
assert(cmd->reloc_used + reloc_len <= cmd->reloc_count);
}
void cmd_batch_depth_count(struct intel_cmd *cmd,
struct intel_bo *bo,
- XGL_GPU_SIZE offset);
+ VK_GPU_SIZE offset);
void cmd_batch_timestamp(struct intel_cmd *cmd,
struct intel_bo *bo,
- XGL_GPU_SIZE offset);
+ VK_GPU_SIZE offset);
void cmd_batch_immediate(struct intel_cmd *cmd,
uint32_t pipe_control_flags,
struct intel_bo *bo,
- XGL_GPU_SIZE offset,
+ VK_GPU_SIZE offset,
uint64_t val);
void cmd_draw_meta(struct intel_cmd *cmd, const struct intel_cmd_meta *meta);
void cmd_meta_ds_op(struct intel_cmd *cmd,
enum intel_cmd_meta_ds_op op,
struct intel_img *img,
- const XGL_IMAGE_SUBRESOURCE_RANGE *range);
+ const VK_IMAGE_SUBRESOURCE_RANGE *range);
#endif /* CMD_PRIV_H */
- [GlassyMesa's GLSLIR and supporting infrastructure](shader)
- [GlassyMesa's DRI i965 backend](pipeline)
-For xglCreateShader, we primarily used the existing standalone device independent front end which can consume GLSL or BIL, and results in a separately linked shader object.
+For vkCreateShader, we primarily used the existing standalone device independent front end which can consume GLSL or BIL, and results in a separately linked shader object.
-For xglCreateGraphicsPipeline, we pulled over only the files needed to lower the shader object to ISA and supporting metadata. Much of the i965 DRI driver was removed or commented out for future use, and is still being actively bootstrapped.
+For vkCreateGraphicsPipeline, we pulled over only the files needed to lower the shader object to ISA and supporting metadata. Much of the i965 DRI driver was removed or commented out for future use, and is still being actively bootstrapped.
Currently only vertex and fragment shaders are supported. Any shader that fits within the IO parameters tested in compiler_render_tests.cpp should work. Buffers with bindings, samplers with bindings, and inter-stage IO with locations are all working. Vertex input locations work if they are sequential and start from 0. Fragment output locations only work for location 0.
We recommend keeping all uniforms in buffers with bindings; global, non-block uniforms are not supported.
-Design decisions we made to get this stack working with current specified XGL and BIL. We know these are active areas of discussion, and we'll update when decisions are made:
+These are design decisions we made to get this stack working with the currently specified VK and BIL. We know these are active areas of discussion, and we'll update when decisions are made:
- Samplers:
- - GLSL sampler bindings equate to a sampler/texture pair of the same number, as set up by the XGL application. i.e. the following sampler:
+ - GLSL sampler bindings equate to a sampler/texture pair of the same number, as set up by the VK application. For example, the following sampler:
```
layout (binding = 2) uniform sampler2D surface;
```
-will read from XGL_SLOT_SHADER_SAMPLER entity 2 and XGL_SLOT_SHADER_RESOURCE entity 2.
+will read from VK_SLOT_SHADER_SAMPLER entity 2 and VK_SLOT_SHADER_RESOURCE entity 2.
- Buffers:
- GLSL buffer bindings equate to the buffer bound at the same slot. i.e. the following uniform buffer:
```
layout (std140, binding = 2) uniform foo { vec4 bar; } myBuffer;
```
-will be read from XGL_SHADER_RESOURCE entity 2.
+will be read from VK_SHADER_RESOURCE entity 2.
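To illustrate the uniform recommendation above, here is a hedged GLSL sketch (the binding number and names are arbitrary examples, not from this repository):

```glsl
// Supported: uniform data lives in a bound block, so the compiler
// has a binding slot to pull from.
layout (std140, binding = 3) uniform params { vec4 color; } myParams;

// Not supported by this stack: a global, non-block uniform has no
// binding slot, and compilation fails with
// "reads from global, non-block uniform".
// uniform vec4 color;
```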
{
GLuint Id;
GLubyte *String; /**< Null-terminated program text */
- // LunarG: Remove - XGL does not use reference counts
+ // LunarG: Remove - VK does not use reference counts
// GLint RefCount;
GLenum Target; /**< GL_VERTEX/FRAGMENT_PROGRAM_ARB, GL_GEOMETRY_PROGRAM_NV */
GLenum Format; /**< String encoding format */
gl_shader_stage Stage;
GLuint Name; /**< AKA the handle */
GLchar *Label; /**< GL_KHR_debug */
- // LunarG: Remove - XGL does not use reference counts
+ // LunarG: Remove - VK does not use reference counts
// GLint RefCount;
GLboolean DeletePending;
GLboolean CompileStatus;
GLenum Type; /**< Always GL_SHADER_PROGRAM (internal token) */
GLuint Name; /**< aka handle or ID */
GLchar *Label; /**< GL_KHR_debug */
- // LunarG: Remove - XGL does not use reference counts
+ // LunarG: Remove - VK does not use reference counts
// GLint RefCount;
GLboolean DeletePending;
API_OPENGLES,
API_OPENGLES2,
API_OPENGL_CORE,
- API_XGL,
+ API_VK,
API_OPENGL_LAST = API_OPENGL_CORE
} gl_api;
case API_OPENGLES2:
compute_version_es2(ctx);
break;
- case API_XGL:
+ case API_VK:
break;
}
memset(prog, 0, sizeof(*prog));
prog->Id = id;
prog->Target = target;
- // LunarG: XGL does not use reference counts
+ // LunarG: VK does not use reference counts
// prog->RefCount = 1;
prog->Format = GL_PROGRAM_FORMAT_ASCII_ARB;
{
(void) ctx;
ASSERT(prog);
- // LunarG: XGL does not use reference counts
+ // LunarG: VK does not use reference counts
//ASSERT(prog->RefCount==0);
if (prog == &_mesa_DummyProgram)
struct gl_program **ptr,
struct gl_program *prog)
{
-// LunarG: XGL does not use reference counts
+// LunarG: VK does not use reference counts
#if 0
#ifndef NDEBUG
assert(ptr);
return NULL;
assert(clone->Target == prog->Target);
- // LunarG: XGL does not use reference counts
+ // LunarG: VK does not use reference counts
// assert(clone->RefCount == 1);
clone->String = (GLubyte *) _mesa_strdup((char *) prog->String);
ctx->ShaderCompilerOptions[MESA_SHADER_GEOMETRY].OptimizeForAOS = true;
/* ARB_viewport_array */
- if (brw->gen >= 7 && (ctx->API == API_OPENGL_CORE || ctx->API == API_XGL)) {
+ if (brw->gen >= 7 && (ctx->API == API_OPENGL_CORE || ctx->API == API_VK)) {
ctx->Const.MaxViewports = GEN7_NUM_VIEWPORTS;
ctx->Const.ViewportSubpixelBits = 0;
// bringing in shaderobj.c
//_mesa_init_shader_program(ctx, &prog->base);
prog->base.Type = GL_SHADER_PROGRAM_MESA;
- // LunarG: Remove - XGL does not use reference counts
+ // LunarG: Remove - VK does not use reference counts
// prog->base.RefCount = 1;
prog->base.AttributeBindings = new string_to_uint_map;
brw_vec4_setup_prog_key_for_precompile(ctx, &key.base, bvp->id, &vp->Base);
- // In XGL, user clipping is triggered solely from the shader.
+ // In VK, user clipping is triggered solely from the shader.
key.base.userclip_active = vp->Base.UsesClipDistanceOut;
struct brw_vs_compile c;
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
uint32_t surface_count, i;
rmap = (struct intel_pipeline_rmap *)
- intel_alloc(gpu, sizeof(*rmap), 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ intel_alloc(gpu, sizeof(*rmap), 0, VK_SYSTEM_ALLOC_INTERNAL);
if (!rmap)
return NULL;
rmap->slots = (struct intel_pipeline_rmap_slot *)
intel_alloc(gpu, sizeof(rmap->slots[0]) * rmap->slot_count,
- 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ 0, VK_SYSTEM_ALLOC_INTERNAL);
if (!rmap->slots) {
intel_free(gpu, rmap);
return NULL;
}
// invoke backend compiler to generate ISA and supporting data structures
-XGL_RESULT intel_pipeline_shader_compile(struct intel_pipeline_shader *pipe_shader,
+VK_RESULT intel_pipeline_shader_compile(struct intel_pipeline_shader *pipe_shader,
const struct intel_gpu *gpu,
const struct intel_desc_layout_chain *chain,
- const XGL_PIPELINE_SHADER *info)
+ const VK_PIPELINE_SHADER *info)
{
const struct intel_ir *ir = intel_shader(info->shader)->ir;
/* XXX how about constness? */
struct gl_shader_program *sh_prog = (struct gl_shader_program *) ir;
- XGL_RESULT status = XGL_SUCCESS;
+ VK_RESULT status = VK_SUCCESS;
struct brw_binding_table bt;
struct brw_context *brw = intel_create_brw_context(gpu);
{
pipe_shader->codeSize = get_vs_program_size(brw->shader_prog);
- pipe_shader->pCode = intel_alloc(gpu, pipe_shader->codeSize, 0, XGL_SYSTEM_ALLOC_INTERNAL_SHADER);
+ pipe_shader->pCode = intel_alloc(gpu, pipe_shader->codeSize, 0, VK_SYSTEM_ALLOC_INTERNAL_SHADER);
if (!pipe_shader->pCode) {
- status = XGL_ERROR_OUT_OF_MEMORY;
+ status = VK_ERROR_OUT_OF_MEMORY;
break;
}
if (bt.ubo_count != sh_prog->_LinkedShaders[MESA_SHADER_VERTEX]->NumUniformBlocks) {
// If there is no UBO data to pull from, the shader is using a default uniform, which
- // will not work in XGL. We need a binding slot to pull from.
- intel_log(gpu, XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, XGL_NULL_HANDLE, 0, 0,
+ // will not work in VK. We need a binding slot to pull from.
+ intel_log(gpu, VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, VK_NULL_HANDLE, 0, 0,
"compile error: VS reads from global, non-block uniform");
assert(0);
- status = XGL_ERROR_BAD_PIPELINE_DATA;
+ status = VK_ERROR_BAD_PIPELINE_DATA;
break;
}
pipe_shader->codeSize = get_wm_program_size(brw->shader_prog);
- pipe_shader->pCode = intel_alloc(gpu, pipe_shader->codeSize, 0, XGL_SYSTEM_ALLOC_INTERNAL_SHADER);
+ pipe_shader->pCode = intel_alloc(gpu, pipe_shader->codeSize, 0, VK_SYSTEM_ALLOC_INTERNAL_SHADER);
if (!pipe_shader->pCode) {
- status = XGL_ERROR_OUT_OF_MEMORY;
+ status = VK_ERROR_OUT_OF_MEMORY;
break;
}
if (bt.ubo_count != sh_prog->_LinkedShaders[MESA_SHADER_FRAGMENT]->NumUniformBlocks) {
// If there is no UBO data to pull from, the shader is using a default uniform, which
- // will not work in XGL. We need a binding slot to pull from.
- intel_log(gpu, XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, XGL_NULL_HANDLE, 0, 0,
+ // will not work in VK. We need a binding slot to pull from.
+ intel_log(gpu, VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, VK_NULL_HANDLE, 0, 0,
"compile error: FS reads from global, non-block uniform");
assert(0);
- status = XGL_ERROR_BAD_PIPELINE_DATA;
+ status = VK_ERROR_BAD_PIPELINE_DATA;
break;
}
case GL_COMPUTE_SHADER:
default:
assert(0);
- status = XGL_ERROR_BAD_PIPELINE_DATA;
+ status = VK_ERROR_BAD_PIPELINE_DATA;
}
} else {
assert(0);
- status = XGL_ERROR_BAD_PIPELINE_DATA;
+ status = VK_ERROR_BAD_PIPELINE_DATA;
}
- if (status == XGL_SUCCESS) {
+ if (status == VK_SUCCESS) {
pipe_shader->rmap = rmap_create(gpu, chain, &bt);
if (!pipe_shader->rmap) {
intel_pipeline_shader_cleanup(pipe_shader, gpu);
- status = XGL_ERROR_OUT_OF_MEMORY;
+ status = VK_ERROR_OUT_OF_MEMORY;
}
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct brw_context *intel_create_brw_context(const struct intel_gpu *gpu);
void intel_destroy_brw_context(struct brw_context *brw);
-XGL_RESULT intel_pipeline_shader_compile(struct intel_pipeline_shader *ips,
+VK_RESULT intel_pipeline_shader_compile(struct intel_pipeline_shader *ips,
const struct intel_gpu *gpu,
const struct intel_desc_layout_chain *chain,
- const XGL_PIPELINE_SHADER *info);
+ const VK_PIPELINE_SHADER *info);
void intel_pipeline_shader_cleanup(struct intel_pipeline_shader *sh,
const struct intel_gpu *gpu);
-XGL_RESULT intel_pipeline_shader_compile_meta(struct intel_pipeline_shader *sh,
+VK_RESULT intel_pipeline_shader_compile_meta(struct intel_pipeline_shader *sh,
const struct intel_gpu *gpu,
enum intel_dev_meta_shader id);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
prog = get_program(&prog_size, stderr);
- code = intel_alloc(gpu, prog_size, 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ code = intel_alloc(gpu, prog_size, 0, VK_SYSTEM_ALLOC_INTERNAL);
if (!code)
return NULL;
extern "C" {
-XGL_RESULT intel_pipeline_shader_compile_meta(struct intel_pipeline_shader *sh,
+VK_RESULT intel_pipeline_shader_compile_meta(struct intel_pipeline_shader *sh,
const struct intel_gpu *gpu,
enum intel_dev_meta_shader id)
{
ralloc_free(brw->shader_prog);
ralloc_free(brw);
- return (sh->pCode) ? XGL_SUCCESS : XGL_ERROR_UNKNOWN;
+ return (sh->pCode) ? VK_SUCCESS : VK_ERROR_UNKNOWN;
}
} // extern "C"
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
ctx->Const.MaxVertexStreams = 1;
/* GL 3.2 */
- ctx->Const.ProfileMask = (ctx->API == API_OPENGL_CORE || ctx->API == API_XGL)
+ ctx->Const.ProfileMask = (ctx->API == API_OPENGL_CORE || ctx->API == API_VK)
? GL_CONTEXT_CORE_PROFILE_BIT
: GL_CONTEXT_COMPATIBILITY_PROFILE_BIT;
{
memset(ctx, 0, sizeof(*ctx));
- ctx->API = API_XGL;
+ ctx->API = API_VK;
ctx->Extensions.dummy_false = false;
ctx->Extensions.dummy_true = true;
shader->Source = (const char *) code + sizeof(header);
switch(header.gen_magic) {
- case XGL_SHADER_STAGE_VERTEX:
+ case VK_SHADER_STAGE_VERTEX:
shader->Type = GL_VERTEX_SHADER;
break;
- case XGL_SHADER_STAGE_FRAGMENT:
+ case VK_SHADER_STAGE_FRAGMENT:
shader->Type = GL_FRAGMENT_SHADER;
break;
default:
assert(shader_program->NumShaders == 1);
- // for XGL, we are independently compiling and linking individual
+ // for VK, we are independently compiling and linking individual
// shaders, which matches this frontend's concept of SSO
shader_program->SeparateShader = true;
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
-----------------
A file that ends with a function-like macro name as the last
non-whitespace token will result in a parse error (where it should be
-passed through as is).
\ No newline at end of file
+passed through as is).
case API_OPENGLES2:
this->language_version = 100;
break;
- case API_XGL:
+ case API_VK:
break;
}
}
*/
unsigned shader_shadow_samplers;
- bool isXGL;
+ bool isVK;
};
/**
if ((status == EXIT_SUCCESS) && do_link) {
assert(whole_program->NumShaders == 1);
- // for XGL, we are independently compiling and linking individual
+ // for VK, we are independently compiling and linking individual
// shaders, which matches this frontend's concept of SSO
whole_program->SeparateShader = true;
* GLES2, because they are not available there.
*/
if (ctx->API == API_OPENGL_CORE ||
- ctx->API == API_XGL ||
+ ctx->API == API_VK ||
ctx->API == API_OPENGLES2) {
return;
}
shader->Type = type;
shader->Stage = _mesa_shader_enum_to_shader_stage(type);
shader->Name = name;
- // LunarG: XGL does not use reference counts
+ // LunarG: VK does not use reference counts
// shader->RefCount = 1;
}
return shader;
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2015 LunarG, Inc.
*
static bool desc_iter_init_for_update(struct intel_desc_iter *iter,
const struct intel_desc_set *set,
- XGL_DESCRIPTOR_TYPE type,
+ VK_DESCRIPTOR_TYPE type,
uint32_t binding_index, uint32_t array_base)
{
if (!intel_desc_iter_init_for_binding(iter, set->layout,
return true;
}
-XGL_RESULT intel_desc_region_create(struct intel_dev *dev,
+VK_RESULT intel_desc_region_create(struct intel_dev *dev,
struct intel_desc_region **region_ret)
{
const uint32_t surface_count = 16384;
const uint32_t sampler_count = 16384;
struct intel_desc_region *region;
- region = intel_alloc(dev, sizeof(*region), 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ region = intel_alloc(dev, sizeof(*region), 0, VK_SYSTEM_ALLOC_INTERNAL);
if (!region)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
memset(region, 0, sizeof(*region));
if (!desc_region_init_desc_sizes(region, dev->gpu)) {
intel_free(dev, region);
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
}
intel_desc_offset_set(&region->size,
region->sampler_desc_size * sampler_count);
region->surfaces = intel_alloc(dev, region->size.surface,
- 64, XGL_SYSTEM_ALLOC_INTERNAL);
+ 64, VK_SYSTEM_ALLOC_INTERNAL);
if (!region->surfaces) {
intel_free(dev, region);
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
}
region->samplers = intel_alloc(dev, region->size.sampler,
- 64, XGL_SYSTEM_ALLOC_INTERNAL);
+ 64, VK_SYSTEM_ALLOC_INTERNAL);
if (!region->samplers) {
intel_free(dev, region->surfaces);
intel_free(dev, region);
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
}
*region_ret = region;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_desc_region_destroy(struct intel_dev *dev,
/**
* Get the size of a descriptor in the region.
*/
-static XGL_RESULT desc_region_get_desc_size(const struct intel_desc_region *region,
- XGL_DESCRIPTOR_TYPE type,
+static VK_RESULT desc_region_get_desc_size(const struct intel_desc_region *region,
+ VK_DESCRIPTOR_TYPE type,
struct intel_desc_offset *size)
{
uint32_t surface_size = 0, sampler_size = 0;
switch (type) {
- case XGL_DESCRIPTOR_TYPE_SAMPLER:
+ case VK_DESCRIPTOR_TYPE_SAMPLER:
sampler_size = region->sampler_desc_size;
break;
- case XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE:
+ case VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE:
surface_size = region->surface_desc_size;
sampler_size = region->sampler_desc_size;
break;
- case XGL_DESCRIPTOR_TYPE_TEXTURE:
- case XGL_DESCRIPTOR_TYPE_TEXTURE_BUFFER:
- case XGL_DESCRIPTOR_TYPE_IMAGE:
- case XGL_DESCRIPTOR_TYPE_IMAGE_BUFFER:
- case XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER:
- case XGL_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER:
- case XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC:
- case XGL_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER_DYNAMIC:
+ case VK_DESCRIPTOR_TYPE_TEXTURE:
+ case VK_DESCRIPTOR_TYPE_TEXTURE_BUFFER:
+ case VK_DESCRIPTOR_TYPE_IMAGE:
+ case VK_DESCRIPTOR_TYPE_IMAGE_BUFFER:
+ case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER:
+ case VK_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER:
+ case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC:
+ case VK_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER_DYNAMIC:
surface_size = region->surface_desc_size;
break;
default:
assert(!"unknown descriptor type");
- return XGL_ERROR_INVALID_VALUE;
+ return VK_ERROR_INVALID_VALUE;
break;
}
intel_desc_offset_set(size, surface_size, sampler_size);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-XGL_RESULT intel_desc_region_alloc(struct intel_desc_region *region,
- const XGL_DESCRIPTOR_POOL_CREATE_INFO *info,
+VK_RESULT intel_desc_region_alloc(struct intel_desc_region *region,
+ const VK_DESCRIPTOR_POOL_CREATE_INFO *info,
struct intel_desc_offset *begin,
struct intel_desc_offset *end)
{
/* calculate sizes needed */
for (i = 0; i < info->count; i++) {
- const XGL_DESCRIPTOR_TYPE_COUNT *tc = &info->pTypeCount[i];
+ const VK_DESCRIPTOR_TYPE_COUNT *tc = &info->pTypeCount[i];
struct intel_desc_offset size;
- XGL_RESULT ret;
+ VK_RESULT ret;
ret = desc_region_get_desc_size(region, tc->type, &size);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
return ret;
surface_size += size.surface * tc->count;
intel_desc_offset_add(end, &region->cur, &alloc);
if (!intel_desc_offset_within(end, &region->size))
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
/* increment the writer pointer */
region->cur = *end;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
static void desc_region_validate_begin_end(const struct intel_desc_region *region,
/* is it ok not to reclaim? */
}
-XGL_RESULT intel_desc_region_begin_update(struct intel_desc_region *region,
- XGL_DESCRIPTOR_UPDATE_MODE mode)
+VK_RESULT intel_desc_region_begin_update(struct intel_desc_region *region,
+ VK_DESCRIPTOR_UPDATE_MODE mode)
{
/* no-op */
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-XGL_RESULT intel_desc_region_end_update(struct intel_desc_region *region,
+VK_RESULT intel_desc_region_end_update(struct intel_desc_region *region,
struct intel_cmd *cmd)
{
/* No pipelined update. cmd_draw() will do the work. */
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_desc_region_clear(struct intel_desc_region *region,
void intel_desc_region_read_surface(const struct intel_desc_region *region,
const struct intel_desc_offset *offset,
- XGL_PIPELINE_SHADER_STAGE stage,
+ VK_PIPELINE_SHADER_STAGE stage,
const struct intel_mem **mem,
bool *read_only,
const uint32_t **cmd,
*read_only = desc->read_only;
switch (desc->type) {
case INTEL_DESC_SURFACE_BUF:
- *cmd = (stage == XGL_SHADER_STAGE_FRAGMENT) ?
+ *cmd = (stage == VK_SHADER_STAGE_FRAGMENT) ?
desc->u.buf->fs_cmd : desc->u.buf->cmd;
*cmd_len = desc->u.buf->cmd_len;
break;
intel_desc_pool_destroy(pool);
}
-XGL_RESULT intel_desc_pool_create(struct intel_dev *dev,
- XGL_DESCRIPTOR_POOL_USAGE usage,
+VK_RESULT intel_desc_pool_create(struct intel_dev *dev,
+ VK_DESCRIPTOR_POOL_USAGE usage,
uint32_t max_sets,
- const XGL_DESCRIPTOR_POOL_CREATE_INFO *info,
+ const VK_DESCRIPTOR_POOL_CREATE_INFO *info,
struct intel_desc_pool **pool_ret)
{
struct intel_desc_pool *pool;
- XGL_RESULT ret;
+ VK_RESULT ret;
pool = (struct intel_desc_pool *) intel_base_create(&dev->base.handle,
- sizeof(*pool), dev->base.dbg, XGL_DBG_OBJECT_DESCRIPTOR_POOL,
+ sizeof(*pool), dev->base.dbg, VK_DBG_OBJECT_DESCRIPTOR_POOL,
info, 0);
if (!pool)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
pool->dev = dev;
ret = intel_desc_region_alloc(dev->desc_region, info,
&pool->region_begin, &pool->region_end);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
intel_base_destroy(&pool->obj.base);
return ret;
}
*pool_ret = pool;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_desc_pool_destroy(struct intel_desc_pool *pool)
intel_base_destroy(&pool->obj.base);
}
-XGL_RESULT intel_desc_pool_alloc(struct intel_desc_pool *pool,
+VK_RESULT intel_desc_pool_alloc(struct intel_desc_pool *pool,
const struct intel_desc_layout *layout,
struct intel_desc_offset *begin,
struct intel_desc_offset *end)
intel_desc_offset_add(end, &pool->cur, &layout->region_size);
if (!intel_desc_offset_within(end, &pool->region_end))
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
/* increment the writer pointer */
pool->cur = *end;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_desc_pool_reset(struct intel_desc_pool *pool)
intel_desc_set_destroy(set);
}
-XGL_RESULT intel_desc_set_create(struct intel_dev *dev,
+VK_RESULT intel_desc_set_create(struct intel_dev *dev,
struct intel_desc_pool *pool,
- XGL_DESCRIPTOR_SET_USAGE usage,
+ VK_DESCRIPTOR_SET_USAGE usage,
const struct intel_desc_layout *layout,
struct intel_desc_set **set_ret)
{
struct intel_desc_set *set;
- XGL_RESULT ret;
+ VK_RESULT ret;
set = (struct intel_desc_set *) intel_base_create(&dev->base.handle,
- sizeof(*set), dev->base.dbg, XGL_DBG_OBJECT_DESCRIPTOR_SET,
+ sizeof(*set), dev->base.dbg, VK_DBG_OBJECT_DESCRIPTOR_SET,
NULL, 0);
if (!set)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
set->region = dev->desc_region;
ret = intel_desc_pool_alloc(pool, layout,
&set->region_begin, &set->region_end);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
intel_base_destroy(&set->obj.base);
return ret;
}
*set_ret = set;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_desc_set_destroy(struct intel_desc_set *set)
intel_base_destroy(&set->obj.base);
}
-static bool desc_set_img_layout_read_only(XGL_IMAGE_LAYOUT layout)
+static bool desc_set_img_layout_read_only(VK_IMAGE_LAYOUT layout)
{
switch (layout) {
- case XGL_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL:
- case XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL:
- case XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL:
+ case VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL:
+ case VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL:
+ case VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL:
return true;
default:
return false;
}
void intel_desc_set_update_samplers(struct intel_desc_set *set,
- const XGL_UPDATE_SAMPLERS *update)
+ const VK_UPDATE_SAMPLERS *update)
{
struct intel_desc_iter iter;
uint32_t i;
- if (!desc_iter_init_for_update(&iter, set, XGL_DESCRIPTOR_TYPE_SAMPLER,
+ if (!desc_iter_init_for_update(&iter, set, VK_DESCRIPTOR_TYPE_SAMPLER,
update->binding, update->arrayIndex))
return;
for (i = 0; i < update->count; i++) {
const struct intel_sampler *sampler =
- intel_sampler((XGL_SAMPLER) update->pSamplers[i]);
+ intel_sampler((VK_SAMPLER) update->pSamplers[i]);
struct intel_desc_sampler desc;
desc.sampler = sampler;
}
void intel_desc_set_update_sampler_textures(struct intel_desc_set *set,
- const XGL_UPDATE_SAMPLER_TEXTURES *update)
+ const VK_UPDATE_SAMPLER_TEXTURES *update)
{
struct intel_desc_iter iter;
const struct intel_desc_layout_binding *binding;
uint32_t i;
- if (!desc_iter_init_for_update(&iter, set, XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
+ if (!desc_iter_init_for_update(&iter, set, VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
update->binding, update->arrayIndex))
return;
const struct intel_sampler *sampler = (binding->immutable_samplers) ?
binding->immutable_samplers[update->arrayIndex + i] :
intel_sampler(update->pSamplerImageViews[i].sampler);
- const XGL_IMAGE_VIEW_ATTACH_INFO *info =
+ const VK_IMAGE_VIEW_ATTACH_INFO *info =
update->pSamplerImageViews[i].pImageView;
const struct intel_img_view *view = intel_img_view(info->view);
struct intel_desc_surface view_desc;
}
void intel_desc_set_update_images(struct intel_desc_set *set,
- const XGL_UPDATE_IMAGES *update)
+ const VK_UPDATE_IMAGES *update)
{
struct intel_desc_iter iter;
uint32_t i;
return;
for (i = 0; i < update->count; i++) {
- const XGL_IMAGE_VIEW_ATTACH_INFO *info = &update->pImageViews[i];
+ const VK_IMAGE_VIEW_ATTACH_INFO *info = &update->pImageViews[i];
const struct intel_img_view *view = intel_img_view(info->view);
struct intel_desc_surface desc;
}
void intel_desc_set_update_buffers(struct intel_desc_set *set,
- const XGL_UPDATE_BUFFERS *update)
+ const VK_UPDATE_BUFFERS *update)
{
struct intel_desc_iter iter;
uint32_t i;
return;
for (i = 0; i < update->count; i++) {
- const XGL_BUFFER_VIEW_ATTACH_INFO *info = &update->pBufferViews[i];
+ const VK_BUFFER_VIEW_ATTACH_INFO *info = &update->pBufferViews[i];
const struct intel_buf_view *view = intel_buf_view(info->view);
struct intel_desc_surface desc;
}
void intel_desc_set_update_as_copy(struct intel_desc_set *set,
- const XGL_UPDATE_AS_COPY *update)
+ const VK_UPDATE_AS_COPY *update)
{
const struct intel_desc_set *src_set =
intel_desc_set(update->descriptorSet);
uint32_t i;
/* disallow combined sampler textures */
- if (update->descriptorType == XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE)
+ if (update->descriptorType == VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE)
return;
if (!desc_iter_init_for_update(&iter, set, update->descriptorType,
intel_desc_layout_destroy(layout);
}
-static XGL_RESULT desc_layout_init_bindings(struct intel_desc_layout *layout,
+static VK_RESULT desc_layout_init_bindings(struct intel_desc_layout *layout,
const struct intel_desc_region *region,
- const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO *info)
+ const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO *info)
{
struct intel_desc_offset offset;
uint32_t i;
- XGL_RESULT ret;
+ VK_RESULT ret;
intel_desc_offset_set(&offset, 0, 0);
/* allocate bindings */
layout->bindings = intel_alloc(layout, sizeof(layout->bindings[0]) *
- info->count, 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ info->count, 0, VK_SYSTEM_ALLOC_INTERNAL);
if (!layout->bindings)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
memset(layout->bindings, 0, sizeof(layout->bindings[0]) * info->count);
layout->binding_count = info->count;
/* initialize bindings */
for (i = 0; i < info->count; i++) {
- const XGL_DESCRIPTOR_SET_LAYOUT_BINDING *lb = &info->pBinding[i];
+ const VK_DESCRIPTOR_SET_LAYOUT_BINDING *lb = &info->pBinding[i];
struct intel_desc_layout_binding *binding = &layout->bindings[i];
struct intel_desc_offset size;
switch (lb->descriptorType) {
- case XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC:
- case XGL_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER_DYNAMIC:
+ case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC:
+ case VK_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER_DYNAMIC:
layout->dynamic_desc_count += lb->count;
break;
default:
ret = desc_region_get_desc_size(region,
lb->descriptorType, &size);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
return ret;
binding->increment = size;
if (shared) {
binding->shared_immutable_sampler =
- intel_sampler((XGL_SAMPLER) lb->pImmutableSamplers[0]);
+ intel_sampler((VK_SAMPLER) lb->pImmutableSamplers[0]);
/* set sampler offset increment to 0 */
intel_desc_offset_set(&binding->increment,
binding->increment.surface, 0);
} else {
binding->immutable_samplers = intel_alloc(layout,
sizeof(binding->immutable_samplers[0]) * lb->count,
- 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ 0, VK_SYSTEM_ALLOC_INTERNAL);
if (!binding->immutable_samplers)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
for (j = 0; j < lb->count; j++) {
binding->immutable_samplers[j] =
- intel_sampler((XGL_SAMPLER) lb->pImmutableSamplers[j]);
+ intel_sampler((VK_SAMPLER) lb->pImmutableSamplers[j]);
}
}
}
layout->region_size = offset;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-XGL_RESULT intel_desc_layout_create(struct intel_dev *dev,
- const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO *info,
+VK_RESULT intel_desc_layout_create(struct intel_dev *dev,
+ const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO *info,
struct intel_desc_layout **layout_ret)
{
struct intel_desc_layout *layout;
- XGL_RESULT ret;
+ VK_RESULT ret;
layout = (struct intel_desc_layout *) intel_base_create(&dev->base.handle,
sizeof(*layout), dev->base.dbg,
- XGL_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT, info, 0);
+ VK_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT, info, 0);
if (!layout)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
ret = desc_layout_init_bindings(layout, dev->desc_region, info);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
intel_desc_layout_destroy(layout);
return ret;
}
*layout_ret = layout;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_desc_layout_destroy(struct intel_desc_layout *layout)
intel_desc_layout_chain_destroy(chain);
}
-XGL_RESULT intel_desc_layout_chain_create(struct intel_dev *dev,
- const XGL_DESCRIPTOR_SET_LAYOUT *layouts,
+VK_RESULT intel_desc_layout_chain_create(struct intel_dev *dev,
+ const VK_DESCRIPTOR_SET_LAYOUT *layouts,
uint32_t count,
struct intel_desc_layout_chain **chain_ret)
{
chain = (struct intel_desc_layout_chain *)
intel_base_create(&dev->base.handle, sizeof(*chain), dev->base.dbg,
- XGL_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT_CHAIN, NULL, 0);
+ VK_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT_CHAIN, NULL, 0);
if (!chain)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
chain->layouts = intel_alloc(chain, sizeof(chain->layouts[0]) * count,
- 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ 0, VK_SYSTEM_ALLOC_INTERNAL);
-if (!chain) {
+if (!chain->layouts) {
intel_desc_layout_chain_destroy(chain);
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
}
chain->dynamic_desc_indices = intel_alloc(chain,
sizeof(chain->dynamic_desc_indices[0]) * count,
- 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ 0, VK_SYSTEM_ALLOC_INTERNAL);
if (!chain->dynamic_desc_indices) {
intel_desc_layout_chain_destroy(chain);
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
}
for (i = 0; i < count; i++) {
*chain_ret = chain;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_desc_layout_chain_destroy(struct intel_desc_layout_chain *chain)
intel_base_destroy(&chain->obj.base);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorSetLayout(
- XGL_DEVICE device,
- const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo,
- XGL_DESCRIPTOR_SET_LAYOUT* pSetLayout)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDescriptorSetLayout(
+ VK_DEVICE device,
+ const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo,
+ VK_DESCRIPTOR_SET_LAYOUT* pSetLayout)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_desc_layout **) pSetLayout);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorSetLayoutChain(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkCreateDescriptorSetLayoutChain(
+ VK_DEVICE device,
uint32_t setLayoutArrayCount,
- const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray,
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain)
+ const VK_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray,
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_desc_layout_chain **) pLayoutChain);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBeginDescriptorPoolUpdate(
- XGL_DEVICE device,
- XGL_DESCRIPTOR_UPDATE_MODE updateMode)
+ICD_EXPORT VK_RESULT VKAPI vkBeginDescriptorPoolUpdate(
+ VK_DEVICE device,
+ VK_DESCRIPTOR_UPDATE_MODE updateMode)
{
struct intel_dev *dev = intel_dev(device);
struct intel_desc_region *region = dev->desc_region;
return intel_desc_region_begin_update(region, updateMode);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglEndDescriptorPoolUpdate(
- XGL_DEVICE device,
- XGL_CMD_BUFFER cmd_)
+ICD_EXPORT VK_RESULT VKAPI vkEndDescriptorPoolUpdate(
+ VK_DEVICE device,
+ VK_CMD_BUFFER cmd_)
{
struct intel_dev *dev = intel_dev(device);
struct intel_desc_region *region = dev->desc_region;
return intel_desc_region_end_update(region, cmd);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorPool(
- XGL_DEVICE device,
- XGL_DESCRIPTOR_POOL_USAGE poolUsage,
+ICD_EXPORT VK_RESULT VKAPI vkCreateDescriptorPool(
+ VK_DEVICE device,
+ VK_DESCRIPTOR_POOL_USAGE poolUsage,
uint32_t maxSets,
- const XGL_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo,
- XGL_DESCRIPTOR_POOL* pDescriptorPool)
+ const VK_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo,
+ VK_DESCRIPTOR_POOL* pDescriptorPool)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_desc_pool **) pDescriptorPool);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglResetDescriptorPool(
- XGL_DESCRIPTOR_POOL descriptorPool)
+ICD_EXPORT VK_RESULT VKAPI vkResetDescriptorPool(
+ VK_DESCRIPTOR_POOL descriptorPool)
{
struct intel_desc_pool *pool = intel_desc_pool(descriptorPool);
intel_desc_pool_reset(pool);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglAllocDescriptorSets(
- XGL_DESCRIPTOR_POOL descriptorPool,
- XGL_DESCRIPTOR_SET_USAGE setUsage,
+ICD_EXPORT VK_RESULT VKAPI vkAllocDescriptorSets(
+ VK_DESCRIPTOR_POOL descriptorPool,
+ VK_DESCRIPTOR_SET_USAGE setUsage,
uint32_t count,
- const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayouts,
- XGL_DESCRIPTOR_SET* pDescriptorSets,
+ const VK_DESCRIPTOR_SET_LAYOUT* pSetLayouts,
+ VK_DESCRIPTOR_SET* pDescriptorSets,
uint32_t* pCount)
{
struct intel_desc_pool *pool = intel_desc_pool(descriptorPool);
struct intel_dev *dev = pool->dev;
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
uint32_t i;
for (i = 0; i < count; i++) {
const struct intel_desc_layout *layout =
- intel_desc_layout((XGL_DESCRIPTOR_SET_LAYOUT) pSetLayouts[i]);
+ intel_desc_layout((VK_DESCRIPTOR_SET_LAYOUT) pSetLayouts[i]);
ret = intel_desc_set_create(dev, pool, setUsage, layout,
(struct intel_desc_set **) &pDescriptorSets[i]);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
break;
}
return ret;
}
-ICD_EXPORT void XGLAPI xglClearDescriptorSets(
- XGL_DESCRIPTOR_POOL descriptorPool,
+ICD_EXPORT void VKAPI vkClearDescriptorSets(
+ VK_DESCRIPTOR_POOL descriptorPool,
uint32_t count,
- const XGL_DESCRIPTOR_SET* pDescriptorSets)
+ const VK_DESCRIPTOR_SET* pDescriptorSets)
{
uint32_t i;
for (i = 0; i < count; i++) {
struct intel_desc_set *set =
- intel_desc_set((XGL_DESCRIPTOR_SET) pDescriptorSets[i]);
+ intel_desc_set((VK_DESCRIPTOR_SET) pDescriptorSets[i]);
intel_desc_region_clear(set->region, &set->region_begin, &set->region_end);
}
}
-ICD_EXPORT void XGLAPI xglUpdateDescriptors(
- XGL_DESCRIPTOR_SET descriptorSet,
+ICD_EXPORT void VKAPI vkUpdateDescriptors(
+ VK_DESCRIPTOR_SET descriptorSet,
uint32_t updateCount,
const void** ppUpdateArray)
{
for (i = 0; i < updateCount; i++) {
const union {
struct {
- XGL_STRUCTURE_TYPE sType;
+ VK_STRUCTURE_TYPE sType;
const void* pNext;
} common;
- XGL_UPDATE_SAMPLERS samplers;
- XGL_UPDATE_SAMPLER_TEXTURES sampler_textures;
- XGL_UPDATE_IMAGES images;
- XGL_UPDATE_BUFFERS buffers;
- XGL_UPDATE_AS_COPY as_copy;
+ VK_UPDATE_SAMPLERS samplers;
+ VK_UPDATE_SAMPLER_TEXTURES sampler_textures;
+ VK_UPDATE_IMAGES images;
+ VK_UPDATE_BUFFERS buffers;
+ VK_UPDATE_AS_COPY as_copy;
} *u = ppUpdateArray[i];
switch (u->common.sType) {
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
intel_desc_set_update_samplers(set, &u->samplers);
break;
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
intel_desc_set_update_sampler_textures(set, &u->sampler_textures);
break;
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
intel_desc_set_update_images(set, &u->images);
break;
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
intel_desc_set_update_buffers(set, &u->buffers);
break;
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
intel_desc_set_update_as_copy(set, &u->as_copy);
break;
default:
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2015 LunarG, Inc.
*
};
struct intel_desc_iter {
- XGL_DESCRIPTOR_TYPE type;
+ VK_DESCRIPTOR_TYPE type;
struct intel_desc_offset increment;
uint32_t size;
/* homogeneous bindings in this layout */
struct intel_desc_layout_binding {
- XGL_DESCRIPTOR_TYPE type;
+ VK_DESCRIPTOR_TYPE type;
uint32_t array_size;
const struct intel_sampler **immutable_samplers;
const struct intel_sampler *shared_immutable_sampler;
uint32_t total_dynamic_desc_count;
};
-static inline struct intel_desc_pool *intel_desc_pool(XGL_DESCRIPTOR_POOL pool)
+static inline struct intel_desc_pool *intel_desc_pool(VK_DESCRIPTOR_POOL pool)
{
return (struct intel_desc_pool *) pool;
}
return (struct intel_desc_pool *) obj;
}
-static inline struct intel_desc_set *intel_desc_set(XGL_DESCRIPTOR_SET set)
+static inline struct intel_desc_set *intel_desc_set(VK_DESCRIPTOR_SET set)
{
return (struct intel_desc_set *) set;
}
return (struct intel_desc_set *) obj;
}
-static inline struct intel_desc_layout *intel_desc_layout(XGL_DESCRIPTOR_SET_LAYOUT layout)
+static inline struct intel_desc_layout *intel_desc_layout(VK_DESCRIPTOR_SET_LAYOUT layout)
{
return (struct intel_desc_layout *) layout;
}
return (struct intel_desc_layout *) obj;
}
-static inline struct intel_desc_layout_chain *intel_desc_layout_chain(XGL_DESCRIPTOR_SET_LAYOUT_CHAIN chain)
+static inline struct intel_desc_layout_chain *intel_desc_layout_chain(VK_DESCRIPTOR_SET_LAYOUT_CHAIN chain)
{
return (struct intel_desc_layout_chain *) chain;
}
bool intel_desc_iter_advance(struct intel_desc_iter *iter);
-XGL_RESULT intel_desc_region_create(struct intel_dev *dev,
+VK_RESULT intel_desc_region_create(struct intel_dev *dev,
struct intel_desc_region **region_ret);
void intel_desc_region_destroy(struct intel_dev *dev,
struct intel_desc_region *region);
-XGL_RESULT intel_desc_region_alloc(struct intel_desc_region *region,
- const XGL_DESCRIPTOR_POOL_CREATE_INFO *info,
+VK_RESULT intel_desc_region_alloc(struct intel_desc_region *region,
+ const VK_DESCRIPTOR_POOL_CREATE_INFO *info,
struct intel_desc_offset *begin,
struct intel_desc_offset *end);
void intel_desc_region_free(struct intel_desc_region *region,
const struct intel_desc_offset *begin,
const struct intel_desc_offset *end);
-XGL_RESULT intel_desc_region_begin_update(struct intel_desc_region *region,
- XGL_DESCRIPTOR_UPDATE_MODE mode);
-XGL_RESULT intel_desc_region_end_update(struct intel_desc_region *region,
+VK_RESULT intel_desc_region_begin_update(struct intel_desc_region *region,
+ VK_DESCRIPTOR_UPDATE_MODE mode);
+VK_RESULT intel_desc_region_end_update(struct intel_desc_region *region,
struct intel_cmd *cmd);
void intel_desc_region_clear(struct intel_desc_region *region,
void intel_desc_region_read_surface(const struct intel_desc_region *region,
const struct intel_desc_offset *offset,
- XGL_PIPELINE_SHADER_STAGE stage,
+ VK_PIPELINE_SHADER_STAGE stage,
const struct intel_mem **mem,
bool *read_only,
const uint32_t **cmd,
const struct intel_desc_offset *offset,
const struct intel_sampler **sampler);
-XGL_RESULT intel_desc_pool_create(struct intel_dev *dev,
- XGL_DESCRIPTOR_POOL_USAGE usage,
+VK_RESULT intel_desc_pool_create(struct intel_dev *dev,
+ VK_DESCRIPTOR_POOL_USAGE usage,
uint32_t max_sets,
- const XGL_DESCRIPTOR_POOL_CREATE_INFO *info,
+ const VK_DESCRIPTOR_POOL_CREATE_INFO *info,
struct intel_desc_pool **pool_ret);
void intel_desc_pool_destroy(struct intel_desc_pool *pool);
-XGL_RESULT intel_desc_pool_alloc(struct intel_desc_pool *pool,
+VK_RESULT intel_desc_pool_alloc(struct intel_desc_pool *pool,
const struct intel_desc_layout *layout,
struct intel_desc_offset *begin,
struct intel_desc_offset *end);
void intel_desc_pool_reset(struct intel_desc_pool *pool);
-XGL_RESULT intel_desc_set_create(struct intel_dev *dev,
+VK_RESULT intel_desc_set_create(struct intel_dev *dev,
struct intel_desc_pool *pool,
- XGL_DESCRIPTOR_SET_USAGE usage,
+ VK_DESCRIPTOR_SET_USAGE usage,
const struct intel_desc_layout *layout,
struct intel_desc_set **set_ret);
void intel_desc_set_destroy(struct intel_desc_set *set);
void intel_desc_set_update_samplers(struct intel_desc_set *set,
- const XGL_UPDATE_SAMPLERS *update);
+ const VK_UPDATE_SAMPLERS *update);
void intel_desc_set_update_sampler_textures(struct intel_desc_set *set,
- const XGL_UPDATE_SAMPLER_TEXTURES *update);
+ const VK_UPDATE_SAMPLER_TEXTURES *update);
void intel_desc_set_update_images(struct intel_desc_set *set,
- const XGL_UPDATE_IMAGES *update);
+ const VK_UPDATE_IMAGES *update);
void intel_desc_set_update_buffers(struct intel_desc_set *set,
- const XGL_UPDATE_BUFFERS *update);
+ const VK_UPDATE_BUFFERS *update);
void intel_desc_set_update_as_copy(struct intel_desc_set *set,
- const XGL_UPDATE_AS_COPY *update);
+ const VK_UPDATE_AS_COPY *update);
-XGL_RESULT intel_desc_layout_create(struct intel_dev *dev,
- const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO *info,
+VK_RESULT intel_desc_layout_create(struct intel_dev *dev,
+ const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO *info,
struct intel_desc_layout **layout_ret);
void intel_desc_layout_destroy(struct intel_desc_layout *layout);
-XGL_RESULT intel_desc_layout_chain_create(struct intel_dev *dev,
- const XGL_DESCRIPTOR_SET_LAYOUT *layouts,
+VK_RESULT intel_desc_layout_chain_create(struct intel_dev *dev,
+ const VK_DESCRIPTOR_SET_LAYOUT *layouts,
uint32_t count,
struct intel_desc_layout_chain **chain_ret);
void intel_desc_layout_chain_destroy(struct intel_desc_layout_chain *chain);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
return true;
}
-static XGL_RESULT dev_create_queues(struct intel_dev *dev,
- const XGL_DEVICE_QUEUE_CREATE_INFO *queues,
+static VK_RESULT dev_create_queues(struct intel_dev *dev,
+ const VK_DEVICE_QUEUE_CREATE_INFO *queues,
uint32_t count)
{
uint32_t i;
if (!count)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
for (i = 0; i < count; i++) {
- const XGL_DEVICE_QUEUE_CREATE_INFO *q = &queues[i];
- XGL_RESULT ret = XGL_SUCCESS;
+ const VK_DEVICE_QUEUE_CREATE_INFO *q = &queues[i];
+ VK_RESULT ret = VK_SUCCESS;
if (q->queueNodeIndex < INTEL_GPU_ENGINE_COUNT &&
q->queueCount == 1 && !dev->queues[q->queueNodeIndex]) {
&dev->queues[q->queueNodeIndex]);
}
else {
- ret = XGL_ERROR_INVALID_POINTER;
+ ret = VK_ERROR_INVALID_POINTER;
}
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
uint32_t j;
for (j = 0; j < i; j++)
intel_queue_destroy(dev->queues[j]);
}
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-XGL_RESULT intel_dev_create(struct intel_gpu *gpu,
- const XGL_DEVICE_CREATE_INFO *info,
+VK_RESULT intel_dev_create(struct intel_gpu *gpu,
+ const VK_DEVICE_CREATE_INFO *info,
struct intel_dev **dev_ret)
{
struct intel_dev *dev;
uint32_t i;
- XGL_RESULT ret;
+ VK_RESULT ret;
if (gpu->winsys)
- return XGL_ERROR_DEVICE_ALREADY_CREATED;
+ return VK_ERROR_DEVICE_ALREADY_CREATED;
dev = (struct intel_dev *) intel_base_create(&gpu->handle,
sizeof(*dev), info->flags,
- XGL_DBG_OBJECT_DEVICE, info, sizeof(struct intel_dev_dbg));
+ VK_DBG_OBJECT_DEVICE, info, sizeof(struct intel_dev_dbg));
if (!dev)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
for (i = 0; i < info->extensionCount; i++) {
const enum intel_ext_type ext = intel_gpu_lookup_extension(gpu,
dev->gpu = gpu;
ret = intel_gpu_init_winsys(gpu);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
intel_dev_destroy(dev);
return ret;
}
"command buffer scratch", 4096, false);
if (!dev->cmd_scratch_bo) {
intel_dev_destroy(dev);
- return XGL_ERROR_OUT_OF_GPU_MEMORY;
+ return VK_ERROR_OUT_OF_GPU_MEMORY;
}
if (!dev_create_meta_shaders(dev)) {
intel_dev_destroy(dev);
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
}
ret = intel_desc_region_create(dev, &dev->desc_region);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
intel_dev_destroy(dev);
return ret;
}
ret = dev_create_queues(dev, info->pRequestedQueues,
info->queueRecordCount);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
intel_dev_destroy(dev);
return ret;
}
*dev_ret = dev;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
static void dev_clear_msg_filters(struct intel_dev *dev)
intel_gpu_cleanup_winsys(gpu);
}
-XGL_RESULT intel_dev_add_msg_filter(struct intel_dev *dev,
+VK_RESULT intel_dev_add_msg_filter(struct intel_dev *dev,
int32_t msg_code,
- XGL_DBG_MSG_FILTER filter)
+ VK_DBG_MSG_FILTER filter)
{
struct intel_dev_dbg *dbg = intel_dev_dbg(dev);
struct intel_dev_dbg_msg_filter *f = dbg->filters;
- assert(filter != XGL_DBG_MSG_FILTER_NONE);
+ assert(filter != VK_DBG_MSG_FILTER_NONE);
while (f) {
if (f->msg_code == msg_code)
f->triggered = false;
}
} else {
- f = intel_alloc(dev, sizeof(*f), 0, XGL_SYSTEM_ALLOC_DEBUG);
+ f = intel_alloc(dev, sizeof(*f), 0, VK_SYSTEM_ALLOC_DEBUG);
if (!f)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
f->msg_code = msg_code;
f->filter = filter;
dbg->filters = f;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_dev_remove_msg_filter(struct intel_dev *dev,
continue;
}
- if (filter->filter == XGL_DBG_MSG_FILTER_ALL)
+ if (filter->filter == VK_DBG_MSG_FILTER_ALL)
return true;
- if (filter->filter == XGL_DBG_MSG_FILTER_REPEATED &&
+ if (filter->filter == VK_DBG_MSG_FILTER_REPEATED &&
filter->triggered)
return true;
}
void intel_dev_log(struct intel_dev *dev,
- XGL_DBG_MSG_TYPE msg_type,
- XGL_VALIDATION_LEVEL validation_level,
+ VK_DBG_MSG_TYPE msg_type,
+ VK_VALIDATION_LEVEL validation_level,
struct intel_base *src_object,
size_t location,
int32_t msg_code,
return;
va_start(ap, format);
- intel_logv(dev, msg_type, validation_level, (XGL_BASE_OBJECT) src_object,
+ intel_logv(dev, msg_type, validation_level, (VK_BASE_OBJECT) src_object,
location, msg_code, format, ap);
va_end(ap);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDevice(
- XGL_PHYSICAL_GPU gpu_,
- const XGL_DEVICE_CREATE_INFO* pCreateInfo,
- XGL_DEVICE* pDevice)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDevice(
+ VK_PHYSICAL_GPU gpu_,
+ const VK_DEVICE_CREATE_INFO* pCreateInfo,
+ VK_DEVICE* pDevice)
{
struct intel_gpu *gpu = intel_gpu(gpu_);
return intel_dev_create(gpu, pCreateInfo, (struct intel_dev **) pDevice);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDestroyDevice(
- XGL_DEVICE device)
+ICD_EXPORT VK_RESULT VKAPI vkDestroyDevice(
+ VK_DEVICE device)
{
struct intel_dev *dev = intel_dev(device);
intel_dev_destroy(dev);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetDeviceQueue(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkGetDeviceQueue(
+ VK_DEVICE device,
uint32_t queueNodeIndex,
uint32_t queueIndex,
- XGL_QUEUE* pQueue)
+ VK_QUEUE* pQueue)
{
struct intel_dev *dev = intel_dev(device);
if (queueNodeIndex >= INTEL_GPU_ENGINE_COUNT) {
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
}
if (queueIndex > 0)
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
*pQueue = dev->queues[queueNodeIndex];
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDeviceWaitIdle(
- XGL_DEVICE device)
+ICD_EXPORT VK_RESULT VKAPI vkDeviceWaitIdle(
+ VK_DEVICE device)
{
struct intel_dev *dev = intel_dev(device);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
uint32_t i;
for (i = 0; i < ARRAY_SIZE(dev->queues); i++) {
if (dev->queues[i]) {
- const XGL_RESULT r = intel_queue_wait(dev->queues[i], -1);
- if (r != XGL_SUCCESS)
+ const VK_RESULT r = intel_queue_wait(dev->queues[i], -1);
+ if (r != VK_SUCCESS)
ret = r;
}
}
return ret;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetValidationLevel(
- XGL_DEVICE device,
- XGL_VALIDATION_LEVEL validationLevel)
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetValidationLevel(
+ VK_DEVICE device,
+ VK_VALIDATION_LEVEL validationLevel)
{
struct intel_dev *dev = intel_dev(device);
struct intel_dev_dbg *dbg = intel_dev_dbg(dev);
if (dbg)
dbg->validation_level = validationLevel;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetMessageFilter(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetMessageFilter(
+ VK_DEVICE device,
int32_t msgCode,
- XGL_DBG_MSG_FILTER filter)
+ VK_DBG_MSG_FILTER filter)
{
struct intel_dev *dev = intel_dev(device);
if (!dev->base.dbg)
- return XGL_SUCCESS;
+ return VK_SUCCESS;
- if (filter == XGL_DBG_MSG_FILTER_NONE) {
+ if (filter == VK_DBG_MSG_FILTER_NONE) {
intel_dev_remove_msg_filter(dev, msgCode);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
return intel_dev_add_msg_filter(dev, msgCode, filter);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetDeviceOption(
- XGL_DEVICE device,
- XGL_DBG_DEVICE_OPTION dbgOption,
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetDeviceOption(
+ VK_DEVICE device,
+ VK_DBG_DEVICE_OPTION dbgOption,
size_t dataSize,
const void* pData)
{
struct intel_dev *dev = intel_dev(device);
struct intel_dev_dbg *dbg = intel_dev_dbg(dev);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
if (dataSize == 0)
- return XGL_ERROR_INVALID_VALUE;
+ return VK_ERROR_INVALID_VALUE;
switch (dbgOption) {
- case XGL_DBG_OPTION_DISABLE_PIPELINE_LOADS:
+ case VK_DBG_OPTION_DISABLE_PIPELINE_LOADS:
if (dbg)
dbg->disable_pipeline_loads = *((const bool *) pData);
break;
- case XGL_DBG_OPTION_FORCE_OBJECT_MEMORY_REQS:
+ case VK_DBG_OPTION_FORCE_OBJECT_MEMORY_REQS:
if (dbg)
dbg->force_object_memory_reqs = *((const bool *) pData);
break;
- case XGL_DBG_OPTION_FORCE_LARGE_IMAGE_ALIGNMENT:
+ case VK_DBG_OPTION_FORCE_LARGE_IMAGE_ALIGNMENT:
if (dbg)
dbg->force_large_image_alignment = *((const bool *) pData);
break;
default:
- ret = XGL_ERROR_INVALID_VALUE;
+ ret = VK_ERROR_INVALID_VALUE;
break;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct intel_dev_dbg_msg_filter {
int32_t msg_code;
- XGL_DBG_MSG_FILTER filter;
+ VK_DBG_MSG_FILTER filter;
bool triggered;
struct intel_dev_dbg_msg_filter *next;
struct intel_dev_dbg {
struct intel_base_dbg base;
- XGL_VALIDATION_LEVEL validation_level;
+ VK_VALIDATION_LEVEL validation_level;
bool disable_pipeline_loads;
bool force_object_memory_reqs;
bool force_large_image_alignment;
struct intel_queue *queues[INTEL_GPU_ENGINE_COUNT];
};
-static inline struct intel_dev *intel_dev(XGL_DEVICE dev)
+static inline struct intel_dev *intel_dev(VK_DEVICE dev)
{
return (struct intel_dev *) dev;
}
return (struct intel_dev_dbg *) dev->base.dbg;
}
-XGL_RESULT intel_dev_create(struct intel_gpu *gpu,
- const XGL_DEVICE_CREATE_INFO *info,
+VK_RESULT intel_dev_create(struct intel_gpu *gpu,
+ const VK_DEVICE_CREATE_INFO *info,
struct intel_dev **dev_ret);
void intel_dev_destroy(struct intel_dev *dev);
-XGL_RESULT intel_dev_add_msg_filter(struct intel_dev *dev,
+VK_RESULT intel_dev_add_msg_filter(struct intel_dev *dev,
int32_t msg_code,
- XGL_DBG_MSG_FILTER filter);
+ VK_DBG_MSG_FILTER filter);
void intel_dev_remove_msg_filter(struct intel_dev *dev,
int32_t msg_code);
void intel_dev_log(struct intel_dev *dev,
- XGL_DBG_MSG_TYPE msg_type,
- XGL_VALIDATION_LEVEL validation_level,
+ VK_DBG_MSG_TYPE msg_type,
+ VK_VALIDATION_LEVEL validation_level,
struct intel_base *src_object,
size_t location,
int32_t msg_code,
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include "mem.h"
#include "event.h"
-static XGL_RESULT event_map(struct intel_event *event, uint32_t **ptr_ret)
+static VK_RESULT event_map(struct intel_event *event, uint32_t **ptr_ret)
{
void *ptr;
if (!event->obj.mem)
- return XGL_ERROR_MEMORY_NOT_BOUND;
+ return VK_ERROR_MEMORY_NOT_BOUND;
/*
* This is an unsynchronized mapping. It doesn't look like we want a
*/
ptr = intel_mem_map(event->obj.mem, 0);
if (!ptr)
- return XGL_ERROR_MEMORY_MAP_FAILED;
+ return VK_ERROR_MEMORY_MAP_FAILED;
*ptr_ret = (uint32_t *) ((uint8_t *) ptr + event->obj.offset);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
static void event_unmap(struct intel_event *event)
intel_mem_unmap(event->obj.mem);
}
-static XGL_RESULT event_write(struct intel_event *event, uint32_t val)
+static VK_RESULT event_write(struct intel_event *event, uint32_t val)
{
- XGL_RESULT ret;
+ VK_RESULT ret;
uint32_t *ptr;
ret = event_map(event, &ptr);
- if (ret == XGL_SUCCESS) {
+ if (ret == VK_SUCCESS) {
*ptr = val;
event_unmap(event);
}
return ret;
}
-static XGL_RESULT event_read(struct intel_event *event, uint32_t *val)
+static VK_RESULT event_read(struct intel_event *event, uint32_t *val)
{
- XGL_RESULT ret;
+ VK_RESULT ret;
uint32_t *ptr;
ret = event_map(event, &ptr);
- if (ret == XGL_SUCCESS) {
+ if (ret == VK_SUCCESS) {
*val = *ptr;
event_unmap(event);
}
intel_event_destroy(event);
}
-static XGL_RESULT event_get_info(struct intel_base *base, int type,
+static VK_RESULT event_get_info(struct intel_base *base, int type,
size_t *size, void *data)
{
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
switch (type) {
- case XGL_INFO_TYPE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_MEMORY_REQUIREMENTS:
{
- XGL_MEMORY_REQUIREMENTS *mem_req = data;
+ VK_MEMORY_REQUIREMENTS *mem_req = data;
- *size = sizeof(XGL_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
 /* use a dword aligned to a 64-byte boundary */
mem_req->size = 4;
mem_req->alignment = 64;
- mem_req->memType = XGL_MEMORY_TYPE_OTHER;
+ mem_req->memType = VK_MEMORY_TYPE_OTHER;
}
break;
default:
return ret;
}
-XGL_RESULT intel_event_create(struct intel_dev *dev,
- const XGL_EVENT_CREATE_INFO *info,
+VK_RESULT intel_event_create(struct intel_dev *dev,
+ const VK_EVENT_CREATE_INFO *info,
struct intel_event **event_ret)
{
struct intel_event *event;
event = (struct intel_event *) intel_base_create(&dev->base.handle,
- sizeof(*event), dev->base.dbg, XGL_DBG_OBJECT_EVENT, info, 0);
+ sizeof(*event), dev->base.dbg, VK_DBG_OBJECT_EVENT, info, 0);
if (!event)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
event->obj.base.get_info = event_get_info;
event->obj.destroy = event_destroy;
*event_ret = event;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_event_destroy(struct intel_event *event)
intel_base_destroy(&event->obj.base);
}
-XGL_RESULT intel_event_set(struct intel_event *event)
+VK_RESULT intel_event_set(struct intel_event *event)
{
return event_write(event, 1);
}
-XGL_RESULT intel_event_reset(struct intel_event *event)
+VK_RESULT intel_event_reset(struct intel_event *event)
{
return event_write(event, 0);
}
-XGL_RESULT intel_event_get_status(struct intel_event *event)
+VK_RESULT intel_event_get_status(struct intel_event *event)
{
- XGL_RESULT ret;
+ VK_RESULT ret;
uint32_t val;
ret = event_read(event, &val);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
return ret;
- return (val) ? XGL_EVENT_SET : XGL_EVENT_RESET;
+ return (val) ? VK_EVENT_SET : VK_EVENT_RESET;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateEvent(
- XGL_DEVICE device,
- const XGL_EVENT_CREATE_INFO* pCreateInfo,
- XGL_EVENT* pEvent)
+ICD_EXPORT VK_RESULT VKAPI vkCreateEvent(
+ VK_DEVICE device,
+ const VK_EVENT_CREATE_INFO* pCreateInfo,
+ VK_EVENT* pEvent)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_event **) pEvent);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetEventStatus(
- XGL_EVENT event_)
+ICD_EXPORT VK_RESULT VKAPI vkGetEventStatus(
+ VK_EVENT event_)
{
struct intel_event *event = intel_event(event_);
return intel_event_get_status(event);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglSetEvent(
- XGL_EVENT event_)
+ICD_EXPORT VK_RESULT VKAPI vkSetEvent(
+ VK_EVENT event_)
{
struct intel_event *event = intel_event(event_);
return intel_event_set(event);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglResetEvent(
- XGL_EVENT event_)
+ICD_EXPORT VK_RESULT VKAPI vkResetEvent(
+ VK_EVENT event_)
{
struct intel_event *event = intel_event(event_);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct intel_obj obj;
};
-static inline struct intel_event *intel_event(XGL_EVENT event)
+static inline struct intel_event *intel_event(VK_EVENT event)
{
return (struct intel_event *) event;
}
return (struct intel_event *) obj;
}
-XGL_RESULT intel_event_create(struct intel_dev *dev,
- const XGL_EVENT_CREATE_INFO *info,
+VK_RESULT intel_event_create(struct intel_dev *dev,
+ const VK_EVENT_CREATE_INFO *info,
struct intel_event **event_ret);
void intel_event_destroy(struct intel_event *event);
-XGL_RESULT intel_event_set(struct intel_event *event);
-XGL_RESULT intel_event_reset(struct intel_event *event);
-XGL_RESULT intel_event_get_status(struct intel_event *event);
+VK_RESULT intel_event_set(struct intel_event *event);
+VK_RESULT intel_event_reset(struct intel_event *event);
+VK_RESULT intel_event_get_status(struct intel_event *event);
#endif /* EVENT_H */
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
intel_fb_destroy(fb);
}
-XGL_RESULT intel_fb_create(struct intel_dev *dev,
- const XGL_FRAMEBUFFER_CREATE_INFO *info,
+VK_RESULT intel_fb_create(struct intel_dev *dev,
+ const VK_FRAMEBUFFER_CREATE_INFO *info,
struct intel_fb **fb_ret)
{
struct intel_fb *fb;
uint32_t width, height, array_size, i;
if (info->colorAttachmentCount > INTEL_MAX_RENDER_TARGETS)
- return XGL_ERROR_INVALID_VALUE;
+ return VK_ERROR_INVALID_VALUE;
fb = (struct intel_fb *) intel_base_create(&dev->base.handle,
- sizeof(*fb), dev->base.dbg, XGL_DBG_OBJECT_FRAMEBUFFER, info, 0);
+ sizeof(*fb), dev->base.dbg, VK_DBG_OBJECT_FRAMEBUFFER, info, 0);
if (!fb)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
width = info->width;
height = info->height;
array_size = info->layers;
for (i = 0; i < info->colorAttachmentCount; i++) {
- const XGL_COLOR_ATTACHMENT_BIND_INFO *att =
+ const VK_COLOR_ATTACHMENT_BIND_INFO *att =
&info->pColorAttachments[i];
const struct intel_rt_view *rt = intel_rt_view(att->view);
const struct intel_layout *layout = &rt->img->layout;
if (rt->img->samples != info->sampleCount) {
intel_fb_destroy(fb);
- return XGL_ERROR_INVALID_VALUE;
+ return VK_ERROR_INVALID_VALUE;
}
fb->rt[i] = rt;
fb->rt_count = info->colorAttachmentCount;
if (info->pDepthStencilAttachment) {
- const XGL_DEPTH_STENCIL_BIND_INFO *att =
+ const VK_DEPTH_STENCIL_BIND_INFO *att =
info->pDepthStencilAttachment;
const struct intel_ds_view *ds = intel_ds_view(att->view);
const struct intel_layout *layout = &ds->img->layout;
if (ds->img->samples != info->sampleCount) {
intel_fb_destroy(fb);
- return XGL_ERROR_INVALID_VALUE;
+ return VK_ERROR_INVALID_VALUE;
}
fb->ds = ds;
switch (att->layout) {
- case XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL:
- case XGL_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL:
+ case VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL:
+ case VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL:
fb->optimal_ds = true;
break;
default:
*fb_ret = fb;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_fb_destroy(struct intel_fb *fb)
intel_render_pass_destroy(rp);
}
-XGL_RESULT intel_render_pass_create(struct intel_dev *dev,
- const XGL_RENDER_PASS_CREATE_INFO *info,
+VK_RESULT intel_render_pass_create(struct intel_dev *dev,
+ const VK_RENDER_PASS_CREATE_INFO *info,
struct intel_render_pass **rp_ret)
{
struct intel_render_pass *rp;
uint32_t i;
rp = (struct intel_render_pass *) intel_base_create(&dev->base.handle,
- sizeof(*rp), dev->base.dbg, XGL_DBG_OBJECT_RENDER_PASS, info, 0);
+ sizeof(*rp), dev->base.dbg, VK_DBG_OBJECT_RENDER_PASS, info, 0);
if (!rp)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
rp->obj.destroy = render_pass_destroy;
/* TODO add any clear color ops */
for (i = 0; i < info->colorAttachmentCount; i++)
- assert(info->pColorLoadOps[i] != XGL_ATTACHMENT_LOAD_OP_CLEAR);
- assert(info->depthLoadOp != XGL_ATTACHMENT_LOAD_OP_CLEAR);
- assert(info->stencilLoadOp != XGL_ATTACHMENT_LOAD_OP_CLEAR);
+ assert(info->pColorLoadOps[i] != VK_ATTACHMENT_LOAD_OP_CLEAR);
+ assert(info->depthLoadOp != VK_ATTACHMENT_LOAD_OP_CLEAR);
+ assert(info->stencilLoadOp != VK_ATTACHMENT_LOAD_OP_CLEAR);
*rp_ret = rp;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_render_pass_destroy(struct intel_render_pass *rp)
intel_base_destroy(&rp->obj.base);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateFramebuffer(
- XGL_DEVICE device,
- const XGL_FRAMEBUFFER_CREATE_INFO* pCreateInfo,
- XGL_FRAMEBUFFER* pFramebuffer)
+ICD_EXPORT VK_RESULT VKAPI vkCreateFramebuffer(
+ VK_DEVICE device,
+ const VK_FRAMEBUFFER_CREATE_INFO* pCreateInfo,
+ VK_FRAMEBUFFER* pFramebuffer)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_fb **) pFramebuffer);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateRenderPass(
- XGL_DEVICE device,
- const XGL_RENDER_PASS_CREATE_INFO* pCreateInfo,
- XGL_RENDER_PASS* pRenderPass)
+ICD_EXPORT VK_RESULT VKAPI vkCreateRenderPass(
+ VK_DEVICE device,
+ const VK_RENDER_PASS_CREATE_INFO* pCreateInfo,
+ VK_RENDER_PASS* pRenderPass)
{
struct intel_dev *dev = intel_dev(device);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct intel_obj obj;
};
-static inline struct intel_fb *intel_fb(XGL_FRAMEBUFFER fb)
+static inline struct intel_fb *intel_fb(VK_FRAMEBUFFER fb)
{
return (struct intel_fb *) fb;
}
return (struct intel_fb *) obj;
}
-static inline struct intel_render_pass *intel_render_pass(XGL_RENDER_PASS rp)
+static inline struct intel_render_pass *intel_render_pass(VK_RENDER_PASS rp)
{
return (struct intel_render_pass *) rp;
}
return (struct intel_render_pass *) obj;
}
-XGL_RESULT intel_fb_create(struct intel_dev *dev,
- const XGL_FRAMEBUFFER_CREATE_INFO *pInfo,
+VK_RESULT intel_fb_create(struct intel_dev *dev,
+ const VK_FRAMEBUFFER_CREATE_INFO *pInfo,
struct intel_fb **fb_ret);
void intel_fb_destroy(struct intel_fb *fb);
-XGL_RESULT intel_render_pass_create(struct intel_dev *dev,
- const XGL_RENDER_PASS_CREATE_INFO *pInfo,
+VK_RESULT intel_render_pass_create(struct intel_dev *dev,
+ const VK_RENDER_PASS_CREATE_INFO *pInfo,
struct intel_render_pass **rp_ret);
void intel_render_pass_destroy(struct intel_render_pass *rp);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
intel_fence_destroy(fence);
}
-XGL_RESULT intel_fence_create(struct intel_dev *dev,
- const XGL_FENCE_CREATE_INFO *info,
+VK_RESULT intel_fence_create(struct intel_dev *dev,
+ const VK_FENCE_CREATE_INFO *info,
struct intel_fence **fence_ret)
{
struct intel_fence *fence;
fence = (struct intel_fence *) intel_base_create(&dev->base.handle,
- sizeof(*fence), dev->base.dbg, XGL_DBG_OBJECT_FENCE, info, 0);
+ sizeof(*fence), dev->base.dbg, VK_DBG_OBJECT_FENCE, info, 0);
if (!fence)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
if (dev->exts[INTEL_EXT_WSI_X11]) {
- XGL_RESULT ret = intel_wsi_fence_init(fence);
- if (ret != XGL_SUCCESS) {
+ VK_RESULT ret = intel_wsi_fence_init(fence);
+ if (ret != VK_SUCCESS) {
intel_fence_destroy(fence);
return ret;
}
fence->obj.destroy = fence_destroy;
*fence_ret = fence;
- fence->signaled = (info->flags & XGL_FENCE_CREATE_SIGNALED_BIT);
+ fence->signaled = (info->flags & VK_FENCE_CREATE_SIGNALED_BIT);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_fence_destroy(struct intel_fence *fence)
fence->signaled = false;
}
-XGL_RESULT intel_fence_wait(struct intel_fence *fence, int64_t timeout_ns)
+VK_RESULT intel_fence_wait(struct intel_fence *fence, int64_t timeout_ns)
{
- XGL_RESULT ret;
+ VK_RESULT ret;
ret = intel_wsi_fence_wait(fence, timeout_ns);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
return ret;
if (fence->signaled) {
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
if (fence->seqno_bo) {
ret = (intel_bo_wait(fence->seqno_bo, timeout_ns)) ?
- XGL_NOT_READY : XGL_SUCCESS;
- if (ret == XGL_SUCCESS) {
+ VK_NOT_READY : VK_SUCCESS;
+ if (ret == VK_SUCCESS) {
fence->signaled = true;
}
return ret;
}
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateFence(
- XGL_DEVICE device,
- const XGL_FENCE_CREATE_INFO* pCreateInfo,
- XGL_FENCE* pFence)
+ICD_EXPORT VK_RESULT VKAPI vkCreateFence(
+ VK_DEVICE device,
+ const VK_FENCE_CREATE_INFO* pCreateInfo,
+ VK_FENCE* pFence)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_fence **) pFence);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetFenceStatus(
- XGL_FENCE fence_)
+ICD_EXPORT VK_RESULT VKAPI vkGetFenceStatus(
+ VK_FENCE fence_)
{
struct intel_fence *fence = intel_fence(fence_);
return intel_fence_wait(fence, 0);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglWaitForFences(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkWaitForFences(
+ VK_DEVICE device,
uint32_t fenceCount,
- const XGL_FENCE* pFences,
+ const VK_FENCE* pFences,
bool32_t waitAll,
uint64_t timeout)
{
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
uint32_t i;
for (i = 0; i < fenceCount; i++) {
struct intel_fence *fence = intel_fence(pFences[i]);
int64_t ns;
- XGL_RESULT r;
+ VK_RESULT r;
 /* timeout in nanoseconds */
 ns = (timeout <= (uint64_t) INT64_MAX) ? (int64_t) timeout : -1;
r = intel_fence_wait(fence, ns);
- if (!waitAll && r == XGL_SUCCESS)
- return XGL_SUCCESS;
+ if (!waitAll && r == VK_SUCCESS)
+ return VK_SUCCESS;
- if (r != XGL_SUCCESS)
+ if (r != VK_SUCCESS)
ret = r;
}
return ret;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglResetFences(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkResetFences(
+ VK_DEVICE device,
uint32_t fenceCount,
- XGL_FENCE* pFences)
+ VK_FENCE* pFences)
{
uint32_t i;
fence->signaled = false;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
void *wsi_data;
};
-static inline struct intel_fence *intel_fence(XGL_FENCE fence)
+static inline struct intel_fence *intel_fence(VK_FENCE fence)
{
return (struct intel_fence *) fence;
}
return (struct intel_fence *) obj;
}
-XGL_RESULT intel_fence_create(struct intel_dev *dev,
- const XGL_FENCE_CREATE_INFO *info,
+VK_RESULT intel_fence_create(struct intel_dev *dev,
+ const VK_FENCE_CREATE_INFO *info,
struct intel_fence **fence_ret);
void intel_fence_destroy(struct intel_fence *fence);
-XGL_RESULT intel_fence_wait(struct intel_fence *fence, int64_t timeout_ns);
+VK_RESULT intel_fence_wait(struct intel_fence *fence, int64_t timeout_ns);
void intel_fence_copy(struct intel_fence *fence,
const struct intel_fence *src);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#undef CAP
};
-static const int intel_color_mapping[XGL_NUM_FMT] = {
- [XGL_FMT_R4G4_UNORM] = 0,
- [XGL_FMT_R4G4_USCALED] = 0,
- [XGL_FMT_R4G4B4A4_UNORM] = 0,
- [XGL_FMT_R4G4B4A4_USCALED] = 0,
- [XGL_FMT_R5G6B5_UNORM] = 0,
- [XGL_FMT_R5G6B5_USCALED] = 0,
- [XGL_FMT_R5G5B5A1_UNORM] = 0,
- [XGL_FMT_R5G5B5A1_USCALED] = 0,
- [XGL_FMT_R8_UNORM] = GEN6_FORMAT_R8_UNORM,
- [XGL_FMT_R8_SNORM] = GEN6_FORMAT_R8_SNORM,
- [XGL_FMT_R8_USCALED] = GEN6_FORMAT_R8_USCALED,
- [XGL_FMT_R8_SSCALED] = GEN6_FORMAT_R8_SSCALED,
- [XGL_FMT_R8_UINT] = GEN6_FORMAT_R8_UINT,
- [XGL_FMT_R8_SINT] = GEN6_FORMAT_R8_SINT,
- [XGL_FMT_R8_SRGB] = 0,
- [XGL_FMT_R8G8_UNORM] = GEN6_FORMAT_R8G8_UNORM,
- [XGL_FMT_R8G8_SNORM] = GEN6_FORMAT_R8G8_SNORM,
- [XGL_FMT_R8G8_USCALED] = GEN6_FORMAT_R8G8_USCALED,
- [XGL_FMT_R8G8_SSCALED] = GEN6_FORMAT_R8G8_SSCALED,
- [XGL_FMT_R8G8_UINT] = GEN6_FORMAT_R8G8_UINT,
- [XGL_FMT_R8G8_SINT] = GEN6_FORMAT_R8G8_SINT,
- [XGL_FMT_R8G8_SRGB] = 0,
- [XGL_FMT_R8G8B8_UNORM] = GEN6_FORMAT_R8G8B8_UNORM,
- [XGL_FMT_R8G8B8_SNORM] = GEN6_FORMAT_R8G8B8_SNORM,
- [XGL_FMT_R8G8B8_USCALED] = GEN6_FORMAT_R8G8B8_USCALED,
- [XGL_FMT_R8G8B8_SSCALED] = GEN6_FORMAT_R8G8B8_SSCALED,
- [XGL_FMT_R8G8B8_UINT] = GEN6_FORMAT_R8G8B8_UINT,
- [XGL_FMT_R8G8B8_SINT] = GEN6_FORMAT_R8G8B8_SINT,
- [XGL_FMT_R8G8B8_SRGB] = GEN6_FORMAT_R8G8B8_UNORM_SRGB,
- [XGL_FMT_R8G8B8A8_UNORM] = GEN6_FORMAT_R8G8B8A8_UNORM,
- [XGL_FMT_R8G8B8A8_SNORM] = GEN6_FORMAT_R8G8B8A8_SNORM,
- [XGL_FMT_R8G8B8A8_USCALED] = GEN6_FORMAT_R8G8B8A8_USCALED,
- [XGL_FMT_R8G8B8A8_SSCALED] = GEN6_FORMAT_R8G8B8A8_SSCALED,
- [XGL_FMT_R8G8B8A8_UINT] = GEN6_FORMAT_R8G8B8A8_UINT,
- [XGL_FMT_R8G8B8A8_SINT] = GEN6_FORMAT_R8G8B8A8_SINT,
- [XGL_FMT_R8G8B8A8_SRGB] = GEN6_FORMAT_R8G8B8A8_UNORM_SRGB,
- [XGL_FMT_R10G10B10A2_UNORM] = GEN6_FORMAT_R10G10B10A2_UNORM,
- [XGL_FMT_R10G10B10A2_SNORM] = GEN6_FORMAT_R10G10B10A2_SNORM,
- [XGL_FMT_R10G10B10A2_USCALED] = GEN6_FORMAT_R10G10B10A2_USCALED,
- [XGL_FMT_R10G10B10A2_SSCALED] = GEN6_FORMAT_R10G10B10A2_SSCALED,
- [XGL_FMT_R10G10B10A2_UINT] = GEN6_FORMAT_R10G10B10A2_UINT,
- [XGL_FMT_R10G10B10A2_SINT] = GEN6_FORMAT_R10G10B10A2_SINT,
- [XGL_FMT_R16_UNORM] = GEN6_FORMAT_R16_UNORM,
- [XGL_FMT_R16_SNORM] = GEN6_FORMAT_R16_SNORM,
- [XGL_FMT_R16_USCALED] = GEN6_FORMAT_R16_USCALED,
- [XGL_FMT_R16_SSCALED] = GEN6_FORMAT_R16_SSCALED,
- [XGL_FMT_R16_UINT] = GEN6_FORMAT_R16_UINT,
- [XGL_FMT_R16_SINT] = GEN6_FORMAT_R16_SINT,
- [XGL_FMT_R16_SFLOAT] = GEN6_FORMAT_R16_FLOAT,
- [XGL_FMT_R16G16_UNORM] = GEN6_FORMAT_R16G16_UNORM,
- [XGL_FMT_R16G16_SNORM] = GEN6_FORMAT_R16G16_SNORM,
- [XGL_FMT_R16G16_USCALED] = GEN6_FORMAT_R16G16_USCALED,
- [XGL_FMT_R16G16_SSCALED] = GEN6_FORMAT_R16G16_SSCALED,
- [XGL_FMT_R16G16_UINT] = GEN6_FORMAT_R16G16_UINT,
- [XGL_FMT_R16G16_SINT] = GEN6_FORMAT_R16G16_SINT,
- [XGL_FMT_R16G16_SFLOAT] = GEN6_FORMAT_R16G16_FLOAT,
- [XGL_FMT_R16G16B16_UNORM] = GEN6_FORMAT_R16G16B16_UNORM,
- [XGL_FMT_R16G16B16_SNORM] = GEN6_FORMAT_R16G16B16_SNORM,
- [XGL_FMT_R16G16B16_USCALED] = GEN6_FORMAT_R16G16B16_USCALED,
- [XGL_FMT_R16G16B16_SSCALED] = GEN6_FORMAT_R16G16B16_SSCALED,
- [XGL_FMT_R16G16B16_UINT] = GEN6_FORMAT_R16G16B16_UINT,
- [XGL_FMT_R16G16B16_SINT] = GEN6_FORMAT_R16G16B16_SINT,
- [XGL_FMT_R16G16B16_SFLOAT] = 0,
- [XGL_FMT_R16G16B16A16_UNORM] = GEN6_FORMAT_R16G16B16A16_UNORM,
- [XGL_FMT_R16G16B16A16_SNORM] = GEN6_FORMAT_R16G16B16A16_SNORM,
- [XGL_FMT_R16G16B16A16_USCALED] = GEN6_FORMAT_R16G16B16A16_USCALED,
- [XGL_FMT_R16G16B16A16_SSCALED] = GEN6_FORMAT_R16G16B16A16_SSCALED,
- [XGL_FMT_R16G16B16A16_UINT] = GEN6_FORMAT_R16G16B16A16_UINT,
- [XGL_FMT_R16G16B16A16_SINT] = GEN6_FORMAT_R16G16B16A16_SINT,
- [XGL_FMT_R16G16B16A16_SFLOAT] = GEN6_FORMAT_R16G16B16A16_FLOAT,
- [XGL_FMT_R32_UINT] = GEN6_FORMAT_R32_UINT,
- [XGL_FMT_R32_SINT] = GEN6_FORMAT_R32_SINT,
- [XGL_FMT_R32_SFLOAT] = GEN6_FORMAT_R32_FLOAT,
- [XGL_FMT_R32G32_UINT] = GEN6_FORMAT_R32G32_UINT,
- [XGL_FMT_R32G32_SINT] = GEN6_FORMAT_R32G32_SINT,
- [XGL_FMT_R32G32_SFLOAT] = GEN6_FORMAT_R32G32_FLOAT,
- [XGL_FMT_R32G32B32_UINT] = GEN6_FORMAT_R32G32B32_UINT,
- [XGL_FMT_R32G32B32_SINT] = GEN6_FORMAT_R32G32B32_SINT,
- [XGL_FMT_R32G32B32_SFLOAT] = GEN6_FORMAT_R32G32B32_FLOAT,
- [XGL_FMT_R32G32B32A32_UINT] = GEN6_FORMAT_R32G32B32A32_UINT,
- [XGL_FMT_R32G32B32A32_SINT] = GEN6_FORMAT_R32G32B32A32_SINT,
- [XGL_FMT_R32G32B32A32_SFLOAT] = GEN6_FORMAT_R32G32B32A32_FLOAT,
- [XGL_FMT_R64_SFLOAT] = GEN6_FORMAT_R64_FLOAT,
- [XGL_FMT_R64G64_SFLOAT] = GEN6_FORMAT_R64G64_FLOAT,
- [XGL_FMT_R64G64B64_SFLOAT] = GEN6_FORMAT_R64G64B64_FLOAT,
- [XGL_FMT_R64G64B64A64_SFLOAT] = GEN6_FORMAT_R64G64B64A64_FLOAT,
- [XGL_FMT_R11G11B10_UFLOAT] = GEN6_FORMAT_R11G11B10_FLOAT,
- [XGL_FMT_R9G9B9E5_UFLOAT] = GEN6_FORMAT_R9G9B9E5_SHAREDEXP,
- [XGL_FMT_BC1_RGB_UNORM] = GEN6_FORMAT_BC1_UNORM,
- [XGL_FMT_BC1_RGB_SRGB] = GEN6_FORMAT_BC1_UNORM_SRGB,
- [XGL_FMT_BC2_UNORM] = GEN6_FORMAT_BC2_UNORM,
- [XGL_FMT_BC2_SRGB] = GEN6_FORMAT_BC2_UNORM_SRGB,
- [XGL_FMT_BC3_UNORM] = GEN6_FORMAT_BC3_UNORM,
- [XGL_FMT_BC3_SRGB] = GEN6_FORMAT_BC3_UNORM_SRGB,
- [XGL_FMT_BC4_UNORM] = GEN6_FORMAT_BC4_UNORM,
- [XGL_FMT_BC4_SNORM] = GEN6_FORMAT_BC4_SNORM,
- [XGL_FMT_BC5_UNORM] = GEN6_FORMAT_BC5_UNORM,
- [XGL_FMT_BC5_SNORM] = GEN6_FORMAT_BC5_SNORM,
- [XGL_FMT_BC6H_UFLOAT] = GEN6_FORMAT_BC6H_UF16,
- [XGL_FMT_BC6H_SFLOAT] = GEN6_FORMAT_BC6H_SF16,
- [XGL_FMT_BC7_UNORM] = GEN6_FORMAT_BC7_UNORM,
- [XGL_FMT_BC7_SRGB] = GEN6_FORMAT_BC7_UNORM_SRGB,
+static const int intel_color_mapping[VK_NUM_FMT] = {
+ [VK_FMT_R4G4_UNORM] = 0,
+ [VK_FMT_R4G4_USCALED] = 0,
+ [VK_FMT_R4G4B4A4_UNORM] = 0,
+ [VK_FMT_R4G4B4A4_USCALED] = 0,
+ [VK_FMT_R5G6B5_UNORM] = 0,
+ [VK_FMT_R5G6B5_USCALED] = 0,
+ [VK_FMT_R5G5B5A1_UNORM] = 0,
+ [VK_FMT_R5G5B5A1_USCALED] = 0,
+ [VK_FMT_R8_UNORM] = GEN6_FORMAT_R8_UNORM,
+ [VK_FMT_R8_SNORM] = GEN6_FORMAT_R8_SNORM,
+ [VK_FMT_R8_USCALED] = GEN6_FORMAT_R8_USCALED,
+ [VK_FMT_R8_SSCALED] = GEN6_FORMAT_R8_SSCALED,
+ [VK_FMT_R8_UINT] = GEN6_FORMAT_R8_UINT,
+ [VK_FMT_R8_SINT] = GEN6_FORMAT_R8_SINT,
+ [VK_FMT_R8_SRGB] = 0,
+ [VK_FMT_R8G8_UNORM] = GEN6_FORMAT_R8G8_UNORM,
+ [VK_FMT_R8G8_SNORM] = GEN6_FORMAT_R8G8_SNORM,
+ [VK_FMT_R8G8_USCALED] = GEN6_FORMAT_R8G8_USCALED,
+ [VK_FMT_R8G8_SSCALED] = GEN6_FORMAT_R8G8_SSCALED,
+ [VK_FMT_R8G8_UINT] = GEN6_FORMAT_R8G8_UINT,
+ [VK_FMT_R8G8_SINT] = GEN6_FORMAT_R8G8_SINT,
+ [VK_FMT_R8G8_SRGB] = 0,
+ [VK_FMT_R8G8B8_UNORM] = GEN6_FORMAT_R8G8B8_UNORM,
+ [VK_FMT_R8G8B8_SNORM] = GEN6_FORMAT_R8G8B8_SNORM,
+ [VK_FMT_R8G8B8_USCALED] = GEN6_FORMAT_R8G8B8_USCALED,
+ [VK_FMT_R8G8B8_SSCALED] = GEN6_FORMAT_R8G8B8_SSCALED,
+ [VK_FMT_R8G8B8_UINT] = GEN6_FORMAT_R8G8B8_UINT,
+ [VK_FMT_R8G8B8_SINT] = GEN6_FORMAT_R8G8B8_SINT,
+ [VK_FMT_R8G8B8_SRGB] = GEN6_FORMAT_R8G8B8_UNORM_SRGB,
+ [VK_FMT_R8G8B8A8_UNORM] = GEN6_FORMAT_R8G8B8A8_UNORM,
+ [VK_FMT_R8G8B8A8_SNORM] = GEN6_FORMAT_R8G8B8A8_SNORM,
+ [VK_FMT_R8G8B8A8_USCALED] = GEN6_FORMAT_R8G8B8A8_USCALED,
+ [VK_FMT_R8G8B8A8_SSCALED] = GEN6_FORMAT_R8G8B8A8_SSCALED,
+ [VK_FMT_R8G8B8A8_UINT] = GEN6_FORMAT_R8G8B8A8_UINT,
+ [VK_FMT_R8G8B8A8_SINT] = GEN6_FORMAT_R8G8B8A8_SINT,
+ [VK_FMT_R8G8B8A8_SRGB] = GEN6_FORMAT_R8G8B8A8_UNORM_SRGB,
+ [VK_FMT_R10G10B10A2_UNORM] = GEN6_FORMAT_R10G10B10A2_UNORM,
+ [VK_FMT_R10G10B10A2_SNORM] = GEN6_FORMAT_R10G10B10A2_SNORM,
+ [VK_FMT_R10G10B10A2_USCALED] = GEN6_FORMAT_R10G10B10A2_USCALED,
+ [VK_FMT_R10G10B10A2_SSCALED] = GEN6_FORMAT_R10G10B10A2_SSCALED,
+ [VK_FMT_R10G10B10A2_UINT] = GEN6_FORMAT_R10G10B10A2_UINT,
+ [VK_FMT_R10G10B10A2_SINT] = GEN6_FORMAT_R10G10B10A2_SINT,
+ [VK_FMT_R16_UNORM] = GEN6_FORMAT_R16_UNORM,
+ [VK_FMT_R16_SNORM] = GEN6_FORMAT_R16_SNORM,
+ [VK_FMT_R16_USCALED] = GEN6_FORMAT_R16_USCALED,
+ [VK_FMT_R16_SSCALED] = GEN6_FORMAT_R16_SSCALED,
+ [VK_FMT_R16_UINT] = GEN6_FORMAT_R16_UINT,
+ [VK_FMT_R16_SINT] = GEN6_FORMAT_R16_SINT,
+ [VK_FMT_R16_SFLOAT] = GEN6_FORMAT_R16_FLOAT,
+ [VK_FMT_R16G16_UNORM] = GEN6_FORMAT_R16G16_UNORM,
+ [VK_FMT_R16G16_SNORM] = GEN6_FORMAT_R16G16_SNORM,
+ [VK_FMT_R16G16_USCALED] = GEN6_FORMAT_R16G16_USCALED,
+ [VK_FMT_R16G16_SSCALED] = GEN6_FORMAT_R16G16_SSCALED,
+ [VK_FMT_R16G16_UINT] = GEN6_FORMAT_R16G16_UINT,
+ [VK_FMT_R16G16_SINT] = GEN6_FORMAT_R16G16_SINT,
+ [VK_FMT_R16G16_SFLOAT] = GEN6_FORMAT_R16G16_FLOAT,
+ [VK_FMT_R16G16B16_UNORM] = GEN6_FORMAT_R16G16B16_UNORM,
+ [VK_FMT_R16G16B16_SNORM] = GEN6_FORMAT_R16G16B16_SNORM,
+ [VK_FMT_R16G16B16_USCALED] = GEN6_FORMAT_R16G16B16_USCALED,
+ [VK_FMT_R16G16B16_SSCALED] = GEN6_FORMAT_R16G16B16_SSCALED,
+ [VK_FMT_R16G16B16_UINT] = GEN6_FORMAT_R16G16B16_UINT,
+ [VK_FMT_R16G16B16_SINT] = GEN6_FORMAT_R16G16B16_SINT,
+ [VK_FMT_R16G16B16_SFLOAT] = 0,
+ [VK_FMT_R16G16B16A16_UNORM] = GEN6_FORMAT_R16G16B16A16_UNORM,
+ [VK_FMT_R16G16B16A16_SNORM] = GEN6_FORMAT_R16G16B16A16_SNORM,
+ [VK_FMT_R16G16B16A16_USCALED] = GEN6_FORMAT_R16G16B16A16_USCALED,
+ [VK_FMT_R16G16B16A16_SSCALED] = GEN6_FORMAT_R16G16B16A16_SSCALED,
+ [VK_FMT_R16G16B16A16_UINT] = GEN6_FORMAT_R16G16B16A16_UINT,
+ [VK_FMT_R16G16B16A16_SINT] = GEN6_FORMAT_R16G16B16A16_SINT,
+ [VK_FMT_R16G16B16A16_SFLOAT] = GEN6_FORMAT_R16G16B16A16_FLOAT,
+ [VK_FMT_R32_UINT] = GEN6_FORMAT_R32_UINT,
+ [VK_FMT_R32_SINT] = GEN6_FORMAT_R32_SINT,
+ [VK_FMT_R32_SFLOAT] = GEN6_FORMAT_R32_FLOAT,
+ [VK_FMT_R32G32_UINT] = GEN6_FORMAT_R32G32_UINT,
+ [VK_FMT_R32G32_SINT] = GEN6_FORMAT_R32G32_SINT,
+ [VK_FMT_R32G32_SFLOAT] = GEN6_FORMAT_R32G32_FLOAT,
+ [VK_FMT_R32G32B32_UINT] = GEN6_FORMAT_R32G32B32_UINT,
+ [VK_FMT_R32G32B32_SINT] = GEN6_FORMAT_R32G32B32_SINT,
+ [VK_FMT_R32G32B32_SFLOAT] = GEN6_FORMAT_R32G32B32_FLOAT,
+ [VK_FMT_R32G32B32A32_UINT] = GEN6_FORMAT_R32G32B32A32_UINT,
+ [VK_FMT_R32G32B32A32_SINT] = GEN6_FORMAT_R32G32B32A32_SINT,
+ [VK_FMT_R32G32B32A32_SFLOAT] = GEN6_FORMAT_R32G32B32A32_FLOAT,
+ [VK_FMT_R64_SFLOAT] = GEN6_FORMAT_R64_FLOAT,
+ [VK_FMT_R64G64_SFLOAT] = GEN6_FORMAT_R64G64_FLOAT,
+ [VK_FMT_R64G64B64_SFLOAT] = GEN6_FORMAT_R64G64B64_FLOAT,
+ [VK_FMT_R64G64B64A64_SFLOAT] = GEN6_FORMAT_R64G64B64A64_FLOAT,
+ [VK_FMT_R11G11B10_UFLOAT] = GEN6_FORMAT_R11G11B10_FLOAT,
+ [VK_FMT_R9G9B9E5_UFLOAT] = GEN6_FORMAT_R9G9B9E5_SHAREDEXP,
+ [VK_FMT_BC1_RGB_UNORM] = GEN6_FORMAT_BC1_UNORM,
+ [VK_FMT_BC1_RGB_SRGB] = GEN6_FORMAT_BC1_UNORM_SRGB,
+ [VK_FMT_BC2_UNORM] = GEN6_FORMAT_BC2_UNORM,
+ [VK_FMT_BC2_SRGB] = GEN6_FORMAT_BC2_UNORM_SRGB,
+ [VK_FMT_BC3_UNORM] = GEN6_FORMAT_BC3_UNORM,
+ [VK_FMT_BC3_SRGB] = GEN6_FORMAT_BC3_UNORM_SRGB,
+ [VK_FMT_BC4_UNORM] = GEN6_FORMAT_BC4_UNORM,
+ [VK_FMT_BC4_SNORM] = GEN6_FORMAT_BC4_SNORM,
+ [VK_FMT_BC5_UNORM] = GEN6_FORMAT_BC5_UNORM,
+ [VK_FMT_BC5_SNORM] = GEN6_FORMAT_BC5_SNORM,
+ [VK_FMT_BC6H_UFLOAT] = GEN6_FORMAT_BC6H_UF16,
+ [VK_FMT_BC6H_SFLOAT] = GEN6_FORMAT_BC6H_SF16,
+ [VK_FMT_BC7_UNORM] = GEN6_FORMAT_BC7_UNORM,
+ [VK_FMT_BC7_SRGB] = GEN6_FORMAT_BC7_UNORM_SRGB,
/* TODO: Implement for remaining compressed formats. */
- [XGL_FMT_ETC2_R8G8B8_UNORM] = 0,
- [XGL_FMT_ETC2_R8G8B8A1_UNORM] = 0,
- [XGL_FMT_ETC2_R8G8B8A8_UNORM] = 0,
- [XGL_FMT_EAC_R11_UNORM] = 0,
- [XGL_FMT_EAC_R11_SNORM] = 0,
- [XGL_FMT_EAC_R11G11_UNORM] = 0,
- [XGL_FMT_EAC_R11G11_SNORM] = 0,
- [XGL_FMT_ASTC_4x4_UNORM] = 0,
- [XGL_FMT_ASTC_4x4_SRGB] = 0,
- [XGL_FMT_ASTC_5x4_UNORM] = 0,
- [XGL_FMT_ASTC_5x4_SRGB] = 0,
- [XGL_FMT_ASTC_5x5_UNORM] = 0,
- [XGL_FMT_ASTC_5x5_SRGB] = 0,
- [XGL_FMT_ASTC_6x5_UNORM] = 0,
- [XGL_FMT_ASTC_6x5_SRGB] = 0,
- [XGL_FMT_ASTC_6x6_UNORM] = 0,
- [XGL_FMT_ASTC_6x6_SRGB] = 0,
- [XGL_FMT_ASTC_8x5_UNORM] = 0,
- [XGL_FMT_ASTC_8x5_SRGB] = 0,
- [XGL_FMT_ASTC_8x6_UNORM] = 0,
- [XGL_FMT_ASTC_8x6_SRGB] = 0,
- [XGL_FMT_ASTC_8x8_UNORM] = 0,
- [XGL_FMT_ASTC_8x8_SRGB] = 0,
- [XGL_FMT_ASTC_10x5_UNORM] = 0,
- [XGL_FMT_ASTC_10x5_SRGB] = 0,
- [XGL_FMT_ASTC_10x6_UNORM] = 0,
- [XGL_FMT_ASTC_10x6_SRGB] = 0,
- [XGL_FMT_ASTC_10x8_UNORM] = 0,
- [XGL_FMT_ASTC_10x8_SRGB] = 0,
- [XGL_FMT_ASTC_10x10_UNORM] = 0,
- [XGL_FMT_ASTC_10x10_SRGB] = 0,
- [XGL_FMT_ASTC_12x10_UNORM] = 0,
- [XGL_FMT_ASTC_12x10_SRGB] = 0,
- [XGL_FMT_ASTC_12x12_UNORM] = 0,
- [XGL_FMT_ASTC_12x12_SRGB] = 0,
- [XGL_FMT_B5G6R5_UNORM] = GEN6_FORMAT_B5G6R5_UNORM,
- [XGL_FMT_B5G6R5_USCALED] = 0,
- [XGL_FMT_B8G8R8_UNORM] = 0,
- [XGL_FMT_B8G8R8_SNORM] = 0,
- [XGL_FMT_B8G8R8_USCALED] = 0,
- [XGL_FMT_B8G8R8_SSCALED] = 0,
- [XGL_FMT_B8G8R8_UINT] = 0,
- [XGL_FMT_B8G8R8_SINT] = 0,
- [XGL_FMT_B8G8R8_SRGB] = GEN6_FORMAT_B5G6R5_UNORM_SRGB,
- [XGL_FMT_B8G8R8A8_UNORM] = GEN6_FORMAT_B8G8R8A8_UNORM,
- [XGL_FMT_B8G8R8A8_SNORM] = 0,
- [XGL_FMT_B8G8R8A8_USCALED] = 0,
- [XGL_FMT_B8G8R8A8_SSCALED] = 0,
- [XGL_FMT_B8G8R8A8_UINT] = 0,
- [XGL_FMT_B8G8R8A8_SINT] = 0,
- [XGL_FMT_B8G8R8A8_SRGB] = GEN6_FORMAT_B8G8R8A8_UNORM_SRGB,
- [XGL_FMT_B10G10R10A2_UNORM] = GEN6_FORMAT_B10G10R10A2_UNORM,
- [XGL_FMT_B10G10R10A2_SNORM] = GEN6_FORMAT_B10G10R10A2_SNORM,
- [XGL_FMT_B10G10R10A2_USCALED] = GEN6_FORMAT_B10G10R10A2_USCALED,
- [XGL_FMT_B10G10R10A2_SSCALED] = GEN6_FORMAT_B10G10R10A2_SSCALED,
- [XGL_FMT_B10G10R10A2_UINT] = GEN6_FORMAT_B10G10R10A2_UINT,
- [XGL_FMT_B10G10R10A2_SINT] = GEN6_FORMAT_B10G10R10A2_SINT
+ [VK_FMT_ETC2_R8G8B8_UNORM] = 0,
+ [VK_FMT_ETC2_R8G8B8A1_UNORM] = 0,
+ [VK_FMT_ETC2_R8G8B8A8_UNORM] = 0,
+ [VK_FMT_EAC_R11_UNORM] = 0,
+ [VK_FMT_EAC_R11_SNORM] = 0,
+ [VK_FMT_EAC_R11G11_UNORM] = 0,
+ [VK_FMT_EAC_R11G11_SNORM] = 0,
+ [VK_FMT_ASTC_4x4_UNORM] = 0,
+ [VK_FMT_ASTC_4x4_SRGB] = 0,
+ [VK_FMT_ASTC_5x4_UNORM] = 0,
+ [VK_FMT_ASTC_5x4_SRGB] = 0,
+ [VK_FMT_ASTC_5x5_UNORM] = 0,
+ [VK_FMT_ASTC_5x5_SRGB] = 0,
+ [VK_FMT_ASTC_6x5_UNORM] = 0,
+ [VK_FMT_ASTC_6x5_SRGB] = 0,
+ [VK_FMT_ASTC_6x6_UNORM] = 0,
+ [VK_FMT_ASTC_6x6_SRGB] = 0,
+ [VK_FMT_ASTC_8x5_UNORM] = 0,
+ [VK_FMT_ASTC_8x5_SRGB] = 0,
+ [VK_FMT_ASTC_8x6_UNORM] = 0,
+ [VK_FMT_ASTC_8x6_SRGB] = 0,
+ [VK_FMT_ASTC_8x8_UNORM] = 0,
+ [VK_FMT_ASTC_8x8_SRGB] = 0,
+ [VK_FMT_ASTC_10x5_UNORM] = 0,
+ [VK_FMT_ASTC_10x5_SRGB] = 0,
+ [VK_FMT_ASTC_10x6_UNORM] = 0,
+ [VK_FMT_ASTC_10x6_SRGB] = 0,
+ [VK_FMT_ASTC_10x8_UNORM] = 0,
+ [VK_FMT_ASTC_10x8_SRGB] = 0,
+ [VK_FMT_ASTC_10x10_UNORM] = 0,
+ [VK_FMT_ASTC_10x10_SRGB] = 0,
+ [VK_FMT_ASTC_12x10_UNORM] = 0,
+ [VK_FMT_ASTC_12x10_SRGB] = 0,
+ [VK_FMT_ASTC_12x12_UNORM] = 0,
+ [VK_FMT_ASTC_12x12_SRGB] = 0,
+ [VK_FMT_B5G6R5_UNORM] = GEN6_FORMAT_B5G6R5_UNORM,
+ [VK_FMT_B5G6R5_USCALED] = 0,
+ [VK_FMT_B8G8R8_UNORM] = 0,
+ [VK_FMT_B8G8R8_SNORM] = 0,
+ [VK_FMT_B8G8R8_USCALED] = 0,
+ [VK_FMT_B8G8R8_SSCALED] = 0,
+ [VK_FMT_B8G8R8_UINT] = 0,
+ [VK_FMT_B8G8R8_SINT] = 0,
+ [VK_FMT_B8G8R8_SRGB] = GEN6_FORMAT_B5G6R5_UNORM_SRGB,
+ [VK_FMT_B8G8R8A8_UNORM] = GEN6_FORMAT_B8G8R8A8_UNORM,
+ [VK_FMT_B8G8R8A8_SNORM] = 0,
+ [VK_FMT_B8G8R8A8_USCALED] = 0,
+ [VK_FMT_B8G8R8A8_SSCALED] = 0,
+ [VK_FMT_B8G8R8A8_UINT] = 0,
+ [VK_FMT_B8G8R8A8_SINT] = 0,
+ [VK_FMT_B8G8R8A8_SRGB] = GEN6_FORMAT_B8G8R8A8_UNORM_SRGB,
+ [VK_FMT_B10G10R10A2_UNORM] = GEN6_FORMAT_B10G10R10A2_UNORM,
+ [VK_FMT_B10G10R10A2_SNORM] = GEN6_FORMAT_B10G10R10A2_SNORM,
+ [VK_FMT_B10G10R10A2_USCALED] = GEN6_FORMAT_B10G10R10A2_USCALED,
+ [VK_FMT_B10G10R10A2_SSCALED] = GEN6_FORMAT_B10G10R10A2_SSCALED,
+ [VK_FMT_B10G10R10A2_UINT] = GEN6_FORMAT_B10G10R10A2_UINT,
+ [VK_FMT_B10G10R10A2_SINT] = GEN6_FORMAT_B10G10R10A2_SINT
};
int intel_format_translate_color(const struct intel_gpu *gpu,
- XGL_FORMAT format)
+ VK_FORMAT format)
{
int fmt;
/* TODO: Implement for remaining compressed formats. */
/* GEN6_FORMAT_R32G32B32A32_FLOAT happens to be 0 */
- if (format == XGL_FMT_R32G32B32A32_SFLOAT)
+ if (format == VK_FMT_R32G32B32A32_SFLOAT)
assert(fmt == 0);
else if (!fmt)
fmt = -1;
return fmt;
}
-static XGL_FLAGS intel_format_get_color_features(const struct intel_dev *dev,
- XGL_FORMAT format)
+static VK_FLAGS intel_format_get_color_features(const struct intel_dev *dev,
+ VK_FORMAT format)
{
const int fmt = intel_format_translate_color(dev->gpu, format);
const struct intel_vf_cap *vf;
const struct intel_sampler_cap *sampler;
const struct intel_dp_cap *dp;
- XGL_FLAGS features;
+ VK_FLAGS features;
if (fmt < 0)
return 0;
vf = (fmt < ARRAY_SIZE(intel_vf_caps)) ? &intel_vf_caps[fmt] : NULL;
dp = (fmt < ARRAY_SIZE(intel_dp_caps)) ? &intel_dp_caps[fmt] : NULL;
- features = XGL_FORMAT_MEMORY_SHADER_ACCESS_BIT;
+ features = VK_FORMAT_MEMORY_SHADER_ACCESS_BIT;
#define TEST(dev, func, cap) ((func) && (func)->cap && \
intel_gpu_gen((dev)->gpu) >= (func)->cap)
if (TEST(dev, sampler, sampling)) {
if (icd_format_is_int(format) ||
TEST(dev, sampler, filtering))
- features |= XGL_FORMAT_IMAGE_SHADER_READ_BIT;
+ features |= VK_FORMAT_IMAGE_SHADER_READ_BIT;
}
if (TEST(dev, dp, typed_write))
- features |= XGL_FORMAT_IMAGE_SHADER_WRITE_BIT;
+ features |= VK_FORMAT_IMAGE_SHADER_WRITE_BIT;
if (TEST(dev, dp, rt_write)) {
- features |= XGL_FORMAT_COLOR_ATTACHMENT_WRITE_BIT;
+ features |= VK_FORMAT_COLOR_ATTACHMENT_WRITE_BIT;
if (TEST(dev, dp, rt_write_blending))
- features |= XGL_FORMAT_COLOR_ATTACHMENT_BLEND_BIT;
+ features |= VK_FORMAT_COLOR_ATTACHMENT_BLEND_BIT;
- if (features & XGL_FORMAT_IMAGE_SHADER_READ_BIT) {
- features |= XGL_FORMAT_IMAGE_COPY_BIT |
- XGL_FORMAT_CONVERSION_BIT;
+ if (features & VK_FORMAT_IMAGE_SHADER_READ_BIT) {
+ features |= VK_FORMAT_IMAGE_COPY_BIT |
+ VK_FORMAT_CONVERSION_BIT;
}
}
#undef TEST
return features;
}
-static XGL_FLAGS intel_format_get_ds_features(const struct intel_dev *dev,
- XGL_FORMAT format)
+static VK_FLAGS intel_format_get_ds_features(const struct intel_dev *dev,
+ VK_FORMAT format)
{
- XGL_FLAGS features;
+ VK_FLAGS features;
assert(icd_format_is_ds(format));
switch (format) {
- case XGL_FMT_S8_UINT:
- features = XGL_FORMAT_STENCIL_ATTACHMENT_BIT;;
+ case VK_FMT_S8_UINT:
+ features = VK_FORMAT_STENCIL_ATTACHMENT_BIT;
break;
- case XGL_FMT_D16_UNORM:
- case XGL_FMT_D24_UNORM:
- case XGL_FMT_D32_SFLOAT:
- features = XGL_FORMAT_DEPTH_ATTACHMENT_BIT;
+ case VK_FMT_D16_UNORM:
+ case VK_FMT_D24_UNORM:
+ case VK_FMT_D32_SFLOAT:
+ features = VK_FORMAT_DEPTH_ATTACHMENT_BIT;
break;
- case XGL_FMT_D16_UNORM_S8_UINT:
- case XGL_FMT_D24_UNORM_S8_UINT:
- case XGL_FMT_D32_SFLOAT_S8_UINT:
- features = XGL_FORMAT_DEPTH_ATTACHMENT_BIT |
- XGL_FORMAT_STENCIL_ATTACHMENT_BIT;
+ case VK_FMT_D16_UNORM_S8_UINT:
+ case VK_FMT_D24_UNORM_S8_UINT:
+ case VK_FMT_D32_SFLOAT_S8_UINT:
+ features = VK_FORMAT_DEPTH_ATTACHMENT_BIT |
+ VK_FORMAT_STENCIL_ATTACHMENT_BIT;
break;
default:
features = 0;
return features;
}
-static XGL_FLAGS intel_format_get_raw_features(const struct intel_dev *dev,
- XGL_FORMAT format)
+static VK_FLAGS intel_format_get_raw_features(const struct intel_dev *dev,
+ VK_FORMAT format)
{
- return (format == XGL_FMT_UNDEFINED) ?
- XGL_FORMAT_MEMORY_SHADER_ACCESS_BIT : 0;
+ return (format == VK_FMT_UNDEFINED) ?
+ VK_FORMAT_MEMORY_SHADER_ACCESS_BIT : 0;
}
static void intel_format_get_props(const struct intel_dev *dev,
- XGL_FORMAT format,
- XGL_FORMAT_PROPERTIES *props)
+ VK_FORMAT format,
+ VK_FORMAT_PROPERTIES *props)
{
if (icd_format_is_undef(format)) {
props->linearTilingFeatures =
}
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetFormatInfo(
- XGL_DEVICE device,
- XGL_FORMAT format,
- XGL_FORMAT_INFO_TYPE infoType,
+ICD_EXPORT VK_RESULT VKAPI vkGetFormatInfo(
+ VK_DEVICE device,
+ VK_FORMAT format,
+ VK_FORMAT_INFO_TYPE infoType,
size_t* pDataSize,
void* pData)
{
const struct intel_dev *dev = intel_dev(device);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
switch (infoType) {
- case XGL_INFO_TYPE_FORMAT_PROPERTIES:
- *pDataSize = sizeof(XGL_FORMAT_PROPERTIES);
+ case VK_INFO_TYPE_FORMAT_PROPERTIES:
+ *pDataSize = sizeof(VK_FORMAT_PROPERTIES);
if (pData == NULL)
return ret;
intel_format_get_props(dev, format, pData);
break;
default:
- ret = XGL_ERROR_INVALID_VALUE;
+ ret = VK_ERROR_INVALID_VALUE;
break;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct intel_gpu;
static inline bool intel_format_has_depth(const struct intel_gpu *gpu,
- XGL_FORMAT format)
+ VK_FORMAT format)
{
bool has_depth = false;
switch (format) {
- case XGL_FMT_D16_UNORM:
- case XGL_FMT_D24_UNORM:
- case XGL_FMT_D32_SFLOAT:
- /* XGL_FMT_D16_UNORM_S8_UINT is unsupported */
- case XGL_FMT_D24_UNORM_S8_UINT:
- case XGL_FMT_D32_SFLOAT_S8_UINT:
+ case VK_FMT_D16_UNORM:
+ case VK_FMT_D24_UNORM:
+ case VK_FMT_D32_SFLOAT:
+ /* VK_FMT_D16_UNORM_S8_UINT is unsupported */
+ case VK_FMT_D24_UNORM_S8_UINT:
+ case VK_FMT_D32_SFLOAT_S8_UINT:
has_depth = true;
break;
default:
}
int intel_format_translate_color(const struct intel_gpu *gpu,
- XGL_FORMAT format);
+ VK_FORMAT format);
#endif /* FORMAT_H */
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include "wsi.h"
static const char * const intel_gpu_exts[INTEL_EXT_COUNT] = {
- [INTEL_EXT_WSI_X11] = "XGL_WSI_X11",
+ [INTEL_EXT_WSI_X11] = "VK_WSI_X11",
};
static int gpu_open_primary_node(struct intel_gpu *gpu)
if (gpu->render_fd_internal < 0 && gpu->render_node) {
gpu->render_fd_internal = open(gpu->render_node, O_RDWR);
if (gpu->render_fd_internal < 0) {
- intel_log(gpu, XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0,
+ intel_log(gpu, VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0,
0, "failed to open %s", gpu->render_node);
}
}
return gen;
}
-XGL_RESULT intel_gpu_create(const struct intel_instance *instance, int devid,
+VK_RESULT intel_gpu_create(const struct intel_instance *instance, int devid,
const char *primary_node, const char *render_node,
struct intel_gpu **gpu_ret)
{
struct intel_gpu *gpu;
if (gen < 0) {
- intel_log(instance, XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0,
- XGL_NULL_HANDLE, 0, 0, "unsupported device id 0x%04x", devid);
- return XGL_ERROR_INITIALIZATION_FAILED;
+ intel_log(instance, VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0,
+ VK_NULL_HANDLE, 0, 0, "unsupported device id 0x%04x", devid);
+ return VK_ERROR_INITIALIZATION_FAILED;
}
- gpu = intel_alloc(instance, sizeof(*gpu), 0, XGL_SYSTEM_ALLOC_API_OBJECT);
+ gpu = intel_alloc(instance, sizeof(*gpu), 0, VK_SYSTEM_ALLOC_API_OBJECT);
if (!gpu)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
memset(gpu, 0, sizeof(*gpu));
- /* there is no XGL_DBG_OBJECT_GPU */
- intel_handle_init(&gpu->handle, XGL_DBG_OBJECT_UNKNOWN, instance->icd);
+ /* there is no VK_DBG_OBJECT_GPU */
+ intel_handle_init(&gpu->handle, VK_DBG_OBJECT_UNKNOWN, instance->icd);
gpu->devid = devid;
render_len = (render_node) ? strlen(render_node) : 0;
gpu->primary_node = intel_alloc(gpu, primary_len + 1 +
- ((render_len) ? (render_len + 1) : 0), 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ ((render_len) ? (render_len + 1) : 0), 0, VK_SYSTEM_ALLOC_INTERNAL);
if (!gpu->primary_node) {
intel_free(instance, gpu);
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
}
memcpy(gpu->primary_node, primary_node, primary_len + 1);
*gpu_ret = gpu;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_gpu_get_props(const struct intel_gpu *gpu,
- XGL_PHYSICAL_GPU_PROPERTIES *props)
+ VK_PHYSICAL_GPU_PROPERTIES *props)
{
const char *name;
size_t name_len;
props->vendorId = 0x8086;
props->deviceId = gpu->devid;
- props->gpuType = XGL_GPU_TYPE_INTEGRATED;
+ props->gpuType = VK_GPU_TYPE_INTEGRATED;
/* copy GPU name */
name = gpu_get_name(gpu);
}
void intel_gpu_get_perf(const struct intel_gpu *gpu,
- XGL_PHYSICAL_GPU_PERFORMANCE *perf)
+ VK_PHYSICAL_GPU_PERFORMANCE *perf)
{
/* TODO */
perf->maxGpuClock = 1.0f;
void intel_gpu_get_queue_props(const struct intel_gpu *gpu,
enum intel_gpu_engine_type engine,
- XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *props)
+ VK_PHYSICAL_GPU_QUEUE_PROPERTIES *props)
{
switch (engine) {
case INTEL_GPU_ENGINE_3D:
- props->queueFlags = XGL_QUEUE_GRAPHICS_BIT | XGL_QUEUE_COMPUTE_BIT;
+ props->queueFlags = VK_QUEUE_GRAPHICS_BIT | VK_QUEUE_COMPUTE_BIT;
props->queueCount = 1;
props->maxAtomicCounters = INTEL_QUEUE_ATOMIC_COUNTER_COUNT;
props->supportsTimestamps = true;
}
void intel_gpu_get_memory_props(const struct intel_gpu *gpu,
- XGL_PHYSICAL_GPU_MEMORY_PROPERTIES *props)
+ VK_PHYSICAL_GPU_MEMORY_PROPERTIES *props)
{
props->supportsMigration = false;
props->supportsPinning = true;
}
int intel_gpu_get_max_threads(const struct intel_gpu *gpu,
- XGL_PIPELINE_SHADER_STAGE stage)
+ VK_PIPELINE_SHADER_STAGE stage)
{
switch (intel_gpu_gen(gpu)) {
case INTEL_GEN(7.5):
switch (stage) {
- case XGL_SHADER_STAGE_VERTEX:
+ case VK_SHADER_STAGE_VERTEX:
return (gpu->gt >= 2) ? 280 : 70;
- case XGL_SHADER_STAGE_FRAGMENT:
+ case VK_SHADER_STAGE_FRAGMENT:
return (gpu->gt == 3) ? 408 :
(gpu->gt == 2) ? 204 : 102;
default:
break;
case INTEL_GEN(7):
switch (stage) {
- case XGL_SHADER_STAGE_VERTEX:
+ case VK_SHADER_STAGE_VERTEX:
return (gpu->gt == 2) ? 128 : 36;
- case XGL_SHADER_STAGE_FRAGMENT:
+ case VK_SHADER_STAGE_FRAGMENT:
return (gpu->gt == 2) ? 172 : 48;
default:
break;
break;
case INTEL_GEN(6):
switch (stage) {
- case XGL_SHADER_STAGE_VERTEX:
+ case VK_SHADER_STAGE_VERTEX:
return (gpu->gt == 2) ? 60 : 24;
- case XGL_SHADER_STAGE_FRAGMENT:
+ case VK_SHADER_STAGE_FRAGMENT:
return (gpu->gt == 2) ? 80 : 40;
default:
break;
break;
}
- intel_log(gpu, XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, XGL_NULL_HANDLE,
+ intel_log(gpu, VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, VK_NULL_HANDLE,
0, 0, "unknown Gen or shader stage");
switch (stage) {
- case XGL_SHADER_STAGE_VERTEX:
+ case VK_SHADER_STAGE_VERTEX:
return 1;
- case XGL_SHADER_STAGE_FRAGMENT:
+ case VK_SHADER_STAGE_FRAGMENT:
return 4;
default:
return 1;
return gpu_open_primary_node(gpu);
}
-XGL_RESULT intel_gpu_init_winsys(struct intel_gpu *gpu)
+VK_RESULT intel_gpu_init_winsys(struct intel_gpu *gpu)
{
int fd;
fd = gpu_open_render_node(gpu);
if (fd < 0)
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
gpu->winsys = intel_winsys_create_for_fd(gpu->handle.icd, fd);
if (!gpu->winsys) {
- intel_log(gpu, XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0,
- XGL_NULL_HANDLE, 0, 0, "failed to create GPU winsys");
+ intel_log(gpu, VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0,
+ VK_NULL_HANDLE, 0, 0, "failed to create GPU winsys");
gpu_close_render_node(gpu);
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_gpu_cleanup_winsys(struct intel_gpu *gpu)
return type;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(
- XGL_PHYSICAL_GPU gpu,
+ICD_EXPORT VK_RESULT VKAPI vkEnumerateLayers(
+ VK_PHYSICAL_GPU gpu,
size_t maxLayerCount,
size_t maxStringSize,
size_t* pOutLayerCount,
void* pReserved)
{
if (!pOutLayerCount)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
*pOutLayerCount = 0;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetGpuInfo(
- XGL_PHYSICAL_GPU gpu_,
- XGL_PHYSICAL_GPU_INFO_TYPE infoType,
+ICD_EXPORT VK_RESULT VKAPI vkGetGpuInfo(
+ VK_PHYSICAL_GPU gpu_,
+ VK_PHYSICAL_GPU_INFO_TYPE infoType,
size_t* pDataSize,
void* pData)
{
struct intel_gpu *gpu = intel_gpu(gpu_);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
switch (infoType) {
- case XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES:
- *pDataSize = sizeof(XGL_PHYSICAL_GPU_PROPERTIES);
+ case VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES:
+ *pDataSize = sizeof(VK_PHYSICAL_GPU_PROPERTIES);
if (pData == NULL) {
return ret;
}
intel_gpu_get_props(gpu, pData);
break;
- case XGL_INFO_TYPE_PHYSICAL_GPU_PERFORMANCE:
- *pDataSize = sizeof(XGL_PHYSICAL_GPU_PERFORMANCE);
+ case VK_INFO_TYPE_PHYSICAL_GPU_PERFORMANCE:
+ *pDataSize = sizeof(VK_PHYSICAL_GPU_PERFORMANCE);
if (pData == NULL) {
return ret;
}
intel_gpu_get_perf(gpu, pData);
break;
- case XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES:
+ case VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES:
/*
- * XGL Programmers guide, page 33:
+ * Vulkan Programmer's guide, page 33:
* to determine the data size an application calls
- * xglGetGpuInfo() with a NULL data pointer. The
+ * vkGetGpuInfo() with a NULL data pointer. The
* expected data size for all queue property structures
* is returned in pDataSize
*/
- *pDataSize = sizeof(XGL_PHYSICAL_GPU_QUEUE_PROPERTIES) *
+ *pDataSize = sizeof(VK_PHYSICAL_GPU_QUEUE_PROPERTIES) *
INTEL_GPU_ENGINE_COUNT;
if (pData != NULL) {
- XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *dst = pData;
+ VK_PHYSICAL_GPU_QUEUE_PROPERTIES *dst = pData;
int engine;
for (engine = 0; engine < INTEL_GPU_ENGINE_COUNT; engine++) {
}
break;
- case XGL_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES:
- *pDataSize = sizeof(XGL_PHYSICAL_GPU_MEMORY_PROPERTIES);
+ case VK_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES:
+ *pDataSize = sizeof(VK_PHYSICAL_GPU_MEMORY_PROPERTIES);
if (pData == NULL) {
return ret;
}
return ret;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(
- XGL_PHYSICAL_GPU gpu_,
+ICD_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(
+ VK_PHYSICAL_GPU gpu_,
const char* pExtName)
{
struct intel_gpu *gpu = intel_gpu(gpu_);
const enum intel_ext_type ext = intel_gpu_lookup_extension(gpu, pExtName);
return (ext != INTEL_EXT_INVALID) ?
- XGL_SUCCESS : XGL_ERROR_INVALID_EXTENSION;
+ VK_SUCCESS : VK_ERROR_INVALID_EXTENSION;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetMultiGpuCompatibility(
- XGL_PHYSICAL_GPU gpu0_,
- XGL_PHYSICAL_GPU gpu1_,
- XGL_GPU_COMPATIBILITY_INFO* pInfo)
+ICD_EXPORT VK_RESULT VKAPI vkGetMultiGpuCompatibility(
+ VK_PHYSICAL_GPU gpu0_,
+ VK_PHYSICAL_GPU gpu1_,
+ VK_GPU_COMPATIBILITY_INFO* pInfo)
{
const struct intel_gpu *gpu0 = intel_gpu(gpu0_);
const struct intel_gpu *gpu1 = intel_gpu(gpu1_);
- XGL_FLAGS compat = XGL_GPU_COMPAT_IQ_MATCH_BIT |
- XGL_GPU_COMPAT_PEER_TRANSFER_BIT |
- XGL_GPU_COMPAT_SHARED_MEMORY_BIT |
- XGL_GPU_COMPAT_SHARED_GPU0_DISPLAY_BIT |
- XGL_GPU_COMPAT_SHARED_GPU1_DISPLAY_BIT;
+ VK_FLAGS compat = VK_GPU_COMPAT_IQ_MATCH_BIT |
+ VK_GPU_COMPAT_PEER_TRANSFER_BIT |
+ VK_GPU_COMPAT_SHARED_MEMORY_BIT |
+ VK_GPU_COMPAT_SHARED_GPU0_DISPLAY_BIT |
+ VK_GPU_COMPAT_SHARED_GPU1_DISPLAY_BIT;
if (intel_gpu_gen(gpu0) == intel_gpu_gen(gpu1))
- compat |= XGL_GPU_COMPAT_ASIC_FEATURES_BIT;
+ compat |= VK_GPU_COMPAT_ASIC_FEATURES_BIT;
pInfo->compatibilityFlags = compat;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
int gen_opaque; /* always read this with intel_gpu_gen() */
int gt;
- XGL_GPU_SIZE max_batch_buffer_size;
+ VK_GPU_SIZE max_batch_buffer_size;
uint32_t batch_buffer_reloc_count;
/*
uint32_t display_count;
};
-static inline struct intel_gpu *intel_gpu(XGL_PHYSICAL_GPU gpu)
+static inline struct intel_gpu *intel_gpu(VK_PHYSICAL_GPU gpu)
{
return (struct intel_gpu *) gpu;
}
#endif
}
-XGL_RESULT intel_gpu_create(const struct intel_instance *instance, int devid,
+VK_RESULT intel_gpu_create(const struct intel_instance *instance, int devid,
const char *primary_node, const char *render_node,
struct intel_gpu **gpu_ret);
void intel_gpu_destroy(struct intel_gpu *gpu);
void intel_gpu_get_props(const struct intel_gpu *gpu,
- XGL_PHYSICAL_GPU_PROPERTIES *props);
+ VK_PHYSICAL_GPU_PROPERTIES *props);
void intel_gpu_get_perf(const struct intel_gpu *gpu,
- XGL_PHYSICAL_GPU_PERFORMANCE *perf);
+ VK_PHYSICAL_GPU_PERFORMANCE *perf);
void intel_gpu_get_queue_props(const struct intel_gpu *gpu,
enum intel_gpu_engine_type engine,
- XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *props);
+ VK_PHYSICAL_GPU_QUEUE_PROPERTIES *props);
void intel_gpu_get_memory_props(const struct intel_gpu *gpu,
- XGL_PHYSICAL_GPU_MEMORY_PROPERTIES *props);
+ VK_PHYSICAL_GPU_MEMORY_PROPERTIES *props);
int intel_gpu_get_max_threads(const struct intel_gpu *gpu,
- XGL_PIPELINE_SHADER_STAGE stage);
+ VK_PIPELINE_SHADER_STAGE stage);
int intel_gpu_get_primary_fd(struct intel_gpu *gpu);
-XGL_RESULT intel_gpu_init_winsys(struct intel_gpu *gpu);
+VK_RESULT intel_gpu_init_winsys(struct intel_gpu *gpu);
void intel_gpu_cleanup_winsys(struct intel_gpu *gpu);
enum intel_ext_type intel_gpu_lookup_extension(const struct intel_gpu *gpu,
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
intel_img_destroy(img);
}
-static XGL_RESULT img_get_info(struct intel_base *base, int type,
+static VK_RESULT img_get_info(struct intel_base *base, int type,
size_t *size, void *data)
{
struct intel_img *img = intel_img_from_base(base);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
switch (type) {
- case XGL_INFO_TYPE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_MEMORY_REQUIREMENTS:
{
- XGL_MEMORY_REQUIREMENTS *mem_req = data;
+ VK_MEMORY_REQUIREMENTS *mem_req = data;
- *size = sizeof(XGL_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
mem_req->size = img->total_size;
mem_req->alignment = 4096;
- if (img->format_class == XGL_IMAGE_FORMAT_CLASS_LINEAR) {
- mem_req->memType = XGL_MEMORY_TYPE_BUFFER;
+ if (img->format_class == VK_IMAGE_FORMAT_CLASS_LINEAR) {
+ mem_req->memType = VK_MEMORY_TYPE_BUFFER;
} else {
- mem_req->memType = XGL_MEMORY_TYPE_IMAGE;
+ mem_req->memType = VK_MEMORY_TYPE_IMAGE;
}
}
break;
- case XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS:
{
- XGL_IMAGE_MEMORY_REQUIREMENTS *img_req = data;
+ VK_IMAGE_MEMORY_REQUIREMENTS *img_req = data;
- *size = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
img_req->usage = img->usage;
img_req->samples = img->samples;
}
break;
- case XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
{
- XGL_BUFFER_MEMORY_REQUIREMENTS *buf_req = data;
+ VK_BUFFER_MEMORY_REQUIREMENTS *buf_req = data;
- *size = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
buf_req->usage = img->usage;
return ret;
}
-XGL_RESULT intel_img_create(struct intel_dev *dev,
- const XGL_IMAGE_CREATE_INFO *info,
+VK_RESULT intel_img_create(struct intel_dev *dev,
+ const VK_IMAGE_CREATE_INFO *info,
bool scanout,
struct intel_img **img_ret)
{
struct intel_layout *layout;
img = (struct intel_img *) intel_base_create(&dev->base.handle,
- sizeof(*img), dev->base.dbg, XGL_DBG_OBJECT_IMAGE, info, 0);
+ sizeof(*img), dev->base.dbg, VK_DBG_OBJECT_IMAGE, info, 0);
if (!img)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
layout = &img->layout;
img->mip_levels = info->mipLevels;
img->array_size = info->arraySize;
img->usage = info->usage;
- if (info->tiling == XGL_LINEAR_TILING)
- img->format_class = XGL_IMAGE_FORMAT_CLASS_LINEAR;
+ if (info->tiling == VK_LINEAR_TILING)
+ img->format_class = VK_IMAGE_FORMAT_CLASS_LINEAR;
else
img->format_class = icd_format_get_class(info->format);
img->samples = info->samples;
intel_layout_init(layout, dev, info, scanout);
if (layout->bo_stride > intel_max_resource_size / layout->bo_height) {
- intel_dev_log(dev, XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0,
- XGL_NULL_HANDLE, 0, 0, "image too big");
+ intel_dev_log(dev, VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0,
+ VK_NULL_HANDLE, 0, 0, "image too big");
intel_img_destroy(img);
- return XGL_ERROR_INVALID_MEMORY_SIZE;
+ return VK_ERROR_INVALID_MEMORY_SIZE;
}
img->total_size = img->layout.bo_stride * img->layout.bo_height;
}
if (layout->separate_stencil) {
- XGL_IMAGE_CREATE_INFO s8_info;
+ VK_IMAGE_CREATE_INFO s8_info;
img->s8_layout = intel_alloc(img, sizeof(*img->s8_layout), 0,
- XGL_SYSTEM_ALLOC_INTERNAL);
+ VK_SYSTEM_ALLOC_INTERNAL);
if (!img->s8_layout) {
intel_img_destroy(img);
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
}
s8_info = *info;
- s8_info.format = XGL_FMT_S8_UINT;
+ s8_info.format = VK_FMT_S8_UINT;
/* no stencil texturing */
- s8_info.usage &= ~XGL_IMAGE_USAGE_SHADER_ACCESS_READ_BIT;
+ s8_info.usage &= ~VK_IMAGE_USAGE_SHADER_ACCESS_READ_BIT;
assert(icd_format_is_ds(info->format));
intel_layout_init(img->s8_layout, dev, &s8_info, scanout);
}
if (scanout) {
- XGL_RESULT ret = intel_wsi_img_init(img);
- if (ret != XGL_SUCCESS) {
+ VK_RESULT ret = intel_wsi_img_init(img);
+ if (ret != VK_SUCCESS) {
intel_img_destroy(img);
return ret;
}
*img_ret = img;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_img_destroy(struct intel_img *img)
intel_base_destroy(&img->obj.base);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglOpenPeerImage(
- XGL_DEVICE device,
- const XGL_PEER_IMAGE_OPEN_INFO* pOpenInfo,
- XGL_IMAGE* pImage,
- XGL_GPU_MEMORY* pMem)
+ICD_EXPORT VK_RESULT VKAPI vkOpenPeerImage(
+ VK_DEVICE device,
+ const VK_PEER_IMAGE_OPEN_INFO* pOpenInfo,
+ VK_IMAGE* pImage,
+ VK_GPU_MEMORY* pMem)
{
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateImage(
- XGL_DEVICE device,
- const XGL_IMAGE_CREATE_INFO* pCreateInfo,
- XGL_IMAGE* pImage)
+ICD_EXPORT VK_RESULT VKAPI vkCreateImage(
+ VK_DEVICE device,
+ const VK_IMAGE_CREATE_INFO* pCreateInfo,
+ VK_IMAGE* pImage)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_img **) pImage);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetImageSubresourceInfo(
- XGL_IMAGE image,
- const XGL_IMAGE_SUBRESOURCE* pSubresource,
- XGL_SUBRESOURCE_INFO_TYPE infoType,
+ICD_EXPORT VK_RESULT VKAPI vkGetImageSubresourceInfo(
+ VK_IMAGE image,
+ const VK_IMAGE_SUBRESOURCE* pSubresource,
+ VK_SUBRESOURCE_INFO_TYPE infoType,
size_t* pDataSize,
void* pData)
{
const struct intel_img *img = intel_img(image);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
switch (infoType) {
- case XGL_INFO_TYPE_SUBRESOURCE_LAYOUT:
+ case VK_INFO_TYPE_SUBRESOURCE_LAYOUT:
{
- XGL_SUBRESOURCE_LAYOUT *layout = (XGL_SUBRESOURCE_LAYOUT *) pData;
+ VK_SUBRESOURCE_LAYOUT *layout = (VK_SUBRESOURCE_LAYOUT *) pData;
unsigned x, y;
intel_layout_get_slice_pos(&img->layout, pSubresource->mipLevel,
pSubresource->arraySlice, &x, &y);
intel_layout_pos_to_mem(&img->layout, x, y, &x, &y);
- *pDataSize = sizeof(XGL_SUBRESOURCE_LAYOUT);
+ *pDataSize = sizeof(VK_SUBRESOURCE_LAYOUT);
if (pData == NULL)
return ret;
}
break;
default:
- ret = XGL_ERROR_INVALID_VALUE;
+ ret = VK_ERROR_INVALID_VALUE;
break;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct intel_img {
struct intel_obj obj;
- XGL_IMAGE_TYPE type;
+ VK_IMAGE_TYPE type;
int32_t depth;
uint32_t mip_levels;
uint32_t array_size;
- XGL_FLAGS usage;
- XGL_IMAGE_FORMAT_CLASS format_class; // should this be integrated into intel_layout?
+ VK_FLAGS usage;
+ VK_IMAGE_FORMAT_CLASS format_class; // should this be integrated into intel_layout?
uint32_t samples;
struct intel_layout layout;
void *wsi_data;
};
-static inline struct intel_img *intel_img(XGL_IMAGE image)
+static inline struct intel_img *intel_img(VK_IMAGE image)
{
return (struct intel_img *) image;
}
return intel_img_from_base(&obj->base);
}
-XGL_RESULT intel_img_create(struct intel_dev *dev,
- const XGL_IMAGE_CREATE_INFO *info,
+VK_RESULT intel_img_create(struct intel_dev *dev,
+ const VK_IMAGE_CREATE_INFO *info,
bool scanout,
struct intel_img **img_ret);
/*
- * XGL 3-D graphics library
+ * Vulkan 3-D graphics library
*
* Copyright (C) 2014 LunarG, Inc.
*
icd_instance_destroy(icd);
}
-static struct intel_instance *intel_instance_create(const XGL_INSTANCE_CREATE_INFO* info)
+static struct intel_instance *intel_instance_create(const VK_INSTANCE_CREATE_INFO* info)
{
struct intel_instance *instance;
struct icd_instance *icd;
return NULL;
instance = icd_instance_alloc(icd, sizeof(*instance), 0,
- XGL_SYSTEM_ALLOC_API_OBJECT);
+ VK_SYSTEM_ALLOC_API_OBJECT);
if (!instance) {
icd_instance_destroy(icd);
return NULL;
}
memset(instance, 0, sizeof(*instance));
- intel_handle_init(&instance->handle, XGL_DBG_OBJECT_INSTANCE, icd);
+ intel_handle_init(&instance->handle, VK_DBG_OBJECT_INSTANCE, icd);
instance->icd = icd;
return instance;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateInstance(
- const XGL_INSTANCE_CREATE_INFO* pCreateInfo,
- XGL_INSTANCE* pInstance)
+ICD_EXPORT VK_RESULT VKAPI vkCreateInstance(
+ const VK_INSTANCE_CREATE_INFO* pCreateInfo,
+ VK_INSTANCE* pInstance)
{
struct intel_instance *instance;
instance = intel_instance_create(pCreateInfo);
if (!instance)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
- *pInstance = (XGL_INSTANCE) instance;
+ *pInstance = (VK_INSTANCE) instance;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDestroyInstance(
- XGL_INSTANCE pInstance)
+ICD_EXPORT VK_RESULT VKAPI vkDestroyInstance(
+ VK_INSTANCE pInstance)
{
struct intel_instance *instance = intel_instance(pInstance);
intel_instance_destroy(instance);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglEnumerateGpus(
- XGL_INSTANCE instance_,
+ICD_EXPORT VK_RESULT VKAPI vkEnumerateGpus(
+ VK_INSTANCE instance_,
uint32_t maxGpus,
uint32_t* pGpuCount,
- XGL_PHYSICAL_GPU* pGpus)
+ VK_PHYSICAL_GPU* pGpus)
{
struct intel_instance *instance = intel_instance(instance_);
struct icd_drm_device *devices, *dev;
- XGL_RESULT ret;
+ VK_RESULT ret;
uint32_t count;
intel_instance_remove_gpus(instance);
if (!maxGpus) {
*pGpuCount = 0;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
devices = icd_drm_enumerate(instance->icd, 0x8086);
devid = (intel_devid_override) ? intel_devid_override : dev->devid;
ret = intel_gpu_create(instance, devid,
primary_node, render_node, &gpu);
- if (ret == XGL_SUCCESS) {
+ if (ret == VK_SUCCESS) {
intel_instance_add_gpu(instance, gpu);
- pGpus[count++] = (XGL_PHYSICAL_GPU) gpu;
+ pGpus[count++] = (VK_PHYSICAL_GPU) gpu;
if (count >= maxGpus)
break;
}
*pGpuCount = count;
- return (count > 0) ? XGL_SUCCESS : XGL_ERROR_UNAVAILABLE;
+ return (count > 0) ? VK_SUCCESS : VK_ERROR_UNAVAILABLE;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(
- XGL_INSTANCE instance_,
- XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback,
+ICD_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(
+ VK_INSTANCE instance_,
+ VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback,
void* pUserData)
{
struct intel_instance *instance = intel_instance(instance_);
return icd_instance_add_logger(instance->icd, pfnMsgCallback, pUserData);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(
- XGL_INSTANCE instance_,
- XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
+ICD_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(
+ VK_INSTANCE instance_,
+ VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
{
struct intel_instance *instance = intel_instance(instance_);
return icd_instance_remove_logger(instance->icd, pfnMsgCallback);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetGlobalOption(
- XGL_INSTANCE instance_,
- XGL_DBG_GLOBAL_OPTION dbgOption,
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetGlobalOption(
+ VK_INSTANCE instance_,
+ VK_DBG_GLOBAL_OPTION dbgOption,
size_t dataSize,
const void* pData)
{
struct intel_instance *instance = intel_instance(instance_);
- XGL_RESULT res = XGL_SUCCESS;
+ VK_RESULT res = VK_SUCCESS;
if (dataSize == 0)
- return XGL_ERROR_INVALID_VALUE;
+ return VK_ERROR_INVALID_VALUE;
switch (dbgOption) {
- case XGL_DBG_OPTION_DEBUG_ECHO_ENABLE:
- case XGL_DBG_OPTION_BREAK_ON_ERROR:
- case XGL_DBG_OPTION_BREAK_ON_WARNING:
+ case VK_DBG_OPTION_DEBUG_ECHO_ENABLE:
+ case VK_DBG_OPTION_BREAK_ON_ERROR:
+ case VK_DBG_OPTION_BREAK_ON_WARNING:
res = icd_instance_set_bool(instance->icd, dbgOption,
*((const bool *) pData));
break;
default:
- res = XGL_ERROR_INVALID_VALUE;
+ res = VK_ERROR_INVALID_VALUE;
break;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2015 LunarG, Inc.
*
bool exts[INTEL_EXT_COUNT];
};
-static inline struct intel_instance *intel_instance(XGL_INSTANCE instance)
+static inline struct intel_instance *intel_instance(VK_INSTANCE instance)
{
return (struct intel_instance *) instance;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <string.h>
#include <assert.h>
-#include <xgl.h>
-#include <xglDbg.h>
-#include <xglWsiX11Ext.h>
-#include <xglIcd.h>
+#include <vulkan.h>
+#include <vkDbg.h>
+#include <vkWsiX11Ext.h>
+#include <vkIcd.h>
#include "icd.h"
#include "icd-spv.h"
#include "icd-instance.h"
#include "icd-utils.h"
-#define INTEL_API_VERSION XGL_API_VERSION
+#define INTEL_API_VERSION VK_API_VERSION
#define INTEL_DRIVER_VERSION 0
#define INTEL_GEN(gen) ((int) ((gen) * 100))
static const uint32_t intel_handle_magic = 0x494e544c;
static inline void intel_handle_init(struct intel_handle *handle,
- XGL_DBG_OBJECT_TYPE type,
+ VK_DBG_OBJECT_TYPE type,
const struct icd_instance *icd)
{
set_loader_magic_value(handle);
const uint32_t handle_type =
((const struct intel_handle *) handle)->magic - intel_handle_magic;
- return (handle_type <= XGL_DBG_OBJECT_TYPE_END_RANGE);
+ return (handle_type <= VK_DBG_OBJECT_TYPE_END_RANGE);
}
/**
* \see intel_handle_validate().
*/
static inline bool intel_handle_validate_type(const void *handle,
- XGL_DBG_OBJECT_TYPE type)
+ VK_DBG_OBJECT_TYPE type)
{
const uint32_t handle_type =
((const struct intel_handle *) handle)->magic - intel_handle_magic;
static inline void *intel_alloc(const void *handle,
size_t size, size_t alignment,
- XGL_SYSTEM_ALLOC_TYPE type)
+ VK_SYSTEM_ALLOC_TYPE type)
{
assert(intel_handle_validate(handle));
return icd_instance_alloc(((const struct intel_handle *) handle)->icd,
}
static inline void intel_logv(const void *handle,
- XGL_DBG_MSG_TYPE msg_type,
- XGL_VALIDATION_LEVEL validation_level,
- XGL_BASE_OBJECT src_object,
+ VK_DBG_MSG_TYPE msg_type,
+ VK_VALIDATION_LEVEL validation_level,
+ VK_BASE_OBJECT src_object,
size_t location, int32_t msg_code,
const char *format, va_list ap)
{
}
static inline void intel_log(const void *handle,
- XGL_DBG_MSG_TYPE msg_type,
- XGL_VALIDATION_LEVEL validation_level,
- XGL_BASE_OBJECT src_object,
+ VK_DBG_MSG_TYPE msg_type,
+ VK_VALIDATION_LEVEL validation_level,
+ VK_BASE_OBJECT src_object,
size_t location, int32_t msg_code,
const char *format, ...)
{
struct intel_winsys *winsys;
winsys = icd_instance_alloc(instance, sizeof(*winsys), 0,
- XGL_SYSTEM_ALLOC_INTERNAL);
+ VK_SYSTEM_ALLOC_INTERNAL);
if (!winsys)
return NULL;
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct intel_dev *dev;
const struct intel_gpu *gpu;
- const XGL_IMAGE_CREATE_INFO *info;
+ const VK_IMAGE_CREATE_INFO *info;
bool scanout;
bool compressed;
const struct intel_layout_params *params,
unsigned level, unsigned *width, unsigned *height)
{
- const XGL_IMAGE_CREATE_INFO *info = params->info;
+ const VK_IMAGE_CREATE_INFO *info = params->info;
unsigned w, h;
w = u_minify(layout->width0, level);
layout_get_num_layers(const struct intel_layout *layout,
const struct intel_layout_params *params)
{
- const XGL_IMAGE_CREATE_INFO *info = params->info;
+ const VK_IMAGE_CREATE_INFO *info = params->info;
unsigned num_layers = info->arraySize;
/* samples of the same index are stored in a layer */
layout_init_layer_height(struct intel_layout *layout,
struct intel_layout_params *params)
{
- const XGL_IMAGE_CREATE_INFO *info = params->info;
+ const VK_IMAGE_CREATE_INFO *info = params->info;
unsigned num_layers;
if (layout->walk != INTEL_LAYOUT_WALK_LAYER)
layout_init_lods(struct intel_layout *layout,
struct intel_layout_params *params)
{
- const XGL_IMAGE_CREATE_INFO *info = params->info;
+ const VK_IMAGE_CREATE_INFO *info = params->info;
unsigned cur_x, cur_y;
unsigned lv;
/* every LOD begins at tile boundaries */
if (info->mipLevels > 1) {
- assert(layout->format == XGL_FMT_S8_UINT);
+ assert(layout->format == VK_FMT_S8_UINT);
cur_x = u_align(cur_x, 64);
cur_y = u_align(cur_y, 64);
}
layout_init_alignments(struct intel_layout *layout,
struct intel_layout_params *params)
{
- const XGL_IMAGE_CREATE_INFO *info = params->info;
+ const VK_IMAGE_CREATE_INFO *info = params->info;
/*
* From the Sandy Bridge PRM, volume 1 part 1, page 113:
/* this happens to be the case */
layout->align_i = layout->block_width;
layout->align_j = layout->block_height;
- } else if (info->usage & XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT) {
+ } else if (info->usage & VK_IMAGE_USAGE_DEPTH_STENCIL_BIT) {
if (intel_gpu_gen(params->gpu) >= INTEL_GEN(7)) {
switch (layout->format) {
- case XGL_FMT_D16_UNORM:
+ case VK_FMT_D16_UNORM:
layout->align_i = 8;
layout->align_j = 4;
break;
- case XGL_FMT_S8_UINT:
+ case VK_FMT_S8_UINT:
layout->align_i = 8;
layout->align_j = 8;
break;
}
} else {
switch (layout->format) {
- case XGL_FMT_S8_UINT:
+ case VK_FMT_S8_UINT:
layout->align_i = 4;
layout->align_j = 2;
break;
(intel_gpu_gen(params->gpu) >= INTEL_GEN(8)) ||
(intel_gpu_gen(params->gpu) >= INTEL_GEN(7) &&
layout->tiling == GEN6_TILING_Y &&
- (info->usage & XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT));
+ (info->usage & VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT));
if (intel_gpu_gen(params->gpu) >= INTEL_GEN(7) &&
intel_gpu_gen(params->gpu) <= INTEL_GEN(7.5) && valign_4)
- assert(layout->format != XGL_FMT_R32G32B32_SFLOAT);
+ assert(layout->format != VK_FMT_R32G32B32_SFLOAT);
layout->align_i = 4;
layout->align_j = (valign_4) ? 4 : 2;
layout_get_valid_tilings(const struct intel_layout *layout,
const struct intel_layout_params *params)
{
- const XGL_IMAGE_CREATE_INFO *info = params->info;
- const XGL_FORMAT format = layout->format;
+ const VK_IMAGE_CREATE_INFO *info = params->info;
+ const VK_FORMAT format = layout->format;
unsigned valid_tilings = LAYOUT_TILING_ALL;
/*
if (params->scanout)
valid_tilings &= LAYOUT_TILING_X;
- if (info->tiling == XGL_LINEAR_TILING)
+ if (info->tiling == VK_LINEAR_TILING)
valid_tilings &= LAYOUT_TILING_NONE;
/*
*
* "W-Major Tile Format is used for separate stencil."
*/
- if (info->usage & XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT) {
+ if (info->usage & VK_IMAGE_USAGE_DEPTH_STENCIL_BIT) {
switch (format) {
- case XGL_FMT_S8_UINT:
+ case VK_FMT_S8_UINT:
valid_tilings &= LAYOUT_TILING_W;
break;
default:
}
}
- if (info->usage & XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT) {
+ if (info->usage & VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT) {
/*
* From the Sandy Bridge PRM, volume 1 part 2, page 32:
*
*/
if (intel_gpu_gen(params->gpu) >= INTEL_GEN(7) &&
intel_gpu_gen(params->gpu) <= INTEL_GEN(7.5) &&
- layout->format == XGL_FMT_R32G32B32_SFLOAT)
+ layout->format == VK_FMT_R32G32B32_SFLOAT)
valid_tilings &= ~LAYOUT_TILING_Y;
valid_tilings &= ~LAYOUT_TILING_W;
}
- if (info->usage & XGL_IMAGE_USAGE_SHADER_ACCESS_READ_BIT) {
+ if (info->usage & VK_IMAGE_USAGE_SHADER_ACCESS_READ_BIT) {
if (intel_gpu_gen(params->gpu) < INTEL_GEN(8))
valid_tilings &= ~LAYOUT_TILING_W;
}
layout_init_tiling(struct intel_layout *layout,
struct intel_layout_params *params)
{
- const XGL_IMAGE_CREATE_INFO *info = params->info;
+ const VK_IMAGE_CREATE_INFO *info = params->info;
unsigned preferred_tilings;
layout->valid_tilings = layout_get_valid_tilings(layout, params);
if (preferred_tilings & ~LAYOUT_TILING_W)
preferred_tilings &= ~LAYOUT_TILING_W;
- if (info->usage & (XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
- XGL_IMAGE_USAGE_SHADER_ACCESS_READ_BIT)) {
+ if (info->usage & (VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
+ VK_IMAGE_USAGE_SHADER_ACCESS_READ_BIT)) {
/*
* heuristically set a minimum width/height for enabling tiling
*/
layout_init_walk_gen7(struct intel_layout *layout,
struct intel_layout_params *params)
{
- const XGL_IMAGE_CREATE_INFO *info = params->info;
+ const VK_IMAGE_CREATE_INFO *info = params->info;
/*
 * It is not explicitly stated, but render targets are expected to be
*
* See "Multisampled Surface Storage Format" field of SURFACE_STATE.
*/
- if (info->usage & XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT) {
+ if (info->usage & VK_IMAGE_USAGE_DEPTH_STENCIL_BIT) {
/*
* From the Ivy Bridge PRM, volume 1 part 1, page 111:
*
* "note that the depth buffer and stencil buffer have an implied
* value of ARYSPC_FULL"
*/
- layout->walk = (info->imageType == XGL_IMAGE_3D) ?
+ layout->walk = (info->imageType == VK_IMAGE_3D) ?
INTEL_LAYOUT_WALK_3D : INTEL_LAYOUT_WALK_LAYER;
layout->interleaved_samples = true;
assert(info->mipLevels == 1);
layout->walk =
- (info->imageType == XGL_IMAGE_3D) ? INTEL_LAYOUT_WALK_3D :
+ (info->imageType == VK_IMAGE_3D) ? INTEL_LAYOUT_WALK_3D :
(info->mipLevels > 1) ? INTEL_LAYOUT_WALK_LAYER :
INTEL_LAYOUT_WALK_LOD;
* GEN6 does not support compact spacing otherwise.
*/
layout->walk =
- (params->info->imageType == XGL_IMAGE_3D) ? INTEL_LAYOUT_WALK_3D :
- (layout->format == XGL_FMT_S8_UINT) ? INTEL_LAYOUT_WALK_LOD :
+ (params->info->imageType == VK_IMAGE_3D) ? INTEL_LAYOUT_WALK_3D :
+ (layout->format == VK_FMT_S8_UINT) ? INTEL_LAYOUT_WALK_LOD :
INTEL_LAYOUT_WALK_LAYER;
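The ternary chain above is a priority selection: 3D images force a 3D walk, S8 stencil forces a per-LOD walk, and everything else walks array layers. A minimal sketch of the same chain, with illustrative enum names in place of the driver's:

```c
#include <assert.h>

enum walk_type { WALK_LOD, WALK_LAYER, WALK_3D };
enum img_type { IMG_1D, IMG_2D, IMG_3D };

/* GEN6-style priority chain: 3D images walk slices, S8 stencil walks
 * LODs, and everything else walks array layers. */
static enum walk_type choose_walk_gen6(enum img_type type, int is_s8_uint)
{
    return (type == IMG_3D) ? WALK_3D :
           is_s8_uint       ? WALK_LOD :
                              WALK_LAYER;
}
```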
/* GEN6 supports only interleaved samples */
layout_init_size_and_format(struct intel_layout *layout,
struct intel_layout_params *params)
{
- const XGL_IMAGE_CREATE_INFO *info = params->info;
- XGL_FORMAT format = info->format;
+ const VK_IMAGE_CREATE_INFO *info = params->info;
+ VK_FORMAT format = info->format;
bool require_separate_stencil = false;
layout->width0 = info->extent.width;
*
* GEN7+ requires separate stencil buffers.
*/
- if (info->usage & XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT) {
+ if (info->usage & VK_IMAGE_USAGE_DEPTH_STENCIL_BIT) {
if (intel_gpu_gen(params->gpu) >= INTEL_GEN(7))
require_separate_stencil = true;
else
}
switch (format) {
- case XGL_FMT_D24_UNORM_S8_UINT:
+ case VK_FMT_D24_UNORM_S8_UINT:
if (require_separate_stencil) {
- format = XGL_FMT_D24_UNORM;
+ format = VK_FMT_D24_UNORM;
layout->separate_stencil = true;
}
break;
- case XGL_FMT_D32_SFLOAT_S8_UINT:
+ case VK_FMT_D32_SFLOAT_S8_UINT:
if (require_separate_stencil) {
- format = XGL_FMT_D32_SFLOAT;
+ format = VK_FMT_D32_SFLOAT;
layout->separate_stencil = true;
}
break;
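The switch above strips the stencil aspect from a combined depth/stencil format whenever the hardware requires separate stencil. A self-contained sketch of that splitting, using made-up format codes in place of `VK_FMT_*`:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative format codes standing in for VK_FMT_*. */
enum fmt {
    FMT_D24_UNORM_S8_UINT,
    FMT_D32_SFLOAT_S8_UINT,
    FMT_D24_UNORM,
    FMT_D32_SFLOAT,
};

/* Strip the stencil aspect from a combined format when the hardware
 * (GEN7+) requires a separate stencil buffer. */
static enum fmt split_depth_stencil(enum fmt f, int require_separate,
                                    int *separate_stencil)
{
    int separate = 0;
    if (require_separate) {
        switch (f) {
        case FMT_D24_UNORM_S8_UINT:  f = FMT_D24_UNORM;  separate = 1; break;
        case FMT_D32_SFLOAT_S8_UINT: f = FMT_D32_SFLOAT; separate = 1; break;
        default: break;
        }
    }
    if (separate_stencil)
        *separate_stencil = separate;
    return f;
}
```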
layout_want_mcs(struct intel_layout *layout,
struct intel_layout_params *params)
{
- const XGL_IMAGE_CREATE_INFO *info = params->info;
+ const VK_IMAGE_CREATE_INFO *info = params->info;
bool want_mcs = false;
/* MCS is for RT on GEN7+ */
if (intel_gpu_gen(params->gpu) < INTEL_GEN(7))
return false;
- if (info->imageType != XGL_IMAGE_2D ||
- !(info->usage & XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT))
+ if (info->imageType != VK_IMAGE_2D ||
+ !(info->usage & VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT))
return false;
/*
layout_want_hiz(const struct intel_layout *layout,
const struct intel_layout_params *params)
{
- const XGL_IMAGE_CREATE_INFO *info = params->info;
+ const VK_IMAGE_CREATE_INFO *info = params->info;
if (intel_debug & INTEL_DEBUG_NOHIZ)
return false;
- if (!(info->usage & XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT))
+ if (!(info->usage & VK_IMAGE_USAGE_DEPTH_STENCIL_BIT))
return false;
if (!intel_format_has_depth(params->gpu, info->format))
static void
layout_align(struct intel_layout *layout, struct intel_layout_params *params)
{
- const XGL_IMAGE_CREATE_INFO *info = params->info;
+ const VK_IMAGE_CREATE_INFO *info = params->info;
int align_w = 1, align_h = 1, pad_h = 0;
/*
* padding purposes. The value of 4 for j still applies for mip level
* alignment and QPitch calculation."
*/
- if (info->usage & XGL_IMAGE_USAGE_SHADER_ACCESS_READ_BIT) {
+ if (info->usage & VK_IMAGE_USAGE_SHADER_ACCESS_READ_BIT) {
if (align_w < layout->align_i)
align_w = layout->align_i;
if (align_h < layout->align_j)
align_h = layout->align_j;
/* in case it is used as a cube */
- if (info->imageType == XGL_IMAGE_2D)
+ if (info->imageType == VK_IMAGE_2D)
pad_h += 2;
if (params->compressed && align_h < layout->align_j * 2)
* "If the surface contains an odd number of rows of data, a final row
* below the surface must be allocated."
*/
- if ((info->usage & XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT) && align_h < 2)
+ if ((info->usage & VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT) && align_h < 2)
align_h = 2;
/*
* required above."
*/
if (intel_gpu_gen(params->gpu) >= INTEL_GEN(7.5) &&
- (params->info->usage & XGL_IMAGE_USAGE_SHADER_ACCESS_READ_BIT) &&
+ (params->info->usage & VK_IMAGE_USAGE_SHADER_ACCESS_READ_BIT) &&
layout->tiling == GEN6_TILING_NONE)
h += (64 + layout->bo_stride - 1) / layout->bo_stride;
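The padding expression above reserves 64 extra bytes past the surface for untiled sampling on GEN7.5+; at a given stride that costs ceil(64 / stride) rows, computed with the usual round-up-division idiom:

```c
#include <assert.h>

/* Rows needed to cover 64 extra bytes at a given stride:
 * ceil(64 / bo_stride) via (64 + stride - 1) / stride. */
static unsigned pad_rows(unsigned bo_stride)
{
    return (64 + bo_stride - 1) / bo_stride;
}
```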
layout_calculate_hiz_size(struct intel_layout *layout,
struct intel_layout_params *params)
{
- const XGL_IMAGE_CREATE_INFO *info = params->info;
+ const VK_IMAGE_CREATE_INFO *info = params->info;
const unsigned hz_align_j = 8;
enum intel_layout_walk_type hz_walk;
unsigned hz_width, hz_height, lv;
layout_calculate_mcs_size(struct intel_layout *layout,
struct intel_layout_params *params)
{
- const XGL_IMAGE_CREATE_INFO *info = params->info;
+ const VK_IMAGE_CREATE_INFO *info = params->info;
int mcs_width, mcs_height, mcs_cpp;
int downscale_x, downscale_y;
*/
void intel_layout_init(struct intel_layout *layout,
struct intel_dev *dev,
- const XGL_IMAGE_CREATE_INFO *info,
+ const VK_IMAGE_CREATE_INFO *info,
bool scanout)
{
struct intel_layout_params params;
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
/* physical width0, height0, and format */
unsigned width0;
unsigned height0;
- XGL_FORMAT format;
+ VK_FORMAT format;
bool separate_stencil;
/*
void intel_layout_init(struct intel_layout *layout,
struct intel_dev *dev,
- const XGL_IMAGE_CREATE_INFO *info,
+ const VK_IMAGE_CREATE_INFO *info,
bool scanout);
/**
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include "dev.h"
#include "mem.h"
-XGL_RESULT intel_mem_alloc(struct intel_dev *dev,
- const XGL_MEMORY_ALLOC_INFO *info,
+VK_RESULT intel_mem_alloc(struct intel_dev *dev,
+ const VK_MEMORY_ALLOC_INFO *info,
struct intel_mem **mem_ret)
{
struct intel_mem *mem;
/* ignore any IMAGE_INFO and BUFFER_INFO usage: they don't alter allocations */
mem = (struct intel_mem *) intel_base_create(&dev->base.handle,
- sizeof(*mem), dev->base.dbg, XGL_DBG_OBJECT_GPU_MEMORY, info, 0);
+ sizeof(*mem), dev->base.dbg, VK_DBG_OBJECT_GPU_MEMORY, info, 0);
if (!mem)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
mem->bo = intel_winsys_alloc_bo(dev->winsys,
- "xgl-gpu-memory", info->allocationSize, 0);
+ "vk-gpu-memory", info->allocationSize, 0);
if (!mem->bo) {
intel_mem_free(mem);
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
}
mem->size = info->allocationSize;
*mem_ret = mem;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_mem_free(struct intel_mem *mem)
intel_base_destroy(&mem->base);
}
-XGL_RESULT intel_mem_import_userptr(struct intel_dev *dev,
+VK_RESULT intel_mem_import_userptr(struct intel_dev *dev,
const void *userptr,
size_t size,
struct intel_mem **mem_ret)
struct intel_mem *mem;
if ((uintptr_t) userptr % alignment || size % alignment)
- return XGL_ERROR_INVALID_ALIGNMENT;
+ return VK_ERROR_INVALID_ALIGNMENT;
mem = (struct intel_mem *) intel_base_create(&dev->base.handle,
- sizeof(*mem), dev->base.dbg, XGL_DBG_OBJECT_GPU_MEMORY, NULL, 0);
+ sizeof(*mem), dev->base.dbg, VK_DBG_OBJECT_GPU_MEMORY, NULL, 0);
if (!mem)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
mem->bo = intel_winsys_import_userptr(dev->winsys,
- "xgl-gpu-memory-userptr", (void *) userptr, size, 0);
+ "vk-gpu-memory-userptr", (void *) userptr, size, 0);
if (!mem->bo) {
intel_mem_free(mem);
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
}
mem->size = size;
*mem_ret = mem;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
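intel_mem_import_userptr above rejects imports whose pointer or size is not a multiple of the required alignment. The check can be sketched on its own (the 4096 in the test is illustrative; the driver uses the winsys-reported alignment):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userptr imports must have an aligned address and an aligned size;
 * returns nonzero when both hold. */
static int userptr_ok(const void *ptr, size_t size, size_t alignment)
{
    return ((uintptr_t) ptr % alignment == 0) && (size % alignment == 0);
}
```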
-XGL_RESULT intel_mem_set_priority(struct intel_mem *mem,
- XGL_MEMORY_PRIORITY priority)
+VK_RESULT intel_mem_set_priority(struct intel_mem *mem,
+ VK_MEMORY_PRIORITY priority)
{
- /* pin the bo when XGL_MEMORY_PRIORITY_VERY_HIGH? */
- return XGL_SUCCESS;
+ /* pin the bo when VK_MEMORY_PRIORITY_VERY_HIGH? */
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglAllocMemory(
- XGL_DEVICE device,
- const XGL_MEMORY_ALLOC_INFO* pAllocInfo,
- XGL_GPU_MEMORY* pMem)
+ICD_EXPORT VK_RESULT VKAPI vkAllocMemory(
+ VK_DEVICE device,
+ const VK_MEMORY_ALLOC_INFO* pAllocInfo,
+ VK_GPU_MEMORY* pMem)
{
struct intel_dev *dev = intel_dev(device);
return intel_mem_alloc(dev, pAllocInfo, (struct intel_mem **) pMem);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglFreeMemory(
- XGL_GPU_MEMORY mem_)
+ICD_EXPORT VK_RESULT VKAPI vkFreeMemory(
+ VK_GPU_MEMORY mem_)
{
struct intel_mem *mem = intel_mem(mem_);
intel_mem_free(mem);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglSetMemoryPriority(
- XGL_GPU_MEMORY mem_,
- XGL_MEMORY_PRIORITY priority)
+ICD_EXPORT VK_RESULT VKAPI vkSetMemoryPriority(
+ VK_GPU_MEMORY mem_,
+ VK_MEMORY_PRIORITY priority)
{
struct intel_mem *mem = intel_mem(mem_);
return intel_mem_set_priority(mem, priority);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglMapMemory(
- XGL_GPU_MEMORY mem_,
- XGL_FLAGS flags,
+ICD_EXPORT VK_RESULT VKAPI vkMapMemory(
+ VK_GPU_MEMORY mem_,
+ VK_FLAGS flags,
void** ppData)
{
struct intel_mem *mem = intel_mem(mem_);
*ppData = ptr;
- return (ptr) ? XGL_SUCCESS : XGL_ERROR_UNKNOWN;
+ return (ptr) ? VK_SUCCESS : VK_ERROR_UNKNOWN;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglUnmapMemory(
- XGL_GPU_MEMORY mem_)
+ICD_EXPORT VK_RESULT VKAPI vkUnmapMemory(
+ VK_GPU_MEMORY mem_)
{
struct intel_mem *mem = intel_mem(mem_);
intel_mem_unmap(mem);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglPinSystemMemory(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkPinSystemMemory(
+ VK_DEVICE device,
const void* pSysMem,
size_t memSize,
- XGL_GPU_MEMORY* pMem)
+ VK_GPU_MEMORY* pMem)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_mem **) pMem);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglOpenSharedMemory(
- XGL_DEVICE device,
- const XGL_MEMORY_OPEN_INFO* pOpenInfo,
- XGL_GPU_MEMORY* pMem)
+ICD_EXPORT VK_RESULT VKAPI vkOpenSharedMemory(
+ VK_DEVICE device,
+ const VK_MEMORY_OPEN_INFO* pOpenInfo,
+ VK_GPU_MEMORY* pMem)
{
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglOpenPeerMemory(
- XGL_DEVICE device,
- const XGL_PEER_MEMORY_OPEN_INFO* pOpenInfo,
- XGL_GPU_MEMORY* pMem)
+ICD_EXPORT VK_RESULT VKAPI vkOpenPeerMemory(
+ VK_DEVICE device,
+ const VK_PEER_MEMORY_OPEN_INFO* pOpenInfo,
+ VK_GPU_MEMORY* pMem)
{
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct intel_base base;
struct intel_bo *bo;
- XGL_GPU_SIZE size;
+ VK_GPU_SIZE size;
};
-XGL_RESULT intel_mem_alloc(struct intel_dev *dev,
- const XGL_MEMORY_ALLOC_INFO *info,
+VK_RESULT intel_mem_alloc(struct intel_dev *dev,
+ const VK_MEMORY_ALLOC_INFO *info,
struct intel_mem **mem_ret);
void intel_mem_free(struct intel_mem *mem);
-XGL_RESULT intel_mem_import_userptr(struct intel_dev *dev,
+VK_RESULT intel_mem_import_userptr(struct intel_dev *dev,
const void *userptr,
size_t size,
struct intel_mem **mem_ret);
-XGL_RESULT intel_mem_set_priority(struct intel_mem *mem,
- XGL_MEMORY_PRIORITY priority);
+VK_RESULT intel_mem_set_priority(struct intel_mem *mem,
+ VK_MEMORY_PRIORITY priority);
-static inline void *intel_mem_map(struct intel_mem *mem, XGL_FLAGS flags)
+static inline void *intel_mem_map(struct intel_mem *mem, VK_FLAGS flags)
{
return intel_bo_map_async(mem->bo);
}
return intel_bo_is_busy(mem->bo);
}
-static inline struct intel_mem *intel_mem(XGL_GPU_MEMORY mem)
+static inline struct intel_mem *intel_mem(VK_GPU_MEMORY mem)
{
return (struct intel_mem *) mem;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include "mem.h"
#include "obj.h"
-XGL_RESULT intel_base_get_info(struct intel_base *base, int type,
+VK_RESULT intel_base_get_info(struct intel_base *base, int type,
size_t *size, void *data)
{
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
size_t s;
uint32_t *count;
switch (type) {
- case XGL_INFO_TYPE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_MEMORY_REQUIREMENTS:
{
- XGL_MEMORY_REQUIREMENTS *mem_req = data;
- s = sizeof(XGL_MEMORY_REQUIREMENTS);
+ VK_MEMORY_REQUIREMENTS *mem_req = data;
+ s = sizeof(VK_MEMORY_REQUIREMENTS);
*size = s;
if (data == NULL)
return ret;
memset(data, 0, s);
- mem_req->memType = XGL_MEMORY_TYPE_OTHER;
+ mem_req->memType = VK_MEMORY_TYPE_OTHER;
break;
}
- case XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT:
+ case VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT:
*size = sizeof(uint32_t);
if (data == NULL)
return ret;
count = (uint32_t *) data;
*count = 1;
break;
- case XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS:
- s = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
+ case VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS:
+ s = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
*size = s;
if (data == NULL)
return ret;
memset(data, 0, s);
break;
- case XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
- s = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
+ case VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
+ s = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
*size = s;
if (data == NULL)
return ret;
memset(data, 0, s);
break;
default:
- ret = XGL_ERROR_INVALID_VALUE;
+ ret = VK_ERROR_INVALID_VALUE;
break;
}
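Every case in intel_base_get_info follows the same two-call convention: when `data` is NULL only `*size` is written, and the caller invokes the function again with a buffer of that size. A minimal sketch of the convention (function names and the payload size are made up):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* data == NULL: report the required size only; otherwise fill the
 * payload. PAYLOAD_SIZE is illustrative. */
#define PAYLOAD_SIZE (4 * sizeof(int))

static int get_info_sketch(size_t *size, void *data)
{
    *size = PAYLOAD_SIZE;
    if (data == NULL)
        return 0;                /* size-only query */
    memset(data, 0, PAYLOAD_SIZE);
    return 0;
}

/* Typical caller's first call: discover the size before allocating. */
static size_t query_payload_size(void)
{
    size_t size = 0;
    get_info_sketch(&size, NULL);
    return size;
}
```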
const union {
const void *ptr;
const struct {
- XGL_STRUCTURE_TYPE struct_type;
+ VK_STRUCTURE_TYPE struct_type;
void *next;
} *header;
} info = { .ptr = create_info };
return true;
switch (dbg->type) {
- case XGL_DBG_OBJECT_DEVICE:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO);
+ case VK_DBG_OBJECT_DEVICE:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_GPU_MEMORY:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO);
+ case VK_DBG_OBJECT_GPU_MEMORY:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO);
break;
- case XGL_DBG_OBJECT_EVENT:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_EVENT_CREATE_INFO);
- shallow_copy = sizeof(XGL_EVENT_CREATE_INFO);
+ case VK_DBG_OBJECT_EVENT:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_EVENT_CREATE_INFO);
+ shallow_copy = sizeof(VK_EVENT_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_FENCE:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO);
- shallow_copy = sizeof(XGL_FENCE_CREATE_INFO);
+ case VK_DBG_OBJECT_FENCE:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_FENCE_CREATE_INFO);
+ shallow_copy = sizeof(VK_FENCE_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_QUERY_POOL:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO);
- shallow_copy = sizeof(XGL_QUERY_POOL_CREATE_INFO);
+ case VK_DBG_OBJECT_QUERY_POOL:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO);
+ shallow_copy = sizeof(VK_QUERY_POOL_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_BUFFER:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_BUFFER_CREATE_INFO);
- shallow_copy = sizeof(XGL_BUFFER_CREATE_INFO);
+ case VK_DBG_OBJECT_BUFFER:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO);
+ shallow_copy = sizeof(VK_BUFFER_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_BUFFER_VIEW:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO);
- shallow_copy = sizeof(XGL_BUFFER_VIEW_CREATE_INFO);
+ case VK_DBG_OBJECT_BUFFER_VIEW:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO);
+ shallow_copy = sizeof(VK_BUFFER_VIEW_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_IMAGE:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO);
- shallow_copy = sizeof(XGL_IMAGE_CREATE_INFO);
+ case VK_DBG_OBJECT_IMAGE:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO);
+ shallow_copy = sizeof(VK_IMAGE_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_IMAGE_VIEW:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO);
- shallow_copy = sizeof(XGL_IMAGE_VIEW_CREATE_INFO);
+ case VK_DBG_OBJECT_IMAGE_VIEW:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO);
+ shallow_copy = sizeof(VK_IMAGE_VIEW_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_COLOR_TARGET_VIEW:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO);
- shallow_copy = sizeof(XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO);
+ case VK_DBG_OBJECT_COLOR_TARGET_VIEW:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO);
+ shallow_copy = sizeof(VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_DEPTH_STENCIL_VIEW:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO);
- shallow_copy = sizeof(XGL_DEPTH_STENCIL_VIEW_CREATE_INFO);
+ case VK_DBG_OBJECT_DEPTH_STENCIL_VIEW:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO);
+ shallow_copy = sizeof(VK_DEPTH_STENCIL_VIEW_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_SAMPLER:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_SAMPLER_CREATE_INFO);
- shallow_copy = sizeof(XGL_SAMPLER_CREATE_INFO);
+ case VK_DBG_OBJECT_SAMPLER:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO);
+ shallow_copy = sizeof(VK_SAMPLER_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_DESCRIPTOR_SET:
+ case VK_DBG_OBJECT_DESCRIPTOR_SET:
/* no create info */
break;
- case XGL_DBG_OBJECT_VIEWPORT_STATE:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO);
- shallow_copy = sizeof(XGL_DYNAMIC_VP_STATE_CREATE_INFO);
+ case VK_DBG_OBJECT_VIEWPORT_STATE:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO);
+ shallow_copy = sizeof(VK_DYNAMIC_VP_STATE_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_RASTER_STATE:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO);
- shallow_copy = sizeof(XGL_DYNAMIC_RS_STATE_CREATE_INFO);
+ case VK_DBG_OBJECT_RASTER_STATE:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO);
+ shallow_copy = sizeof(VK_DYNAMIC_RS_STATE_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_COLOR_BLEND_STATE:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO);
- shallow_copy = sizeof(XGL_DYNAMIC_CB_STATE_CREATE_INFO);
+ case VK_DBG_OBJECT_COLOR_BLEND_STATE:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO);
+ shallow_copy = sizeof(VK_DYNAMIC_CB_STATE_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_DEPTH_STENCIL_STATE:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO);
- shallow_copy = sizeof(XGL_DYNAMIC_DS_STATE_CREATE_INFO);
+ case VK_DBG_OBJECT_DEPTH_STENCIL_STATE:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO);
+ shallow_copy = sizeof(VK_DYNAMIC_DS_STATE_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_CMD_BUFFER:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO);
- shallow_copy = sizeof(XGL_CMD_BUFFER_CREATE_INFO);
+ case VK_DBG_OBJECT_CMD_BUFFER:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO);
+ shallow_copy = sizeof(VK_CMD_BUFFER_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_GRAPHICS_PIPELINE:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO);
+ case VK_DBG_OBJECT_GRAPHICS_PIPELINE:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_SHADER:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_SHADER_CREATE_INFO);
- shallow_copy = sizeof(XGL_SHADER_CREATE_INFO);
+ case VK_DBG_OBJECT_SHADER:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_SHADER_CREATE_INFO);
+ shallow_copy = sizeof(VK_SHADER_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_FRAMEBUFFER:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO);
- shallow_copy = sizeof(XGL_FRAMEBUFFER_CREATE_INFO);
+ case VK_DBG_OBJECT_FRAMEBUFFER:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO);
+ shallow_copy = sizeof(VK_FRAMEBUFFER_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_RENDER_PASS:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO);
- shallow_copy = sizeof(XGL_RENDER_PASS_CREATE_INFO);
+ case VK_DBG_OBJECT_RENDER_PASS:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO);
+ shallow_copy = sizeof(VK_RENDER_PASS_CREATE_INFO);
break;
- case XGL_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO);
+ case VK_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO);
/* TODO */
- shallow_copy = sizeof(XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO) * 0;
+ shallow_copy = sizeof(VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO) * 0;
break;
- case XGL_DBG_OBJECT_DESCRIPTOR_POOL:
- assert(info.header->struct_type == XGL_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO);
- shallow_copy = sizeof(XGL_DESCRIPTOR_POOL_CREATE_INFO);
+ case VK_DBG_OBJECT_DESCRIPTOR_POOL:
+ assert(info.header->struct_type == VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO);
+ shallow_copy = sizeof(VK_DESCRIPTOR_POOL_CREATE_INFO);
break;
default:
assert(!"unknown dbg object type");
if (shallow_copy) {
dbg->create_info = intel_alloc(handle, shallow_copy, 0,
- XGL_SYSTEM_ALLOC_DEBUG);
+ VK_SYSTEM_ALLOC_DEBUG);
if (!dbg->create_info)
return false;
memcpy(dbg->create_info, create_info, shallow_copy);
dbg->create_info_size = shallow_copy;
} else if (info.header->struct_type ==
- XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO) {
+ VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO) {
size_t size;
- const XGL_MEMORY_ALLOC_INFO *ptr_next, *src = info.ptr;
- XGL_MEMORY_ALLOC_INFO *dst;
+ const VK_MEMORY_ALLOC_INFO *ptr_next, *src = info.ptr;
+ VK_MEMORY_ALLOC_INFO *dst;
uint8_t *d;
size = sizeof(*src);
ptr_next = src->pNext;
while (ptr_next != NULL) {
switch (ptr_next->sType) {
- case XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO:
- size += sizeof(XGL_MEMORY_ALLOC_IMAGE_INFO);
+ case VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO:
+ size += sizeof(VK_MEMORY_ALLOC_IMAGE_INFO);
break;
- case XGL_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO:
- size += sizeof(XGL_MEMORY_ALLOC_BUFFER_INFO);
+ case VK_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO:
+ size += sizeof(VK_MEMORY_ALLOC_BUFFER_INFO);
break;
default:
return false;
}
- ptr_next = (XGL_MEMORY_ALLOC_INFO *) ptr_next->pNext;
+ ptr_next = (VK_MEMORY_ALLOC_INFO *) ptr_next->pNext;
}
dbg->create_info_size = size;
- dst = intel_alloc(handle, size, 0, XGL_SYSTEM_ALLOC_DEBUG);
+ dst = intel_alloc(handle, size, 0, VK_SYSTEM_ALLOC_DEBUG);
if (!dst)
return false;
memcpy(dst, src, sizeof(*src));
d += sizeof(*src);
while (ptr_next != NULL) {
switch (ptr_next->sType) {
- case XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO:
- memcpy(d, ptr_next, sizeof(XGL_MEMORY_ALLOC_IMAGE_INFO));
- d += sizeof(XGL_MEMORY_ALLOC_IMAGE_INFO);
+ case VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO:
+ memcpy(d, ptr_next, sizeof(VK_MEMORY_ALLOC_IMAGE_INFO));
+ d += sizeof(VK_MEMORY_ALLOC_IMAGE_INFO);
break;
- case XGL_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO:
- memcpy(d, ptr_next, sizeof(XGL_MEMORY_ALLOC_BUFFER_INFO));
- d += sizeof(XGL_MEMORY_ALLOC_BUFFER_INFO);
+ case VK_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO:
+ memcpy(d, ptr_next, sizeof(VK_MEMORY_ALLOC_BUFFER_INFO));
+ d += sizeof(VK_MEMORY_ALLOC_BUFFER_INFO);
break;
default:
return false;
}
- ptr_next = (XGL_MEMORY_ALLOC_INFO *) ptr_next->pNext;
+ ptr_next = (VK_MEMORY_ALLOC_INFO *) ptr_next->pNext;
}
dbg->create_info = dst;
} else if (info.header->struct_type ==
- XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO) {
- const XGL_DEVICE_CREATE_INFO *src = info.ptr;
- XGL_DEVICE_CREATE_INFO *dst;
+ VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO) {
+ const VK_DEVICE_CREATE_INFO *src = info.ptr;
+ VK_DEVICE_CREATE_INFO *dst;
uint8_t *d;
size_t size;
uint32_t i;
size += 1 + strlen(src->ppEnabledExtensionNames[i]);
}
- dst = intel_alloc(handle, size, 0, XGL_SYSTEM_ALLOC_DEBUG);
+ dst = intel_alloc(handle, size, 0, VK_SYSTEM_ALLOC_DEBUG);
if (!dst)
return false;
size = sizeof(src->pRequestedQueues[0]) * src->queueRecordCount;
memcpy(d, src->pRequestedQueues, size);
- dst->pRequestedQueues = (const XGL_DEVICE_QUEUE_CREATE_INFO *) d;
+ dst->pRequestedQueues = (const VK_DEVICE_QUEUE_CREATE_INFO *) d;
d += size;
size = sizeof(src->ppEnabledExtensionNames[0]) * src->extensionCount;
size += len + 1;
}
dbg->create_info = dst;
- } else if (info.header->struct_type == XGL_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO) {
+ } else if (info.header->struct_type == VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO) {
// TODO: What do we want to copy here?
}
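The VK_MEMORY_ALLOC_INFO branch above deep-copies a pNext chain in two passes: first walk the chain to sum the total size, then walk it again to copy each node into one flat allocation. The first pass can be sketched generically (the header struct and the `size_of_type` callback are hypothetical stand-ins for the per-sType sizes in the switch above):

```c
#include <assert.h>
#include <stddef.h>

/* Generic chained-struct header, as in the VK_MEMORY_ALLOC_INFO copy. */
struct chain_header {
    int sType;
    const struct chain_header *pNext;
};

/* First pass of the deep copy: walk the pNext chain and sum per-type
 * sizes. size_of_type is a hypothetical lookup callback. */
static size_t chain_total_size(const struct chain_header *head,
                               size_t (*size_of_type)(int sType))
{
    size_t total = 0;
    const struct chain_header *p;
    for (p = head; p != NULL; p = p->pNext)
        total += size_of_type(p->sType);
    return total;
}

/* Tiny fixture: a two-node chain where every node costs 16 bytes. */
static size_t fixed_size(int sType) { (void) sType; return 16; }
static const struct chain_header node2 = { 2, NULL };
static const struct chain_header node1 = { 1, &node2 };
```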
* size is allocated and zeroed.
*/
struct intel_base_dbg *intel_base_dbg_create(const struct intel_handle *handle,
- XGL_DBG_OBJECT_TYPE type,
+ VK_DBG_OBJECT_TYPE type,
const void *create_info,
size_t dbg_size)
{
assert(dbg_size >= sizeof(*dbg));
- dbg = intel_alloc(handle, dbg_size, 0, XGL_SYSTEM_ALLOC_DEBUG);
+ dbg = intel_alloc(handle, dbg_size, 0, VK_SYSTEM_ALLOC_DEBUG);
if (!dbg)
return NULL;
*/
struct intel_base *intel_base_create(const struct intel_handle *handle,
size_t obj_size, bool debug,
- XGL_DBG_OBJECT_TYPE type,
+ VK_DBG_OBJECT_TYPE type,
const void *create_info,
size_t dbg_size)
{
assert(obj_size >= sizeof(*base));
- base = intel_alloc(handle, obj_size, 0, XGL_SYSTEM_ALLOC_API_OBJECT);
+ base = intel_alloc(handle, obj_size, 0, VK_SYSTEM_ALLOC_API_OBJECT);
if (!base)
return NULL;
intel_free(base, base);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDestroyObject(
- XGL_OBJECT object)
+ICD_EXPORT VK_RESULT VKAPI vkDestroyObject(
+ VK_OBJECT object)
{
struct intel_obj *obj = intel_obj(object);
obj->destroy(obj);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetObjectInfo(
- XGL_BASE_OBJECT object,
- XGL_OBJECT_INFO_TYPE infoType,
+ICD_EXPORT VK_RESULT VKAPI vkGetObjectInfo(
+ VK_BASE_OBJECT object,
+ VK_OBJECT_INFO_TYPE infoType,
size_t* pDataSize,
void* pData)
{
return base->get_info(base, infoType, pDataSize, pData);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBindObjectMemory(
- XGL_OBJECT object,
+ICD_EXPORT VK_RESULT VKAPI vkBindObjectMemory(
+ VK_OBJECT object,
uint32_t allocationIdx,
- XGL_GPU_MEMORY mem_,
- XGL_GPU_SIZE memOffset)
+ VK_GPU_MEMORY mem_,
+ VK_GPU_SIZE memOffset)
{
struct intel_obj *obj = intel_obj(object);
struct intel_mem *mem = intel_mem(mem_);
intel_obj_bind_mem(obj, mem, memOffset);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBindObjectMemoryRange(
- XGL_OBJECT object,
+ICD_EXPORT VK_RESULT VKAPI vkBindObjectMemoryRange(
+ VK_OBJECT object,
uint32_t allocationIdx,
- XGL_GPU_SIZE rangeOffset,
- XGL_GPU_SIZE rangeSize,
- XGL_GPU_MEMORY mem,
- XGL_GPU_SIZE memOffset)
+ VK_GPU_SIZE rangeOffset,
+ VK_GPU_SIZE rangeSize,
+ VK_GPU_MEMORY mem,
+ VK_GPU_SIZE memOffset)
{
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBindImageMemoryRange(
- XGL_IMAGE image,
+ICD_EXPORT VK_RESULT VKAPI vkBindImageMemoryRange(
+ VK_IMAGE image,
uint32_t allocationIdx,
- const XGL_IMAGE_MEMORY_BIND_INFO* bindInfo,
- XGL_GPU_MEMORY mem,
- XGL_GPU_SIZE memOffset)
+ const VK_IMAGE_MEMORY_BIND_INFO* bindInfo,
+ VK_GPU_MEMORY mem,
+ VK_GPU_SIZE memOffset)
{
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetObjectTag(
- XGL_BASE_OBJECT object,
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetObjectTag(
+ VK_BASE_OBJECT object,
size_t tagSize,
const void* pTag)
{
void *tag;
if (!dbg)
- return XGL_SUCCESS;
+ return VK_SUCCESS;
- tag = intel_alloc(base, tagSize, 0, XGL_SYSTEM_ALLOC_DEBUG);
+ tag = intel_alloc(base, tagSize, 0, VK_SYSTEM_ALLOC_DEBUG);
if (!tag)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
memcpy(tag, pTag, tagSize);
dbg->tag = tag;
dbg->tag_size = tagSize;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct intel_mem;
struct intel_base_dbg {
- XGL_DBG_OBJECT_TYPE type;
+ VK_DBG_OBJECT_TYPE type;
void *create_info;
size_t create_info_size;
struct intel_base_dbg *dbg;
- XGL_RESULT (*get_info)(struct intel_base *base, int type,
+ VK_RESULT (*get_info)(struct intel_base *base, int type,
size_t *size, void *data);
};
size_t offset;
};
-static inline struct intel_base *intel_base(XGL_BASE_OBJECT base)
+static inline struct intel_base *intel_base(VK_BASE_OBJECT base)
{
return (struct intel_base *) base;
}
-static inline struct intel_obj *intel_obj(XGL_OBJECT obj)
+static inline struct intel_obj *intel_obj(VK_OBJECT obj)
{
return (struct intel_obj *) obj;
}
static inline void intel_obj_bind_mem(struct intel_obj *obj,
struct intel_mem *mem,
- XGL_GPU_SIZE offset)
+ VK_GPU_SIZE offset)
{
obj->mem = mem;
obj->offset = offset;
}
-XGL_RESULT intel_base_get_info(struct intel_base *base, int type,
+VK_RESULT intel_base_get_info(struct intel_base *base, int type,
size_t *size, void *data);
struct intel_base_dbg *intel_base_dbg_create(const struct intel_handle *handle,
- XGL_DBG_OBJECT_TYPE type,
+ VK_DBG_OBJECT_TYPE type,
const void *create_info,
size_t dbg_size);
void intel_base_dbg_destroy(const struct intel_handle *handle,
struct intel_base *intel_base_create(const struct intel_handle *handle,
size_t obj_size, bool debug,
- XGL_DBG_OBJECT_TYPE type,
+ VK_DBG_OBJECT_TYPE type,
const void *create_info,
size_t dbg_size);
void intel_base_destroy(struct intel_base *base);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include "shader.h"
#include "pipeline.h"
-static int translate_blend_func(XGL_BLEND_FUNC func)
+static int translate_blend_func(VK_BLEND_FUNC func)
{
switch (func) {
- case XGL_BLEND_FUNC_ADD: return GEN6_BLENDFUNCTION_ADD;
- case XGL_BLEND_FUNC_SUBTRACT: return GEN6_BLENDFUNCTION_SUBTRACT;
- case XGL_BLEND_FUNC_REVERSE_SUBTRACT: return GEN6_BLENDFUNCTION_REVERSE_SUBTRACT;
- case XGL_BLEND_FUNC_MIN: return GEN6_BLENDFUNCTION_MIN;
- case XGL_BLEND_FUNC_MAX: return GEN6_BLENDFUNCTION_MAX;
+ case VK_BLEND_FUNC_ADD: return GEN6_BLENDFUNCTION_ADD;
+ case VK_BLEND_FUNC_SUBTRACT: return GEN6_BLENDFUNCTION_SUBTRACT;
+ case VK_BLEND_FUNC_REVERSE_SUBTRACT: return GEN6_BLENDFUNCTION_REVERSE_SUBTRACT;
+ case VK_BLEND_FUNC_MIN: return GEN6_BLENDFUNCTION_MIN;
+ case VK_BLEND_FUNC_MAX: return GEN6_BLENDFUNCTION_MAX;
default:
assert(!"unknown blend func");
return GEN6_BLENDFUNCTION_ADD;
};
}
-static int translate_blend(XGL_BLEND blend)
+static int translate_blend(VK_BLEND blend)
{
switch (blend) {
- case XGL_BLEND_ZERO: return GEN6_BLENDFACTOR_ZERO;
- case XGL_BLEND_ONE: return GEN6_BLENDFACTOR_ONE;
- case XGL_BLEND_SRC_COLOR: return GEN6_BLENDFACTOR_SRC_COLOR;
- case XGL_BLEND_ONE_MINUS_SRC_COLOR: return GEN6_BLENDFACTOR_INV_SRC_COLOR;
- case XGL_BLEND_DEST_COLOR: return GEN6_BLENDFACTOR_DST_COLOR;
- case XGL_BLEND_ONE_MINUS_DEST_COLOR: return GEN6_BLENDFACTOR_INV_DST_COLOR;
- case XGL_BLEND_SRC_ALPHA: return GEN6_BLENDFACTOR_SRC_ALPHA;
- case XGL_BLEND_ONE_MINUS_SRC_ALPHA: return GEN6_BLENDFACTOR_INV_SRC_ALPHA;
- case XGL_BLEND_DEST_ALPHA: return GEN6_BLENDFACTOR_DST_ALPHA;
- case XGL_BLEND_ONE_MINUS_DEST_ALPHA: return GEN6_BLENDFACTOR_INV_DST_ALPHA;
- case XGL_BLEND_CONSTANT_COLOR: return GEN6_BLENDFACTOR_CONST_COLOR;
- case XGL_BLEND_ONE_MINUS_CONSTANT_COLOR: return GEN6_BLENDFACTOR_INV_CONST_COLOR;
- case XGL_BLEND_CONSTANT_ALPHA: return GEN6_BLENDFACTOR_CONST_ALPHA;
- case XGL_BLEND_ONE_MINUS_CONSTANT_ALPHA: return GEN6_BLENDFACTOR_INV_CONST_ALPHA;
- case XGL_BLEND_SRC_ALPHA_SATURATE: return GEN6_BLENDFACTOR_SRC_ALPHA_SATURATE;
- case XGL_BLEND_SRC1_COLOR: return GEN6_BLENDFACTOR_SRC1_COLOR;
- case XGL_BLEND_ONE_MINUS_SRC1_COLOR: return GEN6_BLENDFACTOR_INV_SRC1_COLOR;
- case XGL_BLEND_SRC1_ALPHA: return GEN6_BLENDFACTOR_SRC1_ALPHA;
- case XGL_BLEND_ONE_MINUS_SRC1_ALPHA: return GEN6_BLENDFACTOR_INV_SRC1_ALPHA;
+ case VK_BLEND_ZERO: return GEN6_BLENDFACTOR_ZERO;
+ case VK_BLEND_ONE: return GEN6_BLENDFACTOR_ONE;
+ case VK_BLEND_SRC_COLOR: return GEN6_BLENDFACTOR_SRC_COLOR;
+ case VK_BLEND_ONE_MINUS_SRC_COLOR: return GEN6_BLENDFACTOR_INV_SRC_COLOR;
+ case VK_BLEND_DEST_COLOR: return GEN6_BLENDFACTOR_DST_COLOR;
+ case VK_BLEND_ONE_MINUS_DEST_COLOR: return GEN6_BLENDFACTOR_INV_DST_COLOR;
+ case VK_BLEND_SRC_ALPHA: return GEN6_BLENDFACTOR_SRC_ALPHA;
+ case VK_BLEND_ONE_MINUS_SRC_ALPHA: return GEN6_BLENDFACTOR_INV_SRC_ALPHA;
+ case VK_BLEND_DEST_ALPHA: return GEN6_BLENDFACTOR_DST_ALPHA;
+ case VK_BLEND_ONE_MINUS_DEST_ALPHA: return GEN6_BLENDFACTOR_INV_DST_ALPHA;
+ case VK_BLEND_CONSTANT_COLOR: return GEN6_BLENDFACTOR_CONST_COLOR;
+ case VK_BLEND_ONE_MINUS_CONSTANT_COLOR: return GEN6_BLENDFACTOR_INV_CONST_COLOR;
+ case VK_BLEND_CONSTANT_ALPHA: return GEN6_BLENDFACTOR_CONST_ALPHA;
+ case VK_BLEND_ONE_MINUS_CONSTANT_ALPHA: return GEN6_BLENDFACTOR_INV_CONST_ALPHA;
+ case VK_BLEND_SRC_ALPHA_SATURATE: return GEN6_BLENDFACTOR_SRC_ALPHA_SATURATE;
+ case VK_BLEND_SRC1_COLOR: return GEN6_BLENDFACTOR_SRC1_COLOR;
+ case VK_BLEND_ONE_MINUS_SRC1_COLOR: return GEN6_BLENDFACTOR_INV_SRC1_COLOR;
+ case VK_BLEND_SRC1_ALPHA: return GEN6_BLENDFACTOR_SRC1_ALPHA;
+ case VK_BLEND_ONE_MINUS_SRC1_ALPHA: return GEN6_BLENDFACTOR_INV_SRC1_ALPHA;
default:
assert(!"unknown blend factor");
return GEN6_BLENDFACTOR_ONE;
}
}
-static int translate_compare_func(XGL_COMPARE_FUNC func)
+static int translate_compare_func(VK_COMPARE_FUNC func)
{
switch (func) {
- case XGL_COMPARE_NEVER: return GEN6_COMPAREFUNCTION_NEVER;
- case XGL_COMPARE_LESS: return GEN6_COMPAREFUNCTION_LESS;
- case XGL_COMPARE_EQUAL: return GEN6_COMPAREFUNCTION_EQUAL;
- case XGL_COMPARE_LESS_EQUAL: return GEN6_COMPAREFUNCTION_LEQUAL;
- case XGL_COMPARE_GREATER: return GEN6_COMPAREFUNCTION_GREATER;
- case XGL_COMPARE_NOT_EQUAL: return GEN6_COMPAREFUNCTION_NOTEQUAL;
- case XGL_COMPARE_GREATER_EQUAL: return GEN6_COMPAREFUNCTION_GEQUAL;
- case XGL_COMPARE_ALWAYS: return GEN6_COMPAREFUNCTION_ALWAYS;
+ case VK_COMPARE_NEVER: return GEN6_COMPAREFUNCTION_NEVER;
+ case VK_COMPARE_LESS: return GEN6_COMPAREFUNCTION_LESS;
+ case VK_COMPARE_EQUAL: return GEN6_COMPAREFUNCTION_EQUAL;
+ case VK_COMPARE_LESS_EQUAL: return GEN6_COMPAREFUNCTION_LEQUAL;
+ case VK_COMPARE_GREATER: return GEN6_COMPAREFUNCTION_GREATER;
+ case VK_COMPARE_NOT_EQUAL: return GEN6_COMPAREFUNCTION_NOTEQUAL;
+ case VK_COMPARE_GREATER_EQUAL: return GEN6_COMPAREFUNCTION_GEQUAL;
+ case VK_COMPARE_ALWAYS: return GEN6_COMPAREFUNCTION_ALWAYS;
default:
assert(!"unknown compare_func");
return GEN6_COMPAREFUNCTION_NEVER;
}
}
-static int translate_stencil_op(XGL_STENCIL_OP op)
+static int translate_stencil_op(VK_STENCIL_OP op)
{
switch (op) {
- case XGL_STENCIL_OP_KEEP: return GEN6_STENCILOP_KEEP;
- case XGL_STENCIL_OP_ZERO: return GEN6_STENCILOP_ZERO;
- case XGL_STENCIL_OP_REPLACE: return GEN6_STENCILOP_REPLACE;
- case XGL_STENCIL_OP_INC_CLAMP: return GEN6_STENCILOP_INCRSAT;
- case XGL_STENCIL_OP_DEC_CLAMP: return GEN6_STENCILOP_DECRSAT;
- case XGL_STENCIL_OP_INVERT: return GEN6_STENCILOP_INVERT;
- case XGL_STENCIL_OP_INC_WRAP: return GEN6_STENCILOP_INCR;
- case XGL_STENCIL_OP_DEC_WRAP: return GEN6_STENCILOP_DECR;
+ case VK_STENCIL_OP_KEEP: return GEN6_STENCILOP_KEEP;
+ case VK_STENCIL_OP_ZERO: return GEN6_STENCILOP_ZERO;
+ case VK_STENCIL_OP_REPLACE: return GEN6_STENCILOP_REPLACE;
+ case VK_STENCIL_OP_INC_CLAMP: return GEN6_STENCILOP_INCRSAT;
+ case VK_STENCIL_OP_DEC_CLAMP: return GEN6_STENCILOP_DECRSAT;
+ case VK_STENCIL_OP_INVERT: return GEN6_STENCILOP_INVERT;
+ case VK_STENCIL_OP_INC_WRAP: return GEN6_STENCILOP_INCR;
+ case VK_STENCIL_OP_DEC_WRAP: return GEN6_STENCILOP_DECR;
default:
assert(!"unknown stencil op");
return GEN6_STENCILOP_KEEP;
}
struct intel_pipeline_create_info {
- XGL_GRAPHICS_PIPELINE_CREATE_INFO graphics;
- XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO vi;
- XGL_PIPELINE_IA_STATE_CREATE_INFO ia;
- XGL_PIPELINE_DS_STATE_CREATE_INFO db;
- XGL_PIPELINE_CB_STATE_CREATE_INFO cb;
- XGL_PIPELINE_RS_STATE_CREATE_INFO rs;
- XGL_PIPELINE_TESS_STATE_CREATE_INFO tess;
- XGL_PIPELINE_MS_STATE_CREATE_INFO ms;
- XGL_PIPELINE_VP_STATE_CREATE_INFO vp;
- XGL_PIPELINE_SHADER vs;
- XGL_PIPELINE_SHADER tcs;
- XGL_PIPELINE_SHADER tes;
- XGL_PIPELINE_SHADER gs;
- XGL_PIPELINE_SHADER fs;
-
- XGL_COMPUTE_PIPELINE_CREATE_INFO compute;
+ VK_GRAPHICS_PIPELINE_CREATE_INFO graphics;
+ VK_PIPELINE_VERTEX_INPUT_CREATE_INFO vi;
+ VK_PIPELINE_IA_STATE_CREATE_INFO ia;
+ VK_PIPELINE_DS_STATE_CREATE_INFO db;
+ VK_PIPELINE_CB_STATE_CREATE_INFO cb;
+ VK_PIPELINE_RS_STATE_CREATE_INFO rs;
+ VK_PIPELINE_TESS_STATE_CREATE_INFO tess;
+ VK_PIPELINE_MS_STATE_CREATE_INFO ms;
+ VK_PIPELINE_VP_STATE_CREATE_INFO vp;
+ VK_PIPELINE_SHADER vs;
+ VK_PIPELINE_SHADER tcs;
+ VK_PIPELINE_SHADER tes;
+ VK_PIPELINE_SHADER gs;
+ VK_PIPELINE_SHADER fs;
+
+ VK_COMPUTE_PIPELINE_CREATE_INFO compute;
};
/* in S1.3 */
enum intel_dev_meta_shader id)
{
struct intel_pipeline_shader *sh;
- XGL_RESULT ret;
+ VK_RESULT ret;
- sh = intel_alloc(dev, sizeof(*sh), 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ sh = intel_alloc(dev, sizeof(*sh), 0, VK_SYSTEM_ALLOC_INTERNAL);
if (!sh)
return NULL;
memset(sh, 0, sizeof(*sh));
ret = intel_pipeline_shader_compile_meta(sh, dev->gpu, id);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
intel_free(dev, sh);
return NULL;
}
case INTEL_DEV_META_VS_COPY_MEM:
case INTEL_DEV_META_VS_COPY_MEM_UNALIGNED:
sh->max_threads = intel_gpu_get_max_threads(dev->gpu,
- XGL_SHADER_STAGE_VERTEX);
+ VK_SHADER_STAGE_VERTEX);
break;
default:
sh->max_threads = intel_gpu_get_max_threads(dev->gpu,
- XGL_SHADER_STAGE_FRAGMENT);
+ VK_SHADER_STAGE_FRAGMENT);
break;
}
intel_free(dev, sh);
}
-static XGL_RESULT pipeline_build_shader(struct intel_pipeline *pipeline,
+static VK_RESULT pipeline_build_shader(struct intel_pipeline *pipeline,
const struct intel_desc_layout_chain *chain,
- const XGL_PIPELINE_SHADER *sh_info,
+ const VK_PIPELINE_SHADER *sh_info,
struct intel_pipeline_shader *sh)
{
- XGL_RESULT ret;
+ VK_RESULT ret;
ret = intel_pipeline_shader_compile(sh,
pipeline->dev->gpu, chain, sh_info);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
return ret;
sh->max_threads =
pipeline->active_shaders |= 1 << sh_info->stage;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT pipeline_build_shaders(struct intel_pipeline *pipeline,
+static VK_RESULT pipeline_build_shaders(struct intel_pipeline *pipeline,
const struct intel_pipeline_create_info *info)
{
const struct intel_desc_layout_chain *chain =
intel_desc_layout_chain(info->graphics.pSetLayoutChain);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
- if (ret == XGL_SUCCESS && info->vs.shader) {
+ if (ret == VK_SUCCESS && info->vs.shader) {
ret = pipeline_build_shader(pipeline, chain,
&info->vs, &pipeline->vs);
}
- if (ret == XGL_SUCCESS && info->tcs.shader) {
+ if (ret == VK_SUCCESS && info->tcs.shader) {
ret = pipeline_build_shader(pipeline, chain,
&info->tcs, &pipeline->tcs);
}
- if (ret == XGL_SUCCESS && info->tes.shader) {
+ if (ret == VK_SUCCESS && info->tes.shader) {
ret = pipeline_build_shader(pipeline, chain,
&info->tes, &pipeline->tes);
}
- if (ret == XGL_SUCCESS && info->gs.shader) {
+ if (ret == VK_SUCCESS && info->gs.shader) {
ret = pipeline_build_shader(pipeline, chain,
&info->gs, &pipeline->gs);
}
- if (ret == XGL_SUCCESS && info->fs.shader) {
+ if (ret == VK_SUCCESS && info->fs.shader) {
ret = pipeline_build_shader(pipeline, chain,
&info->fs, &pipeline->fs);
}
- if (ret == XGL_SUCCESS && info->compute.cs.shader) {
+ if (ret == VK_SUCCESS && info->compute.cs.shader) {
chain = intel_desc_layout_chain(info->compute.setLayoutChain);
ret = pipeline_build_shader(pipeline, chain,
&info->compute.cs, &pipeline->cs);
return ptr;
}
-static XGL_RESULT pipeline_build_ia(struct intel_pipeline *pipeline,
+static VK_RESULT pipeline_build_ia(struct intel_pipeline *pipeline,
const struct intel_pipeline_create_info* info)
{
pipeline->topology = info->ia.topology;
pipeline->disable_vs_cache = info->ia.disableVertexReuse;
switch (info->ia.topology) {
- case XGL_TOPOLOGY_POINT_LIST:
+ case VK_TOPOLOGY_POINT_LIST:
pipeline->prim_type = GEN6_3DPRIM_POINTLIST;
break;
- case XGL_TOPOLOGY_LINE_LIST:
+ case VK_TOPOLOGY_LINE_LIST:
pipeline->prim_type = GEN6_3DPRIM_LINELIST;
break;
- case XGL_TOPOLOGY_LINE_STRIP:
+ case VK_TOPOLOGY_LINE_STRIP:
pipeline->prim_type = GEN6_3DPRIM_LINESTRIP;
break;
- case XGL_TOPOLOGY_TRIANGLE_LIST:
+ case VK_TOPOLOGY_TRIANGLE_LIST:
pipeline->prim_type = GEN6_3DPRIM_TRILIST;
break;
- case XGL_TOPOLOGY_TRIANGLE_STRIP:
+ case VK_TOPOLOGY_TRIANGLE_STRIP:
pipeline->prim_type = GEN6_3DPRIM_TRISTRIP;
break;
- case XGL_TOPOLOGY_TRIANGLE_FAN:
+ case VK_TOPOLOGY_TRIANGLE_FAN:
pipeline->prim_type = GEN6_3DPRIM_TRIFAN;
break;
- case XGL_TOPOLOGY_LINE_LIST_ADJ:
+ case VK_TOPOLOGY_LINE_LIST_ADJ:
pipeline->prim_type = GEN6_3DPRIM_LINELIST_ADJ;
break;
- case XGL_TOPOLOGY_LINE_STRIP_ADJ:
+ case VK_TOPOLOGY_LINE_STRIP_ADJ:
pipeline->prim_type = GEN6_3DPRIM_LINESTRIP_ADJ;
break;
- case XGL_TOPOLOGY_TRIANGLE_LIST_ADJ:
+ case VK_TOPOLOGY_TRIANGLE_LIST_ADJ:
pipeline->prim_type = GEN6_3DPRIM_TRILIST_ADJ;
break;
- case XGL_TOPOLOGY_TRIANGLE_STRIP_ADJ:
+ case VK_TOPOLOGY_TRIANGLE_STRIP_ADJ:
pipeline->prim_type = GEN6_3DPRIM_TRISTRIP_ADJ;
break;
- case XGL_TOPOLOGY_PATCH:
+ case VK_TOPOLOGY_PATCH:
if (!info->tess.patchControlPoints ||
info->tess.patchControlPoints > 32)
- return XGL_ERROR_BAD_PIPELINE_DATA;
+ return VK_ERROR_BAD_PIPELINE_DATA;
pipeline->prim_type = GEN7_3DPRIM_PATCHLIST_1 +
info->tess.patchControlPoints - 1;
break;
default:
- return XGL_ERROR_BAD_PIPELINE_DATA;
+ return VK_ERROR_BAD_PIPELINE_DATA;
}
if (info->ia.primitiveRestartEnable) {
pipeline->primitive_restart = true;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT pipeline_build_rs_state(struct intel_pipeline *pipeline,
+static VK_RESULT pipeline_build_rs_state(struct intel_pipeline *pipeline,
const struct intel_pipeline_create_info* info)
{
- const XGL_PIPELINE_RS_STATE_CREATE_INFO *rs_state = &info->rs;
+ const VK_PIPELINE_RS_STATE_CREATE_INFO *rs_state = &info->rs;
bool ccw;
pipeline->depthClipEnable = rs_state->depthClipEnable;
pipeline->rasterizerDiscardEnable = rs_state->rasterizerDiscardEnable;
pipeline->use_rs_point_size = !rs_state->programPointSize;
- if (rs_state->provokingVertex == XGL_PROVOKING_VERTEX_FIRST) {
+ if (rs_state->provokingVertex == VK_PROVOKING_VERTEX_FIRST) {
pipeline->provoking_vertex_tri = 0;
pipeline->provoking_vertex_trifan = 1;
pipeline->provoking_vertex_line = 0;
}
switch (rs_state->fillMode) {
- case XGL_FILL_POINTS:
+ case VK_FILL_POINTS:
pipeline->cmd_sf_fill |= GEN7_SF_DW1_FRONTFACE_POINT |
GEN7_SF_DW1_BACKFACE_POINT;
break;
- case XGL_FILL_WIREFRAME:
+ case VK_FILL_WIREFRAME:
pipeline->cmd_sf_fill |= GEN7_SF_DW1_FRONTFACE_WIREFRAME |
GEN7_SF_DW1_BACKFACE_WIREFRAME;
break;
- case XGL_FILL_SOLID:
+ case VK_FILL_SOLID:
default:
pipeline->cmd_sf_fill |= GEN7_SF_DW1_FRONTFACE_SOLID |
GEN7_SF_DW1_BACKFACE_SOLID;
break;
}
- ccw = (rs_state->frontFace == XGL_FRONT_FACE_CCW);
+ ccw = (rs_state->frontFace == VK_FRONT_FACE_CCW);
/* flip the winding order */
- if (info->vp.clipOrigin == XGL_COORDINATE_ORIGIN_LOWER_LEFT)
+ if (info->vp.clipOrigin == VK_COORDINATE_ORIGIN_LOWER_LEFT)
ccw = !ccw;
if (ccw) {
}
switch (rs_state->cullMode) {
- case XGL_CULL_NONE:
+ case VK_CULL_NONE:
default:
pipeline->cmd_sf_cull |= GEN7_SF_DW2_CULLMODE_NONE;
pipeline->cmd_clip_cull |= GEN7_CLIP_DW1_CULLMODE_NONE;
break;
- case XGL_CULL_FRONT:
+ case VK_CULL_FRONT:
pipeline->cmd_sf_cull |= GEN7_SF_DW2_CULLMODE_FRONT;
pipeline->cmd_clip_cull |= GEN7_CLIP_DW1_CULLMODE_FRONT;
break;
- case XGL_CULL_BACK:
+ case VK_CULL_BACK:
pipeline->cmd_sf_cull |= GEN7_SF_DW2_CULLMODE_BACK;
pipeline->cmd_clip_cull |= GEN7_CLIP_DW1_CULLMODE_BACK;
break;
- case XGL_CULL_FRONT_AND_BACK:
+ case VK_CULL_FRONT_AND_BACK:
pipeline->cmd_sf_cull |= GEN7_SF_DW2_CULLMODE_BOTH;
pipeline->cmd_clip_cull |= GEN7_CLIP_DW1_CULLMODE_BOTH;
break;
if (intel_gpu_gen(pipeline->dev->gpu) == INTEL_GEN(6))
pipeline->cmd_clip_cull = 0;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
static void pipeline_destroy(struct intel_obj *obj)
intel_base_destroy(&pipeline->obj.base);
}
-static XGL_RESULT pipeline_get_info(struct intel_base *base, int type,
+static VK_RESULT pipeline_get_info(struct intel_base *base, int type,
size_t *size, void *data)
{
struct intel_pipeline *pipeline = intel_pipeline_from_base(base);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
switch (type) {
- case XGL_INFO_TYPE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_MEMORY_REQUIREMENTS:
{
- XGL_MEMORY_REQUIREMENTS *mem_req = data;
+ VK_MEMORY_REQUIREMENTS *mem_req = data;
- *size = sizeof(XGL_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_MEMORY_REQUIREMENTS);
if (data) {
mem_req->size = pipeline->scratch_size;
mem_req->alignment = 1024;
- mem_req->memType = XGL_MEMORY_TYPE_OTHER;
+ mem_req->memType = VK_MEMORY_TYPE_OTHER;
}
}
break;
return ret;
}
-static XGL_RESULT pipeline_validate(struct intel_pipeline *pipeline)
+static VK_RESULT pipeline_validate(struct intel_pipeline *pipeline)
{
/*
* Validate required elements
*/
if (!(pipeline->active_shaders & SHADER_VERTEX_FLAG)) {
// TODO: Log debug message: Vertex Shader required.
- return XGL_ERROR_BAD_PIPELINE_DATA;
+ return VK_ERROR_BAD_PIPELINE_DATA;
}
/*
if (((pipeline->active_shaders & SHADER_TESS_CONTROL_FLAG) == 0) !=
((pipeline->active_shaders & SHADER_TESS_EVAL_FLAG) == 0) ) {
// TODO: Log debug message: Both Tess control and Tess eval are required to use tessellation
- return XGL_ERROR_BAD_PIPELINE_DATA;
+ return VK_ERROR_BAD_PIPELINE_DATA;
}
if ((pipeline->active_shaders & SHADER_COMPUTE_FLAG) &&
SHADER_TESS_EVAL_FLAG | SHADER_GEOMETRY_FLAG |
SHADER_FRAGMENT_FLAG))) {
// TODO: Log debug message: Can only specify compute shader when doing compute
- return XGL_ERROR_BAD_PIPELINE_DATA;
+ return VK_ERROR_BAD_PIPELINE_DATA;
}
/*
- * XGL_TOPOLOGY_PATCH primitive topology is only valid for tessellation pipelines.
+ * VK_TOPOLOGY_PATCH primitive topology is only valid for tessellation pipelines.
* Mismatching primitive topology and tessellation fails graphics pipeline creation.
*/
if (pipeline->active_shaders & (SHADER_TESS_CONTROL_FLAG | SHADER_TESS_EVAL_FLAG) &&
- (pipeline->topology != XGL_TOPOLOGY_PATCH)) {
+ (pipeline->topology != VK_TOPOLOGY_PATCH)) {
// TODO: Log debug message: Invalid topology used with tessellation shader.
- return XGL_ERROR_BAD_PIPELINE_DATA;
+ return VK_ERROR_BAD_PIPELINE_DATA;
}
- if ((pipeline->topology == XGL_TOPOLOGY_PATCH) &&
+ if ((pipeline->topology == VK_TOPOLOGY_PATCH) &&
(pipeline->active_shaders & ~(SHADER_TESS_CONTROL_FLAG | SHADER_TESS_EVAL_FLAG))) {
// TODO: Log debug message: Cannot use TOPOLOGY_PATCH on non-tessellation shader.
- return XGL_ERROR_BAD_PIPELINE_DATA;
+ return VK_ERROR_BAD_PIPELINE_DATA;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
static void pipeline_build_urb_alloc_gen6(struct intel_pipeline *pipeline,
/* VERTEX_ELEMENT_STATE */
for (i = 0, attrs_processed = 0; attrs_processed < attr_count; i++) {
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION *attr = NULL;
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION *attr = NULL;
/*
* The compiler will pack the shader references and then
*/
for (j = 0; j < info->vi.attributeCount; j++) {
if (info->vi.pVertexAttributeDescriptions[j].location == i) {
- attr = (XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION *) &info->vi.pVertexAttributeDescriptions[j];
+ attr = (VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION *) &info->vi.pVertexAttributeDescriptions[j];
attrs_processed++;
break;
}
const struct intel_pipeline_create_info *info)
{
switch (info->vp.depthMode) {
- case XGL_DEPTH_MODE_ZERO_TO_ONE:
+ case VK_DEPTH_MODE_ZERO_TO_ONE:
pipeline->depth_zero_to_one = true;
break;
- case XGL_DEPTH_MODE_NEGATIVE_ONE_TO_ONE:
+ case VK_DEPTH_MODE_NEGATIVE_ONE_TO_ONE:
default:
pipeline->depth_zero_to_one = false;
break;
vue_offset << GEN7_SBE_DW1_URB_READ_OFFSET__SHIFT;
switch (info->rs.pointOrigin) {
- case XGL_COORDINATE_ORIGIN_UPPER_LEFT:
+ case VK_COORDINATE_ORIGIN_UPPER_LEFT:
body[1] |= GEN7_SBE_DW1_POINT_SPRITE_TEXCOORD_UPPERLEFT;
break;
- case XGL_COORDINATE_ORIGIN_LOWER_LEFT:
+ case VK_COORDINATE_ORIGIN_LOWER_LEFT:
body[1] |= GEN7_SBE_DW1_POINT_SPRITE_TEXCOORD_LOWERLEFT;
break;
default:
body[2 + i] = hi << GEN8_SBE_SWIZ_HIGH__SHIFT | lo;
}
- if (info->ia.topology == XGL_TOPOLOGY_POINT_LIST)
+ if (info->ia.topology == VK_TOPOLOGY_POINT_LIST)
body[10] = fs->point_sprite_enables;
else
body[10] = 0;
uint32_t *dw = pipeline->cmd_cb;
for (i = 0; i < info->cb.attachmentCount; i++) {
- const XGL_PIPELINE_CB_ATTACHMENT_STATE *att = &info->cb.pAttachments[i];
+ const VK_PIPELINE_CB_ATTACHMENT_STATE *att = &info->cb.pAttachments[i];
uint32_t dw0, dw1;
pipeline->dual_source_blend_enable = icd_pipeline_cb_att_needs_dual_source_blending(att);
}
- if (info->cb.logicOp != XGL_LOGIC_OP_COPY) {
+ if (info->cb.logicOp != VK_LOGIC_OP_COPY) {
int logicop;
switch (info->cb.logicOp) {
- case XGL_LOGIC_OP_CLEAR: logicop = GEN6_LOGICOP_CLEAR; break;
- case XGL_LOGIC_OP_AND: logicop = GEN6_LOGICOP_AND; break;
- case XGL_LOGIC_OP_AND_REVERSE: logicop = GEN6_LOGICOP_AND_REVERSE; break;
- case XGL_LOGIC_OP_AND_INVERTED: logicop = GEN6_LOGICOP_AND_INVERTED; break;
- case XGL_LOGIC_OP_NOOP: logicop = GEN6_LOGICOP_NOOP; break;
- case XGL_LOGIC_OP_XOR: logicop = GEN6_LOGICOP_XOR; break;
- case XGL_LOGIC_OP_OR: logicop = GEN6_LOGICOP_OR; break;
- case XGL_LOGIC_OP_NOR: logicop = GEN6_LOGICOP_NOR; break;
- case XGL_LOGIC_OP_EQUIV: logicop = GEN6_LOGICOP_EQUIV; break;
- case XGL_LOGIC_OP_INVERT: logicop = GEN6_LOGICOP_INVERT; break;
- case XGL_LOGIC_OP_OR_REVERSE: logicop = GEN6_LOGICOP_OR_REVERSE; break;
- case XGL_LOGIC_OP_COPY_INVERTED: logicop = GEN6_LOGICOP_COPY_INVERTED; break;
- case XGL_LOGIC_OP_OR_INVERTED: logicop = GEN6_LOGICOP_OR_INVERTED; break;
- case XGL_LOGIC_OP_NAND: logicop = GEN6_LOGICOP_NAND; break;
- case XGL_LOGIC_OP_SET: logicop = GEN6_LOGICOP_SET; break;
+ case VK_LOGIC_OP_CLEAR: logicop = GEN6_LOGICOP_CLEAR; break;
+ case VK_LOGIC_OP_AND: logicop = GEN6_LOGICOP_AND; break;
+ case VK_LOGIC_OP_AND_REVERSE: logicop = GEN6_LOGICOP_AND_REVERSE; break;
+ case VK_LOGIC_OP_AND_INVERTED: logicop = GEN6_LOGICOP_AND_INVERTED; break;
+ case VK_LOGIC_OP_NOOP: logicop = GEN6_LOGICOP_NOOP; break;
+ case VK_LOGIC_OP_XOR: logicop = GEN6_LOGICOP_XOR; break;
+ case VK_LOGIC_OP_OR: logicop = GEN6_LOGICOP_OR; break;
+ case VK_LOGIC_OP_NOR: logicop = GEN6_LOGICOP_NOR; break;
+ case VK_LOGIC_OP_EQUIV: logicop = GEN6_LOGICOP_EQUIV; break;
+ case VK_LOGIC_OP_INVERT: logicop = GEN6_LOGICOP_INVERT; break;
+ case VK_LOGIC_OP_OR_REVERSE: logicop = GEN6_LOGICOP_OR_REVERSE; break;
+ case VK_LOGIC_OP_COPY_INVERTED: logicop = GEN6_LOGICOP_COPY_INVERTED; break;
+ case VK_LOGIC_OP_OR_INVERTED: logicop = GEN6_LOGICOP_OR_INVERTED; break;
+ case VK_LOGIC_OP_NAND: logicop = GEN6_LOGICOP_NAND; break;
+ case VK_LOGIC_OP_SET: logicop = GEN6_LOGICOP_SET; break;
default:
assert(!"unknown logic op");
logicop = GEN6_LOGICOP_CLEAR;
}
-static XGL_RESULT pipeline_build_all(struct intel_pipeline *pipeline,
+static VK_RESULT pipeline_build_all(struct intel_pipeline *pipeline,
const struct intel_pipeline_create_info *info)
{
- XGL_RESULT ret;
+ VK_RESULT ret;
ret = pipeline_build_shaders(pipeline, info);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
return ret;
if (info->vi.bindingCount > ARRAY_SIZE(pipeline->vb) ||
info->vi.attributeCount > ARRAY_SIZE(pipeline->vb))
- return XGL_ERROR_BAD_PIPELINE_DATA;
+ return VK_ERROR_BAD_PIPELINE_DATA;
pipeline->vb_count = info->vi.bindingCount;
memcpy(pipeline->vb, info->vi.pVertexBindingDescriptions,
ret = pipeline_build_ia(pipeline, info);
- if (ret == XGL_SUCCESS)
+ if (ret == VK_SUCCESS)
ret = pipeline_build_rs_state(pipeline, info);
- if (ret == XGL_SUCCESS) {
+ if (ret == VK_SUCCESS) {
pipeline->db_format = info->db.format;
pipeline_build_cb(pipeline, info);
pipeline->cb_state = info->cb;
}
struct intel_pipeline_create_info_header {
- XGL_STRUCTURE_TYPE struct_type;
+ VK_STRUCTURE_TYPE struct_type;
const struct intel_pipeline_create_info_header *next;
};
-static XGL_RESULT pipeline_create_info_init(struct intel_pipeline_create_info *info,
+static VK_RESULT pipeline_create_info_init(struct intel_pipeline_create_info *info,
const struct intel_pipeline_create_info_header *header)
{
memset(info, 0, sizeof(*info));
void *dst;
switch (header->struct_type) {
- case XGL_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO:
size = sizeof(info->graphics);
dst = &info->graphics;
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO:
size = sizeof(info->vi);
dst = &info->vi;
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO:
size = sizeof(info->ia);
dst = &info->ia;
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO:
size = sizeof(info->db);
dst = &info->db;
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO:
size = sizeof(info->cb);
dst = &info->cb;
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO:
size = sizeof(info->rs);
dst = &info->rs;
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO:
size = sizeof(info->tess);
dst = &info->tess;
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO:
size = sizeof(info->ms);
dst = &info->ms;
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO:
size = sizeof(info->vp);
dst = &info->vp;
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO:
{
- const XGL_PIPELINE_SHADER *shader =
- (const XGL_PIPELINE_SHADER *) (header + 1);
+ const VK_PIPELINE_SHADER *shader =
+ (const VK_PIPELINE_SHADER *) (header + 1);
src = (const void *) shader;
size = sizeof(*shader);
switch (shader->stage) {
- case XGL_SHADER_STAGE_VERTEX:
+ case VK_SHADER_STAGE_VERTEX:
dst = &info->vs;
break;
- case XGL_SHADER_STAGE_TESS_CONTROL:
+ case VK_SHADER_STAGE_TESS_CONTROL:
dst = &info->tcs;
break;
- case XGL_SHADER_STAGE_TESS_EVALUATION:
+ case VK_SHADER_STAGE_TESS_EVALUATION:
dst = &info->tes;
break;
- case XGL_SHADER_STAGE_GEOMETRY:
+ case VK_SHADER_STAGE_GEOMETRY:
dst = &info->gs;
break;
- case XGL_SHADER_STAGE_FRAGMENT:
+ case VK_SHADER_STAGE_FRAGMENT:
dst = &info->fs;
break;
default:
- return XGL_ERROR_BAD_PIPELINE_DATA;
+ return VK_ERROR_BAD_PIPELINE_DATA;
break;
}
}
break;
- case XGL_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO:
size = sizeof(info->compute);
dst = &info->compute;
break;
default:
- return XGL_ERROR_BAD_PIPELINE_DATA;
+ return VK_ERROR_BAD_PIPELINE_DATA;
break;
}
header = header->next;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT graphics_pipeline_create(struct intel_dev *dev,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO *info_,
+static VK_RESULT graphics_pipeline_create(struct intel_dev *dev,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO *info_,
struct intel_pipeline **pipeline_ret)
{
struct intel_pipeline_create_info info;
struct intel_pipeline *pipeline;
- XGL_RESULT ret;
+ VK_RESULT ret;
ret = pipeline_create_info_init(&info,
(const struct intel_pipeline_create_info_header *) info_);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
return ret;
pipeline = (struct intel_pipeline *) intel_base_create(&dev->base.handle,
sizeof(*pipeline), dev->base.dbg,
- XGL_DBG_OBJECT_GRAPHICS_PIPELINE, info_, 0);
+ VK_DBG_OBJECT_GRAPHICS_PIPELINE, info_, 0);
if (!pipeline)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
pipeline->dev = dev;
pipeline->obj.base.get_info = pipeline_get_info;
pipeline->obj.destroy = pipeline_destroy;
ret = pipeline_build_all(pipeline, &info);
- if (ret == XGL_SUCCESS)
+ if (ret == VK_SUCCESS)
ret = pipeline_validate(pipeline);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
pipeline_destroy(&pipeline->obj);
return ret;
}
*pipeline_ret = pipeline;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipeline(
- XGL_DEVICE device,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE* pPipeline)
+ICD_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipeline(
+ VK_DEVICE device,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE* pPipeline)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_pipeline **) pPipeline);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipelineDerivative(
- XGL_DEVICE device,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE basePipeline,
- XGL_PIPELINE* pPipeline)
+ICD_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipelineDerivative(
+ VK_DEVICE device,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE basePipeline,
+ VK_PIPELINE* pPipeline)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_pipeline **) pPipeline);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateComputePipeline(
- XGL_DEVICE device,
- const XGL_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE* pPipeline)
+ICD_EXPORT VK_RESULT VKAPI vkCreateComputePipeline(
+ VK_DEVICE device,
+ const VK_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE* pPipeline)
{
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglStorePipeline(
- XGL_PIPELINE pipeline,
+ICD_EXPORT VK_RESULT VKAPI vkStorePipeline(
+ VK_PIPELINE pipeline,
size_t* pDataSize,
void* pData)
{
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglLoadPipeline(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkLoadPipeline(
+ VK_DEVICE device,
size_t dataSize,
const void* pData,
- XGL_PIPELINE* pPipeline)
+ VK_PIPELINE* pPipeline)
{
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglLoadPipelineDerivative(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkLoadPipelineDerivative(
+ VK_DEVICE device,
size_t dataSize,
const void* pData,
- XGL_PIPELINE basePipeline,
- XGL_PIPELINE* pPipeline)
+ VK_PIPELINE basePipeline,
+ VK_PIPELINE* pPipeline)
{
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
uint32_t slot_count;
};
-#define SHADER_VERTEX_FLAG (1 << XGL_SHADER_STAGE_VERTEX)
-#define SHADER_TESS_CONTROL_FLAG (1 << XGL_SHADER_STAGE_TESS_CONTROL)
-#define SHADER_TESS_EVAL_FLAG (1 << XGL_SHADER_STAGE_TESS_EVALUATION)
-#define SHADER_GEOMETRY_FLAG (1 << XGL_SHADER_STAGE_GEOMETRY)
-#define SHADER_FRAGMENT_FLAG (1 << XGL_SHADER_STAGE_FRAGMENT)
-#define SHADER_COMPUTE_FLAG (1 << XGL_SHADER_STAGE_COMPUTE)
+#define SHADER_VERTEX_FLAG (1 << VK_SHADER_STAGE_VERTEX)
+#define SHADER_TESS_CONTROL_FLAG (1 << VK_SHADER_STAGE_TESS_CONTROL)
+#define SHADER_TESS_EVAL_FLAG (1 << VK_SHADER_STAGE_TESS_EVALUATION)
+#define SHADER_GEOMETRY_FLAG (1 << VK_SHADER_STAGE_GEOMETRY)
+#define SHADER_FRAGMENT_FLAG (1 << VK_SHADER_STAGE_FRAGMENT)
+#define SHADER_COMPUTE_FLAG (1 << VK_SHADER_STAGE_COMPUTE)
struct intel_pipeline_shader {
/* this is not an intel_obj */
* must grab everything we need from shader object as that
* can go away after the pipeline is created
*/
- XGL_FLAGS uses;
+ VK_FLAGS uses;
uint64_t inputs_read;
uint64_t outputs_written;
uint32_t outputs_offset;
/* If present, where does the SIMD16 kernel start? */
uint32_t offset_16;
- XGL_FLAGS barycentric_interps;
- XGL_FLAGS point_sprite_enables;
+ VK_FLAGS barycentric_interps;
+ VK_FLAGS point_sprite_enables;
- XGL_GPU_SIZE per_thread_scratch_size;
+ VK_GPU_SIZE per_thread_scratch_size;
enum intel_computed_depth_mode computed_depth_mode;
/* these are set up by the driver */
uint32_t max_threads;
- XGL_GPU_SIZE scratch_offset;
+ VK_GPU_SIZE scratch_offset;
};
/*
struct intel_dev *dev;
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vb[INTEL_MAX_VERTEX_BINDING_COUNT];
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vb[INTEL_MAX_VERTEX_BINDING_COUNT];
uint32_t vb_count;
- /* XGL_PIPELINE_IA_STATE_CREATE_INFO */
- XGL_PRIMITIVE_TOPOLOGY topology;
+ /* VK_PIPELINE_IA_STATE_CREATE_INFO */
+ VK_PRIMITIVE_TOPOLOGY topology;
int prim_type;
bool disable_vs_cache;
bool primitive_restart;
int provoking_vertex_trifan;
int provoking_vertex_line;
- // TODO: This should probably be Intel HW state, not XGL state.
+ // TODO: This should probably be Intel HW state, not VK state.
/* Depth Buffer format */
- XGL_FORMAT db_format;
+ VK_FORMAT db_format;
bool depth_zero_to_one;
- XGL_PIPELINE_CB_STATE_CREATE_INFO cb_state;
+ VK_PIPELINE_CB_STATE_CREATE_INFO cb_state;
- // XGL_PIPELINE_RS_STATE_CREATE_INFO rs_state;
+ // VK_PIPELINE_RS_STATE_CREATE_INFO rs_state;
bool depthClipEnable;
bool rasterizerDiscardEnable;
bool use_rs_point_size;
- XGL_PIPELINE_TESS_STATE_CREATE_INFO tess_state;
+ VK_PIPELINE_TESS_STATE_CREATE_INFO tess_state;
uint32_t active_shaders;
struct intel_pipeline_shader vs;
struct intel_pipeline_shader gs;
struct intel_pipeline_shader fs;
struct intel_pipeline_shader cs;
- XGL_GPU_SIZE scratch_size;
+ VK_GPU_SIZE scratch_size;
uint32_t wa_flags;
/* The following are only partial HW commands that will need
* more processing before sending to the HW
*/
- // XGL_PIPELINE_DS_STATE_CREATE_INFO ds_state
+ // VK_PIPELINE_DS_STATE_CREATE_INFO ds_state
bool stencilTestEnable;
uint32_t cmd_depth_stencil;
uint32_t cmd_depth_test;
uint32_t cmd_3dstate_sbe[14];
};
-static inline struct intel_pipeline *intel_pipeline(XGL_PIPELINE pipeline)
+static inline struct intel_pipeline *intel_pipeline(VK_PIPELINE pipeline)
{
return (struct intel_pipeline *) pipeline;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
intel_query_destroy(query);
}
-static XGL_RESULT query_get_info(struct intel_base *base, int type,
+static VK_RESULT query_get_info(struct intel_base *base, int type,
size_t *size, void *data)
{
struct intel_query *query = intel_query_from_base(base);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
switch (type) {
- case XGL_INFO_TYPE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_MEMORY_REQUIREMENTS:
{
- XGL_MEMORY_REQUIREMENTS *mem_req = data;
+ VK_MEMORY_REQUIREMENTS *mem_req = data;
- *size = sizeof(XGL_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
mem_req->size = query->slot_stride * query->slot_count;
mem_req->alignment = 64;
- mem_req->memType = XGL_MEMORY_TYPE_OTHER;
+ mem_req->memType = VK_MEMORY_TYPE_OTHER;
}
break;
default:
return ret;
}
-XGL_RESULT intel_query_create(struct intel_dev *dev,
- const XGL_QUERY_POOL_CREATE_INFO *info,
+VK_RESULT intel_query_create(struct intel_dev *dev,
+ const VK_QUERY_POOL_CREATE_INFO *info,
struct intel_query **query_ret)
{
struct intel_query *query;
query = (struct intel_query *) intel_base_create(&dev->base.handle,
- sizeof(*query), dev->base.dbg, XGL_DBG_OBJECT_QUERY_POOL,
+ sizeof(*query), dev->base.dbg, VK_DBG_OBJECT_QUERY_POOL,
info, 0);
if (!query)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
query->type = info->queryType;
query->slot_count = info->slots;
* compare the differences to get the query results.
*/
switch (info->queryType) {
- case XGL_QUERY_OCCLUSION:
+ case VK_QUERY_OCCLUSION:
query->slot_stride = u_align(sizeof(uint64_t) * 2, 64);
break;
- case XGL_QUERY_PIPELINE_STATISTICS:
+ case VK_QUERY_PIPELINE_STATISTICS:
query->slot_stride =
- u_align(sizeof(XGL_PIPELINE_STATISTICS_DATA) * 2, 64);
+ u_align(sizeof(VK_PIPELINE_STATISTICS_DATA) * 2, 64);
break;
default:
break;
if (!query->slot_stride) {
intel_query_destroy(query);
- return XGL_ERROR_INVALID_VALUE;
+ return VK_ERROR_INVALID_VALUE;
}
query->obj.base.get_info = query_get_info;
*query_ret = query;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_query_destroy(struct intel_query *query)
static void
query_process_pipeline_statistics(const struct intel_query *query,
uint32_t count, const uint8_t *raw,
- XGL_PIPELINE_STATISTICS_DATA *results)
+ VK_PIPELINE_STATISTICS_DATA *results)
{
const uint32_t num_regs = sizeof(results[0]) / sizeof(uint64_t);
uint32_t i, j;
}
}
-XGL_RESULT intel_query_get_results(struct intel_query *query,
+VK_RESULT intel_query_get_results(struct intel_query *query,
uint32_t slot_start, uint32_t slot_count,
void *results)
{
const uint8_t *ptr;
if (!query->obj.mem)
- return XGL_ERROR_MEMORY_NOT_BOUND;
+ return VK_ERROR_MEMORY_NOT_BOUND;
if (intel_mem_is_busy(query->obj.mem))
- return XGL_NOT_READY;
+ return VK_NOT_READY;
ptr = (const uint8_t *) intel_mem_map_sync(query->obj.mem, false);
if (!ptr)
- return XGL_ERROR_MEMORY_MAP_FAILED;
+ return VK_ERROR_MEMORY_MAP_FAILED;
ptr += query->obj.offset + query->slot_stride * slot_start;
switch (query->type) {
- case XGL_QUERY_OCCLUSION:
+ case VK_QUERY_OCCLUSION:
query_process_occlusion(query, slot_count, ptr, results);
break;
- case XGL_QUERY_PIPELINE_STATISTICS:
+ case VK_QUERY_PIPELINE_STATISTICS:
query_process_pipeline_statistics(query, slot_count, ptr, results);
break;
default:
intel_mem_unmap(query->obj.mem);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateQueryPool(
- XGL_DEVICE device,
- const XGL_QUERY_POOL_CREATE_INFO* pCreateInfo,
- XGL_QUERY_POOL* pQueryPool)
+ICD_EXPORT VK_RESULT VKAPI vkCreateQueryPool(
+ VK_DEVICE device,
+ const VK_QUERY_POOL_CREATE_INFO* pCreateInfo,
+ VK_QUERY_POOL* pQueryPool)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_query **) pQueryPool);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetQueryPoolResults(
- XGL_QUERY_POOL queryPool,
+ICD_EXPORT VK_RESULT VKAPI vkGetQueryPoolResults(
+ VK_QUERY_POOL queryPool,
uint32_t startQuery,
uint32_t queryCount,
size_t* pDataSize,
struct intel_query *query = intel_query(queryPool);
switch (query->type) {
- case XGL_QUERY_OCCLUSION:
+ case VK_QUERY_OCCLUSION:
*pDataSize = sizeof(uint64_t) * queryCount;
break;
- case XGL_QUERY_PIPELINE_STATISTICS:
- *pDataSize = sizeof(XGL_PIPELINE_STATISTICS_DATA) * queryCount;
+ case VK_QUERY_PIPELINE_STATISTICS:
+ *pDataSize = sizeof(VK_PIPELINE_STATISTICS_DATA) * queryCount;
break;
default:
- return XGL_ERROR_INVALID_HANDLE;
+ return VK_ERROR_INVALID_HANDLE;
break;
}
if (pData)
return intel_query_get_results(query, startQuery, queryCount, pData);
else
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct intel_query {
struct intel_obj obj;
- XGL_QUERY_TYPE type;
+ VK_QUERY_TYPE type;
uint32_t slot_stride;
uint32_t slot_count;
};
-static inline struct intel_query *intel_query(XGL_QUERY_POOL pool)
+static inline struct intel_query *intel_query(VK_QUERY_POOL pool)
{
return (struct intel_query *) pool;
}
return intel_query_from_base(&obj->base);
}
-XGL_RESULT intel_query_create(struct intel_dev *dev,
- const XGL_QUERY_POOL_CREATE_INFO *info,
+VK_RESULT intel_query_create(struct intel_dev *dev,
+ const VK_QUERY_POOL_CREATE_INFO *info,
struct intel_query **query_ret);
void intel_query_destroy(struct intel_query *query);
-XGL_RESULT intel_query_get_results(struct intel_query *query,
+VK_RESULT intel_query_get_results(struct intel_query *query,
uint32_t slot_start, uint32_t slot_count,
void *results);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
{
intel_cmd_decode(cmd, true);
- intel_dev_log(queue->dev, XGL_DBG_MSG_ERROR,
- XGL_VALIDATION_LEVEL_0, XGL_NULL_HANDLE, 0, 0,
+ intel_dev_log(queue->dev, VK_DBG_MSG_ERROR,
+ VK_VALIDATION_LEVEL_0, VK_NULL_HANDLE, 0, 0,
"GPU hung with %d/%d active/pending command buffers lost",
active_lost, pending_lost);
}
-static XGL_RESULT queue_submit_bo(struct intel_queue *queue,
+static VK_RESULT queue_submit_bo(struct intel_queue *queue,
struct intel_bo *bo,
- XGL_GPU_SIZE used)
+ VK_GPU_SIZE used)
{
struct intel_winsys *winsys = queue->dev->winsys;
int err;
else
err = intel_winsys_submit_bo(winsys, queue->ring, bo, used, 0);
- return (err) ? XGL_ERROR_UNKNOWN : XGL_SUCCESS;
+ return (err) ? VK_ERROR_UNKNOWN : VK_SUCCESS;
}
static struct intel_bo *queue_create_bo(struct intel_queue *queue,
- XGL_GPU_SIZE size,
+ VK_GPU_SIZE size,
const void *cmd,
size_t cmd_len)
{
return bo;
}
-static XGL_RESULT queue_select_pipeline(struct intel_queue *queue,
+static VK_RESULT queue_select_pipeline(struct intel_queue *queue,
int pipeline_select)
{
uint32_t pipeline_select_cmd[] = {
GEN6_MI_CMD(MI_BATCH_BUFFER_END),
};
struct intel_bo *bo;
- XGL_RESULT ret;
+ VK_RESULT ret;
if (queue->ring != INTEL_RING_RENDER ||
queue->last_pipeline_select == pipeline_select)
- return XGL_SUCCESS;
+ return VK_SUCCESS;
switch (pipeline_select) {
case GEN6_PIPELINE_SELECT_DW0_SELECT_3D:
bo = queue->select_compute_bo;
break;
default:
- return XGL_ERROR_INVALID_VALUE;
+ return VK_ERROR_INVALID_VALUE;
break;
}
bo = queue_create_bo(queue, sizeof(pipeline_select_cmd),
pipeline_select_cmd, sizeof(pipeline_select_cmd));
if (!bo)
- return XGL_ERROR_OUT_OF_GPU_MEMORY;
+ return VK_ERROR_OUT_OF_GPU_MEMORY;
switch (pipeline_select) {
case GEN6_PIPELINE_SELECT_DW0_SELECT_3D:
}
ret = queue_submit_bo(queue, bo, sizeof(pipeline_select_cmd));
- if (ret == XGL_SUCCESS)
+ if (ret == VK_SUCCESS)
queue->last_pipeline_select = pipeline_select;
return ret;
}
-static XGL_RESULT queue_init_hw_and_atomic_bo(struct intel_queue *queue)
+static VK_RESULT queue_init_hw_and_atomic_bo(struct intel_queue *queue)
{
const uint32_t ctx_init_cmd[] = {
/* STATE_SIP */
GEN6_MI_CMD(MI_NOOP),
};
struct intel_bo *bo;
- XGL_RESULT ret;
+ VK_RESULT ret;
if (queue->ring != INTEL_RING_RENDER) {
queue->last_pipeline_select = -1;
queue->atomic_bo = queue_create_bo(queue,
sizeof(uint32_t) * INTEL_QUEUE_ATOMIC_COUNTER_COUNT,
NULL, 0);
- return (queue->atomic_bo) ? XGL_SUCCESS : XGL_ERROR_OUT_OF_GPU_MEMORY;
+ return (queue->atomic_bo) ? VK_SUCCESS : VK_ERROR_OUT_OF_GPU_MEMORY;
}
bo = queue_create_bo(queue,
sizeof(uint32_t) * INTEL_QUEUE_ATOMIC_COUNTER_COUNT,
ctx_init_cmd, sizeof(ctx_init_cmd));
if (!bo)
- return XGL_ERROR_OUT_OF_GPU_MEMORY;
+ return VK_ERROR_OUT_OF_GPU_MEMORY;
ret = queue_submit_bo(queue, bo, sizeof(ctx_init_cmd));
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
intel_bo_unref(bo);
return ret;
}
/* reuse */
queue->atomic_bo = bo;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT queue_submit_cmd_prepare(struct intel_queue *queue,
+static VK_RESULT queue_submit_cmd_prepare(struct intel_queue *queue,
struct intel_cmd *cmd)
{
- if (unlikely(cmd->result != XGL_SUCCESS)) {
- intel_dev_log(cmd->dev, XGL_DBG_MSG_ERROR,
- XGL_VALIDATION_LEVEL_0, XGL_NULL_HANDLE, 0, 0,
+ if (unlikely(cmd->result != VK_SUCCESS)) {
+ intel_dev_log(cmd->dev, VK_DBG_MSG_ERROR,
+ VK_VALIDATION_LEVEL_0, VK_NULL_HANDLE, 0, 0,
"invalid command buffer submitted");
return cmd->result;
}
return queue_select_pipeline(queue, cmd->pipeline_select);
}
-static XGL_RESULT queue_submit_cmd_debug(struct intel_queue *queue,
+static VK_RESULT queue_submit_cmd_debug(struct intel_queue *queue,
struct intel_cmd *cmd)
{
uint32_t active[2], pending[2];
struct intel_bo *bo;
- XGL_GPU_SIZE used;
- XGL_RESULT ret;
+ VK_GPU_SIZE used;
+ VK_RESULT ret;
ret = queue_submit_cmd_prepare(queue, cmd);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
return ret;
if (intel_debug & INTEL_DEBUG_HANG) {
bo = intel_cmd_get_batch(cmd, &used);
ret = queue_submit_bo(queue, bo, used);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
return ret;
if (intel_debug & INTEL_DEBUG_HANG) {
if (intel_debug & INTEL_DEBUG_BATCH)
intel_cmd_decode(cmd, false);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT queue_submit_cmd(struct intel_queue *queue,
+static VK_RESULT queue_submit_cmd(struct intel_queue *queue,
struct intel_cmd *cmd)
{
struct intel_bo *bo;
- XGL_GPU_SIZE used;
- XGL_RESULT ret;
+ VK_GPU_SIZE used;
+ VK_RESULT ret;
ret = queue_submit_cmd_prepare(queue, cmd);
- if (ret == XGL_SUCCESS) {
+ if (ret == VK_SUCCESS) {
bo = intel_cmd_get_batch(cmd, &used);
ret = queue_submit_bo(queue, bo, used);
}
return ret;
}
-XGL_RESULT intel_queue_create(struct intel_dev *dev,
+VK_RESULT intel_queue_create(struct intel_dev *dev,
enum intel_gpu_engine_type engine,
struct intel_queue **queue_ret)
{
struct intel_queue *queue;
enum intel_ring_type ring;
- XGL_FENCE_CREATE_INFO fence_info;
- XGL_RESULT ret;
+ VK_FENCE_CREATE_INFO fence_info;
+ VK_RESULT ret;
switch (engine) {
case INTEL_GPU_ENGINE_3D:
ring = INTEL_RING_RENDER;
break;
default:
- return XGL_ERROR_INVALID_VALUE;
+ return VK_ERROR_INVALID_VALUE;
break;
}
queue = (struct intel_queue *) intel_base_create(&dev->base.handle,
- sizeof(*queue), dev->base.dbg, XGL_DBG_OBJECT_QUEUE, NULL, 0);
+ sizeof(*queue), dev->base.dbg, VK_DBG_OBJECT_QUEUE, NULL, 0);
if (!queue)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
queue->dev = dev;
queue->ring = ring;
- if (queue_init_hw_and_atomic_bo(queue) != XGL_SUCCESS) {
+ if (queue_init_hw_and_atomic_bo(queue) != VK_SUCCESS) {
intel_queue_destroy(queue);
- return XGL_ERROR_INITIALIZATION_FAILED;
+ return VK_ERROR_INITIALIZATION_FAILED;
}
memset(&fence_info, 0, sizeof(fence_info));
- fence_info.sType = XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO;
+ fence_info.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
ret = intel_fence_create(dev, &fence_info, &queue->fence);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
intel_queue_destroy(queue);
return ret;
}
*queue_ret = queue;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_queue_destroy(struct intel_queue *queue)
intel_base_destroy(&queue->base);
}
-XGL_RESULT intel_queue_wait(struct intel_queue *queue, int64_t timeout)
+VK_RESULT intel_queue_wait(struct intel_queue *queue, int64_t timeout)
{
- /* return XGL_SUCCESS instead of XGL_ERROR_UNAVAILABLE */
+ /* return VK_SUCCESS instead of VK_ERROR_UNAVAILABLE */
if (!queue->fence->seqno_bo)
- return XGL_SUCCESS;
+ return VK_SUCCESS;
return intel_fence_wait(queue->fence, timeout);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueAddMemReference(
- XGL_QUEUE queue,
- XGL_GPU_MEMORY mem)
+ICD_EXPORT VK_RESULT VKAPI vkQueueAddMemReference(
+ VK_QUEUE queue,
+ VK_GPU_MEMORY mem)
{
/*
* The winsys maintains the list of memory references. These are ignored
* until we move away from the winsys.
*/
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueRemoveMemReference(
- XGL_QUEUE queue,
- XGL_GPU_MEMORY mem)
+ICD_EXPORT VK_RESULT VKAPI vkQueueRemoveMemReference(
+ VK_QUEUE queue,
+ VK_GPU_MEMORY mem)
{
/*
* The winsys maintains the list of memory references. These are ignored
* until we move away from the winsys.
*/
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueWaitIdle(
- XGL_QUEUE queue_)
+ICD_EXPORT VK_RESULT VKAPI vkQueueWaitIdle(
+ VK_QUEUE queue_)
{
struct intel_queue *queue = intel_queue(queue_);
return intel_queue_wait(queue, -1);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueSubmit(
- XGL_QUEUE queue_,
+ICD_EXPORT VK_RESULT VKAPI vkQueueSubmit(
+ VK_QUEUE queue_,
uint32_t cmdBufferCount,
- const XGL_CMD_BUFFER* pCmdBuffers,
- XGL_FENCE fence_)
+ const VK_CMD_BUFFER* pCmdBuffers,
+ VK_FENCE fence_)
{
struct intel_queue *queue = intel_queue(queue_);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
struct intel_cmd *last_cmd;
uint32_t i;
for (i = 0; i < cmdBufferCount; i++) {
struct intel_cmd *cmd = intel_cmd(pCmdBuffers[i]);
ret = queue_submit_cmd_debug(queue, cmd);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
break;
}
} else {
for (i = 0; i < cmdBufferCount; i++) {
struct intel_cmd *cmd = intel_cmd(pCmdBuffers[i]);
ret = queue_submit_cmd(queue, cmd);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
break;
}
}
last_cmd = intel_cmd(pCmdBuffers[i - 1]);
- if (ret == XGL_SUCCESS) {
+ if (ret == VK_SUCCESS) {
intel_fence_set_seqno(queue->fence,
intel_bo_ref(intel_cmd_get_batch(last_cmd, NULL)));
- if (fence_ != XGL_NULL_HANDLE) {
+ if (fence_ != VK_NULL_HANDLE) {
struct intel_fence *fence = intel_fence(fence_);
intel_fence_copy(fence, queue->fence);
}
return ret;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglOpenSharedSemaphore(
- XGL_DEVICE device,
- const XGL_SEMAPHORE_OPEN_INFO* pOpenInfo,
- XGL_SEMAPHORE* pSemaphore)
+ICD_EXPORT VK_RESULT VKAPI vkOpenSharedSemaphore(
+ VK_DEVICE device,
+ const VK_SEMAPHORE_OPEN_INFO* pOpenInfo,
+ VK_SEMAPHORE* pSemaphore)
{
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateSemaphore(
- XGL_DEVICE device,
- const XGL_SEMAPHORE_CREATE_INFO* pCreateInfo,
- XGL_SEMAPHORE* pSemaphore)
+ICD_EXPORT VK_RESULT VKAPI vkCreateSemaphore(
+ VK_DEVICE device,
+ const VK_SEMAPHORE_CREATE_INFO* pCreateInfo,
+ VK_SEMAPHORE* pSemaphore)
{
/*
* We want to find an unused semaphore register and initialize it. Signal
*
* XXX However, MI_SEMAPHORE_MBOX does not seem to have the flexibility.
*/
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueSignalSemaphore(
- XGL_QUEUE queue,
- XGL_SEMAPHORE semaphore)
+ICD_EXPORT VK_RESULT VKAPI vkQueueSignalSemaphore(
+ VK_QUEUE queue,
+ VK_SEMAPHORE semaphore)
{
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueWaitSemaphore(
- XGL_QUEUE queue,
- XGL_SEMAPHORE semaphore)
+ICD_EXPORT VK_RESULT VKAPI vkQueueWaitSemaphore(
+ VK_QUEUE queue,
+ VK_SEMAPHORE semaphore)
{
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct intel_fence *fence;
};
-static inline struct intel_queue *intel_queue(XGL_QUEUE queue)
+static inline struct intel_queue *intel_queue(VK_QUEUE queue)
{
return (struct intel_queue *) queue;
}
-XGL_RESULT intel_queue_create(struct intel_dev *dev,
+VK_RESULT intel_queue_create(struct intel_dev *dev,
enum intel_gpu_engine_type engine,
struct intel_queue **queue_ret);
void intel_queue_destroy(struct intel_queue *queue);
-XGL_RESULT intel_queue_wait(struct intel_queue *queue, int64_t timeout);
+VK_RESULT intel_queue_wait(struct intel_queue *queue, int64_t timeout);
#endif /* QUEUE_H */
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
/**
* Translate a pipe texture filter to the matching hardware mapfilter.
*/
-static int translate_tex_filter(XGL_TEX_FILTER filter)
+static int translate_tex_filter(VK_TEX_FILTER filter)
{
switch (filter) {
- case XGL_TEX_FILTER_NEAREST: return GEN6_MAPFILTER_NEAREST;
- case XGL_TEX_FILTER_LINEAR: return GEN6_MAPFILTER_LINEAR;
+ case VK_TEX_FILTER_NEAREST: return GEN6_MAPFILTER_NEAREST;
+ case VK_TEX_FILTER_LINEAR: return GEN6_MAPFILTER_LINEAR;
default:
assert(!"unknown tex filter");
return GEN6_MAPFILTER_NEAREST;
}
}
-static int translate_tex_mipmap_mode(XGL_TEX_MIPMAP_MODE mode)
+static int translate_tex_mipmap_mode(VK_TEX_MIPMAP_MODE mode)
{
switch (mode) {
- case XGL_TEX_MIPMAP_NEAREST: return GEN6_MIPFILTER_NEAREST;
- case XGL_TEX_MIPMAP_LINEAR: return GEN6_MIPFILTER_LINEAR;
- case XGL_TEX_MIPMAP_BASE: return GEN6_MIPFILTER_NONE;
+ case VK_TEX_MIPMAP_NEAREST: return GEN6_MIPFILTER_NEAREST;
+ case VK_TEX_MIPMAP_LINEAR: return GEN6_MIPFILTER_LINEAR;
+ case VK_TEX_MIPMAP_BASE: return GEN6_MIPFILTER_NONE;
default:
assert(!"unknown tex mipmap mode");
return GEN6_MIPFILTER_NONE;
}
}
-static int translate_tex_addr(XGL_TEX_ADDRESS addr)
+static int translate_tex_addr(VK_TEX_ADDRESS addr)
{
switch (addr) {
- case XGL_TEX_ADDRESS_WRAP: return GEN6_TEXCOORDMODE_WRAP;
- case XGL_TEX_ADDRESS_MIRROR: return GEN6_TEXCOORDMODE_MIRROR;
- case XGL_TEX_ADDRESS_CLAMP: return GEN6_TEXCOORDMODE_CLAMP;
- case XGL_TEX_ADDRESS_MIRROR_ONCE: return GEN6_TEXCOORDMODE_MIRROR_ONCE;
- case XGL_TEX_ADDRESS_CLAMP_BORDER: return GEN6_TEXCOORDMODE_CLAMP_BORDER;
+ case VK_TEX_ADDRESS_WRAP: return GEN6_TEXCOORDMODE_WRAP;
+ case VK_TEX_ADDRESS_MIRROR: return GEN6_TEXCOORDMODE_MIRROR;
+ case VK_TEX_ADDRESS_CLAMP: return GEN6_TEXCOORDMODE_CLAMP;
+ case VK_TEX_ADDRESS_MIRROR_ONCE: return GEN6_TEXCOORDMODE_MIRROR_ONCE;
+ case VK_TEX_ADDRESS_CLAMP_BORDER: return GEN6_TEXCOORDMODE_CLAMP_BORDER;
default:
assert(!"unknown tex address");
return GEN6_TEXCOORDMODE_WRAP;
}
}
-static int translate_compare_func(XGL_COMPARE_FUNC func)
+static int translate_compare_func(VK_COMPARE_FUNC func)
{
switch (func) {
- case XGL_COMPARE_NEVER: return GEN6_COMPAREFUNCTION_NEVER;
- case XGL_COMPARE_LESS: return GEN6_COMPAREFUNCTION_LESS;
- case XGL_COMPARE_EQUAL: return GEN6_COMPAREFUNCTION_EQUAL;
- case XGL_COMPARE_LESS_EQUAL: return GEN6_COMPAREFUNCTION_LEQUAL;
- case XGL_COMPARE_GREATER: return GEN6_COMPAREFUNCTION_GREATER;
- case XGL_COMPARE_NOT_EQUAL: return GEN6_COMPAREFUNCTION_NOTEQUAL;
- case XGL_COMPARE_GREATER_EQUAL: return GEN6_COMPAREFUNCTION_GEQUAL;
- case XGL_COMPARE_ALWAYS: return GEN6_COMPAREFUNCTION_ALWAYS;
+ case VK_COMPARE_NEVER: return GEN6_COMPAREFUNCTION_NEVER;
+ case VK_COMPARE_LESS: return GEN6_COMPAREFUNCTION_LESS;
+ case VK_COMPARE_EQUAL: return GEN6_COMPAREFUNCTION_EQUAL;
+ case VK_COMPARE_LESS_EQUAL: return GEN6_COMPAREFUNCTION_LEQUAL;
+ case VK_COMPARE_GREATER: return GEN6_COMPAREFUNCTION_GREATER;
+ case VK_COMPARE_NOT_EQUAL: return GEN6_COMPAREFUNCTION_NOTEQUAL;
+ case VK_COMPARE_GREATER_EQUAL: return GEN6_COMPAREFUNCTION_GEQUAL;
+ case VK_COMPARE_ALWAYS: return GEN6_COMPAREFUNCTION_ALWAYS;
default:
assert(!"unknown compare_func");
return GEN6_COMPAREFUNCTION_NEVER;
}
}
-static void translate_border_color(XGL_BORDER_COLOR_TYPE type, float rgba[4])
+static void translate_border_color(VK_BORDER_COLOR_TYPE type, float rgba[4])
{
switch (type) {
- case XGL_BORDER_COLOR_OPAQUE_WHITE:
+ case VK_BORDER_COLOR_OPAQUE_WHITE:
rgba[0] = 1.0;
rgba[1] = 1.0;
rgba[2] = 1.0;
rgba[3] = 1.0;
break;
- case XGL_BORDER_COLOR_TRANSPARENT_BLACK:
+ case VK_BORDER_COLOR_TRANSPARENT_BLACK:
default:
rgba[0] = 0.0;
rgba[1] = 0.0;
rgba[2] = 0.0;
rgba[3] = 0.0;
break;
- case XGL_BORDER_COLOR_OPAQUE_BLACK:
+ case VK_BORDER_COLOR_OPAQUE_BLACK:
rgba[0] = 0.0;
rgba[1] = 0.0;
rgba[2] = 0.0;
static void
sampler_init(struct intel_sampler *sampler,
const struct intel_gpu *gpu,
- const XGL_SAMPLER_CREATE_INFO *info)
+ const VK_SAMPLER_CREATE_INFO *info)
{
int mip_filter, min_filter, mag_filter, max_aniso;
int lod_bias, max_lod, min_lod;
* To achieve our goal, we just need to set MinLod to zero and set
* MagFilter to MinFilter when mipmapping is disabled.
*/
- if (info->mipMode == XGL_TEX_MIPMAP_BASE && min_lod) {
+ if (info->mipMode == VK_TEX_MIPMAP_BASE && min_lod) {
min_lod = 0;
mag_filter = min_filter;
}
intel_sampler_destroy(sampler);
}
-XGL_RESULT intel_sampler_create(struct intel_dev *dev,
- const XGL_SAMPLER_CREATE_INFO *info,
+VK_RESULT intel_sampler_create(struct intel_dev *dev,
+ const VK_SAMPLER_CREATE_INFO *info,
struct intel_sampler **sampler_ret)
{
struct intel_sampler *sampler;
sampler = (struct intel_sampler *) intel_base_create(&dev->base.handle,
- sizeof(*sampler), dev->base.dbg, XGL_DBG_OBJECT_SAMPLER, info, 0);
+ sizeof(*sampler), dev->base.dbg, VK_DBG_OBJECT_SAMPLER, info, 0);
if (!sampler)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
sampler->obj.destroy = sampler_destroy;
*sampler_ret = sampler;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_sampler_destroy(struct intel_sampler *sampler)
intel_base_destroy(&sampler->obj.base);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateSampler(
- XGL_DEVICE device,
- const XGL_SAMPLER_CREATE_INFO* pCreateInfo,
- XGL_SAMPLER* pSampler)
+ICD_EXPORT VK_RESULT VKAPI vkCreateSampler(
+ VK_DEVICE device,
+ const VK_SAMPLER_CREATE_INFO* pCreateInfo,
+ VK_SAMPLER* pSampler)
{
struct intel_dev *dev = intel_dev(device);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
uint32_t cmd[15];
};
-static inline struct intel_sampler *intel_sampler(XGL_SAMPLER sampler)
+static inline struct intel_sampler *intel_sampler(VK_SAMPLER sampler)
{
return (struct intel_sampler *) sampler;
}
return (struct intel_sampler *) obj;
}
-XGL_RESULT intel_sampler_create(struct intel_dev *dev,
- const XGL_SAMPLER_CREATE_INFO *info,
+VK_RESULT intel_sampler_create(struct intel_dev *dev,
+ const VK_SAMPLER_CREATE_INFO *info,
struct intel_sampler **sampler_ret);
void intel_sampler_destroy(struct intel_sampler *sampler);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
intel_base_destroy(&sh->obj.base);
}
-static XGL_RESULT shader_create(struct intel_dev *dev,
- const XGL_SHADER_CREATE_INFO *info,
+static VK_RESULT shader_create(struct intel_dev *dev,
+ const VK_SHADER_CREATE_INFO *info,
struct intel_shader **sh_ret)
{
const struct icd_spv_header *spv =
struct intel_shader *sh;
sh = (struct intel_shader *) intel_base_create(&dev->base.handle,
- sizeof(*sh), dev->base.dbg, XGL_DBG_OBJECT_SHADER, info, 0);
+ sizeof(*sh), dev->base.dbg, VK_DBG_OBJECT_SHADER, info, 0);
if (!sh)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
if (info->codeSize < sizeof(*spv))
- return XGL_ERROR_INVALID_MEMORY_SIZE;
+ return VK_ERROR_INVALID_MEMORY_SIZE;
if (spv->magic != ICD_SPV_MAGIC)
- return XGL_ERROR_BAD_SHADER_CODE;
+ return VK_ERROR_BAD_SHADER_CODE;
sh->ir = shader_create_ir(dev->gpu, info->pCode, info->codeSize);
if (!sh->ir) {
shader_destroy(&sh->obj);
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
}
sh->obj.destroy = shader_destroy;
*sh_ret = sh;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateShader(
- XGL_DEVICE device,
- const XGL_SHADER_CREATE_INFO* pCreateInfo,
- XGL_SHADER* pShader)
+ICD_EXPORT VK_RESULT VKAPI vkCreateShader(
+ VK_DEVICE device,
+ const VK_SHADER_CREATE_INFO* pCreateInfo,
+ VK_SHADER* pShader)
{
struct intel_dev *dev = intel_dev(device);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct intel_ir *ir;
};
-static inline struct intel_shader *intel_shader(XGL_SHADER shader)
+static inline struct intel_shader *intel_shader(VK_SHADER shader)
{
return (struct intel_shader *) shader;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
*max_gby = (float) (center_y + half_len);
}
-static XGL_RESULT
+static VK_RESULT
viewport_state_alloc_cmd(struct intel_dynamic_vp *state,
const struct intel_gpu *gpu,
- const XGL_DYNAMIC_VP_STATE_CREATE_INFO *info)
+ const VK_DYNAMIC_VP_STATE_CREATE_INFO *info)
{
INTEL_GPU_ASSERT(gpu, 6, 7.5);
state->cmd_len += 2 * info->viewportAndScissorCount;
state->cmd = intel_alloc(state, sizeof(uint32_t) * state->cmd_len,
- 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ 0, VK_SYSTEM_ALLOC_INTERNAL);
if (!state->cmd)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT
+static VK_RESULT
viewport_state_init(struct intel_dynamic_vp *state,
const struct intel_gpu *gpu,
- const XGL_DYNAMIC_VP_STATE_CREATE_INFO *info)
+ const VK_DYNAMIC_VP_STATE_CREATE_INFO *info)
{
const uint32_t sf_stride = (intel_gpu_gen(gpu) >= INTEL_GEN(7)) ? 16 : 8;
const uint32_t clip_stride = (intel_gpu_gen(gpu) >= INTEL_GEN(7)) ? 16 : 4;
uint32_t *sf_viewport, *clip_viewport, *cc_viewport, *scissor_rect;
uint32_t i;
- XGL_RESULT ret;
+ VK_RESULT ret;
INTEL_GPU_ASSERT(gpu, 6, 7.5);
ret = viewport_state_alloc_cmd(state, gpu, info);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
return ret;
sf_viewport = state->cmd;
scissor_rect = state->cmd + state->cmd_scissor_rect_pos;
for (i = 0; i < info->viewportAndScissorCount; i++) {
- const XGL_VIEWPORT *viewport = &info->pViewports[i];
+ const VK_VIEWPORT *viewport = &info->pViewports[i];
uint32_t *dw = NULL;
float translate[3], scale[3];
int min_gbx, max_gbx, min_gby, max_gby;
}
for (i = 0; i < info->viewportAndScissorCount; i++) {
- const XGL_RECT *scissor = &info->pScissors[i];
+ const VK_RECT *scissor = &info->pScissors[i];
/* SCISSOR_RECT */
int16_t max_x, max_y;
uint32_t *dw = NULL;
scissor_rect += 2;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
static void viewport_state_destroy(struct intel_obj *obj)
intel_viewport_state_destroy(state);
}
-XGL_RESULT intel_viewport_state_create(struct intel_dev *dev,
- const XGL_DYNAMIC_VP_STATE_CREATE_INFO *info,
+VK_RESULT intel_viewport_state_create(struct intel_dev *dev,
+ const VK_DYNAMIC_VP_STATE_CREATE_INFO *info,
struct intel_dynamic_vp **state_ret)
{
struct intel_dynamic_vp *state;
- XGL_RESULT ret;
+ VK_RESULT ret;
state = (struct intel_dynamic_vp *) intel_base_create(&dev->base.handle,
- sizeof(*state), dev->base.dbg, XGL_DBG_OBJECT_VIEWPORT_STATE,
+ sizeof(*state), dev->base.dbg, VK_DBG_OBJECT_VIEWPORT_STATE,
info, 0);
if (!state)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
state->obj.destroy = viewport_state_destroy;
ret = viewport_state_init(state, dev->gpu, info);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
intel_viewport_state_destroy(state);
return ret;
}
*state_ret = state;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_viewport_state_destroy(struct intel_dynamic_vp *state)
intel_raster_state_destroy(state);
}
-XGL_RESULT intel_raster_state_create(struct intel_dev *dev,
- const XGL_DYNAMIC_RS_STATE_CREATE_INFO *info,
+VK_RESULT intel_raster_state_create(struct intel_dev *dev,
+ const VK_DYNAMIC_RS_STATE_CREATE_INFO *info,
struct intel_dynamic_rs **state_ret)
{
struct intel_dynamic_rs *state;
state = (struct intel_dynamic_rs *) intel_base_create(&dev->base.handle,
- sizeof(*state), dev->base.dbg, XGL_DBG_OBJECT_RASTER_STATE,
+ sizeof(*state), dev->base.dbg, VK_DBG_OBJECT_RASTER_STATE,
info, 0);
if (!state)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
state->obj.destroy = raster_state_destroy;
state->rs_info = *info;
*state_ret = state;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_raster_state_destroy(struct intel_dynamic_rs *state)
intel_blend_state_destroy(state);
}
-XGL_RESULT intel_blend_state_create(struct intel_dev *dev,
- const XGL_DYNAMIC_CB_STATE_CREATE_INFO *info,
+VK_RESULT intel_blend_state_create(struct intel_dev *dev,
+ const VK_DYNAMIC_CB_STATE_CREATE_INFO *info,
struct intel_dynamic_cb **state_ret)
{
struct intel_dynamic_cb *state;
state = (struct intel_dynamic_cb *) intel_base_create(&dev->base.handle,
- sizeof(*state), dev->base.dbg, XGL_DBG_OBJECT_COLOR_BLEND_STATE,
+ sizeof(*state), dev->base.dbg, VK_DBG_OBJECT_COLOR_BLEND_STATE,
info, 0);
if (!state)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
state->obj.destroy = blend_state_destroy;
state->cb_info = *info;
*state_ret = state;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_blend_state_destroy(struct intel_dynamic_cb *state)
intel_ds_state_destroy(state);
}
-XGL_RESULT intel_ds_state_create(struct intel_dev *dev,
- const XGL_DYNAMIC_DS_STATE_CREATE_INFO *info,
+VK_RESULT intel_ds_state_create(struct intel_dev *dev,
+ const VK_DYNAMIC_DS_STATE_CREATE_INFO *info,
struct intel_dynamic_ds **state_ret)
{
struct intel_dynamic_ds *state;
state = (struct intel_dynamic_ds *) intel_base_create(&dev->base.handle,
- sizeof(*state), dev->base.dbg, XGL_DBG_OBJECT_DEPTH_STENCIL_STATE,
+ sizeof(*state), dev->base.dbg, VK_DBG_OBJECT_DEPTH_STENCIL_STATE,
info, 0);
if (!state)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
state->obj.destroy = ds_state_destroy;
*state_ret = state;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_ds_state_destroy(struct intel_dynamic_ds *state)
intel_base_destroy(&state->obj.base);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDynamicViewportState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_VP_STATE_OBJECT* pState)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDynamicViewportState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_VP_STATE_OBJECT* pState)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_dynamic_vp **) pState);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDynamicRasterState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_RS_STATE_OBJECT* pState)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDynamicRasterState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_RS_STATE_OBJECT* pState)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_dynamic_rs **) pState);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDynamicColorBlendState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_CB_STATE_OBJECT* pState)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDynamicColorBlendState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_CB_STATE_OBJECT* pState)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_dynamic_cb **) pState);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDynamicDepthStencilState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_DS_STATE_OBJECT* pState)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDynamicDepthStencilState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_DS_STATE_OBJECT* pState)
{
struct intel_dev *dev = intel_dev(device);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct intel_dynamic_rs {
struct intel_obj obj;
- XGL_DYNAMIC_RS_STATE_CREATE_INFO rs_info;
+ VK_DYNAMIC_RS_STATE_CREATE_INFO rs_info;
};
struct intel_dynamic_cb {
struct intel_obj obj;
- XGL_DYNAMIC_CB_STATE_CREATE_INFO cb_info;
+ VK_DYNAMIC_CB_STATE_CREATE_INFO cb_info;
};
struct intel_dynamic_ds {
struct intel_obj obj;
- XGL_DYNAMIC_DS_STATE_CREATE_INFO ds_info;
+ VK_DYNAMIC_DS_STATE_CREATE_INFO ds_info;
};
-static inline struct intel_dynamic_vp *intel_dynamic_vp(XGL_DYNAMIC_VP_STATE_OBJECT state)
+static inline struct intel_dynamic_vp *intel_dynamic_vp(VK_DYNAMIC_VP_STATE_OBJECT state)
{
return (struct intel_dynamic_vp *) state;
}
return (struct intel_dynamic_vp *) obj;
}
-static inline struct intel_dynamic_rs *intel_dynamic_rs(XGL_DYNAMIC_RS_STATE_OBJECT state)
+static inline struct intel_dynamic_rs *intel_dynamic_rs(VK_DYNAMIC_RS_STATE_OBJECT state)
{
return (struct intel_dynamic_rs *) state;
}
return (struct intel_dynamic_rs *) obj;
}
-static inline struct intel_dynamic_cb *intel_dynamic_cb(XGL_DYNAMIC_CB_STATE_OBJECT state)
+static inline struct intel_dynamic_cb *intel_dynamic_cb(VK_DYNAMIC_CB_STATE_OBJECT state)
{
return (struct intel_dynamic_cb *) state;
}
return (struct intel_dynamic_cb *) obj;
}
-static inline struct intel_dynamic_ds *intel_dynamic_ds(XGL_DYNAMIC_DS_STATE_OBJECT state)
+static inline struct intel_dynamic_ds *intel_dynamic_ds(VK_DYNAMIC_DS_STATE_OBJECT state)
{
return (struct intel_dynamic_ds *) state;
}
return (struct intel_dynamic_ds *) obj;
}
-XGL_RESULT intel_viewport_state_create(struct intel_dev *dev,
- const XGL_DYNAMIC_VP_STATE_CREATE_INFO *info,
+VK_RESULT intel_viewport_state_create(struct intel_dev *dev,
+ const VK_DYNAMIC_VP_STATE_CREATE_INFO *info,
struct intel_dynamic_vp **state_ret);
void intel_viewport_state_destroy(struct intel_dynamic_vp *state);
-XGL_RESULT intel_raster_state_create(struct intel_dev *dev,
- const XGL_DYNAMIC_RS_STATE_CREATE_INFO *info,
+VK_RESULT intel_raster_state_create(struct intel_dev *dev,
+ const VK_DYNAMIC_RS_STATE_CREATE_INFO *info,
struct intel_dynamic_rs **state_ret);
void intel_raster_state_destroy(struct intel_dynamic_rs *state);
-XGL_RESULT intel_blend_state_create(struct intel_dev *dev,
- const XGL_DYNAMIC_CB_STATE_CREATE_INFO *info,
+VK_RESULT intel_blend_state_create(struct intel_dev *dev,
+ const VK_DYNAMIC_CB_STATE_CREATE_INFO *info,
struct intel_dynamic_cb **state_ret);
void intel_blend_state_destroy(struct intel_dynamic_cb *state);
-XGL_RESULT intel_ds_state_create(struct intel_dev *dev,
- const XGL_DYNAMIC_DS_STATE_CREATE_INFO *info,
+VK_RESULT intel_ds_state_create(struct intel_dev *dev,
+ const VK_DYNAMIC_DS_STATE_CREATE_INFO *info,
struct intel_dynamic_ds **state_ret);
void intel_ds_state_destroy(struct intel_dynamic_ds *state);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
static void surface_state_buf_gen7(const struct intel_gpu *gpu,
unsigned offset, unsigned size,
unsigned struct_size,
- XGL_FORMAT elem_format,
+ VK_FORMAT elem_format,
bool is_rt, bool render_cache_rw,
uint32_t dw[8])
{
}
}
-static int img_type_to_view_type(XGL_IMAGE_TYPE type)
+static int img_type_to_view_type(VK_IMAGE_TYPE type)
{
switch (type) {
- case XGL_IMAGE_1D: return XGL_IMAGE_VIEW_1D;
- case XGL_IMAGE_2D: return XGL_IMAGE_VIEW_2D;
- case XGL_IMAGE_3D: return XGL_IMAGE_VIEW_3D;
- default: assert(!"unknown img type"); return XGL_IMAGE_VIEW_1D;
+ case VK_IMAGE_1D: return VK_IMAGE_VIEW_1D;
+ case VK_IMAGE_2D: return VK_IMAGE_VIEW_2D;
+ case VK_IMAGE_3D: return VK_IMAGE_VIEW_3D;
+ default: assert(!"unknown img type"); return VK_IMAGE_VIEW_1D;
}
}
-static int view_type_to_surface_type(XGL_IMAGE_VIEW_TYPE type)
+static int view_type_to_surface_type(VK_IMAGE_VIEW_TYPE type)
{
switch (type) {
- case XGL_IMAGE_VIEW_1D: return GEN6_SURFTYPE_1D;
- case XGL_IMAGE_VIEW_2D: return GEN6_SURFTYPE_2D;
- case XGL_IMAGE_VIEW_3D: return GEN6_SURFTYPE_3D;
- case XGL_IMAGE_VIEW_CUBE: return GEN6_SURFTYPE_CUBE;
+ case VK_IMAGE_VIEW_1D: return GEN6_SURFTYPE_1D;
+ case VK_IMAGE_VIEW_2D: return GEN6_SURFTYPE_2D;
+ case VK_IMAGE_VIEW_3D: return GEN6_SURFTYPE_3D;
+ case VK_IMAGE_VIEW_CUBE: return GEN6_SURFTYPE_CUBE;
default: assert(!"unknown view type"); return GEN6_SURFTYPE_NULL;
}
}
-static int channel_swizzle_to_scs(XGL_CHANNEL_SWIZZLE swizzle)
+static int channel_swizzle_to_scs(VK_CHANNEL_SWIZZLE swizzle)
{
switch (swizzle) {
- case XGL_CHANNEL_SWIZZLE_ZERO: return GEN75_SCS_ZERO;
- case XGL_CHANNEL_SWIZZLE_ONE: return GEN75_SCS_ONE;
- case XGL_CHANNEL_SWIZZLE_R: return GEN75_SCS_RED;
- case XGL_CHANNEL_SWIZZLE_G: return GEN75_SCS_GREEN;
- case XGL_CHANNEL_SWIZZLE_B: return GEN75_SCS_BLUE;
- case XGL_CHANNEL_SWIZZLE_A: return GEN75_SCS_ALPHA;
+ case VK_CHANNEL_SWIZZLE_ZERO: return GEN75_SCS_ZERO;
+ case VK_CHANNEL_SWIZZLE_ONE: return GEN75_SCS_ONE;
+ case VK_CHANNEL_SWIZZLE_R: return GEN75_SCS_RED;
+ case VK_CHANNEL_SWIZZLE_G: return GEN75_SCS_GREEN;
+ case VK_CHANNEL_SWIZZLE_B: return GEN75_SCS_BLUE;
+ case VK_CHANNEL_SWIZZLE_A: return GEN75_SCS_ALPHA;
default: assert(!"unknown swizzle"); return GEN75_SCS_ZERO;
}
}
static void surface_state_tex_gen7(const struct intel_gpu *gpu,
const struct intel_img *img,
- XGL_IMAGE_VIEW_TYPE type,
- XGL_FORMAT format,
+ VK_IMAGE_VIEW_TYPE type,
+ VK_FORMAT format,
unsigned first_level,
unsigned num_levels,
unsigned first_layer,
unsigned num_layers,
- XGL_CHANNEL_MAPPING swizzles,
+ VK_CHANNEL_MAPPING swizzles,
bool is_rt,
uint32_t dw[8])
{
width = img->layout.width0;
height = img->layout.height0;
- depth = (type == XGL_IMAGE_VIEW_3D) ?
+ depth = (type == VK_IMAGE_VIEW_3D) ?
img->depth : num_layers;
pitch = img->layout.bo_stride;
channel_swizzle_to_scs(swizzles.b) << GEN75_SURFACE_DW7_SCS_B__SHIFT |
channel_swizzle_to_scs(swizzles.a) << GEN75_SURFACE_DW7_SCS_A__SHIFT;
} else {
- assert(swizzles.r == XGL_CHANNEL_SWIZZLE_R &&
- swizzles.g == XGL_CHANNEL_SWIZZLE_G &&
- swizzles.b == XGL_CHANNEL_SWIZZLE_B &&
- swizzles.a == XGL_CHANNEL_SWIZZLE_A);
+ assert(swizzles.r == VK_CHANNEL_SWIZZLE_R &&
+ swizzles.g == VK_CHANNEL_SWIZZLE_G &&
+ swizzles.b == VK_CHANNEL_SWIZZLE_B &&
+ swizzles.a == VK_CHANNEL_SWIZZLE_A);
}
}
static void surface_state_buf_gen6(const struct intel_gpu *gpu,
unsigned offset, unsigned size,
unsigned struct_size,
- XGL_FORMAT elem_format,
+ VK_FORMAT elem_format,
bool is_rt, bool render_cache_rw,
uint32_t dw[6])
{
static void surface_state_tex_gen6(const struct intel_gpu *gpu,
const struct intel_img *img,
- XGL_IMAGE_VIEW_TYPE type,
- XGL_FORMAT format,
+ VK_IMAGE_VIEW_TYPE type,
+ VK_FORMAT format,
unsigned first_level,
unsigned num_levels,
unsigned first_layer,
width = img->layout.width0;
height = img->layout.height0;
- depth = (type == XGL_IMAGE_VIEW_3D) ?
+ depth = (type == VK_IMAGE_VIEW_3D) ?
img->depth : num_layers;
pitch = img->layout.bo_stride;
static void
ds_init_info(const struct intel_gpu *gpu,
const struct intel_img *img,
- XGL_FORMAT format, unsigned level,
+ VK_FORMAT format, unsigned level,
unsigned first_layer, unsigned num_layers,
struct ds_surface_info *info)
{
* As for GEN7+, separate_stencil is always true.
*/
switch (format) {
- case XGL_FMT_D16_UNORM:
+ case VK_FMT_D16_UNORM:
info->format = GEN6_ZFORMAT_D16_UNORM;
break;
- case XGL_FMT_D32_SFLOAT:
+ case VK_FMT_D32_SFLOAT:
info->format = GEN6_ZFORMAT_D32_FLOAT;
break;
- case XGL_FMT_D32_SFLOAT_S8_UINT:
+ case VK_FMT_D32_SFLOAT_S8_UINT:
info->format = (separate_stencil) ?
GEN6_ZFORMAT_D32_FLOAT :
GEN6_ZFORMAT_D32_FLOAT_S8X24_UINT;
break;
- case XGL_FMT_S8_UINT:
+ case VK_FMT_S8_UINT:
if (separate_stencil) {
info->format = GEN6_ZFORMAT_D32_FLOAT;
break;
break;
}
- if (format != XGL_FMT_S8_UINT)
+ if (format != VK_FMT_S8_UINT)
info->zs.stride = img->layout.bo_stride;
if (img->s8_layout) {
intel_layout_pos_to_mem(img->s8_layout, x, y, &x, &y);
info->stencil.offset = intel_layout_mem_to_raw(img->s8_layout, x, y);
}
- } else if (format == XGL_FMT_S8_UINT) {
+ } else if (format == VK_FMT_S8_UINT) {
info->stencil.stride = img->layout.bo_stride * 2;
}
info->width = img->layout.width0;
info->height = img->layout.height0;
- info->depth = (img->type == XGL_IMAGE_3D) ?
+ info->depth = (img->type == VK_IMAGE_3D) ?
img->depth : num_layers;
info->lod = level;
static void ds_view_init(struct intel_ds_view *view,
const struct intel_gpu *gpu,
const struct intel_img *img,
- XGL_FORMAT format, unsigned level,
+ VK_FORMAT format, unsigned level,
unsigned first_layer, unsigned num_layers)
{
const int max_2d_size U_ASSERT_ONLY =
intel_buf_view_destroy(view);
}
-XGL_RESULT intel_buf_view_create(struct intel_dev *dev,
- const XGL_BUFFER_VIEW_CREATE_INFO *info,
+VK_RESULT intel_buf_view_create(struct intel_dev *dev,
+ const VK_BUFFER_VIEW_CREATE_INFO *info,
struct intel_buf_view **view_ret)
{
struct intel_buf *buf = intel_buf(info->buffer);
-    const bool will_write = (buf->usage |
-        (XGL_BUFFER_USAGE_SHADER_ACCESS_WRITE_BIT &
-         XGL_BUFFER_USAGE_SHADER_ACCESS_ATOMIC_BIT));
-    XGL_FORMAT format;
-    XGL_GPU_SIZE stride;
+    const bool will_write = (buf->usage &
+        (VK_BUFFER_USAGE_SHADER_ACCESS_WRITE_BIT |
+         VK_BUFFER_USAGE_SHADER_ACCESS_ATOMIC_BIT));
+    VK_FORMAT format;
+    VK_GPU_SIZE stride;
uint32_t *cmd;
struct intel_buf_view *view;
int i;
view = (struct intel_buf_view *) intel_base_create(&dev->base.handle,
- sizeof(*view), dev->base.dbg, XGL_DBG_OBJECT_BUFFER_VIEW,
+ sizeof(*view), dev->base.dbg, VK_DBG_OBJECT_BUFFER_VIEW,
info, 0);
if (!view)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
view->obj.destroy = buf_view_destroy;
/*
* The compiler expects uniform buffers to have pitch of
* 4 for fragment shaders, but 16 for other stages. The format
- * must be XGL_FMT_R32G32B32A32_SFLOAT.
+ * must be VK_FMT_R32G32B32A32_SFLOAT.
*/
- if (info->viewType == XGL_BUFFER_VIEW_RAW) {
- format = XGL_FMT_R32G32B32A32_SFLOAT;
+ if (info->viewType == VK_BUFFER_VIEW_RAW) {
+ format = VK_FMT_R32G32B32A32_SFLOAT;
stride = 16;
} else {
format = info->format;
}
/* switch to view->fs_cmd */
- if (info->viewType == XGL_BUFFER_VIEW_RAW) {
+ if (info->viewType == VK_BUFFER_VIEW_RAW) {
cmd = view->fs_cmd;
stride = 4;
} else {
*view_ret = view;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_buf_view_destroy(struct intel_buf_view *view)
intel_img_view_destroy(view);
}
-XGL_RESULT intel_img_view_create(struct intel_dev *dev,
- const XGL_IMAGE_VIEW_CREATE_INFO *info,
+VK_RESULT intel_img_view_create(struct intel_dev *dev,
+ const VK_IMAGE_VIEW_CREATE_INFO *info,
struct intel_img_view **view_ret)
{
struct intel_img *img = intel_img(info->image);
struct intel_img_view *view;
uint32_t mip_levels, array_size;
- XGL_CHANNEL_MAPPING state_swizzles;
+ VK_CHANNEL_MAPPING state_swizzles;
if (info->subresourceRange.baseMipLevel >= img->mip_levels ||
info->subresourceRange.baseArraySlice >= img->array_size ||
!info->subresourceRange.mipLevels ||
!info->subresourceRange.arraySize)
- return XGL_ERROR_INVALID_VALUE;
+ return VK_ERROR_INVALID_VALUE;
mip_levels = info->subresourceRange.mipLevels;
if (mip_levels > img->mip_levels - info->subresourceRange.baseMipLevel)
array_size = img->array_size - info->subresourceRange.baseArraySlice;
view = (struct intel_img_view *) intel_base_create(&dev->base.handle,
- sizeof(*view), dev->base.dbg, XGL_DBG_OBJECT_IMAGE_VIEW, info, 0);
+ sizeof(*view), dev->base.dbg, VK_DBG_OBJECT_IMAGE_VIEW, info, 0);
if (!view)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
view->obj.destroy = img_view_destroy;
if (intel_gpu_gen(dev->gpu) >= INTEL_GEN(7.5)) {
state_swizzles = info->channels;
- view->shader_swizzles.r = XGL_CHANNEL_SWIZZLE_R;
- view->shader_swizzles.g = XGL_CHANNEL_SWIZZLE_G;
- view->shader_swizzles.b = XGL_CHANNEL_SWIZZLE_B;
- view->shader_swizzles.a = XGL_CHANNEL_SWIZZLE_A;
+ view->shader_swizzles.r = VK_CHANNEL_SWIZZLE_R;
+ view->shader_swizzles.g = VK_CHANNEL_SWIZZLE_G;
+ view->shader_swizzles.b = VK_CHANNEL_SWIZZLE_B;
+ view->shader_swizzles.a = VK_CHANNEL_SWIZZLE_A;
} else {
- state_swizzles.r = XGL_CHANNEL_SWIZZLE_R;
- state_swizzles.g = XGL_CHANNEL_SWIZZLE_G;
- state_swizzles.b = XGL_CHANNEL_SWIZZLE_B;
- state_swizzles.a = XGL_CHANNEL_SWIZZLE_A;
+ state_swizzles.r = VK_CHANNEL_SWIZZLE_R;
+ state_swizzles.g = VK_CHANNEL_SWIZZLE_G;
+ state_swizzles.b = VK_CHANNEL_SWIZZLE_B;
+ state_swizzles.a = VK_CHANNEL_SWIZZLE_A;
view->shader_swizzles = info->channels;
}
/* shader_swizzles is ignored by the compiler */
- if (view->shader_swizzles.r != XGL_CHANNEL_SWIZZLE_R ||
- view->shader_swizzles.g != XGL_CHANNEL_SWIZZLE_G ||
- view->shader_swizzles.b != XGL_CHANNEL_SWIZZLE_B ||
- view->shader_swizzles.a != XGL_CHANNEL_SWIZZLE_A) {
- intel_dev_log(dev, XGL_DBG_MSG_WARNING,
- XGL_VALIDATION_LEVEL_0, XGL_NULL_HANDLE, 0, 0,
+ if (view->shader_swizzles.r != VK_CHANNEL_SWIZZLE_R ||
+ view->shader_swizzles.g != VK_CHANNEL_SWIZZLE_G ||
+ view->shader_swizzles.b != VK_CHANNEL_SWIZZLE_B ||
+ view->shader_swizzles.a != VK_CHANNEL_SWIZZLE_A) {
+ intel_dev_log(dev, VK_DBG_MSG_WARNING,
+ VK_VALIDATION_LEVEL_0, VK_NULL_HANDLE, 0, 0,
"image data swizzling is ignored");
}
*view_ret = view;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_img_view_destroy(struct intel_img_view *view)
intel_rt_view_destroy(view);
}
-XGL_RESULT intel_rt_view_create(struct intel_dev *dev,
- const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO *info,
+VK_RESULT intel_rt_view_create(struct intel_dev *dev,
+ const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO *info,
struct intel_rt_view **view_ret)
{
- static const XGL_CHANNEL_MAPPING identity_channel_mapping = {
- .r = XGL_CHANNEL_SWIZZLE_R,
- .g = XGL_CHANNEL_SWIZZLE_G,
- .b = XGL_CHANNEL_SWIZZLE_B,
- .a = XGL_CHANNEL_SWIZZLE_A,
+ static const VK_CHANNEL_MAPPING identity_channel_mapping = {
+ .r = VK_CHANNEL_SWIZZLE_R,
+ .g = VK_CHANNEL_SWIZZLE_G,
+ .b = VK_CHANNEL_SWIZZLE_B,
+ .a = VK_CHANNEL_SWIZZLE_A,
};
struct intel_img *img = intel_img(info->image);
struct intel_rt_view *view;
view = (struct intel_rt_view *) intel_base_create(&dev->base.handle,
- sizeof(*view), dev->base.dbg, XGL_DBG_OBJECT_COLOR_TARGET_VIEW,
+ sizeof(*view), dev->base.dbg, VK_DBG_OBJECT_COLOR_TARGET_VIEW,
info, 0);
if (!view)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
view->obj.destroy = rt_view_destroy;
*view_ret = view;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_rt_view_destroy(struct intel_rt_view *view)
intel_ds_view_destroy(view);
}
-XGL_RESULT intel_ds_view_create(struct intel_dev *dev,
- const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO *info,
+VK_RESULT intel_ds_view_create(struct intel_dev *dev,
+ const VK_DEPTH_STENCIL_VIEW_CREATE_INFO *info,
struct intel_ds_view **view_ret)
{
struct intel_img *img = intel_img(info->image);
struct intel_ds_view *view;
view = (struct intel_ds_view *) intel_base_create(&dev->base.handle,
- sizeof(*view), dev->base.dbg, XGL_DBG_OBJECT_DEPTH_STENCIL_VIEW,
+ sizeof(*view), dev->base.dbg, VK_DBG_OBJECT_DEPTH_STENCIL_VIEW,
info, 0);
if (!view)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
view->obj.destroy = ds_view_destroy;
*view_ret = view;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_ds_view_destroy(struct intel_ds_view *view)
intel_base_destroy(&view->obj.base);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateBufferView(
- XGL_DEVICE device,
- const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo,
- XGL_BUFFER_VIEW* pView)
+ICD_EXPORT VK_RESULT VKAPI vkCreateBufferView(
+ VK_DEVICE device,
+ const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo,
+ VK_BUFFER_VIEW* pView)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_buf_view **) pView);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateImageView(
- XGL_DEVICE device,
- const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
- XGL_IMAGE_VIEW* pView)
+ICD_EXPORT VK_RESULT VKAPI vkCreateImageView(
+ VK_DEVICE device,
+ const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
+ VK_IMAGE_VIEW* pView)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_img_view **) pView);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateColorAttachmentView(
- XGL_DEVICE device,
- const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo,
- XGL_COLOR_ATTACHMENT_VIEW* pView)
+ICD_EXPORT VK_RESULT VKAPI vkCreateColorAttachmentView(
+ VK_DEVICE device,
+ const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo,
+ VK_COLOR_ATTACHMENT_VIEW* pView)
{
struct intel_dev *dev = intel_dev(device);
(struct intel_rt_view **) pView);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDepthStencilView(
- XGL_DEVICE device,
- const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo,
- XGL_DEPTH_STENCIL_VIEW* pView)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDepthStencilView(
+ VK_DEVICE device,
+ const VK_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo,
+ VK_DEPTH_STENCIL_VIEW* pView)
{
struct intel_dev *dev = intel_dev(device);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
struct intel_img *img;
float min_lod;
- XGL_CHANNEL_MAPPING shader_swizzles;
+ VK_CHANNEL_MAPPING shader_swizzles;
/* SURFACE_STATE */
uint32_t cmd[8];
bool has_hiz;
};
-static inline struct intel_buf_view *intel_buf_view(XGL_BUFFER_VIEW view)
+static inline struct intel_buf_view *intel_buf_view(VK_BUFFER_VIEW view)
{
return (struct intel_buf_view *) view;
}
return (struct intel_buf_view *) obj;
}
-static inline struct intel_img_view *intel_img_view(XGL_IMAGE_VIEW view)
+static inline struct intel_img_view *intel_img_view(VK_IMAGE_VIEW view)
{
return (struct intel_img_view *) view;
}
return (struct intel_img_view *) obj;
}
-static inline struct intel_rt_view *intel_rt_view(XGL_COLOR_ATTACHMENT_VIEW view)
+static inline struct intel_rt_view *intel_rt_view(VK_COLOR_ATTACHMENT_VIEW view)
{
return (struct intel_rt_view *) view;
}
return (struct intel_rt_view *) obj;
}
-static inline struct intel_ds_view *intel_ds_view(XGL_DEPTH_STENCIL_VIEW view)
+static inline struct intel_ds_view *intel_ds_view(VK_DEPTH_STENCIL_VIEW view)
{
return (struct intel_ds_view *) view;
}
void intel_null_view_init(struct intel_null_view *view,
struct intel_dev *dev);
-XGL_RESULT intel_buf_view_create(struct intel_dev *dev,
- const XGL_BUFFER_VIEW_CREATE_INFO *info,
+VK_RESULT intel_buf_view_create(struct intel_dev *dev,
+ const VK_BUFFER_VIEW_CREATE_INFO *info,
struct intel_buf_view **view_ret);
void intel_buf_view_destroy(struct intel_buf_view *view);
-XGL_RESULT intel_img_view_create(struct intel_dev *dev,
- const XGL_IMAGE_VIEW_CREATE_INFO *info,
+VK_RESULT intel_img_view_create(struct intel_dev *dev,
+ const VK_IMAGE_VIEW_CREATE_INFO *info,
struct intel_img_view **view_ret);
void intel_img_view_destroy(struct intel_img_view *view);
-XGL_RESULT intel_rt_view_create(struct intel_dev *dev,
- const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO *info,
+VK_RESULT intel_rt_view_create(struct intel_dev *dev,
+ const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO *info,
struct intel_rt_view **view_ret);
void intel_rt_view_destroy(struct intel_rt_view *view);
-XGL_RESULT intel_ds_view_create(struct intel_dev *dev,
- const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO *info,
+VK_RESULT intel_ds_view_create(struct intel_dev *dev,
+ const VK_DEPTH_STENCIL_VIEW_CREATE_INFO *info,
struct intel_ds_view **view_ret);
void intel_ds_view_destroy(struct intel_ds_view *view);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2015 LunarG, Inc.
*
struct intel_gpu;
struct intel_img;
-XGL_RESULT intel_wsi_gpu_get_info(struct intel_gpu *gpu,
- XGL_PHYSICAL_GPU_INFO_TYPE type,
+VK_RESULT intel_wsi_gpu_get_info(struct intel_gpu *gpu,
+ VK_PHYSICAL_GPU_INFO_TYPE type,
size_t *size, void *data);
void intel_wsi_gpu_cleanup(struct intel_gpu *gpu);
-XGL_RESULT intel_wsi_img_init(struct intel_img *img);
+VK_RESULT intel_wsi_img_init(struct intel_img *img);
void intel_wsi_img_cleanup(struct intel_img *img);
-XGL_RESULT intel_wsi_fence_init(struct intel_fence *fence);
+VK_RESULT intel_wsi_fence_init(struct intel_fence *fence);
void intel_wsi_fence_cleanup(struct intel_fence *fence);
void intel_wsi_fence_copy(struct intel_fence *fence,
const struct intel_fence *src);
-XGL_RESULT intel_wsi_fence_wait(struct intel_fence *fence,
+VK_RESULT intel_wsi_fence_wait(struct intel_fence *fence,
int64_t timeout_ns);
#endif /* WSI_H */
#include "wsi.h"
-XGL_RESULT intel_wsi_gpu_get_info(struct intel_gpu *gpu,
- XGL_PHYSICAL_GPU_INFO_TYPE type,
- size_t *size, void *data)
+VK_RESULT intel_wsi_gpu_get_info(struct intel_gpu *gpu,
+ VK_PHYSICAL_GPU_INFO_TYPE type,
+ size_t *size, void *data)
{
- return XGL_ERROR_INVALID_VALUE;
+ return VK_ERROR_INVALID_VALUE;
}
void intel_wsi_gpu_cleanup(struct intel_gpu *gpu)
{
}
-XGL_RESULT intel_wsi_img_init(struct intel_img *img)
+VK_RESULT intel_wsi_img_init(struct intel_img *img)
{
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_wsi_img_cleanup(struct intel_img *img)
{
}
-XGL_RESULT intel_wsi_fence_init(struct intel_fence *fence)
+VK_RESULT intel_wsi_fence_init(struct intel_fence *fence)
{
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_wsi_fence_cleanup(struct intel_fence *fence)
{
}
-XGL_RESULT intel_wsi_fence_wait(struct intel_fence *fence,
- int64_t timeout_ns)
+VK_RESULT intel_wsi_fence_wait(struct intel_fence *fence,
+ int64_t timeout_ns)
{
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglWsiX11AssociateConnection(
- XGL_PHYSICAL_GPU gpu_,
- const XGL_WSI_X11_CONNECTION_INFO* pConnectionInfo)
+ICD_EXPORT VK_RESULT VKAPI vkWsiX11AssociateConnection(
+ VK_PHYSICAL_GPU gpu_,
+ const VK_WSI_X11_CONNECTION_INFO* pConnectionInfo)
{
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglWsiX11GetMSC(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkWsiX11GetMSC(
+ VK_DEVICE device,
xcb_window_t window,
xcb_randr_crtc_t crtc,
uint64_t * pMsc)
{
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglWsiX11CreatePresentableImage(
- XGL_DEVICE device,
- const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo,
- XGL_IMAGE* pImage,
- XGL_GPU_MEMORY* pMem)
+ICD_EXPORT VK_RESULT VKAPI vkWsiX11CreatePresentableImage(
+ VK_DEVICE device,
+ const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo,
+ VK_IMAGE* pImage,
+ VK_GPU_MEMORY* pMem)
{
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglWsiX11QueuePresent(
- XGL_QUEUE queue_,
- const XGL_WSI_X11_PRESENT_INFO* pPresentInfo,
- XGL_FENCE fence_)
+ICD_EXPORT VK_RESULT VKAPI vkWsiX11QueuePresent(
+ VK_QUEUE queue_,
+ const VK_WSI_X11_PRESENT_INFO* pPresentInfo,
+ VK_FENCE fence_)
{
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
uint32_t connector_id;
char name[32];
- XGL_EXTENT2D physical_dimension;
- XGL_EXTENT2D physical_resolution;
+ VK_EXTENT2D physical_dimension;
+ VK_EXTENT2D physical_resolution;
drmModeModeInfoPtr modes;
uint32_t mode_count;
};
static bool x11_is_format_presentable(const struct intel_dev *dev,
- XGL_FORMAT format)
+ VK_FORMAT format)
{
/* this is what DDX expects */
switch (format) {
- case XGL_FMT_B5G6R5_UNORM:
- case XGL_FMT_B8G8R8A8_UNORM:
- case XGL_FMT_B8G8R8A8_SRGB:
+ case VK_FMT_B5G6R5_UNORM:
+ case VK_FMT_B8G8R8A8_UNORM:
+ case VK_FMT_B8G8R8A8_SRGB:
return true;
default:
return false;
/**
* Send a PresentSelectInput to select interested events.
*/
-static XGL_RESULT x11_swap_chain_present_select_input(struct intel_x11_swap_chain *sc)
+static VK_RESULT x11_swap_chain_present_select_input(struct intel_x11_swap_chain *sc)
{
xcb_void_cookie_t cookie;
xcb_generic_error_t *error;
error = xcb_request_check(sc->c, cookie);
if (error) {
free(error);
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT wsi_x11_dri3_pixmap_from_buffer(struct intel_wsi_x11 *x11,
- struct intel_dev *dev,
- struct intel_img *img,
- struct intel_mem *mem)
+static VK_RESULT wsi_x11_dri3_pixmap_from_buffer(struct intel_wsi_x11 *x11,
+ struct intel_dev *dev,
+ struct intel_img *img,
+ struct intel_mem *mem)
{
struct intel_x11_img_data *data =
(struct intel_x11_img_data *) img->wsi_data;
data->prime_fd = x11_export_prime_fd(dev, mem->bo, &img->layout);
if (data->prime_fd < 0)
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
data->pixmap = x11_dri3_pixmap_from_buffer(x11->c, x11->root,
x11->root_depth, data->prime_fd, &img->layout);
data->mem = mem;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
/**
* Create a presentable image.
*/
-static XGL_RESULT wsi_x11_img_create(struct intel_wsi_x11 *x11,
- struct intel_dev *dev,
- const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO *info,
- struct intel_img **img_ret)
+static VK_RESULT wsi_x11_img_create(struct intel_wsi_x11 *x11,
+ struct intel_dev *dev,
+ const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO *info,
+ struct intel_img **img_ret)
{
- XGL_IMAGE_CREATE_INFO img_info;
- XGL_MEMORY_ALLOC_INFO mem_info;
+ VK_IMAGE_CREATE_INFO img_info;
+ VK_MEMORY_ALLOC_INFO mem_info;
struct intel_img *img;
struct intel_mem *mem;
- XGL_RESULT ret;
+ VK_RESULT ret;
if (!x11_is_format_presentable(dev, info->format)) {
- intel_dev_log(dev, XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0,
- XGL_NULL_HANDLE, 0, 0, "invalid presentable image format");
- return XGL_ERROR_INVALID_VALUE;
+ intel_dev_log(dev, VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0,
+ VK_NULL_HANDLE, 0, 0, "invalid presentable image format");
+ return VK_ERROR_INVALID_VALUE;
}
/* create image */
memset(&img_info, 0, sizeof(img_info));
- img_info.sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
- img_info.imageType = XGL_IMAGE_2D;
+ img_info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
+ img_info.imageType = VK_IMAGE_2D;
img_info.format = info->format;
img_info.extent.width = info->extent.width;
img_info.extent.height = info->extent.height;
img_info.mipLevels = 1;
img_info.arraySize = 1;
img_info.samples = 1;
- img_info.tiling = XGL_OPTIMAL_TILING;
+ img_info.tiling = VK_OPTIMAL_TILING;
img_info.usage = info->usage;
img_info.flags = 0;
ret = intel_img_create(dev, &img_info, true, &img);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
return ret;
/* allocate memory */
memset(&mem_info, 0, sizeof(mem_info));
- mem_info.sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
+ mem_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
mem_info.allocationSize = img->total_size;
mem_info.memProps = 0;
- mem_info.memType = XGL_MEMORY_TYPE_IMAGE;
- mem_info.memPriority = XGL_MEMORY_PRIORITY_HIGH;
+ mem_info.memType = VK_MEMORY_TYPE_IMAGE;
+ mem_info.memPriority = VK_MEMORY_PRIORITY_HIGH;
ret = intel_mem_alloc(dev, &mem_info, &mem);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
intel_img_destroy(img);
return ret;
}
ret = wsi_x11_dri3_pixmap_from_buffer(x11, dev, img, mem);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
intel_mem_free(mem);
intel_img_destroy(img);
return ret;
*img_ret = img;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
/**
* Send a PresentPixmap.
*/
-static XGL_RESULT x11_swap_chain_present_pixmap(struct intel_x11_swap_chain *sc,
- const XGL_WSI_X11_PRESENT_INFO *info)
+static VK_RESULT x11_swap_chain_present_pixmap(struct intel_x11_swap_chain *sc,
+ const VK_WSI_X11_PRESENT_INFO *info)
{
struct intel_img *img = intel_img(info->srcImage);
struct intel_x11_img_data *data =
err = xcb_request_check(sc->c, cookie);
if (err) {
free(err);
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
/**
}
}
-static XGL_RESULT x11_swap_chain_wait(struct intel_x11_swap_chain *sc,
+static VK_RESULT x11_swap_chain_wait(struct intel_x11_swap_chain *sc,
uint32_t serial, int64_t timeout)
{
const bool wait = (timeout != 0);
ev = (xcb_present_generic_event_t *)
xcb_wait_for_special_event(sc->c, sc->present_special_event);
if (!ev)
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
} else {
ev = (xcb_present_generic_event_t *)
xcb_poll_for_special_event(sc->c, sc->present_special_event);
if (!ev)
- return XGL_NOT_READY;
+ return VK_NOT_READY;
}
x11_swap_chain_present_event(sc, ev);
free(ev);
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
static void x11_swap_chain_destroy(struct intel_x11_swap_chain *sc)
}
static struct intel_wsi_x11 *wsi_x11_create(struct intel_gpu *gpu,
- const XGL_WSI_X11_CONNECTION_INFO *info)
+ const VK_WSI_X11_CONNECTION_INFO *info)
{
struct intel_wsi_x11 *x11;
int depth, fd;
return NULL;
}
- x11 = intel_alloc(gpu, sizeof(*x11), 0, XGL_SYSTEM_ALLOC_API_OBJECT);
+ x11 = intel_alloc(gpu, sizeof(*x11), 0, VK_SYSTEM_ALLOC_API_OBJECT);
if (!x11)
return NULL;
memset(x11, 0, sizeof(*x11));
- /* there is no XGL_DBG_OBJECT_WSI_DISPLAY */
- intel_handle_init(&x11->handle, XGL_DBG_OBJECT_UNKNOWN, gpu->handle.icd);
+ /* there is no VK_DBG_OBJECT_WSI_DISPLAY */
+ intel_handle_init(&x11->handle, VK_DBG_OBJECT_UNKNOWN, gpu->handle.icd);
x11->c = info->pConnection;
x11->root = info->root;
struct intel_wsi_x11 *x11 = (struct intel_wsi_x11 *) dev->gpu->wsi_data;
struct intel_x11_swap_chain *sc;
- sc = intel_alloc(dev, sizeof(*sc), 0, XGL_SYSTEM_ALLOC_API_OBJECT);
+ sc = intel_alloc(dev, sizeof(*sc), 0, VK_SYSTEM_ALLOC_API_OBJECT);
if (!sc)
return NULL;
memset(sc, 0, sizeof(*sc));
- /* there is no XGL_DBG_OBJECT_WSI_SWAP_CHAIN */
- intel_handle_init(&sc->handle, XGL_DBG_OBJECT_UNKNOWN,
+ /* there is no VK_DBG_OBJECT_WSI_SWAP_CHAIN */
+ intel_handle_init(&sc->handle, VK_DBG_OBJECT_UNKNOWN,
dev->base.handle.icd);
sc->c = x11->c;
sc->window = window;
- if (x11_swap_chain_present_select_input(sc) != XGL_SUCCESS) {
+ if (x11_swap_chain_present_select_input(sc) != VK_SUCCESS) {
intel_free(dev, sc);
return NULL;
}
return sc;
}
-static XGL_RESULT intel_wsi_gpu_init(struct intel_gpu *gpu,
- const XGL_WSI_X11_CONNECTION_INFO *info)
+static VK_RESULT intel_wsi_gpu_init(struct intel_gpu *gpu,
+ const VK_WSI_X11_CONNECTION_INFO *info)
{
struct intel_wsi_x11 *x11;
x11 = wsi_x11_create(gpu, info);
if (!x11)
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
gpu->wsi_data = x11;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
static void x11_display_init_modes(struct intel_x11_display *dpy,
return;
dpy->modes = intel_alloc(dpy, sizeof(dpy->modes[0]) * conn->count_modes,
- 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ 0, VK_SYSTEM_ALLOC_INTERNAL);
if (!dpy->modes)
return;
struct intel_x11_display *dpy;
drmModeConnectorPtr conn;
- dpy = intel_alloc(gpu, sizeof(*dpy), 0, XGL_SYSTEM_ALLOC_API_OBJECT);
+ dpy = intel_alloc(gpu, sizeof(*dpy), 0, VK_SYSTEM_ALLOC_API_OBJECT);
if (!dpy)
return NULL;
memset(dpy, 0, sizeof(*dpy));
- /* there is no XGL_DBG_OBJECT_WSI_DISPLAY */
- intel_handle_init(&dpy->handle, XGL_DBG_OBJECT_UNKNOWN, gpu->handle.icd);
+ /* there is no VK_DBG_OBJECT_WSI_DISPLAY */
+ intel_handle_init(&dpy->handle, VK_DBG_OBJECT_UNKNOWN, gpu->handle.icd);
dpy->fd = fd;
dpy->connector_id = connector_id;
return;
displays = intel_alloc(gpu, sizeof(*displays) * res->count_connectors,
- 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ 0, VK_SYSTEM_ALLOC_INTERNAL);
if (!displays) {
drmModeFreeResources(res);
return;
gpu->display_count = i;
}
-XGL_RESULT intel_wsi_gpu_get_info(struct intel_gpu *gpu,
- XGL_PHYSICAL_GPU_INFO_TYPE type,
+VK_RESULT intel_wsi_gpu_get_info(struct intel_gpu *gpu,
+ VK_PHYSICAL_GPU_INFO_TYPE type,
size_t *size, void *data)
{
if (false)
x11_display_scan(gpu);
- return XGL_ERROR_INVALID_VALUE;
+ return VK_ERROR_INVALID_VALUE;
}
void intel_wsi_gpu_cleanup(struct intel_gpu *gpu)
}
}
-XGL_RESULT intel_wsi_img_init(struct intel_img *img)
+VK_RESULT intel_wsi_img_init(struct intel_img *img)
{
struct intel_x11_img_data *data;
- data = intel_alloc(img, sizeof(*data), 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ data = intel_alloc(img, sizeof(*data), 0, VK_SYSTEM_ALLOC_INTERNAL);
if (!data)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
memset(data, 0, sizeof(*data));
assert(!img->wsi_data);
img->wsi_data = data;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_wsi_img_cleanup(struct intel_img *img)
intel_free(img, img->wsi_data);
}
-XGL_RESULT intel_wsi_fence_init(struct intel_fence *fence)
+VK_RESULT intel_wsi_fence_init(struct intel_fence *fence)
{
struct intel_x11_fence_data *data;
- data = intel_alloc(fence, sizeof(*data), 0, XGL_SYSTEM_ALLOC_INTERNAL);
+ data = intel_alloc(fence, sizeof(*data), 0, VK_SYSTEM_ALLOC_INTERNAL);
if (!data)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
memset(data, 0, sizeof(*data));
assert(!fence->wsi_data);
fence->wsi_data = data;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void intel_wsi_fence_cleanup(struct intel_fence *fence)
sizeof(struct intel_x11_fence_data));
}
-XGL_RESULT intel_wsi_fence_wait(struct intel_fence *fence,
+VK_RESULT intel_wsi_fence_wait(struct intel_fence *fence,
int64_t timeout_ns)
{
struct intel_x11_fence_data *data =
(struct intel_x11_fence_data *) fence->wsi_data;
if (!data->swap_chain)
- return XGL_SUCCESS;
+ return VK_SUCCESS;
return x11_swap_chain_wait(data->swap_chain, data->serial, timeout_ns);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglWsiX11AssociateConnection(
- XGL_PHYSICAL_GPU gpu_,
- const XGL_WSI_X11_CONNECTION_INFO* pConnectionInfo)
+ICD_EXPORT VK_RESULT VKAPI vkWsiX11AssociateConnection(
+ VK_PHYSICAL_GPU gpu_,
+ const VK_WSI_X11_CONNECTION_INFO* pConnectionInfo)
{
struct intel_gpu *gpu = intel_gpu(gpu_);
return intel_wsi_gpu_init(gpu, pConnectionInfo);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglWsiX11GetMSC(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkWsiX11GetMSC(
+ VK_DEVICE device,
xcb_window_t window,
xcb_randr_crtc_t crtc,
uint64_t * pMsc)
{
struct intel_dev *dev = intel_dev(device);
struct intel_x11_swap_chain *sc;
- XGL_RESULT ret;
+ VK_RESULT ret;
sc = x11_swap_chain_lookup(dev, window);
if (!sc)
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
x11_swap_chain_present_notify_msc(sc);
/* wait for the event */
ret = x11_swap_chain_wait(sc, sc->local.serial, -1);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
return ret;
*pMsc = sc->remote.msc;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglWsiX11CreatePresentableImage(
- XGL_DEVICE device,
- const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo,
- XGL_IMAGE* pImage,
- XGL_GPU_MEMORY* pMem)
+ICD_EXPORT VK_RESULT VKAPI vkWsiX11CreatePresentableImage(
+ VK_DEVICE device,
+ const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo,
+ VK_IMAGE* pImage,
+ VK_GPU_MEMORY* pMem)
{
struct intel_dev *dev = intel_dev(device);
struct intel_wsi_x11 *x11 = (struct intel_wsi_x11 *) dev->gpu->wsi_data;
struct intel_img *img;
- XGL_RESULT ret;
+ VK_RESULT ret;
ret = wsi_x11_img_create(x11, dev, pCreateInfo, &img);
- if (ret == XGL_SUCCESS) {
- *pImage = (XGL_IMAGE) img;
- *pMem = (XGL_GPU_MEMORY) img->obj.mem;
+ if (ret == VK_SUCCESS) {
+ *pImage = (VK_IMAGE) img;
+ *pMem = (VK_GPU_MEMORY) img->obj.mem;
}
return ret;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglWsiX11QueuePresent(
- XGL_QUEUE queue_,
- const XGL_WSI_X11_PRESENT_INFO* pPresentInfo,
- XGL_FENCE fence_)
+ICD_EXPORT VK_RESULT VKAPI vkWsiX11QueuePresent(
+ VK_QUEUE queue_,
+ const VK_WSI_X11_PRESENT_INFO* pPresentInfo,
+ VK_FENCE fence_)
{
struct intel_queue *queue = intel_queue(queue_);
struct intel_dev *dev = queue->dev;
(struct intel_x11_fence_data *) queue->fence->wsi_data;
struct intel_img *img = intel_img(pPresentInfo->srcImage);
struct intel_x11_swap_chain *sc;
- XGL_RESULT ret;
+ VK_RESULT ret;
sc = x11_swap_chain_lookup(dev, pPresentInfo->destWindow);
if (!sc)
- return XGL_ERROR_UNKNOWN;
+ return VK_ERROR_UNKNOWN;
ret = x11_swap_chain_present_pixmap(sc, pPresentInfo);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
return ret;
data->swap_chain = sc;
data->serial = sc->local.serial;
intel_fence_set_seqno(queue->fence, img->obj.mem->bo);
- if (fence_ != XGL_NULL_HANDLE) {
+ if (fence_ != VK_NULL_HANDLE) {
struct intel_fence *fence = intel_fence(fence_);
intel_fence_copy(fence, queue->fence);
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-# Create the nulldrv XGL DRI library
+# Create the nulldrv VK DRI library
-set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES")
+set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES")
add_custom_command(OUTPUT nulldrv_gpa.c
COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-generate.py icd-get-proc-addr > nulldrv_gpa.c
-# Null XGL Driver
+# Null VK Driver
-This directory provides a null XGL driver
+This directory provides a null VK driver
;;;; Begin Copyright Notice ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
-; XGL
+; VK
;
; Copyright (C) 2015 LunarG, Inc.
;
; The following is required on Windows, for exporting symbols from the DLL
-LIBRARY XGL_nulldrv
+LIBRARY VK_nulldrv
EXPORTS
- xglGetProcAddr
- xglCreateInstance
- xglEnumerateGpus
- xglDestroyInstance
+ vkGetProcAddr
+ vkCreateInstance
+ vkEnumerateGpus
+ vkDestroyInstance
xcbCreateWindow
xcbDestroyWindow
xcbGetMessage
/*
- * XGL null driver
+ * Vulkan null driver
*
* Copyright (C) 2015 LunarG, Inc.
*
// The null driver supports all WSI extensions ... for now ...
static const char * const nulldrv_gpu_exts[NULLDRV_EXT_COUNT] = {
- [NULLDRV_EXT_WSI_X11] = "XGL_WSI_X11",
- [NULLDRV_EXT_WSI_WINDOWS] = "XGL_WSI_WINDOWS"
+ [NULLDRV_EXT_WSI_X11] = "VK_WSI_X11",
+ [NULLDRV_EXT_WSI_WINDOWS] = "VK_WSI_WINDOWS"
};
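The designated-initializer table above maps extension enum values to the name strings the driver advertises. The matching lookup routine (`nulldrv_gpu_lookup_extension`, whose body is elided here) can be sketched roughly as follows; all `sample_*` names are hypothetical stand-ins rather than the driver's real identifiers:

```c
#include <string.h>

/* Hypothetical stand-ins for the driver's extension enum and table. */
enum sample_ext {
    SAMPLE_EXT_WSI_X11,
    SAMPLE_EXT_WSI_WINDOWS,
    SAMPLE_EXT_COUNT,
    SAMPLE_EXT_INVALID = SAMPLE_EXT_COUNT,
};

static const char * const sample_exts[SAMPLE_EXT_COUNT] = {
    [SAMPLE_EXT_WSI_X11]     = "VK_WSI_X11",
    [SAMPLE_EXT_WSI_WINDOWS] = "VK_WSI_WINDOWS",
};

/* Scan the name table; return the matching enum index, or
 * SAMPLE_EXT_INVALID for an unrecognized extension name. */
static enum sample_ext sample_lookup_extension(const char *name)
{
    int i;
    for (i = 0; i < SAMPLE_EXT_COUNT; i++) {
        if (sample_exts[i] && !strcmp(sample_exts[i], name))
            return (enum sample_ext) i;
    }
    return SAMPLE_EXT_INVALID;
}
```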
-static struct nulldrv_base *nulldrv_base(XGL_BASE_OBJECT base)
+static struct nulldrv_base *nulldrv_base(VK_BASE_OBJECT base)
{
return (struct nulldrv_base *) base;
}
-static XGL_RESULT nulldrv_base_get_info(struct nulldrv_base *base, int type,
+static VK_RESULT nulldrv_base_get_info(struct nulldrv_base *base, int type,
size_t *size, void *data)
{
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
size_t s;
uint32_t *count;
switch (type) {
- case XGL_INFO_TYPE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_MEMORY_REQUIREMENTS:
{
- XGL_MEMORY_REQUIREMENTS *mem_req = data;
- s = sizeof(XGL_MEMORY_REQUIREMENTS);
+ VK_MEMORY_REQUIREMENTS *mem_req = data;
+ s = sizeof(VK_MEMORY_REQUIREMENTS);
*size = s;
if (data == NULL)
return ret;
memset(data, 0, s);
- mem_req->memType = XGL_MEMORY_TYPE_OTHER;
+ mem_req->memType = VK_MEMORY_TYPE_OTHER;
break;
}
- case XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT:
+ case VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT:
*size = sizeof(uint32_t);
if (data == NULL)
return ret;
count = (uint32_t *) data;
*count = 1;
break;
- case XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS:
- s = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
+ case VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS:
+ s = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
*size = s;
if (data == NULL)
return ret;
memset(data, 0, s);
break;
- case XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
- s = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
+ case VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
+ s = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
*size = s;
if (data == NULL)
return ret;
memset(data, 0, s);
break;
default:
- ret = XGL_ERROR_INVALID_VALUE;
+ ret = VK_ERROR_INVALID_VALUE;
break;
}
return ret;
}
static struct nulldrv_base *nulldrv_base_create(struct nulldrv_dev *dev,
size_t obj_size,
- XGL_DBG_OBJECT_TYPE type)
+ VK_DBG_OBJECT_TYPE type)
{
struct nulldrv_base *base;
return base;
}
-static XGL_RESULT nulldrv_gpu_add(int devid, const char *primary_node,
+static VK_RESULT nulldrv_gpu_add(int devid, const char *primary_node,
const char *render_node, struct nulldrv_gpu **gpu_ret)
{
struct nulldrv_gpu *gpu;
gpu = malloc(sizeof(*gpu));
if (!gpu)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
memset(gpu, 0, sizeof(*gpu));
// Initialize pointer to loader's dispatch table with ICD_LOADER_MAGIC
*gpu_ret = gpu;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
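The comment above about ICD_LOADER_MAGIC refers to the loader/ICD handshake: every dispatchable object the driver hands back must begin with a pointer-sized slot that the ICD stamps with the magic value, which the loader then verifies and overwrites with its own dispatch-table pointer. A minimal sketch of that stamping, using hypothetical `sample_*` names (the value 0x01CDC0DE matches the loader interface headers, but the real structures live in the ICD headers, not here):

```c
#include <stdint.h>

#define SAMPLE_ICD_LOADER_MAGIC 0x01CDC0DE

/* The loader requires this union to be the FIRST field of every
 * dispatchable object (device, queue, command buffer, ...). */
typedef union sample_loader_data {
    uintptr_t loaderMagic;  /* stamped by the ICD, checked by the loader */
    void *loaderData;       /* later replaced by the loader's dispatch table */
} sample_loader_data;

struct sample_gpu {
    sample_loader_data loader_data;  /* must come first */
    int devid;
};

/* Stamp a freshly created dispatchable object so the loader can
 * verify it came from a conforming ICD. */
static void sample_set_loader_magic(void *obj)
{
    ((sample_loader_data *) obj)->loaderMagic = SAMPLE_ICD_LOADER_MAGIC;
}
```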
-static XGL_RESULT nulldrv_queue_create(struct nulldrv_dev *dev,
+static VK_RESULT nulldrv_queue_create(struct nulldrv_dev *dev,
uint32_t node_index,
struct nulldrv_queue **queue_ret)
{
struct nulldrv_queue *queue;
queue = (struct nulldrv_queue *) nulldrv_base_create(dev, sizeof(*queue),
- XGL_DBG_OBJECT_QUEUE);
+ VK_DBG_OBJECT_QUEUE);
if (!queue)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
queue->dev = dev;
*queue_ret = queue;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT dev_create_queues(struct nulldrv_dev *dev,
- const XGL_DEVICE_QUEUE_CREATE_INFO *queues,
+static VK_RESULT dev_create_queues(struct nulldrv_dev *dev,
+ const VK_DEVICE_QUEUE_CREATE_INFO *queues,
uint32_t count)
{
uint32_t i;
if (!count)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
for (i = 0; i < count; i++) {
- const XGL_DEVICE_QUEUE_CREATE_INFO *q = &queues[i];
- XGL_RESULT ret = XGL_SUCCESS;
+ const VK_DEVICE_QUEUE_CREATE_INFO *q = &queues[i];
+ VK_RESULT ret = VK_SUCCESS;
if (q->queueCount == 1 && !dev->queues[q->queueNodeIndex]) {
ret = nulldrv_queue_create(dev, q->queueNodeIndex,
&dev->queues[q->queueNodeIndex]);
}
else {
- ret = XGL_ERROR_INVALID_POINTER;
+ ret = VK_ERROR_INVALID_POINTER;
}
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
return ret;
}
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
static enum nulldrv_ext_type nulldrv_gpu_lookup_extension(const struct nulldrv_gpu *gpu,
return type;
}
-static XGL_RESULT nulldrv_desc_ooxx_create(struct nulldrv_dev *dev,
+static VK_RESULT nulldrv_desc_ooxx_create(struct nulldrv_dev *dev,
struct nulldrv_desc_ooxx **ooxx_ret)
{
struct nulldrv_desc_ooxx *ooxx;
ooxx = malloc(sizeof(*ooxx));
if (!ooxx)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
memset(ooxx, 0, sizeof(*ooxx));
*ooxx_ret = ooxx;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_dev_create(struct nulldrv_gpu *gpu,
- const XGL_DEVICE_CREATE_INFO *info,
+static VK_RESULT nulldrv_dev_create(struct nulldrv_gpu *gpu,
+ const VK_DEVICE_CREATE_INFO *info,
struct nulldrv_dev **dev_ret)
{
struct nulldrv_dev *dev;
uint32_t i;
- XGL_RESULT ret;
+ VK_RESULT ret;
dev = (struct nulldrv_dev *) nulldrv_base_create(NULL, sizeof(*dev),
- XGL_DBG_OBJECT_DEVICE);
+ VK_DBG_OBJECT_DEVICE);
if (!dev)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
for (i = 0; i < info->extensionCount; i++) {
const enum nulldrv_ext_type ext = nulldrv_gpu_lookup_extension(gpu,
info->ppEnabledExtensionNames[i]);
if (ext == NULLDRV_EXT_INVALID)
- return XGL_ERROR_INVALID_EXTENSION;
+ return VK_ERROR_INVALID_EXTENSION;
dev->exts[ext] = true;
}
ret = nulldrv_desc_ooxx_create(dev, &dev->desc_ooxx);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
return ret;
}
ret = dev_create_queues(dev, info->pRequestedQueues,
info->queueRecordCount);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
return ret;
}
*dev_ret = dev;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static struct nulldrv_gpu *nulldrv_gpu(XGL_PHYSICAL_GPU gpu)
+static struct nulldrv_gpu *nulldrv_gpu(VK_PHYSICAL_GPU gpu)
{
return (struct nulldrv_gpu *) gpu;
}
-static XGL_RESULT nulldrv_rt_view_create(struct nulldrv_dev *dev,
- const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO *info,
+static VK_RESULT nulldrv_rt_view_create(struct nulldrv_dev *dev,
+ const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO *info,
struct nulldrv_rt_view **view_ret)
{
struct nulldrv_rt_view *view;
view = (struct nulldrv_rt_view *) nulldrv_base_create(dev, sizeof(*view),
- XGL_DBG_OBJECT_COLOR_TARGET_VIEW);
+ VK_DBG_OBJECT_COLOR_TARGET_VIEW);
if (!view)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*view_ret = view;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_fence_create(struct nulldrv_dev *dev,
- const XGL_FENCE_CREATE_INFO *info,
+static VK_RESULT nulldrv_fence_create(struct nulldrv_dev *dev,
+ const VK_FENCE_CREATE_INFO *info,
struct nulldrv_fence **fence_ret)
{
struct nulldrv_fence *fence;
fence = (struct nulldrv_fence *) nulldrv_base_create(dev, sizeof(*fence),
- XGL_DBG_OBJECT_FENCE);
+ VK_DBG_OBJECT_FENCE);
if (!fence)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*fence_ret = fence;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static struct nulldrv_dev *nulldrv_dev(XGL_DEVICE dev)
+static struct nulldrv_dev *nulldrv_dev(VK_DEVICE dev)
{
return (struct nulldrv_dev *) dev;
}
}
-static XGL_RESULT img_get_info(struct nulldrv_base *base, int type,
+static VK_RESULT img_get_info(struct nulldrv_base *base, int type,
size_t *size, void *data)
{
struct nulldrv_img *img = nulldrv_img_from_base(base);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
switch (type) {
- case XGL_INFO_TYPE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_MEMORY_REQUIREMENTS:
{
- XGL_MEMORY_REQUIREMENTS *mem_req = data;
+ VK_MEMORY_REQUIREMENTS *mem_req = data;
- *size = sizeof(XGL_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
mem_req->size = img->total_size;
mem_req->alignment = 4096;
- mem_req->memType = XGL_MEMORY_TYPE_IMAGE;
+ mem_req->memType = VK_MEMORY_TYPE_IMAGE;
}
break;
- case XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS:
{
- XGL_IMAGE_MEMORY_REQUIREMENTS *img_req = data;
+ VK_IMAGE_MEMORY_REQUIREMENTS *img_req = data;
- *size = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
img_req->usage = img->usage;
img_req->samples = img->samples;
}
break;
- case XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
{
- XGL_BUFFER_MEMORY_REQUIREMENTS *buf_req = data;
+ VK_BUFFER_MEMORY_REQUIREMENTS *buf_req = data;
- *size = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
buf_req->usage = img->usage;
return ret;
}
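Like nulldrv_base_get_info above, these get-info handlers follow the two-call size-query idiom: the caller first passes data == NULL to learn the required size, then calls again with a buffer of that size. A sketch of the idiom with hypothetical `sample_*` names:

```c
#include <stddef.h>

struct sample_mem_reqs {
    size_t size;
    size_t alignment;
};

/* Two-call idiom: always report the required size; fill the output
 * only when the caller actually supplied a buffer. */
static int sample_get_mem_reqs(size_t *size, void *data)
{
    *size = sizeof(struct sample_mem_reqs);
    if (data == NULL)
        return 0;  /* first call: size query only */
    struct sample_mem_reqs *reqs = data;
    reqs->size = 4096;
    reqs->alignment = 4096;
    return 0;
}
```

A caller typically makes the NULL call first, allocates `*size` bytes, then repeats the call with the buffer.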
-static XGL_RESULT nulldrv_img_create(struct nulldrv_dev *dev,
- const XGL_IMAGE_CREATE_INFO *info,
+static VK_RESULT nulldrv_img_create(struct nulldrv_dev *dev,
+ const VK_IMAGE_CREATE_INFO *info,
bool scanout,
struct nulldrv_img **img_ret)
{
struct nulldrv_img *img;
img = (struct nulldrv_img *) nulldrv_base_create(dev, sizeof(*img),
- XGL_DBG_OBJECT_IMAGE);
+ VK_DBG_OBJECT_IMAGE);
if (!img)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
img->type = info->imageType;
img->depth = info->extent.depth;
img->mip_levels = info->mipLevels;
img->array_size = info->arraySize;
img->usage = info->usage;
- if (info->tiling == XGL_LINEAR_TILING)
- img->format_class = XGL_IMAGE_FORMAT_CLASS_LINEAR;
+ if (info->tiling == VK_LINEAR_TILING)
+ img->format_class = VK_IMAGE_FORMAT_CLASS_LINEAR;
else
img->format_class = icd_format_get_class(info->format);
img->samples = info->samples;
*img_ret = img;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static struct nulldrv_img *nulldrv_img(XGL_IMAGE image)
+static struct nulldrv_img *nulldrv_img(VK_IMAGE image)
{
return (struct nulldrv_img *) image;
}
-static XGL_RESULT nulldrv_mem_alloc(struct nulldrv_dev *dev,
- const XGL_MEMORY_ALLOC_INFO *info,
+static VK_RESULT nulldrv_mem_alloc(struct nulldrv_dev *dev,
+ const VK_MEMORY_ALLOC_INFO *info,
struct nulldrv_mem **mem_ret)
{
struct nulldrv_mem *mem;
mem = (struct nulldrv_mem *) nulldrv_base_create(dev, sizeof(*mem),
- XGL_DBG_OBJECT_GPU_MEMORY);
+ VK_DBG_OBJECT_GPU_MEMORY);
if (!mem)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
mem->bo = malloc(info->allocationSize);
if (!mem->bo) {
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
}
mem->size = info->allocationSize;
*mem_ret = mem;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_ds_view_create(struct nulldrv_dev *dev,
- const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO *info,
+static VK_RESULT nulldrv_ds_view_create(struct nulldrv_dev *dev,
+ const VK_DEPTH_STENCIL_VIEW_CREATE_INFO *info,
struct nulldrv_ds_view **view_ret)
{
struct nulldrv_img *img = nulldrv_img(info->image);
struct nulldrv_ds_view *view;
view = (struct nulldrv_ds_view *) nulldrv_base_create(dev, sizeof(*view),
- XGL_DBG_OBJECT_DEPTH_STENCIL_VIEW);
+ VK_DBG_OBJECT_DEPTH_STENCIL_VIEW);
if (!view)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
view->img = img;
*view_ret = view;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_sampler_create(struct nulldrv_dev *dev,
- const XGL_SAMPLER_CREATE_INFO *info,
+static VK_RESULT nulldrv_sampler_create(struct nulldrv_dev *dev,
+ const VK_SAMPLER_CREATE_INFO *info,
struct nulldrv_sampler **sampler_ret)
{
struct nulldrv_sampler *sampler;
sampler = (struct nulldrv_sampler *) nulldrv_base_create(dev,
- sizeof(*sampler), XGL_DBG_OBJECT_SAMPLER);
+ sizeof(*sampler), VK_DBG_OBJECT_SAMPLER);
if (!sampler)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*sampler_ret = sampler;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_img_view_create(struct nulldrv_dev *dev,
- const XGL_IMAGE_VIEW_CREATE_INFO *info,
+static VK_RESULT nulldrv_img_view_create(struct nulldrv_dev *dev,
+ const VK_IMAGE_VIEW_CREATE_INFO *info,
struct nulldrv_img_view **view_ret)
{
struct nulldrv_img *img = nulldrv_img(info->image);
struct nulldrv_img_view *view;
view = (struct nulldrv_img_view *) nulldrv_base_create(dev, sizeof(*view),
- XGL_DBG_OBJECT_IMAGE_VIEW);
+ VK_DBG_OBJECT_IMAGE_VIEW);
if (!view)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
view->img = img;
view->min_lod = info->minLod;
*view_ret = view;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static void *nulldrv_mem_map(struct nulldrv_mem *mem, XGL_FLAGS flags)
+static void *nulldrv_mem_map(struct nulldrv_mem *mem, VK_FLAGS flags)
{
return mem->bo;
}
-static struct nulldrv_mem *nulldrv_mem(XGL_GPU_MEMORY mem)
+static struct nulldrv_mem *nulldrv_mem(VK_GPU_MEMORY mem)
{
return (struct nulldrv_mem *) mem;
}
return (struct nulldrv_buf *) base;
}
-static XGL_RESULT buf_get_info(struct nulldrv_base *base, int type,
+static VK_RESULT buf_get_info(struct nulldrv_base *base, int type,
size_t *size, void *data)
{
struct nulldrv_buf *buf = nulldrv_buf_from_base(base);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
switch (type) {
- case XGL_INFO_TYPE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_MEMORY_REQUIREMENTS:
{
- XGL_MEMORY_REQUIREMENTS *mem_req = data;
+ VK_MEMORY_REQUIREMENTS *mem_req = data;
- *size = sizeof(XGL_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
mem_req->size = buf->size;
mem_req->alignment = 4096;
- mem_req->memType = XGL_MEMORY_TYPE_BUFFER;
+ mem_req->memType = VK_MEMORY_TYPE_BUFFER;
}
break;
- case XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
{
- XGL_BUFFER_MEMORY_REQUIREMENTS *buf_req = data;
+ VK_BUFFER_MEMORY_REQUIREMENTS *buf_req = data;
- *size = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
buf_req->usage = buf->usage;
return ret;
}
-static XGL_RESULT nulldrv_buf_create(struct nulldrv_dev *dev,
- const XGL_BUFFER_CREATE_INFO *info,
+static VK_RESULT nulldrv_buf_create(struct nulldrv_dev *dev,
+ const VK_BUFFER_CREATE_INFO *info,
struct nulldrv_buf **buf_ret)
{
struct nulldrv_buf *buf;
buf = (struct nulldrv_buf *) nulldrv_base_create(dev, sizeof(*buf),
- XGL_DBG_OBJECT_BUFFER);
+ VK_DBG_OBJECT_BUFFER);
if (!buf)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
buf->size = info->size;
buf->usage = info->usage;
*buf_ret = buf;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_desc_layout_create(struct nulldrv_dev *dev,
- const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO *info,
+static VK_RESULT nulldrv_desc_layout_create(struct nulldrv_dev *dev,
+ const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO *info,
struct nulldrv_desc_layout **layout_ret)
{
struct nulldrv_desc_layout *layout;
layout = (struct nulldrv_desc_layout *)
nulldrv_base_create(dev, sizeof(*layout),
- XGL_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT);
+ VK_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT);
if (!layout)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*layout_ret = layout;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_desc_layout_chain_create(struct nulldrv_dev *dev,
+static VK_RESULT nulldrv_desc_layout_chain_create(struct nulldrv_dev *dev,
uint32_t setLayoutArrayCount,
- const XGL_DESCRIPTOR_SET_LAYOUT *pSetLayoutArray,
+ const VK_DESCRIPTOR_SET_LAYOUT *pSetLayoutArray,
struct nulldrv_desc_layout_chain **chain_ret)
{
struct nulldrv_desc_layout_chain *chain;
chain = (struct nulldrv_desc_layout_chain *)
nulldrv_base_create(dev, sizeof(*chain),
- XGL_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT_CHAIN);
+ VK_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT_CHAIN);
if (!chain)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*chain_ret = chain;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static struct nulldrv_desc_layout *nulldrv_desc_layout(XGL_DESCRIPTOR_SET_LAYOUT layout)
+static struct nulldrv_desc_layout *nulldrv_desc_layout(VK_DESCRIPTOR_SET_LAYOUT layout)
{
return (struct nulldrv_desc_layout *) layout;
}
-static XGL_RESULT shader_create(struct nulldrv_dev *dev,
- const XGL_SHADER_CREATE_INFO *info,
+static VK_RESULT shader_create(struct nulldrv_dev *dev,
+ const VK_SHADER_CREATE_INFO *info,
struct nulldrv_shader **sh_ret)
{
struct nulldrv_shader *sh;
sh = (struct nulldrv_shader *) nulldrv_base_create(dev, sizeof(*sh),
- XGL_DBG_OBJECT_SHADER);
+ VK_DBG_OBJECT_SHADER);
if (!sh)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*sh_ret = sh;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT graphics_pipeline_create(struct nulldrv_dev *dev,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO *info_,
+static VK_RESULT graphics_pipeline_create(struct nulldrv_dev *dev,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO *info_,
struct nulldrv_pipeline **pipeline_ret)
{
struct nulldrv_pipeline *pipeline;
pipeline = (struct nulldrv_pipeline *)
nulldrv_base_create(dev, sizeof(*pipeline),
- XGL_DBG_OBJECT_GRAPHICS_PIPELINE);
+ VK_DBG_OBJECT_GRAPHICS_PIPELINE);
if (!pipeline)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*pipeline_ret = pipeline;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_viewport_state_create(struct nulldrv_dev *dev,
- const XGL_DYNAMIC_VP_STATE_CREATE_INFO *info,
+static VK_RESULT nulldrv_viewport_state_create(struct nulldrv_dev *dev,
+ const VK_DYNAMIC_VP_STATE_CREATE_INFO *info,
struct nulldrv_dynamic_vp **state_ret)
{
struct nulldrv_dynamic_vp *state;
state = (struct nulldrv_dynamic_vp *) nulldrv_base_create(dev,
- sizeof(*state), XGL_DBG_OBJECT_VIEWPORT_STATE);
+ sizeof(*state), VK_DBG_OBJECT_VIEWPORT_STATE);
if (!state)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*state_ret = state;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_raster_state_create(struct nulldrv_dev *dev,
- const XGL_DYNAMIC_RS_STATE_CREATE_INFO *info,
+static VK_RESULT nulldrv_raster_state_create(struct nulldrv_dev *dev,
+ const VK_DYNAMIC_RS_STATE_CREATE_INFO *info,
struct nulldrv_dynamic_rs **state_ret)
{
struct nulldrv_dynamic_rs *state;
state = (struct nulldrv_dynamic_rs *) nulldrv_base_create(dev,
- sizeof(*state), XGL_DBG_OBJECT_RASTER_STATE);
+ sizeof(*state), VK_DBG_OBJECT_RASTER_STATE);
if (!state)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*state_ret = state;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_blend_state_create(struct nulldrv_dev *dev,
- const XGL_DYNAMIC_CB_STATE_CREATE_INFO *info,
+static VK_RESULT nulldrv_blend_state_create(struct nulldrv_dev *dev,
+ const VK_DYNAMIC_CB_STATE_CREATE_INFO *info,
struct nulldrv_dynamic_cb **state_ret)
{
struct nulldrv_dynamic_cb *state;
state = (struct nulldrv_dynamic_cb *) nulldrv_base_create(dev,
- sizeof(*state), XGL_DBG_OBJECT_COLOR_BLEND_STATE);
+ sizeof(*state), VK_DBG_OBJECT_COLOR_BLEND_STATE);
if (!state)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*state_ret = state;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_ds_state_create(struct nulldrv_dev *dev,
- const XGL_DYNAMIC_DS_STATE_CREATE_INFO *info,
+static VK_RESULT nulldrv_ds_state_create(struct nulldrv_dev *dev,
+ const VK_DYNAMIC_DS_STATE_CREATE_INFO *info,
struct nulldrv_dynamic_ds **state_ret)
{
struct nulldrv_dynamic_ds *state;
state = (struct nulldrv_dynamic_ds *) nulldrv_base_create(dev,
- sizeof(*state), XGL_DBG_OBJECT_DEPTH_STENCIL_STATE);
+ sizeof(*state), VK_DBG_OBJECT_DEPTH_STENCIL_STATE);
if (!state)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*state_ret = state;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_cmd_create(struct nulldrv_dev *dev,
- const XGL_CMD_BUFFER_CREATE_INFO *info,
+static VK_RESULT nulldrv_cmd_create(struct nulldrv_dev *dev,
+ const VK_CMD_BUFFER_CREATE_INFO *info,
struct nulldrv_cmd **cmd_ret)
{
struct nulldrv_cmd *cmd;
cmd = (struct nulldrv_cmd *) nulldrv_base_create(dev, sizeof(*cmd),
- XGL_DBG_OBJECT_CMD_BUFFER);
+ VK_DBG_OBJECT_CMD_BUFFER);
if (!cmd)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*cmd_ret = cmd;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_desc_pool_create(struct nulldrv_dev *dev,
- XGL_DESCRIPTOR_POOL_USAGE usage,
+static VK_RESULT nulldrv_desc_pool_create(struct nulldrv_dev *dev,
+ VK_DESCRIPTOR_POOL_USAGE usage,
uint32_t max_sets,
- const XGL_DESCRIPTOR_POOL_CREATE_INFO *info,
+ const VK_DESCRIPTOR_POOL_CREATE_INFO *info,
struct nulldrv_desc_pool **pool_ret)
{
struct nulldrv_desc_pool *pool;
pool = (struct nulldrv_desc_pool *)
nulldrv_base_create(dev, sizeof(*pool),
- XGL_DBG_OBJECT_DESCRIPTOR_POOL);
+ VK_DBG_OBJECT_DESCRIPTOR_POOL);
if (!pool)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
pool->dev = dev;
*pool_ret = pool;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_desc_set_create(struct nulldrv_dev *dev,
+static VK_RESULT nulldrv_desc_set_create(struct nulldrv_dev *dev,
struct nulldrv_desc_pool *pool,
- XGL_DESCRIPTOR_SET_USAGE usage,
+ VK_DESCRIPTOR_SET_USAGE usage,
const struct nulldrv_desc_layout *layout,
struct nulldrv_desc_set **set_ret)
{
struct nulldrv_desc_set *set;
set = (struct nulldrv_desc_set *)
nulldrv_base_create(dev, sizeof(*set),
- XGL_DBG_OBJECT_DESCRIPTOR_SET);
+ VK_DBG_OBJECT_DESCRIPTOR_SET);
if (!set)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
set->ooxx = dev->desc_ooxx;
set->layout = layout;
*set_ret = set;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static struct nulldrv_desc_pool *nulldrv_desc_pool(XGL_DESCRIPTOR_POOL pool)
+static struct nulldrv_desc_pool *nulldrv_desc_pool(VK_DESCRIPTOR_POOL pool)
{
return (struct nulldrv_desc_pool *) pool;
}
-static XGL_RESULT nulldrv_fb_create(struct nulldrv_dev *dev,
- const XGL_FRAMEBUFFER_CREATE_INFO* info,
+static VK_RESULT nulldrv_fb_create(struct nulldrv_dev *dev,
+ const VK_FRAMEBUFFER_CREATE_INFO* info,
struct nulldrv_framebuffer ** fb_ret)
{
struct nulldrv_framebuffer *fb;
fb = (struct nulldrv_framebuffer *) nulldrv_base_create(dev, sizeof(*fb),
- XGL_DBG_OBJECT_FRAMEBUFFER);
+ VK_DBG_OBJECT_FRAMEBUFFER);
if (!fb)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*fb_ret = fb;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_render_pass_create(struct nulldrv_dev *dev,
- const XGL_RENDER_PASS_CREATE_INFO* info,
+static VK_RESULT nulldrv_render_pass_create(struct nulldrv_dev *dev,
+ const VK_RENDER_PASS_CREATE_INFO* info,
struct nulldrv_render_pass** rp_ret)
{
struct nulldrv_render_pass *rp;
rp = (struct nulldrv_render_pass *) nulldrv_base_create(dev, sizeof(*rp),
- XGL_DBG_OBJECT_RENDER_PASS);
+ VK_DBG_OBJECT_RENDER_PASS);
if (!rp)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*rp_ret = rp;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static struct nulldrv_buf *nulldrv_buf(XGL_BUFFER buf)
+static struct nulldrv_buf *nulldrv_buf(VK_BUFFER buf)
{
return (struct nulldrv_buf *) buf;
}
-static XGL_RESULT nulldrv_buf_view_create(struct nulldrv_dev *dev,
- const XGL_BUFFER_VIEW_CREATE_INFO *info,
+static VK_RESULT nulldrv_buf_view_create(struct nulldrv_dev *dev,
+ const VK_BUFFER_VIEW_CREATE_INFO *info,
struct nulldrv_buf_view **view_ret)
{
struct nulldrv_buf *buf = nulldrv_buf(info->buffer);
struct nulldrv_buf_view *view;
view = (struct nulldrv_buf_view *) nulldrv_base_create(dev, sizeof(*view),
- XGL_DBG_OBJECT_BUFFER_VIEW);
+ VK_DBG_OBJECT_BUFFER_VIEW);
if (!view)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
view->buf = buf;
*view_ret = view;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
// Driver entry points
//*********************************************
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateBuffer(
- XGL_DEVICE device,
- const XGL_BUFFER_CREATE_INFO* pCreateInfo,
- XGL_BUFFER* pBuffer)
+ICD_EXPORT VK_RESULT VKAPI vkCreateBuffer(
+ VK_DEVICE device,
+ const VK_BUFFER_CREATE_INFO* pCreateInfo,
+ VK_BUFFER* pBuffer)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
return nulldrv_buf_create(dev, pCreateInfo, (struct nulldrv_buf **) pBuffer);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateCommandBuffer(
- XGL_DEVICE device,
- const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo,
- XGL_CMD_BUFFER* pCmdBuffer)
+ICD_EXPORT VK_RESULT VKAPI vkCreateCommandBuffer(
+ VK_DEVICE device,
+ const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo,
+ VK_CMD_BUFFER* pCmdBuffer)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
return nulldrv_cmd_create(dev, pCreateInfo,
(struct nulldrv_cmd **) pCmdBuffer);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBeginCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_CMD_BUFFER_BEGIN_INFO *info)
+ICD_EXPORT VK_RESULT VKAPI vkBeginCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_CMD_BUFFER_BEGIN_INFO *info)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglEndCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer)
+ICD_EXPORT VK_RESULT VKAPI vkEndCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglResetCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer)
+ICD_EXPORT VK_RESULT VKAPI vkResetCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT void XGLAPI xglCmdInitAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+ICD_EXPORT void VKAPI vkCmdInitAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
const uint32_t* pData)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdLoadAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+ICD_EXPORT void VKAPI vkCmdLoadAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
- XGL_BUFFER srcBuffer,
- XGL_GPU_SIZE srcOffset)
+ VK_BUFFER srcBuffer,
+ VK_GPU_SIZE srcOffset)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdSaveAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+ICD_EXPORT void VKAPI vkCmdSaveAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset)
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDbgMarkerBegin(
- XGL_CMD_BUFFER cmdBuffer,
+ICD_EXPORT void VKAPI vkCmdDbgMarkerBegin(
+ VK_CMD_BUFFER cmdBuffer,
const char* pMarker)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDbgMarkerEnd(
- XGL_CMD_BUFFER cmdBuffer)
+ICD_EXPORT void VKAPI vkCmdDbgMarkerEnd(
+ VK_CMD_BUFFER cmdBuffer)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdCopyBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER srcBuffer,
- XGL_BUFFER destBuffer,
+ICD_EXPORT void VKAPI vkCmdCopyBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER srcBuffer,
+ VK_BUFFER destBuffer,
uint32_t regionCount,
- const XGL_BUFFER_COPY* pRegions)
+ const VK_BUFFER_COPY* pRegions)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdCopyImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ICD_EXPORT void VKAPI vkCmdCopyImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_IMAGE_COPY* pRegions)
+ const VK_IMAGE_COPY* pRegions)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdBlitImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ICD_EXPORT void VKAPI vkCmdBlitImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_IMAGE_BLIT* pRegions)
+ const VK_IMAGE_BLIT* pRegions)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdCopyBufferToImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER srcBuffer,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ICD_EXPORT void VKAPI vkCmdCopyBufferToImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER srcBuffer,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_BUFFER_IMAGE_COPY* pRegions)
+ const VK_BUFFER_IMAGE_COPY* pRegions)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdCopyImageToBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_BUFFER destBuffer,
+ICD_EXPORT void VKAPI vkCmdCopyImageToBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_BUFFER destBuffer,
uint32_t regionCount,
- const XGL_BUFFER_IMAGE_COPY* pRegions)
+ const VK_BUFFER_IMAGE_COPY* pRegions)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdCloneImageData(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout)
+ICD_EXPORT void VKAPI vkCmdCloneImageData(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdUpdateBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset,
- XGL_GPU_SIZE dataSize,
+ICD_EXPORT void VKAPI vkCmdUpdateBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset,
+ VK_GPU_SIZE dataSize,
const uint32_t* pData)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdFillBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset,
- XGL_GPU_SIZE fillSize,
+ICD_EXPORT void VKAPI vkCmdFillBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset,
+ VK_GPU_SIZE fillSize,
uint32_t data)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdClearColorImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image,
- XGL_IMAGE_LAYOUT imageLayout,
- XGL_CLEAR_COLOR color,
+ICD_EXPORT void VKAPI vkCmdClearColorImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image,
+ VK_IMAGE_LAYOUT imageLayout,
+ VK_CLEAR_COLOR color,
uint32_t rangeCount,
- const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+ const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdClearDepthStencil(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image,
- XGL_IMAGE_LAYOUT imageLayout,
+ICD_EXPORT void VKAPI vkCmdClearDepthStencil(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image,
+ VK_IMAGE_LAYOUT imageLayout,
float depth,
uint32_t stencil,
uint32_t rangeCount,
- const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+ const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdResolveImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ICD_EXPORT void VKAPI vkCmdResolveImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t rectCount,
- const XGL_IMAGE_RESOLVE* pRects)
+ const VK_IMAGE_RESOLVE* pRects)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdBeginQuery(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+ICD_EXPORT void VKAPI vkCmdBeginQuery(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t slot,
- XGL_FLAGS flags)
+ VK_FLAGS flags)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdEndQuery(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+ICD_EXPORT void VKAPI vkCmdEndQuery(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t slot)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdResetQueryPool(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+ICD_EXPORT void VKAPI vkCmdResetQueryPool(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t startQuery,
uint32_t queryCount)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdSetEvent(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_EVENT event_,
- XGL_PIPE_EVENT pipeEvent)
+ICD_EXPORT void VKAPI vkCmdSetEvent(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_EVENT event_,
+ VK_PIPE_EVENT pipeEvent)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdResetEvent(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_EVENT event_,
- XGL_PIPE_EVENT pipeEvent)
+ICD_EXPORT void VKAPI vkCmdResetEvent(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_EVENT event_,
+ VK_PIPE_EVENT pipeEvent)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdWriteTimestamp(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_TIMESTAMP_TYPE timestampType,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset)
+ICD_EXPORT void VKAPI vkCmdWriteTimestamp(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_TIMESTAMP_TYPE timestampType,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdBindPipeline(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
- XGL_PIPELINE pipeline)
+ICD_EXPORT void VKAPI vkCmdBindPipeline(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
+ VK_PIPELINE pipeline)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdBindDynamicStateObject(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_STATE_BIND_POINT stateBindPoint,
- XGL_DYNAMIC_STATE_OBJECT state)
+ICD_EXPORT void VKAPI vkCmdBindDynamicStateObject(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_STATE_BIND_POINT stateBindPoint,
+ VK_DYNAMIC_STATE_OBJECT state)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdBindDescriptorSets(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain,
+ICD_EXPORT void VKAPI vkCmdBindDescriptorSets(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain,
uint32_t layoutChainSlot,
uint32_t count,
- const XGL_DESCRIPTOR_SET* pDescriptorSets,
+ const VK_DESCRIPTOR_SET* pDescriptorSets,
const uint32_t* pUserData)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdBindVertexBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+ICD_EXPORT void VKAPI vkCmdBindVertexBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t binding)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdBindIndexBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
- XGL_INDEX_TYPE indexType)
+ICD_EXPORT void VKAPI vkCmdBindIndexBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
+ VK_INDEX_TYPE indexType)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDraw(
- XGL_CMD_BUFFER cmdBuffer,
+ICD_EXPORT void VKAPI vkCmdDraw(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t firstVertex,
uint32_t vertexCount,
uint32_t firstInstance,
    uint32_t instanceCount)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDrawIndexed(
- XGL_CMD_BUFFER cmdBuffer,
+ICD_EXPORT void VKAPI vkCmdDrawIndexed(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t firstIndex,
uint32_t indexCount,
int32_t vertexOffset,
    uint32_t firstInstance,
    uint32_t instanceCount)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDrawIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+ICD_EXPORT void VKAPI vkCmdDrawIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t count,
uint32_t stride)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDrawIndexedIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+ICD_EXPORT void VKAPI vkCmdDrawIndexedIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t count,
uint32_t stride)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDispatch(
- XGL_CMD_BUFFER cmdBuffer,
+ICD_EXPORT void VKAPI vkCmdDispatch(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t x,
uint32_t y,
uint32_t z)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDispatchIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset)
+ICD_EXPORT void VKAPI vkCmdDispatchIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdWaitEvents(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_EVENT_WAIT_INFO* pWaitInfo)
+ICD_EXPORT void VKAPI vkCmdWaitEvents(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_EVENT_WAIT_INFO* pWaitInfo)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdPipelineBarrier(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_PIPELINE_BARRIER* pBarrier)
+ICD_EXPORT void VKAPI vkCmdPipelineBarrier(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_PIPELINE_BARRIER* pBarrier)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDevice(
- XGL_PHYSICAL_GPU gpu_,
- const XGL_DEVICE_CREATE_INFO* pCreateInfo,
- XGL_DEVICE* pDevice)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDevice(
+ VK_PHYSICAL_GPU gpu_,
+ const VK_DEVICE_CREATE_INFO* pCreateInfo,
+ VK_DEVICE* pDevice)
{
NULLDRV_LOG_FUNC;
struct nulldrv_gpu *gpu = nulldrv_gpu(gpu_);
return nulldrv_dev_create(gpu, pCreateInfo, (struct nulldrv_dev**)pDevice);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDestroyDevice(
- XGL_DEVICE device)
+ICD_EXPORT VK_RESULT VKAPI vkDestroyDevice(
+ VK_DEVICE device)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetDeviceQueue(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkGetDeviceQueue(
+ VK_DEVICE device,
uint32_t queueNodeIndex,
uint32_t queueIndex,
- XGL_QUEUE* pQueue)
+ VK_QUEUE* pQueue)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
*pQueue = dev->queues[0];
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDeviceWaitIdle(
- XGL_DEVICE device)
+ICD_EXPORT VK_RESULT VKAPI vkDeviceWaitIdle(
+ VK_DEVICE device)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetValidationLevel(
- XGL_DEVICE device,
- XGL_VALIDATION_LEVEL validationLevel)
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetValidationLevel(
+ VK_DEVICE device,
+ VK_VALIDATION_LEVEL validationLevel)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetMessageFilter(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetMessageFilter(
+ VK_DEVICE device,
int32_t msgCode,
- XGL_DBG_MSG_FILTER filter)
+ VK_DBG_MSG_FILTER filter)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetDeviceOption(
- XGL_DEVICE device,
- XGL_DBG_DEVICE_OPTION dbgOption,
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetDeviceOption(
+ VK_DEVICE device,
+ VK_DBG_DEVICE_OPTION dbgOption,
size_t dataSize,
const void* pData)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateEvent(
- XGL_DEVICE device,
- const XGL_EVENT_CREATE_INFO* pCreateInfo,
- XGL_EVENT* pEvent)
+ICD_EXPORT VK_RESULT VKAPI vkCreateEvent(
+ VK_DEVICE device,
+ const VK_EVENT_CREATE_INFO* pCreateInfo,
+ VK_EVENT* pEvent)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetEventStatus(
- XGL_EVENT event_)
+ICD_EXPORT VK_RESULT VKAPI vkGetEventStatus(
+ VK_EVENT event_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglSetEvent(
- XGL_EVENT event_)
+ICD_EXPORT VK_RESULT VKAPI vkSetEvent(
+ VK_EVENT event_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglResetEvent(
- XGL_EVENT event_)
+ICD_EXPORT VK_RESULT VKAPI vkResetEvent(
+ VK_EVENT event_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateFence(
- XGL_DEVICE device,
- const XGL_FENCE_CREATE_INFO* pCreateInfo,
- XGL_FENCE* pFence)
+ICD_EXPORT VK_RESULT VKAPI vkCreateFence(
+ VK_DEVICE device,
+ const VK_FENCE_CREATE_INFO* pCreateInfo,
+ VK_FENCE* pFence)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_fence **) pFence);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetFenceStatus(
- XGL_FENCE fence_)
+ICD_EXPORT VK_RESULT VKAPI vkGetFenceStatus(
+ VK_FENCE fence_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglWaitForFences(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkWaitForFences(
+ VK_DEVICE device,
uint32_t fenceCount,
- const XGL_FENCE* pFences,
+ const VK_FENCE* pFences,
bool32_t waitAll,
uint64_t timeout)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetFormatInfo(
- XGL_DEVICE device,
- XGL_FORMAT format,
- XGL_FORMAT_INFO_TYPE infoType,
+ICD_EXPORT VK_RESULT VKAPI vkGetFormatInfo(
+ VK_DEVICE device,
+ VK_FORMAT format,
+ VK_FORMAT_INFO_TYPE infoType,
size_t* pDataSize,
void* pData)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetGpuInfo(
- XGL_PHYSICAL_GPU gpu_,
- XGL_PHYSICAL_GPU_INFO_TYPE infoType,
+ICD_EXPORT VK_RESULT VKAPI vkGetGpuInfo(
+ VK_PHYSICAL_GPU gpu_,
+ VK_PHYSICAL_GPU_INFO_TYPE infoType,
size_t* pDataSize,
void* pData)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(
- XGL_PHYSICAL_GPU gpu_,
+ICD_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(
+ VK_PHYSICAL_GPU gpu_,
const char* pExtName)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetMultiGpuCompatibility(
- XGL_PHYSICAL_GPU gpu0_,
- XGL_PHYSICAL_GPU gpu1_,
- XGL_GPU_COMPATIBILITY_INFO* pInfo)
+ICD_EXPORT VK_RESULT VKAPI vkGetMultiGpuCompatibility(
+ VK_PHYSICAL_GPU gpu0_,
+ VK_PHYSICAL_GPU gpu1_,
+ VK_GPU_COMPATIBILITY_INFO* pInfo)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglOpenPeerImage(
- XGL_DEVICE device,
- const XGL_PEER_IMAGE_OPEN_INFO* pOpenInfo,
- XGL_IMAGE* pImage,
- XGL_GPU_MEMORY* pMem)
+ICD_EXPORT VK_RESULT VKAPI vkOpenPeerImage(
+ VK_DEVICE device,
+ const VK_PEER_IMAGE_OPEN_INFO* pOpenInfo,
+ VK_IMAGE* pImage,
+ VK_GPU_MEMORY* pMem)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateImage(
- XGL_DEVICE device,
- const XGL_IMAGE_CREATE_INFO* pCreateInfo,
- XGL_IMAGE* pImage)
+ICD_EXPORT VK_RESULT VKAPI vkCreateImage(
+ VK_DEVICE device,
+ const VK_IMAGE_CREATE_INFO* pCreateInfo,
+ VK_IMAGE* pImage)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_img **) pImage);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetImageSubresourceInfo(
- XGL_IMAGE image,
- const XGL_IMAGE_SUBRESOURCE* pSubresource,
- XGL_SUBRESOURCE_INFO_TYPE infoType,
+ICD_EXPORT VK_RESULT VKAPI vkGetImageSubresourceInfo(
+ VK_IMAGE image,
+ const VK_IMAGE_SUBRESOURCE* pSubresource,
+ VK_SUBRESOURCE_INFO_TYPE infoType,
size_t* pDataSize,
void* pData)
{
NULLDRV_LOG_FUNC;
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
switch (infoType) {
- case XGL_INFO_TYPE_SUBRESOURCE_LAYOUT:
+ case VK_INFO_TYPE_SUBRESOURCE_LAYOUT:
{
- XGL_SUBRESOURCE_LAYOUT *layout = (XGL_SUBRESOURCE_LAYOUT *) pData;
+ VK_SUBRESOURCE_LAYOUT *layout = (VK_SUBRESOURCE_LAYOUT *) pData;
- *pDataSize = sizeof(XGL_SUBRESOURCE_LAYOUT);
+ *pDataSize = sizeof(VK_SUBRESOURCE_LAYOUT);
if (pData == NULL)
return ret;
}
break;
default:
- ret = XGL_ERROR_INVALID_VALUE;
+ ret = VK_ERROR_INVALID_VALUE;
break;
}
return ret;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglAllocMemory(
- XGL_DEVICE device,
- const XGL_MEMORY_ALLOC_INFO* pAllocInfo,
- XGL_GPU_MEMORY* pMem)
+ICD_EXPORT VK_RESULT VKAPI vkAllocMemory(
+ VK_DEVICE device,
+ const VK_MEMORY_ALLOC_INFO* pAllocInfo,
+ VK_GPU_MEMORY* pMem)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
return nulldrv_mem_alloc(dev, pAllocInfo, (struct nulldrv_mem **) pMem);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglFreeMemory(
- XGL_GPU_MEMORY mem_)
+ICD_EXPORT VK_RESULT VKAPI vkFreeMemory(
+ VK_GPU_MEMORY mem_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglSetMemoryPriority(
- XGL_GPU_MEMORY mem_,
- XGL_MEMORY_PRIORITY priority)
+ICD_EXPORT VK_RESULT VKAPI vkSetMemoryPriority(
+ VK_GPU_MEMORY mem_,
+ VK_MEMORY_PRIORITY priority)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglMapMemory(
- XGL_GPU_MEMORY mem_,
- XGL_FLAGS flags,
+ICD_EXPORT VK_RESULT VKAPI vkMapMemory(
+ VK_GPU_MEMORY mem_,
+ VK_FLAGS flags,
void** ppData)
{
NULLDRV_LOG_FUNC;
*ppData = ptr;
- return (ptr) ? XGL_SUCCESS : XGL_ERROR_UNKNOWN;
+ return (ptr) ? VK_SUCCESS : VK_ERROR_UNKNOWN;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglUnmapMemory(
- XGL_GPU_MEMORY mem_)
+ICD_EXPORT VK_RESULT VKAPI vkUnmapMemory(
+ VK_GPU_MEMORY mem_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglPinSystemMemory(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkPinSystemMemory(
+ VK_DEVICE device,
const void* pSysMem,
size_t memSize,
- XGL_GPU_MEMORY* pMem)
+ VK_GPU_MEMORY* pMem)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglOpenSharedMemory(
- XGL_DEVICE device,
- const XGL_MEMORY_OPEN_INFO* pOpenInfo,
- XGL_GPU_MEMORY* pMem)
+ICD_EXPORT VK_RESULT VKAPI vkOpenSharedMemory(
+ VK_DEVICE device,
+ const VK_MEMORY_OPEN_INFO* pOpenInfo,
+ VK_GPU_MEMORY* pMem)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglOpenPeerMemory(
- XGL_DEVICE device,
- const XGL_PEER_MEMORY_OPEN_INFO* pOpenInfo,
- XGL_GPU_MEMORY* pMem)
+ICD_EXPORT VK_RESULT VKAPI vkOpenPeerMemory(
+ VK_DEVICE device,
+ const VK_PEER_MEMORY_OPEN_INFO* pOpenInfo,
+ VK_GPU_MEMORY* pMem)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateInstance(
- const XGL_INSTANCE_CREATE_INFO* pCreateInfo,
- XGL_INSTANCE* pInstance)
+ICD_EXPORT VK_RESULT VKAPI vkCreateInstance(
+ const VK_INSTANCE_CREATE_INFO* pCreateInfo,
+ VK_INSTANCE* pInstance)
{
NULLDRV_LOG_FUNC;
struct nulldrv_instance *inst;
inst = (struct nulldrv_instance *) nulldrv_base_create(NULL, sizeof(*inst),
- XGL_DBG_OBJECT_INSTANCE);
+ VK_DBG_OBJECT_INSTANCE);
if (!inst)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
inst->obj.base.get_info = NULL;
- *pInstance = (XGL_INSTANCE*)inst;
+ *pInstance = (VK_INSTANCE*)inst;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDestroyInstance(
- XGL_INSTANCE pInstance)
+ICD_EXPORT VK_RESULT VKAPI vkDestroyInstance(
+ VK_INSTANCE pInstance)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglEnumerateGpus(
- XGL_INSTANCE instance,
+ICD_EXPORT VK_RESULT VKAPI vkEnumerateGpus(
+ VK_INSTANCE instance,
uint32_t maxGpus,
uint32_t* pGpuCount,
- XGL_PHYSICAL_GPU* pGpus)
+ VK_PHYSICAL_GPU* pGpus)
{
NULLDRV_LOG_FUNC;
- XGL_RESULT ret;
+ VK_RESULT ret;
struct nulldrv_gpu *gpu;
*pGpuCount = 1;
ret = nulldrv_gpu_add(0, 0, 0, &gpu);
- if (ret == XGL_SUCCESS)
- pGpus[0] = (XGL_PHYSICAL_GPU) gpu;
+ if (ret == VK_SUCCESS)
+ pGpus[0] = (VK_PHYSICAL_GPU) gpu;
return ret;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(
- XGL_PHYSICAL_GPU gpu,
+ICD_EXPORT VK_RESULT VKAPI vkEnumerateLayers(
+ VK_PHYSICAL_GPU gpu,
size_t maxLayerCount,
size_t maxStringSize,
size_t* pOutLayerCount,
void* pReserved)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(
- XGL_INSTANCE instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback,
+ICD_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(
+ VK_INSTANCE instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback,
void* pUserData)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(
- XGL_INSTANCE instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
+ICD_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(
+ VK_INSTANCE instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetGlobalOption(
- XGL_INSTANCE instance,
- XGL_DBG_GLOBAL_OPTION dbgOption,
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetGlobalOption(
+ VK_INSTANCE instance,
+ VK_DBG_GLOBAL_OPTION dbgOption,
size_t dataSize,
const void* pData)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDestroyObject(
- XGL_OBJECT object)
+ICD_EXPORT VK_RESULT VKAPI vkDestroyObject(
+ VK_OBJECT object)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetObjectInfo(
- XGL_BASE_OBJECT object,
- XGL_OBJECT_INFO_TYPE infoType,
+ICD_EXPORT VK_RESULT VKAPI vkGetObjectInfo(
+ VK_BASE_OBJECT object,
+ VK_OBJECT_INFO_TYPE infoType,
size_t* pDataSize,
void* pData)
{
    NULLDRV_LOG_FUNC;
    struct nulldrv_base *base = nulldrv_base(object);
return base->get_info(base, infoType, pDataSize, pData);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBindObjectMemory(
- XGL_OBJECT object,
+ICD_EXPORT VK_RESULT VKAPI vkBindObjectMemory(
+ VK_OBJECT object,
uint32_t allocationIdx,
- XGL_GPU_MEMORY mem_,
- XGL_GPU_SIZE memOffset)
+ VK_GPU_MEMORY mem_,
+ VK_GPU_SIZE memOffset)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBindObjectMemoryRange(
- XGL_OBJECT object,
+ICD_EXPORT VK_RESULT VKAPI vkBindObjectMemoryRange(
+ VK_OBJECT object,
uint32_t allocationIdx,
- XGL_GPU_SIZE rangeOffset,
- XGL_GPU_SIZE rangeSize,
- XGL_GPU_MEMORY mem,
- XGL_GPU_SIZE memOffset)
+ VK_GPU_SIZE rangeOffset,
+ VK_GPU_SIZE rangeSize,
+ VK_GPU_MEMORY mem,
+ VK_GPU_SIZE memOffset)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBindImageMemoryRange(
- XGL_IMAGE image,
+ICD_EXPORT VK_RESULT VKAPI vkBindImageMemoryRange(
+ VK_IMAGE image,
uint32_t allocationIdx,
- const XGL_IMAGE_MEMORY_BIND_INFO* bindInfo,
- XGL_GPU_MEMORY mem,
- XGL_GPU_SIZE memOffset)
+ const VK_IMAGE_MEMORY_BIND_INFO* bindInfo,
+ VK_GPU_MEMORY mem,
+ VK_GPU_SIZE memOffset)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetObjectTag(
- XGL_BASE_OBJECT object,
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetObjectTag(
+ VK_BASE_OBJECT object,
size_t tagSize,
const void* pTag)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipeline(
- XGL_DEVICE device,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE* pPipeline)
+ICD_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipeline(
+ VK_DEVICE device,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE* pPipeline)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_pipeline **) pPipeline);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipelineDerivative(
- XGL_DEVICE device,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE basePipeline,
- XGL_PIPELINE* pPipeline)
+ICD_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipelineDerivative(
+ VK_DEVICE device,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE basePipeline,
+ VK_PIPELINE* pPipeline)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_pipeline **) pPipeline);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateComputePipeline(
- XGL_DEVICE device,
- const XGL_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE* pPipeline)
+ICD_EXPORT VK_RESULT VKAPI vkCreateComputePipeline(
+ VK_DEVICE device,
+ const VK_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE* pPipeline)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglStorePipeline(
- XGL_PIPELINE pipeline,
+ICD_EXPORT VK_RESULT VKAPI vkStorePipeline(
+ VK_PIPELINE pipeline,
size_t* pDataSize,
void* pData)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglLoadPipeline(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkLoadPipeline(
+ VK_DEVICE device,
size_t dataSize,
const void* pData,
- XGL_PIPELINE* pPipeline)
+ VK_PIPELINE* pPipeline)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglLoadPipelineDerivative(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkLoadPipelineDerivative(
+ VK_DEVICE device,
size_t dataSize,
const void* pData,
- XGL_PIPELINE basePipeline,
- XGL_PIPELINE* pPipeline)
+ VK_PIPELINE basePipeline,
+ VK_PIPELINE* pPipeline)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateQueryPool(
- XGL_DEVICE device,
- const XGL_QUERY_POOL_CREATE_INFO* pCreateInfo,
- XGL_QUERY_POOL* pQueryPool)
+ICD_EXPORT VK_RESULT VKAPI vkCreateQueryPool(
+ VK_DEVICE device,
+ const VK_QUERY_POOL_CREATE_INFO* pCreateInfo,
+ VK_QUERY_POOL* pQueryPool)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetQueryPoolResults(
- XGL_QUERY_POOL queryPool,
+ICD_EXPORT VK_RESULT VKAPI vkGetQueryPoolResults(
+ VK_QUERY_POOL queryPool,
uint32_t startQuery,
uint32_t queryCount,
size_t* pDataSize,
void* pData)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueAddMemReference(
- XGL_QUEUE queue,
- XGL_GPU_MEMORY mem)
+ICD_EXPORT VK_RESULT VKAPI vkQueueAddMemReference(
+ VK_QUEUE queue,
+ VK_GPU_MEMORY mem)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueRemoveMemReference(
- XGL_QUEUE queue,
- XGL_GPU_MEMORY mem)
+ICD_EXPORT VK_RESULT VKAPI vkQueueRemoveMemReference(
+ VK_QUEUE queue,
+ VK_GPU_MEMORY mem)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueWaitIdle(
- XGL_QUEUE queue_)
+ICD_EXPORT VK_RESULT VKAPI vkQueueWaitIdle(
+ VK_QUEUE queue_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueSubmit(
- XGL_QUEUE queue_,
+ICD_EXPORT VK_RESULT VKAPI vkQueueSubmit(
+ VK_QUEUE queue_,
uint32_t cmdBufferCount,
- const XGL_CMD_BUFFER* pCmdBuffers,
- XGL_FENCE fence_)
+ const VK_CMD_BUFFER* pCmdBuffers,
+ VK_FENCE fence_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglOpenSharedSemaphore(
- XGL_DEVICE device,
- const XGL_SEMAPHORE_OPEN_INFO* pOpenInfo,
- XGL_SEMAPHORE* pSemaphore)
+ICD_EXPORT VK_RESULT VKAPI vkOpenSharedSemaphore(
+ VK_DEVICE device,
+ const VK_SEMAPHORE_OPEN_INFO* pOpenInfo,
+ VK_SEMAPHORE* pSemaphore)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateSemaphore(
- XGL_DEVICE device,
- const XGL_SEMAPHORE_CREATE_INFO* pCreateInfo,
- XGL_SEMAPHORE* pSemaphore)
+ICD_EXPORT VK_RESULT VKAPI vkCreateSemaphore(
+ VK_DEVICE device,
+ const VK_SEMAPHORE_CREATE_INFO* pCreateInfo,
+ VK_SEMAPHORE* pSemaphore)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueSignalSemaphore(
- XGL_QUEUE queue,
- XGL_SEMAPHORE semaphore)
+ICD_EXPORT VK_RESULT VKAPI vkQueueSignalSemaphore(
+ VK_QUEUE queue,
+ VK_SEMAPHORE semaphore)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueWaitSemaphore(
- XGL_QUEUE queue,
- XGL_SEMAPHORE semaphore)
+ICD_EXPORT VK_RESULT VKAPI vkQueueWaitSemaphore(
+ VK_QUEUE queue,
+ VK_SEMAPHORE semaphore)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateSampler(
- XGL_DEVICE device,
- const XGL_SAMPLER_CREATE_INFO* pCreateInfo,
- XGL_SAMPLER* pSampler)
+ICD_EXPORT VK_RESULT VKAPI vkCreateSampler(
+ VK_DEVICE device,
+ const VK_SAMPLER_CREATE_INFO* pCreateInfo,
+ VK_SAMPLER* pSampler)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_sampler **) pSampler);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateShader(
- XGL_DEVICE device,
- const XGL_SHADER_CREATE_INFO* pCreateInfo,
- XGL_SHADER* pShader)
+ICD_EXPORT VK_RESULT VKAPI vkCreateShader(
+ VK_DEVICE device,
+ const VK_SHADER_CREATE_INFO* pCreateInfo,
+ VK_SHADER* pShader)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
return shader_create(dev, pCreateInfo, (struct nulldrv_shader **) pShader);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDynamicViewportState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_VP_STATE_OBJECT* pState)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDynamicViewportState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_VP_STATE_OBJECT* pState)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_dynamic_vp **) pState);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDynamicRasterState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_RS_STATE_OBJECT* pState)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDynamicRasterState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_RS_STATE_OBJECT* pState)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_dynamic_rs **) pState);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDynamicColorBlendState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_CB_STATE_OBJECT* pState)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDynamicColorBlendState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_CB_STATE_OBJECT* pState)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_dynamic_cb **) pState);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDynamicDepthStencilState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_DS_STATE_OBJECT* pState)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDynamicDepthStencilState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_DS_STATE_OBJECT* pState)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_dynamic_ds **) pState);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateBufferView(
- XGL_DEVICE device,
- const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo,
- XGL_BUFFER_VIEW* pView)
+ICD_EXPORT VK_RESULT VKAPI vkCreateBufferView(
+ VK_DEVICE device,
+ const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo,
+ VK_BUFFER_VIEW* pView)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_buf_view **) pView);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateImageView(
- XGL_DEVICE device,
- const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
- XGL_IMAGE_VIEW* pView)
+ICD_EXPORT VK_RESULT VKAPI vkCreateImageView(
+ VK_DEVICE device,
+ const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
+ VK_IMAGE_VIEW* pView)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_img_view **) pView);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateColorAttachmentView(
- XGL_DEVICE device,
- const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo,
- XGL_COLOR_ATTACHMENT_VIEW* pView)
+ICD_EXPORT VK_RESULT VKAPI vkCreateColorAttachmentView(
+ VK_DEVICE device,
+ const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo,
+ VK_COLOR_ATTACHMENT_VIEW* pView)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_rt_view **) pView);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDepthStencilView(
- XGL_DEVICE device,
- const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo,
- XGL_DEPTH_STENCIL_VIEW* pView)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDepthStencilView(
+ VK_DEVICE device,
+ const VK_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo,
+ VK_DEPTH_STENCIL_VIEW* pView)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorSetLayout(
- XGL_DEVICE device,
- const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo,
- XGL_DESCRIPTOR_SET_LAYOUT* pSetLayout)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDescriptorSetLayout(
+ VK_DEVICE device,
+ const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo,
+ VK_DESCRIPTOR_SET_LAYOUT* pSetLayout)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_desc_layout **) pSetLayout);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorSetLayoutChain(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkCreateDescriptorSetLayoutChain(
+ VK_DEVICE device,
uint32_t setLayoutArrayCount,
- const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray,
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain)
+ const VK_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray,
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_desc_layout_chain **) pLayoutChain);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBeginDescriptorPoolUpdate(
- XGL_DEVICE device,
- XGL_DESCRIPTOR_UPDATE_MODE updateMode)
+ICD_EXPORT VK_RESULT VKAPI vkBeginDescriptorPoolUpdate(
+ VK_DEVICE device,
+ VK_DESCRIPTOR_UPDATE_MODE updateMode)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglEndDescriptorPoolUpdate(
- XGL_DEVICE device,
- XGL_CMD_BUFFER cmd_)
+ICD_EXPORT VK_RESULT VKAPI vkEndDescriptorPoolUpdate(
+ VK_DEVICE device,
+ VK_CMD_BUFFER cmd_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorPool(
- XGL_DEVICE device,
- XGL_DESCRIPTOR_POOL_USAGE poolUsage,
+ICD_EXPORT VK_RESULT VKAPI vkCreateDescriptorPool(
+ VK_DEVICE device,
+ VK_DESCRIPTOR_POOL_USAGE poolUsage,
uint32_t maxSets,
- const XGL_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo,
- XGL_DESCRIPTOR_POOL* pDescriptorPool)
+ const VK_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo,
+ VK_DESCRIPTOR_POOL* pDescriptorPool)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_desc_pool **) pDescriptorPool);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglResetDescriptorPool(
- XGL_DESCRIPTOR_POOL descriptorPool)
+ICD_EXPORT VK_RESULT VKAPI vkResetDescriptorPool(
+ VK_DESCRIPTOR_POOL descriptorPool)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglAllocDescriptorSets(
- XGL_DESCRIPTOR_POOL descriptorPool,
- XGL_DESCRIPTOR_SET_USAGE setUsage,
+ICD_EXPORT VK_RESULT VKAPI vkAllocDescriptorSets(
+ VK_DESCRIPTOR_POOL descriptorPool,
+ VK_DESCRIPTOR_SET_USAGE setUsage,
uint32_t count,
- const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayouts,
- XGL_DESCRIPTOR_SET* pDescriptorSets,
+ const VK_DESCRIPTOR_SET_LAYOUT* pSetLayouts,
+ VK_DESCRIPTOR_SET* pDescriptorSets,
uint32_t* pCount)
{
NULLDRV_LOG_FUNC;
struct nulldrv_desc_pool *pool = nulldrv_desc_pool(descriptorPool);
struct nulldrv_dev *dev = pool->dev;
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
uint32_t i;
for (i = 0; i < count; i++) {
const struct nulldrv_desc_layout *layout =
- nulldrv_desc_layout((XGL_DESCRIPTOR_SET_LAYOUT) pSetLayouts[i]);
+ nulldrv_desc_layout((VK_DESCRIPTOR_SET_LAYOUT) pSetLayouts[i]);
ret = nulldrv_desc_set_create(dev, pool, setUsage, layout,
(struct nulldrv_desc_set **) &pDescriptorSets[i]);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
break;
}
return ret;
}
-ICD_EXPORT void XGLAPI xglClearDescriptorSets(
- XGL_DESCRIPTOR_POOL descriptorPool,
+ICD_EXPORT void VKAPI vkClearDescriptorSets(
+ VK_DESCRIPTOR_POOL descriptorPool,
uint32_t count,
- const XGL_DESCRIPTOR_SET* pDescriptorSets)
+ const VK_DESCRIPTOR_SET* pDescriptorSets)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglUpdateDescriptors(
- XGL_DESCRIPTOR_SET descriptorSet,
+ICD_EXPORT void VKAPI vkUpdateDescriptors(
+ VK_DESCRIPTOR_SET descriptorSet,
uint32_t updateCount,
const void** ppUpdateArray)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateFramebuffer(
- XGL_DEVICE device,
- const XGL_FRAMEBUFFER_CREATE_INFO* info,
- XGL_FRAMEBUFFER* fb_ret)
+ICD_EXPORT VK_RESULT VKAPI vkCreateFramebuffer(
+ VK_DEVICE device,
+ const VK_FRAMEBUFFER_CREATE_INFO* info,
+ VK_FRAMEBUFFER* fb_ret)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateRenderPass(
- XGL_DEVICE device,
- const XGL_RENDER_PASS_CREATE_INFO* info,
- XGL_RENDER_PASS* rp_ret)
+ICD_EXPORT VK_RESULT VKAPI vkCreateRenderPass(
+ VK_DEVICE device,
+ const VK_RENDER_PASS_CREATE_INFO* info,
+ VK_RENDER_PASS* rp_ret)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
return nulldrv_render_pass_create(dev, info, (struct nulldrv_render_pass **) rp_ret);
}
-ICD_EXPORT void XGLAPI xglCmdBeginRenderPass(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_RENDER_PASS_BEGIN* pRenderPassBegin)
+ICD_EXPORT void VKAPI vkCmdBeginRenderPass(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_RENDER_PASS_BEGIN* pRenderPassBegin)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdEndRenderPass(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_RENDER_PASS renderPass)
+ICD_EXPORT void VKAPI vkCmdEndRenderPass(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_RENDER_PASS renderPass)
{
NULLDRV_LOG_FUNC;
}
return 0;
}
-ICD_EXPORT XGL_RESULT xcbQueuePresent(void *queue, void *image, void* fence)
+ICD_EXPORT VK_RESULT xcbQueuePresent(void *queue, void *image, void *fence)
{
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <string.h>
#include <assert.h>
-#include <xgl.h>
-#include <xglDbg.h>
-#include <xglIcd.h>
+#include <vulkan.h>
+#include <vkDbg.h>
+#include <vkIcd.h>
#if defined(PLATFORM_LINUX)
-#include <xglWsiX11Ext.h>
+#include <vkWsiX11Ext.h>
#else
-#include <xglWsiWinExt.h>
+#include <vkWsiWinExt.h>
#endif
#include "icd.h"
struct nulldrv_base {
void *loader_data;
uint32_t magic;
- XGL_RESULT (*get_info)(struct nulldrv_base *base, int type1,
+ VK_RESULT (*get_info)(struct nulldrv_base *base, int type1,
size_t *size, void *data);
};
struct nulldrv_img {
struct nulldrv_obj obj;
- XGL_IMAGE_TYPE type;
+ VK_IMAGE_TYPE type;
int32_t depth;
uint32_t mip_levels;
uint32_t array_size;
- XGL_FLAGS usage;
- XGL_IMAGE_FORMAT_CLASS format_class;
+ VK_FLAGS usage;
+ VK_IMAGE_FORMAT_CLASS format_class;
uint32_t samples;
size_t total_size;
};
struct nulldrv_mem {
struct nulldrv_base base;
struct nulldrv_bo *bo;
- XGL_GPU_SIZE size;
+ VK_GPU_SIZE size;
};
struct nulldrv_ds_view {
struct nulldrv_buf {
struct nulldrv_obj obj;
- XGL_GPU_SIZE size;
- XGL_FLAGS usage;
+ VK_GPU_SIZE size;
+ VK_FLAGS usage;
};
struct nulldrv_desc_layout {
//
-// File: xgl.h
+// File: vulkan.h
//
/*
** Copyright (c) 2014 The Khronos Group Inc.
** MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
*/
-#ifndef __XGL_H__
-#define __XGL_H__
+#ifndef __VULKAN_H__
+#define __VULKAN_H__
-#define XGL_MAKE_VERSION(major, minor, patch) \
+#define VK_MAKE_VERSION(major, minor, patch) \
((major << 22) | (minor << 12) | patch)
-#include "xglPlatform.h"
+#include "vkPlatform.h"
-// XGL API version supported by this file
-#define XGL_API_VERSION XGL_MAKE_VERSION(0, 67, 0)
+// VK API version supported by this file
+#define VK_API_VERSION VK_MAKE_VERSION(0, 67, 0)
#ifdef __cplusplus
extern "C"
/*
***************************************************************************************************
-* Core XGL API
+* Core VK API
***************************************************************************************************
*/
#ifdef __cplusplus
- #define XGL_DEFINE_HANDLE(_obj) struct _obj##_T {char _dummy;}; typedef _obj##_T* _obj;
- #define XGL_DEFINE_SUBCLASS_HANDLE(_obj, _base) struct _obj##_T : public _base##_T {}; typedef _obj##_T* _obj;
+ #define VK_DEFINE_HANDLE(_obj) struct _obj##_T {char _dummy;}; typedef _obj##_T* _obj;
+ #define VK_DEFINE_SUBCLASS_HANDLE(_obj, _base) struct _obj##_T : public _base##_T {}; typedef _obj##_T* _obj;
#else // __cplusplus
- #define XGL_DEFINE_HANDLE(_obj) typedef void* _obj;
- #define XGL_DEFINE_SUBCLASS_HANDLE(_obj, _base) typedef void* _obj;
+ #define VK_DEFINE_HANDLE(_obj) typedef void* _obj;
+ #define VK_DEFINE_SUBCLASS_HANDLE(_obj, _base) typedef void* _obj;
#endif // __cplusplus
-XGL_DEFINE_HANDLE(XGL_INSTANCE)
-XGL_DEFINE_HANDLE(XGL_PHYSICAL_GPU)
-XGL_DEFINE_HANDLE(XGL_BASE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DEVICE, XGL_BASE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_QUEUE, XGL_BASE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_GPU_MEMORY, XGL_BASE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_OBJECT, XGL_BASE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_BUFFER, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_BUFFER_VIEW, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_IMAGE, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_IMAGE_VIEW, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_COLOR_ATTACHMENT_VIEW, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DEPTH_STENCIL_VIEW, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_SHADER, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_PIPELINE, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_SAMPLER, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DESCRIPTOR_SET, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DESCRIPTOR_SET_LAYOUT, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DESCRIPTOR_SET_LAYOUT_CHAIN, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DESCRIPTOR_POOL, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DYNAMIC_STATE_OBJECT, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DYNAMIC_VP_STATE_OBJECT, XGL_DYNAMIC_STATE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DYNAMIC_RS_STATE_OBJECT, XGL_DYNAMIC_STATE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DYNAMIC_CB_STATE_OBJECT, XGL_DYNAMIC_STATE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DYNAMIC_DS_STATE_OBJECT, XGL_DYNAMIC_STATE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_CMD_BUFFER, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_FENCE, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_SEMAPHORE, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_EVENT, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_QUERY_POOL, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_FRAMEBUFFER, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_RENDER_PASS, XGL_OBJECT)
-
-#define XGL_MAX_PHYSICAL_GPUS 16
-#define XGL_MAX_PHYSICAL_GPU_NAME 256
-
-#define XGL_LOD_CLAMP_NONE MAX_FLOAT
-#define XGL_LAST_MIP_OR_SLICE 0xffffffff
-
-#define XGL_TRUE 1
-#define XGL_FALSE 0
-
-#define XGL_NULL_HANDLE 0
+VK_DEFINE_HANDLE(VK_INSTANCE)
+VK_DEFINE_HANDLE(VK_PHYSICAL_GPU)
+VK_DEFINE_HANDLE(VK_BASE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DEVICE, VK_BASE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_QUEUE, VK_BASE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_GPU_MEMORY, VK_BASE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_OBJECT, VK_BASE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_BUFFER, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_BUFFER_VIEW, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_IMAGE, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_IMAGE_VIEW, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_COLOR_ATTACHMENT_VIEW, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DEPTH_STENCIL_VIEW, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_SHADER, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_PIPELINE, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_SAMPLER, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DESCRIPTOR_SET, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DESCRIPTOR_SET_LAYOUT, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DESCRIPTOR_SET_LAYOUT_CHAIN, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DESCRIPTOR_POOL, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DYNAMIC_STATE_OBJECT, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DYNAMIC_VP_STATE_OBJECT, VK_DYNAMIC_STATE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DYNAMIC_RS_STATE_OBJECT, VK_DYNAMIC_STATE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DYNAMIC_CB_STATE_OBJECT, VK_DYNAMIC_STATE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DYNAMIC_DS_STATE_OBJECT, VK_DYNAMIC_STATE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_CMD_BUFFER, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_FENCE, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_SEMAPHORE, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_EVENT, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_QUERY_POOL, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_FRAMEBUFFER, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_RENDER_PASS, VK_OBJECT)
+
+#define VK_MAX_PHYSICAL_GPUS 16
+#define VK_MAX_PHYSICAL_GPU_NAME 256
+
+#define VK_LOD_CLAMP_NONE MAX_FLOAT
+#define VK_LAST_MIP_OR_SLICE 0xffffffff
+
+#define VK_TRUE 1
+#define VK_FALSE 0
+
+#define VK_NULL_HANDLE 0
// This macro defines INT_MAX in enumerations to force compilers to use 32 bits
// to represent them. This may or may not be necessary on some compilers. The
// option to compile it out may allow compilers that warn about missing enumerants
// in switch statements to be silenced.
-#define XGL_MAX_ENUM(T) T##_MAX_ENUM = 0x7FFFFFFF
+#define VK_MAX_ENUM(T) T##_MAX_ENUM = 0x7FFFFFFF
// ------------------------------------------------------------------------------------------------
// Enumerations
-typedef enum _XGL_MEMORY_PRIORITY
-{
- XGL_MEMORY_PRIORITY_UNUSED = 0x0,
- XGL_MEMORY_PRIORITY_VERY_LOW = 0x1,
- XGL_MEMORY_PRIORITY_LOW = 0x2,
- XGL_MEMORY_PRIORITY_NORMAL = 0x3,
- XGL_MEMORY_PRIORITY_HIGH = 0x4,
- XGL_MEMORY_PRIORITY_VERY_HIGH = 0x5,
-
- XGL_MEMORY_PRIORITY_BEGIN_RANGE = XGL_MEMORY_PRIORITY_UNUSED,
- XGL_MEMORY_PRIORITY_END_RANGE = XGL_MEMORY_PRIORITY_VERY_HIGH,
- XGL_NUM_MEMORY_PRIORITY = (XGL_MEMORY_PRIORITY_END_RANGE - XGL_MEMORY_PRIORITY_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_MEMORY_PRIORITY)
-} XGL_MEMORY_PRIORITY;
-
-typedef enum _XGL_IMAGE_LAYOUT
-{
- XGL_IMAGE_LAYOUT_UNDEFINED = 0x00000000, // Implicit layout an image is when its contents are undefined due to various reasons (e.g. right after creation)
- XGL_IMAGE_LAYOUT_GENERAL = 0x00000001, // General layout when image can be used for any kind of access
- XGL_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL = 0x00000002, // Optimal layout when image is only used for color attachment read/write
- XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL = 0x00000003, // Optimal layout when image is only used for depth/stencil attachment read/write
- XGL_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL = 0x00000004, // Optimal layout when image is used for read only depth/stencil attachment and shader access
- XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL = 0x00000005, // Optimal layout when image is used for read only shader access
- XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL = 0x00000006, // Optimal layout when image is used only for clear operations
- XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL = 0x00000007, // Optimal layout when image is used only as source of transfer operations
- XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL = 0x00000008, // Optimal layout when image is used only as destination of transfer operations
-
- XGL_IMAGE_LAYOUT_BEGIN_RANGE = XGL_IMAGE_LAYOUT_UNDEFINED,
- XGL_IMAGE_LAYOUT_END_RANGE = XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
- XGL_NUM_IMAGE_LAYOUT = (XGL_IMAGE_LAYOUT_END_RANGE - XGL_IMAGE_LAYOUT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_IMAGE_LAYOUT)
-} XGL_IMAGE_LAYOUT;
-
-typedef enum _XGL_PIPE_EVENT
-{
- XGL_PIPE_EVENT_TOP_OF_PIPE = 0x00000001, // Set event before the GPU starts processing subsequent command
- XGL_PIPE_EVENT_VERTEX_PROCESSING_COMPLETE = 0x00000002, // Set event when all pending vertex processing is complete
- XGL_PIPE_EVENT_LOCAL_FRAGMENT_PROCESSING_COMPLETE = 0x00000003, // Set event when all pending fragment shader executions are complete, within each fragment location
- XGL_PIPE_EVENT_FRAGMENT_PROCESSING_COMPLETE = 0x00000004, // Set event when all pending fragment shader executions are complete
- XGL_PIPE_EVENT_GRAPHICS_PIPELINE_COMPLETE = 0x00000005, // Set event when all pending graphics operations are complete
- XGL_PIPE_EVENT_COMPUTE_PIPELINE_COMPLETE = 0x00000006, // Set event when all pending compute operations are complete
- XGL_PIPE_EVENT_TRANSFER_COMPLETE = 0x00000007, // Set event when all pending transfer operations are complete
- XGL_PIPE_EVENT_GPU_COMMANDS_COMPLETE = 0x00000008, // Set event when all pending GPU work is complete
-
- XGL_PIPE_EVENT_BEGIN_RANGE = XGL_PIPE_EVENT_TOP_OF_PIPE,
- XGL_PIPE_EVENT_END_RANGE = XGL_PIPE_EVENT_GPU_COMMANDS_COMPLETE,
- XGL_NUM_PIPE_EVENT = (XGL_PIPE_EVENT_END_RANGE - XGL_PIPE_EVENT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_PIPE_EVENT)
-} XGL_PIPE_EVENT;
-
-typedef enum _XGL_WAIT_EVENT
-{
- XGL_WAIT_EVENT_TOP_OF_PIPE = 0x00000001, // Wait event before the GPU starts processing subsequent commands
- XGL_WAIT_EVENT_BEFORE_RASTERIZATION = 0x00000002, // Wait event before rasterizing subsequent primitives
-
- XGL_WAIT_EVENT_BEGIN_RANGE = XGL_WAIT_EVENT_TOP_OF_PIPE,
- XGL_WAIT_EVENT_END_RANGE = XGL_WAIT_EVENT_BEFORE_RASTERIZATION,
- XGL_NUM_WAIT_EVENT = (XGL_WAIT_EVENT_END_RANGE - XGL_WAIT_EVENT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_WAIT_EVENT)
-} XGL_WAIT_EVENT;
-
-typedef enum _XGL_MEMORY_OUTPUT_FLAGS
-{
- XGL_MEMORY_OUTPUT_CPU_WRITE_BIT = 0x00000001, // Controls output coherency of CPU writes
- XGL_MEMORY_OUTPUT_SHADER_WRITE_BIT = 0x00000002, // Controls output coherency of generic shader writes
- XGL_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT = 0x00000004, // Controls output coherency of color attachment writes
- XGL_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000008, // Controls output coherency of depth/stencil attachment writes
- XGL_MEMORY_OUTPUT_COPY_BIT = 0x00000010, // Controls output coherency of copy operations
- XGL_MAX_ENUM(_XGL_MEMORY_OUTPUT_FLAGS)
-} XGL_MEMORY_OUTPUT_FLAGS;
-
-typedef enum _XGL_MEMORY_INPUT_FLAGS
-{
- XGL_MEMORY_INPUT_CPU_READ_BIT = 0x00000001, // Controls input coherency of CPU reads
- XGL_MEMORY_INPUT_INDIRECT_COMMAND_BIT = 0x00000002, // Controls input coherency of indirect command reads
- XGL_MEMORY_INPUT_INDEX_FETCH_BIT = 0x00000004, // Controls input coherency of index fetches
- XGL_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT = 0x00000008, // Controls input coherency of vertex attribute fetches
- XGL_MEMORY_INPUT_UNIFORM_READ_BIT = 0x00000010, // Controls input coherency of uniform buffer reads
- XGL_MEMORY_INPUT_SHADER_READ_BIT = 0x00000020, // Controls input coherency of generic shader reads
- XGL_MEMORY_INPUT_COLOR_ATTACHMENT_BIT = 0x00000040, // Controls input coherency of color attachment reads
- XGL_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000080, // Controls input coherency of depth/stencil attachment reads
- XGL_MEMORY_INPUT_COPY_BIT = 0x00000100, // Controls input coherency of copy operations
- XGL_MAX_ENUM(_XGL_MEMORY_INPUT_FLAGS)
-} XGL_MEMORY_INPUT_FLAGS;
-
-typedef enum _XGL_ATTACHMENT_LOAD_OP
-{
- XGL_ATTACHMENT_LOAD_OP_LOAD = 0x00000000,
- XGL_ATTACHMENT_LOAD_OP_CLEAR = 0x00000001,
- XGL_ATTACHMENT_LOAD_OP_DONT_CARE = 0x00000002,
-
- XGL_ATTACHMENT_LOAD_OP_BEGIN_RANGE = XGL_ATTACHMENT_LOAD_OP_LOAD,
- XGL_ATTACHMENT_LOAD_OP_END_RANGE = XGL_ATTACHMENT_LOAD_OP_DONT_CARE,
- XGL_NUM_ATTACHMENT_LOAD_OP = (XGL_ATTACHMENT_LOAD_OP_END_RANGE - XGL_ATTACHMENT_LOAD_OP_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_ATTACHMENT_LOAD_OP)
-} XGL_ATTACHMENT_LOAD_OP;
-
-typedef enum _XGL_ATTACHMENT_STORE_OP
-{
- XGL_ATTACHMENT_STORE_OP_STORE = 0x00000000,
- XGL_ATTACHMENT_STORE_OP_RESOLVE_MSAA = 0x00000001,
- XGL_ATTACHMENT_STORE_OP_DONT_CARE = 0x00000002,
-
- XGL_ATTACHMENT_STORE_OP_BEGIN_RANGE = XGL_ATTACHMENT_STORE_OP_STORE,
- XGL_ATTACHMENT_STORE_OP_END_RANGE = XGL_ATTACHMENT_STORE_OP_DONT_CARE,
- XGL_NUM_ATTACHMENT_STORE_OP = (XGL_ATTACHMENT_STORE_OP_END_RANGE - XGL_ATTACHMENT_STORE_OP_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_ATTACHMENT_STORE_OP)
-} XGL_ATTACHMENT_STORE_OP;
-
-typedef enum _XGL_IMAGE_TYPE
-{
- XGL_IMAGE_1D = 0x00000000,
- XGL_IMAGE_2D = 0x00000001,
- XGL_IMAGE_3D = 0x00000002,
-
- XGL_IMAGE_TYPE_BEGIN_RANGE = XGL_IMAGE_1D,
- XGL_IMAGE_TYPE_END_RANGE = XGL_IMAGE_3D,
- XGL_NUM_IMAGE_TYPE = (XGL_IMAGE_TYPE_END_RANGE - XGL_IMAGE_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_IMAGE_TYPE)
-} XGL_IMAGE_TYPE;
-
-typedef enum _XGL_IMAGE_TILING
-{
- XGL_LINEAR_TILING = 0x00000000,
- XGL_OPTIMAL_TILING = 0x00000001,
-
- XGL_IMAGE_TILING_BEGIN_RANGE = XGL_LINEAR_TILING,
- XGL_IMAGE_TILING_END_RANGE = XGL_OPTIMAL_TILING,
- XGL_NUM_IMAGE_TILING = (XGL_IMAGE_TILING_END_RANGE - XGL_IMAGE_TILING_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_IMAGE_TILING)
-} XGL_IMAGE_TILING;
-
-typedef enum _XGL_IMAGE_VIEW_TYPE
-{
- XGL_IMAGE_VIEW_1D = 0x00000000,
- XGL_IMAGE_VIEW_2D = 0x00000001,
- XGL_IMAGE_VIEW_3D = 0x00000002,
- XGL_IMAGE_VIEW_CUBE = 0x00000003,
-
- XGL_IMAGE_VIEW_TYPE_BEGIN_RANGE = XGL_IMAGE_VIEW_1D,
- XGL_IMAGE_VIEW_TYPE_END_RANGE = XGL_IMAGE_VIEW_CUBE,
- XGL_NUM_IMAGE_VIEW_TYPE = (XGL_IMAGE_VIEW_TYPE_END_RANGE - XGL_IMAGE_VIEW_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_IMAGE_VIEW_TYPE)
-} XGL_IMAGE_VIEW_TYPE;
-
-typedef enum _XGL_IMAGE_ASPECT
-{
- XGL_IMAGE_ASPECT_COLOR = 0x00000000,
- XGL_IMAGE_ASPECT_DEPTH = 0x00000001,
- XGL_IMAGE_ASPECT_STENCIL = 0x00000002,
-
- XGL_IMAGE_ASPECT_BEGIN_RANGE = XGL_IMAGE_ASPECT_COLOR,
- XGL_IMAGE_ASPECT_END_RANGE = XGL_IMAGE_ASPECT_STENCIL,
- XGL_NUM_IMAGE_ASPECT = (XGL_IMAGE_ASPECT_END_RANGE - XGL_IMAGE_ASPECT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_IMAGE_ASPECT)
-} XGL_IMAGE_ASPECT;
-
-typedef enum _XGL_CHANNEL_SWIZZLE
-{
- XGL_CHANNEL_SWIZZLE_ZERO = 0x00000000,
- XGL_CHANNEL_SWIZZLE_ONE = 0x00000001,
- XGL_CHANNEL_SWIZZLE_R = 0x00000002,
- XGL_CHANNEL_SWIZZLE_G = 0x00000003,
- XGL_CHANNEL_SWIZZLE_B = 0x00000004,
- XGL_CHANNEL_SWIZZLE_A = 0x00000005,
-
- XGL_CHANNEL_SWIZZLE_BEGIN_RANGE = XGL_CHANNEL_SWIZZLE_ZERO,
- XGL_CHANNEL_SWIZZLE_END_RANGE = XGL_CHANNEL_SWIZZLE_A,
- XGL_NUM_CHANNEL_SWIZZLE = (XGL_CHANNEL_SWIZZLE_END_RANGE - XGL_CHANNEL_SWIZZLE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_CHANNEL_SWIZZLE)
-} XGL_CHANNEL_SWIZZLE;
-
-typedef enum _XGL_DESCRIPTOR_TYPE
-{
- XGL_DESCRIPTOR_TYPE_SAMPLER = 0x00000000,
- XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE = 0x00000001,
- XGL_DESCRIPTOR_TYPE_TEXTURE = 0x00000002,
- XGL_DESCRIPTOR_TYPE_TEXTURE_BUFFER = 0x00000003,
- XGL_DESCRIPTOR_TYPE_IMAGE = 0x00000004,
- XGL_DESCRIPTOR_TYPE_IMAGE_BUFFER = 0x00000005,
- XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER = 0x00000006,
- XGL_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER = 0x00000007,
- XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC = 0x00000008,
- XGL_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER_DYNAMIC = 0x00000009,
+typedef enum _VK_MEMORY_PRIORITY
+{
+ VK_MEMORY_PRIORITY_UNUSED = 0x0,
+ VK_MEMORY_PRIORITY_VERY_LOW = 0x1,
+ VK_MEMORY_PRIORITY_LOW = 0x2,
+ VK_MEMORY_PRIORITY_NORMAL = 0x3,
+ VK_MEMORY_PRIORITY_HIGH = 0x4,
+ VK_MEMORY_PRIORITY_VERY_HIGH = 0x5,
+
+ VK_MEMORY_PRIORITY_BEGIN_RANGE = VK_MEMORY_PRIORITY_UNUSED,
+ VK_MEMORY_PRIORITY_END_RANGE = VK_MEMORY_PRIORITY_VERY_HIGH,
+ VK_NUM_MEMORY_PRIORITY = (VK_MEMORY_PRIORITY_END_RANGE - VK_MEMORY_PRIORITY_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_MEMORY_PRIORITY)
+} VK_MEMORY_PRIORITY;
+
+typedef enum _VK_IMAGE_LAYOUT
+{
+ VK_IMAGE_LAYOUT_UNDEFINED = 0x00000000, // Implicit layout of an image whose contents are undefined (e.g. right after creation)
+ VK_IMAGE_LAYOUT_GENERAL = 0x00000001, // General layout when image can be used for any kind of access
+ VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL = 0x00000002, // Optimal layout when image is only used for color attachment read/write
+ VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL = 0x00000003, // Optimal layout when image is only used for depth/stencil attachment read/write
+ VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL = 0x00000004, // Optimal layout when image is used for read only depth/stencil attachment and shader access
+ VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL = 0x00000005, // Optimal layout when image is used for read only shader access
+ VK_IMAGE_LAYOUT_CLEAR_OPTIMAL = 0x00000006, // Optimal layout when image is used only for clear operations
+ VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL = 0x00000007, // Optimal layout when image is used only as source of transfer operations
+ VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL = 0x00000008, // Optimal layout when image is used only as destination of transfer operations
+
+ VK_IMAGE_LAYOUT_BEGIN_RANGE = VK_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_END_RANGE = VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
+ VK_NUM_IMAGE_LAYOUT = (VK_IMAGE_LAYOUT_END_RANGE - VK_IMAGE_LAYOUT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_IMAGE_LAYOUT)
+} VK_IMAGE_LAYOUT;
+
+typedef enum _VK_PIPE_EVENT
+{
+ VK_PIPE_EVENT_TOP_OF_PIPE = 0x00000001, // Set event before the GPU starts processing subsequent commands
+ VK_PIPE_EVENT_VERTEX_PROCESSING_COMPLETE = 0x00000002, // Set event when all pending vertex processing is complete
+ VK_PIPE_EVENT_LOCAL_FRAGMENT_PROCESSING_COMPLETE = 0x00000003, // Set event when all pending fragment shader executions are complete, within each fragment location
+ VK_PIPE_EVENT_FRAGMENT_PROCESSING_COMPLETE = 0x00000004, // Set event when all pending fragment shader executions are complete
+ VK_PIPE_EVENT_GRAPHICS_PIPELINE_COMPLETE = 0x00000005, // Set event when all pending graphics operations are complete
+ VK_PIPE_EVENT_COMPUTE_PIPELINE_COMPLETE = 0x00000006, // Set event when all pending compute operations are complete
+ VK_PIPE_EVENT_TRANSFER_COMPLETE = 0x00000007, // Set event when all pending transfer operations are complete
+ VK_PIPE_EVENT_GPU_COMMANDS_COMPLETE = 0x00000008, // Set event when all pending GPU work is complete
+
+ VK_PIPE_EVENT_BEGIN_RANGE = VK_PIPE_EVENT_TOP_OF_PIPE,
+ VK_PIPE_EVENT_END_RANGE = VK_PIPE_EVENT_GPU_COMMANDS_COMPLETE,
+ VK_NUM_PIPE_EVENT = (VK_PIPE_EVENT_END_RANGE - VK_PIPE_EVENT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_PIPE_EVENT)
+} VK_PIPE_EVENT;
+
+typedef enum _VK_WAIT_EVENT
+{
+ VK_WAIT_EVENT_TOP_OF_PIPE = 0x00000001, // Wait event before the GPU starts processing subsequent commands
+ VK_WAIT_EVENT_BEFORE_RASTERIZATION = 0x00000002, // Wait event before rasterizing subsequent primitives
+
+ VK_WAIT_EVENT_BEGIN_RANGE = VK_WAIT_EVENT_TOP_OF_PIPE,
+ VK_WAIT_EVENT_END_RANGE = VK_WAIT_EVENT_BEFORE_RASTERIZATION,
+ VK_NUM_WAIT_EVENT = (VK_WAIT_EVENT_END_RANGE - VK_WAIT_EVENT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_WAIT_EVENT)
+} VK_WAIT_EVENT;
+
+typedef enum _VK_MEMORY_OUTPUT_FLAGS
+{
+ VK_MEMORY_OUTPUT_CPU_WRITE_BIT = 0x00000001, // Controls output coherency of CPU writes
+ VK_MEMORY_OUTPUT_SHADER_WRITE_BIT = 0x00000002, // Controls output coherency of generic shader writes
+ VK_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT = 0x00000004, // Controls output coherency of color attachment writes
+ VK_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000008, // Controls output coherency of depth/stencil attachment writes
+ VK_MEMORY_OUTPUT_COPY_BIT = 0x00000010, // Controls output coherency of copy operations
+ VK_MAX_ENUM(_VK_MEMORY_OUTPUT_FLAGS)
+} VK_MEMORY_OUTPUT_FLAGS;
+
+typedef enum _VK_MEMORY_INPUT_FLAGS
+{
+ VK_MEMORY_INPUT_CPU_READ_BIT = 0x00000001, // Controls input coherency of CPU reads
+ VK_MEMORY_INPUT_INDIRECT_COMMAND_BIT = 0x00000002, // Controls input coherency of indirect command reads
+ VK_MEMORY_INPUT_INDEX_FETCH_BIT = 0x00000004, // Controls input coherency of index fetches
+ VK_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT = 0x00000008, // Controls input coherency of vertex attribute fetches
+ VK_MEMORY_INPUT_UNIFORM_READ_BIT = 0x00000010, // Controls input coherency of uniform buffer reads
+ VK_MEMORY_INPUT_SHADER_READ_BIT = 0x00000020, // Controls input coherency of generic shader reads
+ VK_MEMORY_INPUT_COLOR_ATTACHMENT_BIT = 0x00000040, // Controls input coherency of color attachment reads
+ VK_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000080, // Controls input coherency of depth/stencil attachment reads
+ VK_MEMORY_INPUT_COPY_BIT = 0x00000100, // Controls input coherency of copy operations
+ VK_MAX_ENUM(_VK_MEMORY_INPUT_FLAGS)
+} VK_MEMORY_INPUT_FLAGS;
+
+typedef enum _VK_ATTACHMENT_LOAD_OP
+{
+ VK_ATTACHMENT_LOAD_OP_LOAD = 0x00000000,
+ VK_ATTACHMENT_LOAD_OP_CLEAR = 0x00000001,
+ VK_ATTACHMENT_LOAD_OP_DONT_CARE = 0x00000002,
+
+ VK_ATTACHMENT_LOAD_OP_BEGIN_RANGE = VK_ATTACHMENT_LOAD_OP_LOAD,
+ VK_ATTACHMENT_LOAD_OP_END_RANGE = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
+ VK_NUM_ATTACHMENT_LOAD_OP = (VK_ATTACHMENT_LOAD_OP_END_RANGE - VK_ATTACHMENT_LOAD_OP_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_ATTACHMENT_LOAD_OP)
+} VK_ATTACHMENT_LOAD_OP;
+
+typedef enum _VK_ATTACHMENT_STORE_OP
+{
+ VK_ATTACHMENT_STORE_OP_STORE = 0x00000000,
+ VK_ATTACHMENT_STORE_OP_RESOLVE_MSAA = 0x00000001,
+ VK_ATTACHMENT_STORE_OP_DONT_CARE = 0x00000002,
+
+ VK_ATTACHMENT_STORE_OP_BEGIN_RANGE = VK_ATTACHMENT_STORE_OP_STORE,
+ VK_ATTACHMENT_STORE_OP_END_RANGE = VK_ATTACHMENT_STORE_OP_DONT_CARE,
+ VK_NUM_ATTACHMENT_STORE_OP = (VK_ATTACHMENT_STORE_OP_END_RANGE - VK_ATTACHMENT_STORE_OP_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_ATTACHMENT_STORE_OP)
+} VK_ATTACHMENT_STORE_OP;
+
+typedef enum _VK_IMAGE_TYPE
+{
+ VK_IMAGE_1D = 0x00000000,
+ VK_IMAGE_2D = 0x00000001,
+ VK_IMAGE_3D = 0x00000002,
+
+ VK_IMAGE_TYPE_BEGIN_RANGE = VK_IMAGE_1D,
+ VK_IMAGE_TYPE_END_RANGE = VK_IMAGE_3D,
+ VK_NUM_IMAGE_TYPE = (VK_IMAGE_TYPE_END_RANGE - VK_IMAGE_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_IMAGE_TYPE)
+} VK_IMAGE_TYPE;
+
+typedef enum _VK_IMAGE_TILING
+{
+ VK_LINEAR_TILING = 0x00000000,
+ VK_OPTIMAL_TILING = 0x00000001,
+
+ VK_IMAGE_TILING_BEGIN_RANGE = VK_LINEAR_TILING,
+ VK_IMAGE_TILING_END_RANGE = VK_OPTIMAL_TILING,
+ VK_NUM_IMAGE_TILING = (VK_IMAGE_TILING_END_RANGE - VK_IMAGE_TILING_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_IMAGE_TILING)
+} VK_IMAGE_TILING;
+
+typedef enum _VK_IMAGE_VIEW_TYPE
+{
+ VK_IMAGE_VIEW_1D = 0x00000000,
+ VK_IMAGE_VIEW_2D = 0x00000001,
+ VK_IMAGE_VIEW_3D = 0x00000002,
+ VK_IMAGE_VIEW_CUBE = 0x00000003,
+
+ VK_IMAGE_VIEW_TYPE_BEGIN_RANGE = VK_IMAGE_VIEW_1D,
+ VK_IMAGE_VIEW_TYPE_END_RANGE = VK_IMAGE_VIEW_CUBE,
+ VK_NUM_IMAGE_VIEW_TYPE = (VK_IMAGE_VIEW_TYPE_END_RANGE - VK_IMAGE_VIEW_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_IMAGE_VIEW_TYPE)
+} VK_IMAGE_VIEW_TYPE;
+
+typedef enum _VK_IMAGE_ASPECT
+{
+ VK_IMAGE_ASPECT_COLOR = 0x00000000,
+ VK_IMAGE_ASPECT_DEPTH = 0x00000001,
+ VK_IMAGE_ASPECT_STENCIL = 0x00000002,
+
+ VK_IMAGE_ASPECT_BEGIN_RANGE = VK_IMAGE_ASPECT_COLOR,
+ VK_IMAGE_ASPECT_END_RANGE = VK_IMAGE_ASPECT_STENCIL,
+ VK_NUM_IMAGE_ASPECT = (VK_IMAGE_ASPECT_END_RANGE - VK_IMAGE_ASPECT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_IMAGE_ASPECT)
+} VK_IMAGE_ASPECT;
+
+typedef enum _VK_CHANNEL_SWIZZLE
+{
+ VK_CHANNEL_SWIZZLE_ZERO = 0x00000000,
+ VK_CHANNEL_SWIZZLE_ONE = 0x00000001,
+ VK_CHANNEL_SWIZZLE_R = 0x00000002,
+ VK_CHANNEL_SWIZZLE_G = 0x00000003,
+ VK_CHANNEL_SWIZZLE_B = 0x00000004,
+ VK_CHANNEL_SWIZZLE_A = 0x00000005,
+
+ VK_CHANNEL_SWIZZLE_BEGIN_RANGE = VK_CHANNEL_SWIZZLE_ZERO,
+ VK_CHANNEL_SWIZZLE_END_RANGE = VK_CHANNEL_SWIZZLE_A,
+ VK_NUM_CHANNEL_SWIZZLE = (VK_CHANNEL_SWIZZLE_END_RANGE - VK_CHANNEL_SWIZZLE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_CHANNEL_SWIZZLE)
+} VK_CHANNEL_SWIZZLE;
+
+typedef enum _VK_DESCRIPTOR_TYPE
+{
+ VK_DESCRIPTOR_TYPE_SAMPLER = 0x00000000,
+ VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE = 0x00000001,
+ VK_DESCRIPTOR_TYPE_TEXTURE = 0x00000002,
+ VK_DESCRIPTOR_TYPE_TEXTURE_BUFFER = 0x00000003,
+ VK_DESCRIPTOR_TYPE_IMAGE = 0x00000004,
+ VK_DESCRIPTOR_TYPE_IMAGE_BUFFER = 0x00000005,
+ VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER = 0x00000006,
+ VK_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER = 0x00000007,
+ VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC = 0x00000008,
+ VK_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER_DYNAMIC = 0x00000009,
- XGL_DESCRIPTOR_TYPE_BEGIN_RANGE = XGL_DESCRIPTOR_TYPE_SAMPLER,
- XGL_DESCRIPTOR_TYPE_END_RANGE = XGL_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER_DYNAMIC,
- XGL_NUM_DESCRIPTOR_TYPE = (XGL_DESCRIPTOR_TYPE_END_RANGE - XGL_DESCRIPTOR_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_DESCRIPTOR_TYPE)
-} XGL_DESCRIPTOR_TYPE;
-
-typedef enum _XGL_DESCRIPTOR_POOL_USAGE
-{
- XGL_DESCRIPTOR_POOL_USAGE_ONE_SHOT = 0x00000000,
- XGL_DESCRIPTOR_POOL_USAGE_DYNAMIC = 0x00000001,
-
- XGL_DESCRIPTOR_POOL_USAGE_BEGIN_RANGE = XGL_DESCRIPTOR_POOL_USAGE_ONE_SHOT,
- XGL_DESCRIPTOR_POOL_USAGE_END_RANGE = XGL_DESCRIPTOR_POOL_USAGE_DYNAMIC,
- XGL_NUM_DESCRIPTOR_POOL_USAGE = (XGL_DESCRIPTOR_POOL_USAGE_END_RANGE - XGL_DESCRIPTOR_POOL_USAGE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_DESCRIPTOR_POOL_USAGE)
-} XGL_DESCRIPTOR_POOL_USAGE;
-
-typedef enum _XGL_DESCRIPTOR_UPDATE_MODE
-{
- XGL_DESCRIPTOR_UDPATE_MODE_COPY = 0x00000000,
- XGL_DESCRIPTOR_UPDATE_MODE_FASTEST = 0x00000001,
-
- XGL_DESCRIPTOR_UPDATE_MODE_BEGIN_RANGE = XGL_DESCRIPTOR_UDPATE_MODE_COPY,
- XGL_DESCRIPTOR_UPDATE_MODE_END_RANGE = XGL_DESCRIPTOR_UPDATE_MODE_FASTEST,
- XGL_NUM_DESCRIPTOR_UPDATE_MODE = (XGL_DESCRIPTOR_UPDATE_MODE_END_RANGE - XGL_DESCRIPTOR_UPDATE_MODE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_DESCRIPTOR_UPDATE_MODE)
-} XGL_DESCRIPTOR_UPDATE_MODE;
-
-typedef enum _XGL_DESCRIPTOR_SET_USAGE
-{
- XGL_DESCRIPTOR_SET_USAGE_ONE_SHOT = 0x00000000,
- XGL_DESCRIPTOR_SET_USAGE_STATIC = 0x00000001,
-
- XGL_DESCRIPTOR_SET_USAGE_BEGIN_RANGE = XGL_DESCRIPTOR_SET_USAGE_ONE_SHOT,
- XGL_DESCRIPTOR_SET_USAGE_END_RANGE = XGL_DESCRIPTOR_SET_USAGE_STATIC,
- XGL_NUM_DESCRIPTOR_SET_USAGE = (XGL_DESCRIPTOR_SET_USAGE_END_RANGE - XGL_DESCRIPTOR_SET_USAGE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_DESCRIPTOR_SET_USAGE)
-} XGL_DESCRIPTOR_SET_USAGE;
-
-typedef enum _XGL_QUERY_TYPE
-{
- XGL_QUERY_OCCLUSION = 0x00000000,
- XGL_QUERY_PIPELINE_STATISTICS = 0x00000001,
-
- XGL_QUERY_TYPE_BEGIN_RANGE = XGL_QUERY_OCCLUSION,
- XGL_QUERY_TYPE_END_RANGE = XGL_QUERY_PIPELINE_STATISTICS,
- XGL_NUM_QUERY_TYPE = (XGL_QUERY_TYPE_END_RANGE - XGL_QUERY_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_QUERY_TYPE)
-} XGL_QUERY_TYPE;
-
-typedef enum _XGL_TIMESTAMP_TYPE
-{
- XGL_TIMESTAMP_TOP = 0x00000000,
- XGL_TIMESTAMP_BOTTOM = 0x00000001,
-
- XGL_TIMESTAMP_TYPE_BEGIN_RANGE = XGL_TIMESTAMP_TOP,
- XGL_TIMESTAMP_TYPE_END_RANGE = XGL_TIMESTAMP_BOTTOM,
- XGL_NUM_TIMESTAMP_TYPE = (XGL_TIMESTAMP_TYPE_END_RANGE - XGL_TIMESTAMP_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_TIMESTEAMP_TYPE)
-} XGL_TIMESTAMP_TYPE;
-
-typedef enum _XGL_BORDER_COLOR_TYPE
-{
- XGL_BORDER_COLOR_OPAQUE_WHITE = 0x00000000,
- XGL_BORDER_COLOR_TRANSPARENT_BLACK = 0x00000001,
- XGL_BORDER_COLOR_OPAQUE_BLACK = 0x00000002,
-
- XGL_BORDER_COLOR_TYPE_BEGIN_RANGE = XGL_BORDER_COLOR_OPAQUE_WHITE,
- XGL_BORDER_COLOR_TYPE_END_RANGE = XGL_BORDER_COLOR_OPAQUE_BLACK,
- XGL_NUM_BORDER_COLOR_TYPE = (XGL_BORDER_COLOR_TYPE_END_RANGE - XGL_BORDER_COLOR_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_BORDER_COLOR_TYPE)
-} XGL_BORDER_COLOR_TYPE;
-
-typedef enum _XGL_PIPELINE_BIND_POINT
-{
- XGL_PIPELINE_BIND_POINT_COMPUTE = 0x00000000,
- XGL_PIPELINE_BIND_POINT_GRAPHICS = 0x00000001,
-
- XGL_PIPELINE_BIND_POINT_BEGIN_RANGE = XGL_PIPELINE_BIND_POINT_COMPUTE,
- XGL_PIPELINE_BIND_POINT_END_RANGE = XGL_PIPELINE_BIND_POINT_GRAPHICS,
- XGL_NUM_PIPELINE_BIND_POINT = (XGL_PIPELINE_BIND_POINT_END_RANGE - XGL_PIPELINE_BIND_POINT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_PIPELINE_BIND_POINT)
-} XGL_PIPELINE_BIND_POINT;
-
-typedef enum _XGL_STATE_BIND_POINT
-{
- XGL_STATE_BIND_VIEWPORT = 0x00000000,
- XGL_STATE_BIND_RASTER = 0x00000001,
- XGL_STATE_BIND_COLOR_BLEND = 0x00000002,
- XGL_STATE_BIND_DEPTH_STENCIL = 0x00000003,
-
- XGL_STATE_BIND_POINT_BEGIN_RANGE = XGL_STATE_BIND_VIEWPORT,
- XGL_STATE_BIND_POINT_END_RANGE = XGL_STATE_BIND_DEPTH_STENCIL,
- XGL_NUM_STATE_BIND_POINT = (XGL_STATE_BIND_POINT_END_RANGE - XGL_STATE_BIND_POINT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_STATE_BIND_POINT)
-} XGL_STATE_BIND_POINT;
-
-typedef enum _XGL_PRIMITIVE_TOPOLOGY
-{
- XGL_TOPOLOGY_POINT_LIST = 0x00000000,
- XGL_TOPOLOGY_LINE_LIST = 0x00000001,
- XGL_TOPOLOGY_LINE_STRIP = 0x00000002,
- XGL_TOPOLOGY_TRIANGLE_LIST = 0x00000003,
- XGL_TOPOLOGY_TRIANGLE_STRIP = 0x00000004,
- XGL_TOPOLOGY_TRIANGLE_FAN = 0x00000005,
- XGL_TOPOLOGY_LINE_LIST_ADJ = 0x00000006,
- XGL_TOPOLOGY_LINE_STRIP_ADJ = 0x00000007,
- XGL_TOPOLOGY_TRIANGLE_LIST_ADJ = 0x00000008,
- XGL_TOPOLOGY_TRIANGLE_STRIP_ADJ = 0x00000009,
- XGL_TOPOLOGY_PATCH = 0x0000000a,
-
- XGL_PRIMITIVE_TOPOLOGY_BEGIN_RANGE = XGL_TOPOLOGY_POINT_LIST,
- XGL_PRIMITIVE_TOPOLOGY_END_RANGE = XGL_TOPOLOGY_PATCH,
- XGL_NUM_PRIMITIVE_TOPOLOGY = (XGL_PRIMITIVE_TOPOLOGY_END_RANGE - XGL_PRIMITIVE_TOPOLOGY_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_PRIMITIVE_TOPOLOGY)
-} XGL_PRIMITIVE_TOPOLOGY;
-
-typedef enum _XGL_INDEX_TYPE
-{
- XGL_INDEX_8 = 0x00000000,
- XGL_INDEX_16 = 0x00000001,
- XGL_INDEX_32 = 0x00000002,
-
- XGL_INDEX_TYPE_BEGIN_RANGE = XGL_INDEX_8,
- XGL_INDEX_TYPE_END_RANGE = XGL_INDEX_32,
- XGL_NUM_INDEX_TYPE = (XGL_INDEX_TYPE_END_RANGE - XGL_INDEX_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_INDEX_TYPE)
-} XGL_INDEX_TYPE;
-
-typedef enum _XGL_TEX_FILTER
-{
- XGL_TEX_FILTER_NEAREST = 0,
- XGL_TEX_FILTER_LINEAR = 1,
-
- XGL_TEX_FILTER_BEGIN_RANGE = XGL_TEX_FILTER_NEAREST,
- XGL_TEX_FILTER_END_RANGE = XGL_TEX_FILTER_LINEAR,
- XGL_NUM_TEX_FILTER = (XGL_TEX_FILTER_END_RANGE - XGL_TEX_FILTER_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_TEX_FILTER)
-} XGL_TEX_FILTER;
-
-typedef enum _XGL_TEX_MIPMAP_MODE
-{
- XGL_TEX_MIPMAP_BASE = 0, // Always choose base level
- XGL_TEX_MIPMAP_NEAREST = 1, // Choose nearest mip level
- XGL_TEX_MIPMAP_LINEAR = 2, // Linear filter between mip levels
-
- XGL_TEX_MIPMAP_BEGIN_RANGE = XGL_TEX_MIPMAP_BASE,
- XGL_TEX_MIPMAP_END_RANGE = XGL_TEX_MIPMAP_LINEAR,
- XGL_NUM_TEX_MIPMAP = (XGL_TEX_MIPMAP_END_RANGE - XGL_TEX_MIPMAP_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_TEX_MIPMAP_MODE)
-} XGL_TEX_MIPMAP_MODE;
-
-typedef enum _XGL_TEX_ADDRESS
-{
- XGL_TEX_ADDRESS_WRAP = 0x00000000,
- XGL_TEX_ADDRESS_MIRROR = 0x00000001,
- XGL_TEX_ADDRESS_CLAMP = 0x00000002,
- XGL_TEX_ADDRESS_MIRROR_ONCE = 0x00000003,
- XGL_TEX_ADDRESS_CLAMP_BORDER = 0x00000004,
-
- XGL_TEX_ADDRESS_BEGIN_RANGE = XGL_TEX_ADDRESS_WRAP,
- XGL_TEX_ADDRESS_END_RANGE = XGL_TEX_ADDRESS_CLAMP_BORDER,
- XGL_NUM_TEX_ADDRESS = (XGL_TEX_ADDRESS_END_RANGE - XGL_TEX_ADDRESS_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_TEX_ADDRESS)
-} XGL_TEX_ADDRESS;
-
-typedef enum _XGL_COMPARE_FUNC
-{
- XGL_COMPARE_NEVER = 0x00000000,
- XGL_COMPARE_LESS = 0x00000001,
- XGL_COMPARE_EQUAL = 0x00000002,
- XGL_COMPARE_LESS_EQUAL = 0x00000003,
- XGL_COMPARE_GREATER = 0x00000004,
- XGL_COMPARE_NOT_EQUAL = 0x00000005,
- XGL_COMPARE_GREATER_EQUAL = 0x00000006,
- XGL_COMPARE_ALWAYS = 0x00000007,
-
- XGL_COMPARE_FUNC_BEGIN_RANGE = XGL_COMPARE_NEVER,
- XGL_COMPARE_FUNC_END_RANGE = XGL_COMPARE_ALWAYS,
- XGL_NUM_COMPARE_FUNC = (XGL_COMPARE_FUNC_END_RANGE - XGL_COMPARE_FUNC_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_COMPARE_FUNC)
-} XGL_COMPARE_FUNC;
-
-typedef enum _XGL_FILL_MODE
-{
- XGL_FILL_POINTS = 0x00000000,
- XGL_FILL_WIREFRAME = 0x00000001,
- XGL_FILL_SOLID = 0x00000002,
-
- XGL_FILL_MODE_BEGIN_RANGE = XGL_FILL_POINTS,
- XGL_FILL_MODE_END_RANGE = XGL_FILL_SOLID,
- XGL_NUM_FILL_MODE = (XGL_FILL_MODE_END_RANGE - XGL_FILL_MODE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_FILL_MODE)
-} XGL_FILL_MODE;
-
-typedef enum _XGL_CULL_MODE
-{
- XGL_CULL_NONE = 0x00000000,
- XGL_CULL_FRONT = 0x00000001,
- XGL_CULL_BACK = 0x00000002,
- XGL_CULL_FRONT_AND_BACK = 0x00000003,
-
- XGL_CULL_MODE_BEGIN_RANGE = XGL_CULL_NONE,
- XGL_CULL_MODE_END_RANGE = XGL_CULL_FRONT_AND_BACK,
- XGL_NUM_CULL_MODE = (XGL_CULL_MODE_END_RANGE - XGL_CULL_MODE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_CULL_MODE)
-} XGL_CULL_MODE;
-
-typedef enum _XGL_FACE_ORIENTATION
-{
- XGL_FRONT_FACE_CCW = 0x00000000,
- XGL_FRONT_FACE_CW = 0x00000001,
-
- XGL_FACE_ORIENTATION_BEGIN_RANGE = XGL_FRONT_FACE_CCW,
- XGL_FACE_ORIENTATION_END_RANGE = XGL_FRONT_FACE_CW,
- XGL_NUM_FACE_ORIENTATION = (XGL_FACE_ORIENTATION_END_RANGE - XGL_FACE_ORIENTATION_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_FACE_ORIENTATION)
-} XGL_FACE_ORIENTATION;
-
-typedef enum _XGL_PROVOKING_VERTEX_CONVENTION
-{
- XGL_PROVOKING_VERTEX_FIRST = 0x00000000,
- XGL_PROVOKING_VERTEX_LAST = 0x00000001,
-
- XGL_PROVOKING_VERTEX_BEGIN_RANGE = XGL_PROVOKING_VERTEX_FIRST,
- XGL_PROVOKING_VERTEX_END_RANGE = XGL_PROVOKING_VERTEX_LAST,
- XGL_NUM_PROVOKING_VERTEX_CONVENTION = (XGL_PROVOKING_VERTEX_END_RANGE - XGL_PROVOKING_VERTEX_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_PROVOKING_VERTEX_CONVENTION)
-} XGL_PROVOKING_VERTEX_CONVENTION;
-
-typedef enum _XGL_COORDINATE_ORIGIN
-{
- XGL_COORDINATE_ORIGIN_UPPER_LEFT = 0x00000000,
- XGL_COORDINATE_ORIGIN_LOWER_LEFT = 0x00000001,
-
- XGL_COORDINATE_ORIGIN_BEGIN_RANGE = XGL_COORDINATE_ORIGIN_UPPER_LEFT,
- XGL_COORDINATE_ORIGIN_END_RANGE = XGL_COORDINATE_ORIGIN_LOWER_LEFT,
- XGL_NUM_COORDINATE_ORIGIN = (XGL_COORDINATE_ORIGIN_END_RANGE - XGL_COORDINATE_ORIGIN_END_RANGE + 1),
- XGL_MAX_ENUM(_XGL_COORDINATE_ORIGIN)
-} XGL_COORDINATE_ORIGIN;
-
-typedef enum _XGL_DEPTH_MODE
-{
- XGL_DEPTH_MODE_ZERO_TO_ONE = 0x00000000,
- XGL_DEPTH_MODE_NEGATIVE_ONE_TO_ONE = 0x00000001,
-
- XGL_DEPTH_MODE_BEGIN_RANGE = XGL_DEPTH_MODE_ZERO_TO_ONE,
- XGL_DEPTH_MODE_END_RANGE = XGL_DEPTH_MODE_NEGATIVE_ONE_TO_ONE,
- XGL_NUM_DEPTH_MODE = (XGL_DEPTH_MODE_END_RANGE - XGL_DEPTH_MODE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_DEPTH_MODE)
-} XGL_DEPTH_MODE;
-
-typedef enum _XGL_BLEND
-{
- XGL_BLEND_ZERO = 0x00000000,
- XGL_BLEND_ONE = 0x00000001,
- XGL_BLEND_SRC_COLOR = 0x00000002,
- XGL_BLEND_ONE_MINUS_SRC_COLOR = 0x00000003,
- XGL_BLEND_DEST_COLOR = 0x00000004,
- XGL_BLEND_ONE_MINUS_DEST_COLOR = 0x00000005,
- XGL_BLEND_SRC_ALPHA = 0x00000006,
- XGL_BLEND_ONE_MINUS_SRC_ALPHA = 0x00000007,
- XGL_BLEND_DEST_ALPHA = 0x00000008,
- XGL_BLEND_ONE_MINUS_DEST_ALPHA = 0x00000009,
- XGL_BLEND_CONSTANT_COLOR = 0x0000000a,
- XGL_BLEND_ONE_MINUS_CONSTANT_COLOR = 0x0000000b,
- XGL_BLEND_CONSTANT_ALPHA = 0x0000000c,
- XGL_BLEND_ONE_MINUS_CONSTANT_ALPHA = 0x0000000d,
- XGL_BLEND_SRC_ALPHA_SATURATE = 0x0000000e,
- XGL_BLEND_SRC1_COLOR = 0x0000000f,
- XGL_BLEND_ONE_MINUS_SRC1_COLOR = 0x00000010,
- XGL_BLEND_SRC1_ALPHA = 0x00000011,
- XGL_BLEND_ONE_MINUS_SRC1_ALPHA = 0x00000012,
-
- XGL_BLEND_BEGIN_RANGE = XGL_BLEND_ZERO,
- XGL_BLEND_END_RANGE = XGL_BLEND_ONE_MINUS_SRC1_ALPHA,
- XGL_NUM_BLEND = (XGL_BLEND_END_RANGE - XGL_BLEND_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_BLEND)
-} XGL_BLEND;
-
-typedef enum _XGL_BLEND_FUNC
-{
- XGL_BLEND_FUNC_ADD = 0x00000000,
- XGL_BLEND_FUNC_SUBTRACT = 0x00000001,
- XGL_BLEND_FUNC_REVERSE_SUBTRACT = 0x00000002,
- XGL_BLEND_FUNC_MIN = 0x00000003,
- XGL_BLEND_FUNC_MAX = 0x00000004,
-
- XGL_BLEND_FUNC_BEGIN_RANGE = XGL_BLEND_FUNC_ADD,
- XGL_BLEND_FUNC_END_RANGE = XGL_BLEND_FUNC_MAX,
- XGL_NUM_BLEND_FUNC = (XGL_BLEND_FUNC_END_RANGE - XGL_BLEND_FUNC_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_BLEND_FUNC)
-} XGL_BLEND_FUNC;
-
-typedef enum _XGL_STENCIL_OP
-{
- XGL_STENCIL_OP_KEEP = 0x00000000,
- XGL_STENCIL_OP_ZERO = 0x00000001,
- XGL_STENCIL_OP_REPLACE = 0x00000002,
- XGL_STENCIL_OP_INC_CLAMP = 0x00000003,
- XGL_STENCIL_OP_DEC_CLAMP = 0x00000004,
- XGL_STENCIL_OP_INVERT = 0x00000005,
- XGL_STENCIL_OP_INC_WRAP = 0x00000006,
- XGL_STENCIL_OP_DEC_WRAP = 0x00000007,
-
- XGL_STENCIL_OP_BEGIN_RANGE = XGL_STENCIL_OP_KEEP,
- XGL_STENCIL_OP_END_RANGE = XGL_STENCIL_OP_DEC_WRAP,
- XGL_NUM_STENCIL_OP = (XGL_STENCIL_OP_END_RANGE - XGL_STENCIL_OP_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_STENCIL_OP)
-} XGL_STENCIL_OP;
-
-typedef enum _XGL_LOGIC_OP
-{
- XGL_LOGIC_OP_COPY = 0x00000000,
- XGL_LOGIC_OP_CLEAR = 0x00000001,
- XGL_LOGIC_OP_AND = 0x00000002,
- XGL_LOGIC_OP_AND_REVERSE = 0x00000003,
- XGL_LOGIC_OP_AND_INVERTED = 0x00000004,
- XGL_LOGIC_OP_NOOP = 0x00000005,
- XGL_LOGIC_OP_XOR = 0x00000006,
- XGL_LOGIC_OP_OR = 0x00000007,
- XGL_LOGIC_OP_NOR = 0x00000008,
- XGL_LOGIC_OP_EQUIV = 0x00000009,
- XGL_LOGIC_OP_INVERT = 0x0000000a,
- XGL_LOGIC_OP_OR_REVERSE = 0x0000000b,
- XGL_LOGIC_OP_COPY_INVERTED = 0x0000000c,
- XGL_LOGIC_OP_OR_INVERTED = 0x0000000d,
- XGL_LOGIC_OP_NAND = 0x0000000e,
- XGL_LOGIC_OP_SET = 0x0000000f,
-
- XGL_LOGIC_OP_BEGIN_RANGE = XGL_LOGIC_OP_COPY,
- XGL_LOGIC_OP_END_RANGE = XGL_LOGIC_OP_SET,
- XGL_NUM_LOGIC_OP = (XGL_LOGIC_OP_END_RANGE - XGL_LOGIC_OP_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_LOGIC_OP)
-} XGL_LOGIC_OP;
-
-typedef enum _XGL_SYSTEM_ALLOC_TYPE
-{
- XGL_SYSTEM_ALLOC_API_OBJECT = 0x00000000,
- XGL_SYSTEM_ALLOC_INTERNAL = 0x00000001,
- XGL_SYSTEM_ALLOC_INTERNAL_TEMP = 0x00000002,
- XGL_SYSTEM_ALLOC_INTERNAL_SHADER = 0x00000003,
- XGL_SYSTEM_ALLOC_DEBUG = 0x00000004,
-
- XGL_SYSTEM_ALLOC_BEGIN_RANGE = XGL_SYSTEM_ALLOC_API_OBJECT,
- XGL_SYSTEM_ALLOC_END_RANGE = XGL_SYSTEM_ALLOC_DEBUG,
- XGL_NUM_SYSTEM_ALLOC_TYPE = (XGL_SYSTEM_ALLOC_END_RANGE - XGL_SYSTEM_ALLOC_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_SYSTEM_ALLOC_TYPE)
-} XGL_SYSTEM_ALLOC_TYPE;
-
-typedef enum _XGL_PHYSICAL_GPU_TYPE
-{
- XGL_GPU_TYPE_OTHER = 0x00000000,
- XGL_GPU_TYPE_INTEGRATED = 0x00000001,
- XGL_GPU_TYPE_DISCRETE = 0x00000002,
- XGL_GPU_TYPE_VIRTUAL = 0x00000003,
-
- XGL_PHYSICAL_GPU_TYPE_BEGIN_RANGE = XGL_GPU_TYPE_OTHER,
- XGL_PHYSICAL_GPU_TYPE_END_RANGE = XGL_GPU_TYPE_VIRTUAL,
- XGL_NUM_PHYSICAL_GPU_TYPE = (XGL_PHYSICAL_GPU_TYPE_END_RANGE - XGL_PHYSICAL_GPU_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_PHYSICAL_GPU_TYPE)
-} XGL_PHYSICAL_GPU_TYPE;
-
-typedef enum _XGL_PHYSICAL_GPU_INFO_TYPE
-{
- // Info type for xglGetGpuInfo()
- XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES = 0x00000000,
- XGL_INFO_TYPE_PHYSICAL_GPU_PERFORMANCE = 0x00000001,
- XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES = 0x00000002,
- XGL_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES = 0x00000003,
-
- XGL_INFO_TYPE_PHYSICAL_GPU_BEGIN_RANGE = XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
- XGL_INFO_TYPE_PHYSICAL_GPU_END_RANGE = XGL_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES,
- XGL_NUM_INFO_TYPE_PHYSICAL_GPU = (XGL_INFO_TYPE_PHYSICAL_GPU_END_RANGE - XGL_INFO_TYPE_PHYSICAL_GPU_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_PHYSICAL_GPU_INFO_TYPE)
-} XGL_PHYSICAL_GPU_INFO_TYPE;
-
-typedef enum _XGL_FORMAT_INFO_TYPE
-{
- // Info type for xglGetFormatInfo()
- XGL_INFO_TYPE_FORMAT_PROPERTIES = 0x00000000,
-
- XGL_INFO_TYPE_FORMAT_BEGIN_RANGE = XGL_INFO_TYPE_FORMAT_PROPERTIES,
- XGL_INFO_TYPE_FORMAT_END_RANGE = XGL_INFO_TYPE_FORMAT_PROPERTIES,
- XGL_NUM_INFO_TYPE_FORMAT = (XGL_INFO_TYPE_FORMAT_END_RANGE - XGL_INFO_TYPE_FORMAT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_FORMAT_INFO_TYPE)
-} XGL_FORMAT_INFO_TYPE;
-
-typedef enum _XGL_SUBRESOURCE_INFO_TYPE
-{
- // Info type for xglGetImageSubresourceInfo()
- XGL_INFO_TYPE_SUBRESOURCE_LAYOUT = 0x00000000,
-
- XGL_INFO_TYPE_SUBRESOURCE_BEGIN_RANGE = XGL_INFO_TYPE_SUBRESOURCE_LAYOUT,
- XGL_INFO_TYPE_SUBRESOURCE_END_RANGE = XGL_INFO_TYPE_SUBRESOURCE_LAYOUT,
- XGL_NUM_INFO_TYPE_SUBRESOURCE = (XGL_INFO_TYPE_SUBRESOURCE_END_RANGE - XGL_INFO_TYPE_SUBRESOURCE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_SUBRESOURCE_INFO_TYPE)
-} XGL_SUBRESOURCE_INFO_TYPE;
-
-typedef enum _XGL_OBJECT_INFO_TYPE
-{
- // Info type for xglGetObjectInfo()
- XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT = 0x00000000,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS = 0x00000001,
- XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS = 0x00000002,
- XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS = 0x00000003,
-
- XGL_INFO_TYPE_BEGIN_RANGE = XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
- XGL_INFO_TYPE_END_RANGE = XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
- XGL_NUM_INFO_TYPE = (XGL_INFO_TYPE_END_RANGE - XGL_INFO_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_OBJECT_INFO_TYPE)
-} XGL_OBJECT_INFO_TYPE;
-
-typedef enum _XGL_VALIDATION_LEVEL
-{
- XGL_VALIDATION_LEVEL_0 = 0x00000000,
- XGL_VALIDATION_LEVEL_1 = 0x00000001,
- XGL_VALIDATION_LEVEL_2 = 0x00000002,
- XGL_VALIDATION_LEVEL_3 = 0x00000003,
- XGL_VALIDATION_LEVEL_4 = 0x00000004,
-
- XGL_VALIDATION_LEVEL_BEGIN_RANGE = XGL_VALIDATION_LEVEL_0,
- XGL_VALIDATION_LEVEL_END_RANGE = XGL_VALIDATION_LEVEL_4,
- XGL_NUM_VALIDATION_LEVEL = (XGL_VALIDATION_LEVEL_END_RANGE - XGL_VALIDATION_LEVEL_BEGIN_RANGE + 1),
-
- XGL_MAX_ENUM(_XGL_VALIDATION_LEVEL)
-} XGL_VALIDATION_LEVEL;
+ VK_DESCRIPTOR_TYPE_BEGIN_RANGE = VK_DESCRIPTOR_TYPE_SAMPLER,
+ VK_DESCRIPTOR_TYPE_END_RANGE = VK_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER_DYNAMIC,
+ VK_NUM_DESCRIPTOR_TYPE = (VK_DESCRIPTOR_TYPE_END_RANGE - VK_DESCRIPTOR_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_DESCRIPTOR_TYPE)
+} VK_DESCRIPTOR_TYPE;
+
+typedef enum _VK_DESCRIPTOR_POOL_USAGE
+{
+ VK_DESCRIPTOR_POOL_USAGE_ONE_SHOT = 0x00000000,
+ VK_DESCRIPTOR_POOL_USAGE_DYNAMIC = 0x00000001,
+
+ VK_DESCRIPTOR_POOL_USAGE_BEGIN_RANGE = VK_DESCRIPTOR_POOL_USAGE_ONE_SHOT,
+ VK_DESCRIPTOR_POOL_USAGE_END_RANGE = VK_DESCRIPTOR_POOL_USAGE_DYNAMIC,
+ VK_NUM_DESCRIPTOR_POOL_USAGE = (VK_DESCRIPTOR_POOL_USAGE_END_RANGE - VK_DESCRIPTOR_POOL_USAGE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_DESCRIPTOR_POOL_USAGE)
+} VK_DESCRIPTOR_POOL_USAGE;
+
+typedef enum _VK_DESCRIPTOR_UPDATE_MODE
+{
+ VK_DESCRIPTOR_UPDATE_MODE_COPY = 0x00000000,
+ VK_DESCRIPTOR_UPDATE_MODE_FASTEST = 0x00000001,
+
+ VK_DESCRIPTOR_UPDATE_MODE_BEGIN_RANGE = VK_DESCRIPTOR_UPDATE_MODE_COPY,
+ VK_DESCRIPTOR_UPDATE_MODE_END_RANGE = VK_DESCRIPTOR_UPDATE_MODE_FASTEST,
+ VK_NUM_DESCRIPTOR_UPDATE_MODE = (VK_DESCRIPTOR_UPDATE_MODE_END_RANGE - VK_DESCRIPTOR_UPDATE_MODE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_DESCRIPTOR_UPDATE_MODE)
+} VK_DESCRIPTOR_UPDATE_MODE;
+
+typedef enum _VK_DESCRIPTOR_SET_USAGE
+{
+ VK_DESCRIPTOR_SET_USAGE_ONE_SHOT = 0x00000000,
+ VK_DESCRIPTOR_SET_USAGE_STATIC = 0x00000001,
+
+ VK_DESCRIPTOR_SET_USAGE_BEGIN_RANGE = VK_DESCRIPTOR_SET_USAGE_ONE_SHOT,
+ VK_DESCRIPTOR_SET_USAGE_END_RANGE = VK_DESCRIPTOR_SET_USAGE_STATIC,
+ VK_NUM_DESCRIPTOR_SET_USAGE = (VK_DESCRIPTOR_SET_USAGE_END_RANGE - VK_DESCRIPTOR_SET_USAGE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_DESCRIPTOR_SET_USAGE)
+} VK_DESCRIPTOR_SET_USAGE;
+
+typedef enum _VK_QUERY_TYPE
+{
+ VK_QUERY_OCCLUSION = 0x00000000,
+ VK_QUERY_PIPELINE_STATISTICS = 0x00000001,
+
+ VK_QUERY_TYPE_BEGIN_RANGE = VK_QUERY_OCCLUSION,
+ VK_QUERY_TYPE_END_RANGE = VK_QUERY_PIPELINE_STATISTICS,
+ VK_NUM_QUERY_TYPE = (VK_QUERY_TYPE_END_RANGE - VK_QUERY_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_QUERY_TYPE)
+} VK_QUERY_TYPE;
+
+typedef enum _VK_TIMESTAMP_TYPE
+{
+ VK_TIMESTAMP_TOP = 0x00000000,
+ VK_TIMESTAMP_BOTTOM = 0x00000001,
+
+ VK_TIMESTAMP_TYPE_BEGIN_RANGE = VK_TIMESTAMP_TOP,
+ VK_TIMESTAMP_TYPE_END_RANGE = VK_TIMESTAMP_BOTTOM,
+ VK_NUM_TIMESTAMP_TYPE = (VK_TIMESTAMP_TYPE_END_RANGE - VK_TIMESTAMP_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_TIMESTAMP_TYPE)
+} VK_TIMESTAMP_TYPE;
+
+typedef enum _VK_BORDER_COLOR_TYPE
+{
+ VK_BORDER_COLOR_OPAQUE_WHITE = 0x00000000,
+ VK_BORDER_COLOR_TRANSPARENT_BLACK = 0x00000001,
+ VK_BORDER_COLOR_OPAQUE_BLACK = 0x00000002,
+
+ VK_BORDER_COLOR_TYPE_BEGIN_RANGE = VK_BORDER_COLOR_OPAQUE_WHITE,
+ VK_BORDER_COLOR_TYPE_END_RANGE = VK_BORDER_COLOR_OPAQUE_BLACK,
+ VK_NUM_BORDER_COLOR_TYPE = (VK_BORDER_COLOR_TYPE_END_RANGE - VK_BORDER_COLOR_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_BORDER_COLOR_TYPE)
+} VK_BORDER_COLOR_TYPE;
+
+typedef enum _VK_PIPELINE_BIND_POINT
+{
+ VK_PIPELINE_BIND_POINT_COMPUTE = 0x00000000,
+ VK_PIPELINE_BIND_POINT_GRAPHICS = 0x00000001,
+
+ VK_PIPELINE_BIND_POINT_BEGIN_RANGE = VK_PIPELINE_BIND_POINT_COMPUTE,
+ VK_PIPELINE_BIND_POINT_END_RANGE = VK_PIPELINE_BIND_POINT_GRAPHICS,
+ VK_NUM_PIPELINE_BIND_POINT = (VK_PIPELINE_BIND_POINT_END_RANGE - VK_PIPELINE_BIND_POINT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_PIPELINE_BIND_POINT)
+} VK_PIPELINE_BIND_POINT;
+
+typedef enum _VK_STATE_BIND_POINT
+{
+ VK_STATE_BIND_VIEWPORT = 0x00000000,
+ VK_STATE_BIND_RASTER = 0x00000001,
+ VK_STATE_BIND_COLOR_BLEND = 0x00000002,
+ VK_STATE_BIND_DEPTH_STENCIL = 0x00000003,
+
+ VK_STATE_BIND_POINT_BEGIN_RANGE = VK_STATE_BIND_VIEWPORT,
+ VK_STATE_BIND_POINT_END_RANGE = VK_STATE_BIND_DEPTH_STENCIL,
+ VK_NUM_STATE_BIND_POINT = (VK_STATE_BIND_POINT_END_RANGE - VK_STATE_BIND_POINT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_STATE_BIND_POINT)
+} VK_STATE_BIND_POINT;
+
+typedef enum _VK_PRIMITIVE_TOPOLOGY
+{
+ VK_TOPOLOGY_POINT_LIST = 0x00000000,
+ VK_TOPOLOGY_LINE_LIST = 0x00000001,
+ VK_TOPOLOGY_LINE_STRIP = 0x00000002,
+ VK_TOPOLOGY_TRIANGLE_LIST = 0x00000003,
+ VK_TOPOLOGY_TRIANGLE_STRIP = 0x00000004,
+ VK_TOPOLOGY_TRIANGLE_FAN = 0x00000005,
+ VK_TOPOLOGY_LINE_LIST_ADJ = 0x00000006,
+ VK_TOPOLOGY_LINE_STRIP_ADJ = 0x00000007,
+ VK_TOPOLOGY_TRIANGLE_LIST_ADJ = 0x00000008,
+ VK_TOPOLOGY_TRIANGLE_STRIP_ADJ = 0x00000009,
+ VK_TOPOLOGY_PATCH = 0x0000000a,
+
+ VK_PRIMITIVE_TOPOLOGY_BEGIN_RANGE = VK_TOPOLOGY_POINT_LIST,
+ VK_PRIMITIVE_TOPOLOGY_END_RANGE = VK_TOPOLOGY_PATCH,
+ VK_NUM_PRIMITIVE_TOPOLOGY = (VK_PRIMITIVE_TOPOLOGY_END_RANGE - VK_PRIMITIVE_TOPOLOGY_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_PRIMITIVE_TOPOLOGY)
+} VK_PRIMITIVE_TOPOLOGY;
+
+typedef enum _VK_INDEX_TYPE
+{
+ VK_INDEX_8 = 0x00000000,
+ VK_INDEX_16 = 0x00000001,
+ VK_INDEX_32 = 0x00000002,
+
+ VK_INDEX_TYPE_BEGIN_RANGE = VK_INDEX_8,
+ VK_INDEX_TYPE_END_RANGE = VK_INDEX_32,
+ VK_NUM_INDEX_TYPE = (VK_INDEX_TYPE_END_RANGE - VK_INDEX_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_INDEX_TYPE)
+} VK_INDEX_TYPE;
+
+typedef enum _VK_TEX_FILTER
+{
+ VK_TEX_FILTER_NEAREST = 0,
+ VK_TEX_FILTER_LINEAR = 1,
+
+ VK_TEX_FILTER_BEGIN_RANGE = VK_TEX_FILTER_NEAREST,
+ VK_TEX_FILTER_END_RANGE = VK_TEX_FILTER_LINEAR,
+ VK_NUM_TEX_FILTER = (VK_TEX_FILTER_END_RANGE - VK_TEX_FILTER_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_TEX_FILTER)
+} VK_TEX_FILTER;
+
+typedef enum _VK_TEX_MIPMAP_MODE
+{
+ VK_TEX_MIPMAP_BASE = 0, // Always choose base level
+ VK_TEX_MIPMAP_NEAREST = 1, // Choose nearest mip level
+ VK_TEX_MIPMAP_LINEAR = 2, // Linear filter between mip levels
+
+ VK_TEX_MIPMAP_BEGIN_RANGE = VK_TEX_MIPMAP_BASE,
+ VK_TEX_MIPMAP_END_RANGE = VK_TEX_MIPMAP_LINEAR,
+ VK_NUM_TEX_MIPMAP = (VK_TEX_MIPMAP_END_RANGE - VK_TEX_MIPMAP_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_TEX_MIPMAP_MODE)
+} VK_TEX_MIPMAP_MODE;
+
+typedef enum _VK_TEX_ADDRESS
+{
+ VK_TEX_ADDRESS_WRAP = 0x00000000,
+ VK_TEX_ADDRESS_MIRROR = 0x00000001,
+ VK_TEX_ADDRESS_CLAMP = 0x00000002,
+ VK_TEX_ADDRESS_MIRROR_ONCE = 0x00000003,
+ VK_TEX_ADDRESS_CLAMP_BORDER = 0x00000004,
+
+ VK_TEX_ADDRESS_BEGIN_RANGE = VK_TEX_ADDRESS_WRAP,
+ VK_TEX_ADDRESS_END_RANGE = VK_TEX_ADDRESS_CLAMP_BORDER,
+ VK_NUM_TEX_ADDRESS = (VK_TEX_ADDRESS_END_RANGE - VK_TEX_ADDRESS_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_TEX_ADDRESS)
+} VK_TEX_ADDRESS;
+
+typedef enum _VK_COMPARE_FUNC
+{
+ VK_COMPARE_NEVER = 0x00000000,
+ VK_COMPARE_LESS = 0x00000001,
+ VK_COMPARE_EQUAL = 0x00000002,
+ VK_COMPARE_LESS_EQUAL = 0x00000003,
+ VK_COMPARE_GREATER = 0x00000004,
+ VK_COMPARE_NOT_EQUAL = 0x00000005,
+ VK_COMPARE_GREATER_EQUAL = 0x00000006,
+ VK_COMPARE_ALWAYS = 0x00000007,
+
+ VK_COMPARE_FUNC_BEGIN_RANGE = VK_COMPARE_NEVER,
+ VK_COMPARE_FUNC_END_RANGE = VK_COMPARE_ALWAYS,
+ VK_NUM_COMPARE_FUNC = (VK_COMPARE_FUNC_END_RANGE - VK_COMPARE_FUNC_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_COMPARE_FUNC)
+} VK_COMPARE_FUNC;
+
+typedef enum _VK_FILL_MODE
+{
+ VK_FILL_POINTS = 0x00000000,
+ VK_FILL_WIREFRAME = 0x00000001,
+ VK_FILL_SOLID = 0x00000002,
+
+ VK_FILL_MODE_BEGIN_RANGE = VK_FILL_POINTS,
+ VK_FILL_MODE_END_RANGE = VK_FILL_SOLID,
+ VK_NUM_FILL_MODE = (VK_FILL_MODE_END_RANGE - VK_FILL_MODE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_FILL_MODE)
+} VK_FILL_MODE;
+
+typedef enum _VK_CULL_MODE
+{
+ VK_CULL_NONE = 0x00000000,
+ VK_CULL_FRONT = 0x00000001,
+ VK_CULL_BACK = 0x00000002,
+ VK_CULL_FRONT_AND_BACK = 0x00000003,
+
+ VK_CULL_MODE_BEGIN_RANGE = VK_CULL_NONE,
+ VK_CULL_MODE_END_RANGE = VK_CULL_FRONT_AND_BACK,
+ VK_NUM_CULL_MODE = (VK_CULL_MODE_END_RANGE - VK_CULL_MODE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_CULL_MODE)
+} VK_CULL_MODE;
+
+typedef enum _VK_FACE_ORIENTATION
+{
+ VK_FRONT_FACE_CCW = 0x00000000,
+ VK_FRONT_FACE_CW = 0x00000001,
+
+ VK_FACE_ORIENTATION_BEGIN_RANGE = VK_FRONT_FACE_CCW,
+ VK_FACE_ORIENTATION_END_RANGE = VK_FRONT_FACE_CW,
+ VK_NUM_FACE_ORIENTATION = (VK_FACE_ORIENTATION_END_RANGE - VK_FACE_ORIENTATION_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_FACE_ORIENTATION)
+} VK_FACE_ORIENTATION;
+
+typedef enum _VK_PROVOKING_VERTEX_CONVENTION
+{
+ VK_PROVOKING_VERTEX_FIRST = 0x00000000,
+ VK_PROVOKING_VERTEX_LAST = 0x00000001,
+
+ VK_PROVOKING_VERTEX_BEGIN_RANGE = VK_PROVOKING_VERTEX_FIRST,
+ VK_PROVOKING_VERTEX_END_RANGE = VK_PROVOKING_VERTEX_LAST,
+ VK_NUM_PROVOKING_VERTEX_CONVENTION = (VK_PROVOKING_VERTEX_END_RANGE - VK_PROVOKING_VERTEX_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_PROVOKING_VERTEX_CONVENTION)
+} VK_PROVOKING_VERTEX_CONVENTION;
+
+typedef enum _VK_COORDINATE_ORIGIN
+{
+ VK_COORDINATE_ORIGIN_UPPER_LEFT = 0x00000000,
+ VK_COORDINATE_ORIGIN_LOWER_LEFT = 0x00000001,
+
+ VK_COORDINATE_ORIGIN_BEGIN_RANGE = VK_COORDINATE_ORIGIN_UPPER_LEFT,
+ VK_COORDINATE_ORIGIN_END_RANGE = VK_COORDINATE_ORIGIN_LOWER_LEFT,
+ VK_NUM_COORDINATE_ORIGIN = (VK_COORDINATE_ORIGIN_END_RANGE - VK_COORDINATE_ORIGIN_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_COORDINATE_ORIGIN)
+} VK_COORDINATE_ORIGIN;
+
+typedef enum _VK_DEPTH_MODE
+{
+ VK_DEPTH_MODE_ZERO_TO_ONE = 0x00000000,
+ VK_DEPTH_MODE_NEGATIVE_ONE_TO_ONE = 0x00000001,
+
+ VK_DEPTH_MODE_BEGIN_RANGE = VK_DEPTH_MODE_ZERO_TO_ONE,
+ VK_DEPTH_MODE_END_RANGE = VK_DEPTH_MODE_NEGATIVE_ONE_TO_ONE,
+ VK_NUM_DEPTH_MODE = (VK_DEPTH_MODE_END_RANGE - VK_DEPTH_MODE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_DEPTH_MODE)
+} VK_DEPTH_MODE;
+
+typedef enum _VK_BLEND
+{
+ VK_BLEND_ZERO = 0x00000000,
+ VK_BLEND_ONE = 0x00000001,
+ VK_BLEND_SRC_COLOR = 0x00000002,
+ VK_BLEND_ONE_MINUS_SRC_COLOR = 0x00000003,
+ VK_BLEND_DEST_COLOR = 0x00000004,
+ VK_BLEND_ONE_MINUS_DEST_COLOR = 0x00000005,
+ VK_BLEND_SRC_ALPHA = 0x00000006,
+ VK_BLEND_ONE_MINUS_SRC_ALPHA = 0x00000007,
+ VK_BLEND_DEST_ALPHA = 0x00000008,
+ VK_BLEND_ONE_MINUS_DEST_ALPHA = 0x00000009,
+ VK_BLEND_CONSTANT_COLOR = 0x0000000a,
+ VK_BLEND_ONE_MINUS_CONSTANT_COLOR = 0x0000000b,
+ VK_BLEND_CONSTANT_ALPHA = 0x0000000c,
+ VK_BLEND_ONE_MINUS_CONSTANT_ALPHA = 0x0000000d,
+ VK_BLEND_SRC_ALPHA_SATURATE = 0x0000000e,
+ VK_BLEND_SRC1_COLOR = 0x0000000f,
+ VK_BLEND_ONE_MINUS_SRC1_COLOR = 0x00000010,
+ VK_BLEND_SRC1_ALPHA = 0x00000011,
+ VK_BLEND_ONE_MINUS_SRC1_ALPHA = 0x00000012,
+
+ VK_BLEND_BEGIN_RANGE = VK_BLEND_ZERO,
+ VK_BLEND_END_RANGE = VK_BLEND_ONE_MINUS_SRC1_ALPHA,
+ VK_NUM_BLEND = (VK_BLEND_END_RANGE - VK_BLEND_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_BLEND)
+} VK_BLEND;
+
+typedef enum _VK_BLEND_FUNC
+{
+ VK_BLEND_FUNC_ADD = 0x00000000,
+ VK_BLEND_FUNC_SUBTRACT = 0x00000001,
+ VK_BLEND_FUNC_REVERSE_SUBTRACT = 0x00000002,
+ VK_BLEND_FUNC_MIN = 0x00000003,
+ VK_BLEND_FUNC_MAX = 0x00000004,
+
+ VK_BLEND_FUNC_BEGIN_RANGE = VK_BLEND_FUNC_ADD,
+ VK_BLEND_FUNC_END_RANGE = VK_BLEND_FUNC_MAX,
+ VK_NUM_BLEND_FUNC = (VK_BLEND_FUNC_END_RANGE - VK_BLEND_FUNC_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_BLEND_FUNC)
+} VK_BLEND_FUNC;
+
+typedef enum _VK_STENCIL_OP
+{
+ VK_STENCIL_OP_KEEP = 0x00000000,
+ VK_STENCIL_OP_ZERO = 0x00000001,
+ VK_STENCIL_OP_REPLACE = 0x00000002,
+ VK_STENCIL_OP_INC_CLAMP = 0x00000003,
+ VK_STENCIL_OP_DEC_CLAMP = 0x00000004,
+ VK_STENCIL_OP_INVERT = 0x00000005,
+ VK_STENCIL_OP_INC_WRAP = 0x00000006,
+ VK_STENCIL_OP_DEC_WRAP = 0x00000007,
+
+ VK_STENCIL_OP_BEGIN_RANGE = VK_STENCIL_OP_KEEP,
+ VK_STENCIL_OP_END_RANGE = VK_STENCIL_OP_DEC_WRAP,
+ VK_NUM_STENCIL_OP = (VK_STENCIL_OP_END_RANGE - VK_STENCIL_OP_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_STENCIL_OP)
+} VK_STENCIL_OP;
+
+typedef enum _VK_LOGIC_OP
+{
+ VK_LOGIC_OP_COPY = 0x00000000,
+ VK_LOGIC_OP_CLEAR = 0x00000001,
+ VK_LOGIC_OP_AND = 0x00000002,
+ VK_LOGIC_OP_AND_REVERSE = 0x00000003,
+ VK_LOGIC_OP_AND_INVERTED = 0x00000004,
+ VK_LOGIC_OP_NOOP = 0x00000005,
+ VK_LOGIC_OP_XOR = 0x00000006,
+ VK_LOGIC_OP_OR = 0x00000007,
+ VK_LOGIC_OP_NOR = 0x00000008,
+ VK_LOGIC_OP_EQUIV = 0x00000009,
+ VK_LOGIC_OP_INVERT = 0x0000000a,
+ VK_LOGIC_OP_OR_REVERSE = 0x0000000b,
+ VK_LOGIC_OP_COPY_INVERTED = 0x0000000c,
+ VK_LOGIC_OP_OR_INVERTED = 0x0000000d,
+ VK_LOGIC_OP_NAND = 0x0000000e,
+ VK_LOGIC_OP_SET = 0x0000000f,
+
+ VK_LOGIC_OP_BEGIN_RANGE = VK_LOGIC_OP_COPY,
+ VK_LOGIC_OP_END_RANGE = VK_LOGIC_OP_SET,
+ VK_NUM_LOGIC_OP = (VK_LOGIC_OP_END_RANGE - VK_LOGIC_OP_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_LOGIC_OP)
+} VK_LOGIC_OP;
+
+typedef enum _VK_SYSTEM_ALLOC_TYPE
+{
+ VK_SYSTEM_ALLOC_API_OBJECT = 0x00000000,
+ VK_SYSTEM_ALLOC_INTERNAL = 0x00000001,
+ VK_SYSTEM_ALLOC_INTERNAL_TEMP = 0x00000002,
+ VK_SYSTEM_ALLOC_INTERNAL_SHADER = 0x00000003,
+ VK_SYSTEM_ALLOC_DEBUG = 0x00000004,
+
+ VK_SYSTEM_ALLOC_BEGIN_RANGE = VK_SYSTEM_ALLOC_API_OBJECT,
+ VK_SYSTEM_ALLOC_END_RANGE = VK_SYSTEM_ALLOC_DEBUG,
+ VK_NUM_SYSTEM_ALLOC_TYPE = (VK_SYSTEM_ALLOC_END_RANGE - VK_SYSTEM_ALLOC_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_SYSTEM_ALLOC_TYPE)
+} VK_SYSTEM_ALLOC_TYPE;
+
+typedef enum _VK_PHYSICAL_GPU_TYPE
+{
+ VK_GPU_TYPE_OTHER = 0x00000000,
+ VK_GPU_TYPE_INTEGRATED = 0x00000001,
+ VK_GPU_TYPE_DISCRETE = 0x00000002,
+ VK_GPU_TYPE_VIRTUAL = 0x00000003,
+
+ VK_PHYSICAL_GPU_TYPE_BEGIN_RANGE = VK_GPU_TYPE_OTHER,
+ VK_PHYSICAL_GPU_TYPE_END_RANGE = VK_GPU_TYPE_VIRTUAL,
+ VK_NUM_PHYSICAL_GPU_TYPE = (VK_PHYSICAL_GPU_TYPE_END_RANGE - VK_PHYSICAL_GPU_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_PHYSICAL_GPU_TYPE)
+} VK_PHYSICAL_GPU_TYPE;
+
+typedef enum _VK_PHYSICAL_GPU_INFO_TYPE
+{
+ // Info type for vkGetGpuInfo()
+ VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES = 0x00000000,
+ VK_INFO_TYPE_PHYSICAL_GPU_PERFORMANCE = 0x00000001,
+ VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES = 0x00000002,
+ VK_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES = 0x00000003,
+
+ VK_INFO_TYPE_PHYSICAL_GPU_BEGIN_RANGE = VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
+ VK_INFO_TYPE_PHYSICAL_GPU_END_RANGE = VK_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES,
+ VK_NUM_INFO_TYPE_PHYSICAL_GPU = (VK_INFO_TYPE_PHYSICAL_GPU_END_RANGE - VK_INFO_TYPE_PHYSICAL_GPU_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_PHYSICAL_GPU_INFO_TYPE)
+} VK_PHYSICAL_GPU_INFO_TYPE;
+
+typedef enum _VK_FORMAT_INFO_TYPE
+{
+ // Info type for vkGetFormatInfo()
+ VK_INFO_TYPE_FORMAT_PROPERTIES = 0x00000000,
+
+ VK_INFO_TYPE_FORMAT_BEGIN_RANGE = VK_INFO_TYPE_FORMAT_PROPERTIES,
+ VK_INFO_TYPE_FORMAT_END_RANGE = VK_INFO_TYPE_FORMAT_PROPERTIES,
+ VK_NUM_INFO_TYPE_FORMAT = (VK_INFO_TYPE_FORMAT_END_RANGE - VK_INFO_TYPE_FORMAT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_FORMAT_INFO_TYPE)
+} VK_FORMAT_INFO_TYPE;
+
+typedef enum _VK_SUBRESOURCE_INFO_TYPE
+{
+ // Info type for vkGetImageSubresourceInfo()
+ VK_INFO_TYPE_SUBRESOURCE_LAYOUT = 0x00000000,
+
+ VK_INFO_TYPE_SUBRESOURCE_BEGIN_RANGE = VK_INFO_TYPE_SUBRESOURCE_LAYOUT,
+ VK_INFO_TYPE_SUBRESOURCE_END_RANGE = VK_INFO_TYPE_SUBRESOURCE_LAYOUT,
+ VK_NUM_INFO_TYPE_SUBRESOURCE = (VK_INFO_TYPE_SUBRESOURCE_END_RANGE - VK_INFO_TYPE_SUBRESOURCE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_SUBRESOURCE_INFO_TYPE)
+} VK_SUBRESOURCE_INFO_TYPE;
+
+typedef enum _VK_OBJECT_INFO_TYPE
+{
+ // Info type for vkGetObjectInfo()
+ VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT = 0x00000000,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS = 0x00000001,
+ VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS = 0x00000002,
+ VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS = 0x00000003,
+
+ VK_INFO_TYPE_BEGIN_RANGE = VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
+ VK_INFO_TYPE_END_RANGE = VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
+ VK_NUM_INFO_TYPE = (VK_INFO_TYPE_END_RANGE - VK_INFO_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_OBJECT_INFO_TYPE)
+} VK_OBJECT_INFO_TYPE;
+
+typedef enum _VK_VALIDATION_LEVEL
+{
+ VK_VALIDATION_LEVEL_0 = 0x00000000,
+ VK_VALIDATION_LEVEL_1 = 0x00000001,
+ VK_VALIDATION_LEVEL_2 = 0x00000002,
+ VK_VALIDATION_LEVEL_3 = 0x00000003,
+ VK_VALIDATION_LEVEL_4 = 0x00000004,
+
+ VK_VALIDATION_LEVEL_BEGIN_RANGE = VK_VALIDATION_LEVEL_0,
+ VK_VALIDATION_LEVEL_END_RANGE = VK_VALIDATION_LEVEL_4,
+ VK_NUM_VALIDATION_LEVEL = (VK_VALIDATION_LEVEL_END_RANGE - VK_VALIDATION_LEVEL_BEGIN_RANGE + 1),
+
+ VK_MAX_ENUM(_VK_VALIDATION_LEVEL)
+} VK_VALIDATION_LEVEL;
// ------------------------------------------------------------------------------------------------
// Error and return codes
-typedef enum _XGL_RESULT
+typedef enum _VK_RESULT
{
// Return codes for successful operation execution (>= 0)
- XGL_SUCCESS = 0x0000000,
- XGL_UNSUPPORTED = 0x0000001,
- XGL_NOT_READY = 0x0000002,
- XGL_TIMEOUT = 0x0000003,
- XGL_EVENT_SET = 0x0000004,
- XGL_EVENT_RESET = 0x0000005,
+ VK_SUCCESS = 0x0000000,
+ VK_UNSUPPORTED = 0x0000001,
+ VK_NOT_READY = 0x0000002,
+ VK_TIMEOUT = 0x0000003,
+ VK_EVENT_SET = 0x0000004,
+ VK_EVENT_RESET = 0x0000005,
// Error codes (negative values)
- XGL_ERROR_UNKNOWN = -(0x00000001),
- XGL_ERROR_UNAVAILABLE = -(0x00000002),
- XGL_ERROR_INITIALIZATION_FAILED = -(0x00000003),
- XGL_ERROR_OUT_OF_MEMORY = -(0x00000004),
- XGL_ERROR_OUT_OF_GPU_MEMORY = -(0x00000005),
- XGL_ERROR_DEVICE_ALREADY_CREATED = -(0x00000006),
- XGL_ERROR_DEVICE_LOST = -(0x00000007),
- XGL_ERROR_INVALID_POINTER = -(0x00000008),
- XGL_ERROR_INVALID_VALUE = -(0x00000009),
- XGL_ERROR_INVALID_HANDLE = -(0x0000000A),
- XGL_ERROR_INVALID_ORDINAL = -(0x0000000B),
- XGL_ERROR_INVALID_MEMORY_SIZE = -(0x0000000C),
- XGL_ERROR_INVALID_EXTENSION = -(0x0000000D),
- XGL_ERROR_INVALID_FLAGS = -(0x0000000E),
- XGL_ERROR_INVALID_ALIGNMENT = -(0x0000000F),
- XGL_ERROR_INVALID_FORMAT = -(0x00000010),
- XGL_ERROR_INVALID_IMAGE = -(0x00000011),
- XGL_ERROR_INVALID_DESCRIPTOR_SET_DATA = -(0x00000012),
- XGL_ERROR_INVALID_QUEUE_TYPE = -(0x00000013),
- XGL_ERROR_INVALID_OBJECT_TYPE = -(0x00000014),
- XGL_ERROR_UNSUPPORTED_SHADER_IL_VERSION = -(0x00000015),
- XGL_ERROR_BAD_SHADER_CODE = -(0x00000016),
- XGL_ERROR_BAD_PIPELINE_DATA = -(0x00000017),
- XGL_ERROR_TOO_MANY_MEMORY_REFERENCES = -(0x00000018),
- XGL_ERROR_NOT_MAPPABLE = -(0x00000019),
- XGL_ERROR_MEMORY_MAP_FAILED = -(0x0000001A),
- XGL_ERROR_MEMORY_UNMAP_FAILED = -(0x0000001B),
- XGL_ERROR_INCOMPATIBLE_DEVICE = -(0x0000001C),
- XGL_ERROR_INCOMPATIBLE_DRIVER = -(0x0000001D),
- XGL_ERROR_INCOMPLETE_COMMAND_BUFFER = -(0x0000001E),
- XGL_ERROR_BUILDING_COMMAND_BUFFER = -(0x0000001F),
- XGL_ERROR_MEMORY_NOT_BOUND = -(0x00000020),
- XGL_ERROR_INCOMPATIBLE_QUEUE = -(0x00000021),
- XGL_ERROR_NOT_SHAREABLE = -(0x00000022),
- XGL_MAX_ENUM(_XGL_RESULT_CODE)
-} XGL_RESULT;
+ VK_ERROR_UNKNOWN = -(0x00000001),
+ VK_ERROR_UNAVAILABLE = -(0x00000002),
+ VK_ERROR_INITIALIZATION_FAILED = -(0x00000003),
+ VK_ERROR_OUT_OF_MEMORY = -(0x00000004),
+ VK_ERROR_OUT_OF_GPU_MEMORY = -(0x00000005),
+ VK_ERROR_DEVICE_ALREADY_CREATED = -(0x00000006),
+ VK_ERROR_DEVICE_LOST = -(0x00000007),
+ VK_ERROR_INVALID_POINTER = -(0x00000008),
+ VK_ERROR_INVALID_VALUE = -(0x00000009),
+ VK_ERROR_INVALID_HANDLE = -(0x0000000A),
+ VK_ERROR_INVALID_ORDINAL = -(0x0000000B),
+ VK_ERROR_INVALID_MEMORY_SIZE = -(0x0000000C),
+ VK_ERROR_INVALID_EXTENSION = -(0x0000000D),
+ VK_ERROR_INVALID_FLAGS = -(0x0000000E),
+ VK_ERROR_INVALID_ALIGNMENT = -(0x0000000F),
+ VK_ERROR_INVALID_FORMAT = -(0x00000010),
+ VK_ERROR_INVALID_IMAGE = -(0x00000011),
+ VK_ERROR_INVALID_DESCRIPTOR_SET_DATA = -(0x00000012),
+ VK_ERROR_INVALID_QUEUE_TYPE = -(0x00000013),
+ VK_ERROR_INVALID_OBJECT_TYPE = -(0x00000014),
+ VK_ERROR_UNSUPPORTED_SHADER_IL_VERSION = -(0x00000015),
+ VK_ERROR_BAD_SHADER_CODE = -(0x00000016),
+ VK_ERROR_BAD_PIPELINE_DATA = -(0x00000017),
+ VK_ERROR_TOO_MANY_MEMORY_REFERENCES = -(0x00000018),
+ VK_ERROR_NOT_MAPPABLE = -(0x00000019),
+ VK_ERROR_MEMORY_MAP_FAILED = -(0x0000001A),
+ VK_ERROR_MEMORY_UNMAP_FAILED = -(0x0000001B),
+ VK_ERROR_INCOMPATIBLE_DEVICE = -(0x0000001C),
+ VK_ERROR_INCOMPATIBLE_DRIVER = -(0x0000001D),
+ VK_ERROR_INCOMPLETE_COMMAND_BUFFER = -(0x0000001E),
+ VK_ERROR_BUILDING_COMMAND_BUFFER = -(0x0000001F),
+ VK_ERROR_MEMORY_NOT_BOUND = -(0x00000020),
+ VK_ERROR_INCOMPATIBLE_QUEUE = -(0x00000021),
+ VK_ERROR_NOT_SHAREABLE = -(0x00000022),
+ VK_MAX_ENUM(_VK_RESULT)
+} VK_RESULT;
// ------------------------------------------------------------------------------------------------
-// XGL format definitions
-
-typedef enum _XGL_VERTEX_INPUT_STEP_RATE
-{
- XGL_VERTEX_INPUT_STEP_RATE_VERTEX = 0x0,
- XGL_VERTEX_INPUT_STEP_RATE_INSTANCE = 0x1,
- XGL_VERTEX_INPUT_STEP_RATE_DRAW = 0x2, //Optional
-
- XGL_VERTEX_INPUT_STEP_RATE_BEGIN_RANGE = XGL_VERTEX_INPUT_STEP_RATE_VERTEX,
- XGL_VERTEX_INPUT_STEP_RATE_END_RANGE = XGL_VERTEX_INPUT_STEP_RATE_DRAW,
- XGL_NUM_VERTEX_INPUT_STEP_RATE = (XGL_VERTEX_INPUT_STEP_RATE_END_RANGE - XGL_VERTEX_INPUT_STEP_RATE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_VERTEX_INPUT_STEP_RATE)
-} XGL_VERTEX_INPUT_STEP_RATE;
-
-typedef enum _XGL_FORMAT
-{
- XGL_FMT_UNDEFINED = 0x00000000,
- XGL_FMT_R4G4_UNORM = 0x00000001,
- XGL_FMT_R4G4_USCALED = 0x00000002,
- XGL_FMT_R4G4B4A4_UNORM = 0x00000003,
- XGL_FMT_R4G4B4A4_USCALED = 0x00000004,
- XGL_FMT_R5G6B5_UNORM = 0x00000005,
- XGL_FMT_R5G6B5_USCALED = 0x00000006,
- XGL_FMT_R5G5B5A1_UNORM = 0x00000007,
- XGL_FMT_R5G5B5A1_USCALED = 0x00000008,
- XGL_FMT_R8_UNORM = 0x00000009,
- XGL_FMT_R8_SNORM = 0x0000000A,
- XGL_FMT_R8_USCALED = 0x0000000B,
- XGL_FMT_R8_SSCALED = 0x0000000C,
- XGL_FMT_R8_UINT = 0x0000000D,
- XGL_FMT_R8_SINT = 0x0000000E,
- XGL_FMT_R8_SRGB = 0x0000000F,
- XGL_FMT_R8G8_UNORM = 0x00000010,
- XGL_FMT_R8G8_SNORM = 0x00000011,
- XGL_FMT_R8G8_USCALED = 0x00000012,
- XGL_FMT_R8G8_SSCALED = 0x00000013,
- XGL_FMT_R8G8_UINT = 0x00000014,
- XGL_FMT_R8G8_SINT = 0x00000015,
- XGL_FMT_R8G8_SRGB = 0x00000016,
- XGL_FMT_R8G8B8_UNORM = 0x00000017,
- XGL_FMT_R8G8B8_SNORM = 0x00000018,
- XGL_FMT_R8G8B8_USCALED = 0x00000019,
- XGL_FMT_R8G8B8_SSCALED = 0x0000001A,
- XGL_FMT_R8G8B8_UINT = 0x0000001B,
- XGL_FMT_R8G8B8_SINT = 0x0000001C,
- XGL_FMT_R8G8B8_SRGB = 0x0000001D,
- XGL_FMT_R8G8B8A8_UNORM = 0x0000001E,
- XGL_FMT_R8G8B8A8_SNORM = 0x0000001F,
- XGL_FMT_R8G8B8A8_USCALED = 0x00000020,
- XGL_FMT_R8G8B8A8_SSCALED = 0x00000021,
- XGL_FMT_R8G8B8A8_UINT = 0x00000022,
- XGL_FMT_R8G8B8A8_SINT = 0x00000023,
- XGL_FMT_R8G8B8A8_SRGB = 0x00000024,
- XGL_FMT_R10G10B10A2_UNORM = 0x00000025,
- XGL_FMT_R10G10B10A2_SNORM = 0x00000026,
- XGL_FMT_R10G10B10A2_USCALED = 0x00000027,
- XGL_FMT_R10G10B10A2_SSCALED = 0x00000028,
- XGL_FMT_R10G10B10A2_UINT = 0x00000029,
- XGL_FMT_R10G10B10A2_SINT = 0x0000002A,
- XGL_FMT_R16_UNORM = 0x0000002B,
- XGL_FMT_R16_SNORM = 0x0000002C,
- XGL_FMT_R16_USCALED = 0x0000002D,
- XGL_FMT_R16_SSCALED = 0x0000002E,
- XGL_FMT_R16_UINT = 0x0000002F,
- XGL_FMT_R16_SINT = 0x00000030,
- XGL_FMT_R16_SFLOAT = 0x00000031,
- XGL_FMT_R16G16_UNORM = 0x00000032,
- XGL_FMT_R16G16_SNORM = 0x00000033,
- XGL_FMT_R16G16_USCALED = 0x00000034,
- XGL_FMT_R16G16_SSCALED = 0x00000035,
- XGL_FMT_R16G16_UINT = 0x00000036,
- XGL_FMT_R16G16_SINT = 0x00000037,
- XGL_FMT_R16G16_SFLOAT = 0x00000038,
- XGL_FMT_R16G16B16_UNORM = 0x00000039,
- XGL_FMT_R16G16B16_SNORM = 0x0000003A,
- XGL_FMT_R16G16B16_USCALED = 0x0000003B,
- XGL_FMT_R16G16B16_SSCALED = 0x0000003C,
- XGL_FMT_R16G16B16_UINT = 0x0000003D,
- XGL_FMT_R16G16B16_SINT = 0x0000003E,
- XGL_FMT_R16G16B16_SFLOAT = 0x0000003F,
- XGL_FMT_R16G16B16A16_UNORM = 0x00000040,
- XGL_FMT_R16G16B16A16_SNORM = 0x00000041,
- XGL_FMT_R16G16B16A16_USCALED = 0x00000042,
- XGL_FMT_R16G16B16A16_SSCALED = 0x00000043,
- XGL_FMT_R16G16B16A16_UINT = 0x00000044,
- XGL_FMT_R16G16B16A16_SINT = 0x00000045,
- XGL_FMT_R16G16B16A16_SFLOAT = 0x00000046,
- XGL_FMT_R32_UINT = 0x00000047,
- XGL_FMT_R32_SINT = 0x00000048,
- XGL_FMT_R32_SFLOAT = 0x00000049,
- XGL_FMT_R32G32_UINT = 0x0000004A,
- XGL_FMT_R32G32_SINT = 0x0000004B,
- XGL_FMT_R32G32_SFLOAT = 0x0000004C,
- XGL_FMT_R32G32B32_UINT = 0x0000004D,
- XGL_FMT_R32G32B32_SINT = 0x0000004E,
- XGL_FMT_R32G32B32_SFLOAT = 0x0000004F,
- XGL_FMT_R32G32B32A32_UINT = 0x00000050,
- XGL_FMT_R32G32B32A32_SINT = 0x00000051,
- XGL_FMT_R32G32B32A32_SFLOAT = 0x00000052,
- XGL_FMT_R64_SFLOAT = 0x00000053,
- XGL_FMT_R64G64_SFLOAT = 0x00000054,
- XGL_FMT_R64G64B64_SFLOAT = 0x00000055,
- XGL_FMT_R64G64B64A64_SFLOAT = 0x00000056,
- XGL_FMT_R11G11B10_UFLOAT = 0x00000057,
- XGL_FMT_R9G9B9E5_UFLOAT = 0x00000058,
- XGL_FMT_D16_UNORM = 0x00000059,
- XGL_FMT_D24_UNORM = 0x0000005A,
- XGL_FMT_D32_SFLOAT = 0x0000005B,
- XGL_FMT_S8_UINT = 0x0000005C,
- XGL_FMT_D16_UNORM_S8_UINT = 0x0000005D,
- XGL_FMT_D24_UNORM_S8_UINT = 0x0000005E,
- XGL_FMT_D32_SFLOAT_S8_UINT = 0x0000005F,
- XGL_FMT_BC1_RGB_UNORM = 0x00000060,
- XGL_FMT_BC1_RGB_SRGB = 0x00000061,
- XGL_FMT_BC1_RGBA_UNORM = 0x00000062,
- XGL_FMT_BC1_RGBA_SRGB = 0x00000063,
- XGL_FMT_BC2_UNORM = 0x00000064,
- XGL_FMT_BC2_SRGB = 0x00000065,
- XGL_FMT_BC3_UNORM = 0x00000066,
- XGL_FMT_BC3_SRGB = 0x00000067,
- XGL_FMT_BC4_UNORM = 0x00000068,
- XGL_FMT_BC4_SNORM = 0x00000069,
- XGL_FMT_BC5_UNORM = 0x0000006A,
- XGL_FMT_BC5_SNORM = 0x0000006B,
- XGL_FMT_BC6H_UFLOAT = 0x0000006C,
- XGL_FMT_BC6H_SFLOAT = 0x0000006D,
- XGL_FMT_BC7_UNORM = 0x0000006E,
- XGL_FMT_BC7_SRGB = 0x0000006F,
- XGL_FMT_ETC2_R8G8B8_UNORM = 0x00000070,
- XGL_FMT_ETC2_R8G8B8_SRGB = 0x00000071,
- XGL_FMT_ETC2_R8G8B8A1_UNORM = 0x00000072,
- XGL_FMT_ETC2_R8G8B8A1_SRGB = 0x00000073,
- XGL_FMT_ETC2_R8G8B8A8_UNORM = 0x00000074,
- XGL_FMT_ETC2_R8G8B8A8_SRGB = 0x00000075,
- XGL_FMT_EAC_R11_UNORM = 0x00000076,
- XGL_FMT_EAC_R11_SNORM = 0x00000077,
- XGL_FMT_EAC_R11G11_UNORM = 0x00000078,
- XGL_FMT_EAC_R11G11_SNORM = 0x00000079,
- XGL_FMT_ASTC_4x4_UNORM = 0x0000007A,
- XGL_FMT_ASTC_4x4_SRGB = 0x0000007B,
- XGL_FMT_ASTC_5x4_UNORM = 0x0000007C,
- XGL_FMT_ASTC_5x4_SRGB = 0x0000007D,
- XGL_FMT_ASTC_5x5_UNORM = 0x0000007E,
- XGL_FMT_ASTC_5x5_SRGB = 0x0000007F,
- XGL_FMT_ASTC_6x5_UNORM = 0x00000080,
- XGL_FMT_ASTC_6x5_SRGB = 0x00000081,
- XGL_FMT_ASTC_6x6_UNORM = 0x00000082,
- XGL_FMT_ASTC_6x6_SRGB = 0x00000083,
- XGL_FMT_ASTC_8x5_UNORM = 0x00000084,
- XGL_FMT_ASTC_8x5_SRGB = 0x00000085,
- XGL_FMT_ASTC_8x6_UNORM = 0x00000086,
- XGL_FMT_ASTC_8x6_SRGB = 0x00000087,
- XGL_FMT_ASTC_8x8_UNORM = 0x00000088,
- XGL_FMT_ASTC_8x8_SRGB = 0x00000089,
- XGL_FMT_ASTC_10x5_UNORM = 0x0000008A,
- XGL_FMT_ASTC_10x5_SRGB = 0x0000008B,
- XGL_FMT_ASTC_10x6_UNORM = 0x0000008C,
- XGL_FMT_ASTC_10x6_SRGB = 0x0000008D,
- XGL_FMT_ASTC_10x8_UNORM = 0x0000008E,
- XGL_FMT_ASTC_10x8_SRGB = 0x0000008F,
- XGL_FMT_ASTC_10x10_UNORM = 0x00000090,
- XGL_FMT_ASTC_10x10_SRGB = 0x00000091,
- XGL_FMT_ASTC_12x10_UNORM = 0x00000092,
- XGL_FMT_ASTC_12x10_SRGB = 0x00000093,
- XGL_FMT_ASTC_12x12_UNORM = 0x00000094,
- XGL_FMT_ASTC_12x12_SRGB = 0x00000095,
- XGL_FMT_B4G4R4A4_UNORM = 0x00000096,
- XGL_FMT_B5G5R5A1_UNORM = 0x00000097,
- XGL_FMT_B5G6R5_UNORM = 0x00000098,
- XGL_FMT_B5G6R5_USCALED = 0x00000099,
- XGL_FMT_B8G8R8_UNORM = 0x0000009A,
- XGL_FMT_B8G8R8_SNORM = 0x0000009B,
- XGL_FMT_B8G8R8_USCALED = 0x0000009C,
- XGL_FMT_B8G8R8_SSCALED = 0x0000009D,
- XGL_FMT_B8G8R8_UINT = 0x0000009E,
- XGL_FMT_B8G8R8_SINT = 0x0000009F,
- XGL_FMT_B8G8R8_SRGB = 0x000000A0,
- XGL_FMT_B8G8R8A8_UNORM = 0x000000A1,
- XGL_FMT_B8G8R8A8_SNORM = 0x000000A2,
- XGL_FMT_B8G8R8A8_USCALED = 0x000000A3,
- XGL_FMT_B8G8R8A8_SSCALED = 0x000000A4,
- XGL_FMT_B8G8R8A8_UINT = 0x000000A5,
- XGL_FMT_B8G8R8A8_SINT = 0x000000A6,
- XGL_FMT_B8G8R8A8_SRGB = 0x000000A7,
- XGL_FMT_B10G10R10A2_UNORM = 0x000000A8,
- XGL_FMT_B10G10R10A2_SNORM = 0x000000A9,
- XGL_FMT_B10G10R10A2_USCALED = 0x000000AA,
- XGL_FMT_B10G10R10A2_SSCALED = 0x000000AB,
- XGL_FMT_B10G10R10A2_UINT = 0x000000AC,
- XGL_FMT_B10G10R10A2_SINT = 0x000000AD,
-
- XGL_FMT_BEGIN_RANGE = XGL_FMT_UNDEFINED,
- XGL_FMT_END_RANGE = XGL_FMT_B10G10R10A2_SINT,
- XGL_NUM_FMT = (XGL_FMT_END_RANGE - XGL_FMT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_FORMAT)
-} XGL_FORMAT;
+// VK format definitions
+
+typedef enum _VK_VERTEX_INPUT_STEP_RATE
+{
+ VK_VERTEX_INPUT_STEP_RATE_VERTEX = 0x0,
+ VK_VERTEX_INPUT_STEP_RATE_INSTANCE = 0x1,
+ VK_VERTEX_INPUT_STEP_RATE_DRAW = 0x2, //Optional
+
+ VK_VERTEX_INPUT_STEP_RATE_BEGIN_RANGE = VK_VERTEX_INPUT_STEP_RATE_VERTEX,
+ VK_VERTEX_INPUT_STEP_RATE_END_RANGE = VK_VERTEX_INPUT_STEP_RATE_DRAW,
+ VK_NUM_VERTEX_INPUT_STEP_RATE = (VK_VERTEX_INPUT_STEP_RATE_END_RANGE - VK_VERTEX_INPUT_STEP_RATE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_VERTEX_INPUT_STEP_RATE)
+} VK_VERTEX_INPUT_STEP_RATE;
+
+typedef enum _VK_FORMAT
+{
+ VK_FMT_UNDEFINED = 0x00000000,
+ VK_FMT_R4G4_UNORM = 0x00000001,
+ VK_FMT_R4G4_USCALED = 0x00000002,
+ VK_FMT_R4G4B4A4_UNORM = 0x00000003,
+ VK_FMT_R4G4B4A4_USCALED = 0x00000004,
+ VK_FMT_R5G6B5_UNORM = 0x00000005,
+ VK_FMT_R5G6B5_USCALED = 0x00000006,
+ VK_FMT_R5G5B5A1_UNORM = 0x00000007,
+ VK_FMT_R5G5B5A1_USCALED = 0x00000008,
+ VK_FMT_R8_UNORM = 0x00000009,
+ VK_FMT_R8_SNORM = 0x0000000A,
+ VK_FMT_R8_USCALED = 0x0000000B,
+ VK_FMT_R8_SSCALED = 0x0000000C,
+ VK_FMT_R8_UINT = 0x0000000D,
+ VK_FMT_R8_SINT = 0x0000000E,
+ VK_FMT_R8_SRGB = 0x0000000F,
+ VK_FMT_R8G8_UNORM = 0x00000010,
+ VK_FMT_R8G8_SNORM = 0x00000011,
+ VK_FMT_R8G8_USCALED = 0x00000012,
+ VK_FMT_R8G8_SSCALED = 0x00000013,
+ VK_FMT_R8G8_UINT = 0x00000014,
+ VK_FMT_R8G8_SINT = 0x00000015,
+ VK_FMT_R8G8_SRGB = 0x00000016,
+ VK_FMT_R8G8B8_UNORM = 0x00000017,
+ VK_FMT_R8G8B8_SNORM = 0x00000018,
+ VK_FMT_R8G8B8_USCALED = 0x00000019,
+ VK_FMT_R8G8B8_SSCALED = 0x0000001A,
+ VK_FMT_R8G8B8_UINT = 0x0000001B,
+ VK_FMT_R8G8B8_SINT = 0x0000001C,
+ VK_FMT_R8G8B8_SRGB = 0x0000001D,
+ VK_FMT_R8G8B8A8_UNORM = 0x0000001E,
+ VK_FMT_R8G8B8A8_SNORM = 0x0000001F,
+ VK_FMT_R8G8B8A8_USCALED = 0x00000020,
+ VK_FMT_R8G8B8A8_SSCALED = 0x00000021,
+ VK_FMT_R8G8B8A8_UINT = 0x00000022,
+ VK_FMT_R8G8B8A8_SINT = 0x00000023,
+ VK_FMT_R8G8B8A8_SRGB = 0x00000024,
+ VK_FMT_R10G10B10A2_UNORM = 0x00000025,
+ VK_FMT_R10G10B10A2_SNORM = 0x00000026,
+ VK_FMT_R10G10B10A2_USCALED = 0x00000027,
+ VK_FMT_R10G10B10A2_SSCALED = 0x00000028,
+ VK_FMT_R10G10B10A2_UINT = 0x00000029,
+ VK_FMT_R10G10B10A2_SINT = 0x0000002A,
+ VK_FMT_R16_UNORM = 0x0000002B,
+ VK_FMT_R16_SNORM = 0x0000002C,
+ VK_FMT_R16_USCALED = 0x0000002D,
+ VK_FMT_R16_SSCALED = 0x0000002E,
+ VK_FMT_R16_UINT = 0x0000002F,
+ VK_FMT_R16_SINT = 0x00000030,
+ VK_FMT_R16_SFLOAT = 0x00000031,
+ VK_FMT_R16G16_UNORM = 0x00000032,
+ VK_FMT_R16G16_SNORM = 0x00000033,
+ VK_FMT_R16G16_USCALED = 0x00000034,
+ VK_FMT_R16G16_SSCALED = 0x00000035,
+ VK_FMT_R16G16_UINT = 0x00000036,
+ VK_FMT_R16G16_SINT = 0x00000037,
+ VK_FMT_R16G16_SFLOAT = 0x00000038,
+ VK_FMT_R16G16B16_UNORM = 0x00000039,
+ VK_FMT_R16G16B16_SNORM = 0x0000003A,
+ VK_FMT_R16G16B16_USCALED = 0x0000003B,
+ VK_FMT_R16G16B16_SSCALED = 0x0000003C,
+ VK_FMT_R16G16B16_UINT = 0x0000003D,
+ VK_FMT_R16G16B16_SINT = 0x0000003E,
+ VK_FMT_R16G16B16_SFLOAT = 0x0000003F,
+ VK_FMT_R16G16B16A16_UNORM = 0x00000040,
+ VK_FMT_R16G16B16A16_SNORM = 0x00000041,
+ VK_FMT_R16G16B16A16_USCALED = 0x00000042,
+ VK_FMT_R16G16B16A16_SSCALED = 0x00000043,
+ VK_FMT_R16G16B16A16_UINT = 0x00000044,
+ VK_FMT_R16G16B16A16_SINT = 0x00000045,
+ VK_FMT_R16G16B16A16_SFLOAT = 0x00000046,
+ VK_FMT_R32_UINT = 0x00000047,
+ VK_FMT_R32_SINT = 0x00000048,
+ VK_FMT_R32_SFLOAT = 0x00000049,
+ VK_FMT_R32G32_UINT = 0x0000004A,
+ VK_FMT_R32G32_SINT = 0x0000004B,
+ VK_FMT_R32G32_SFLOAT = 0x0000004C,
+ VK_FMT_R32G32B32_UINT = 0x0000004D,
+ VK_FMT_R32G32B32_SINT = 0x0000004E,
+ VK_FMT_R32G32B32_SFLOAT = 0x0000004F,
+ VK_FMT_R32G32B32A32_UINT = 0x00000050,
+ VK_FMT_R32G32B32A32_SINT = 0x00000051,
+ VK_FMT_R32G32B32A32_SFLOAT = 0x00000052,
+ VK_FMT_R64_SFLOAT = 0x00000053,
+ VK_FMT_R64G64_SFLOAT = 0x00000054,
+ VK_FMT_R64G64B64_SFLOAT = 0x00000055,
+ VK_FMT_R64G64B64A64_SFLOAT = 0x00000056,
+ VK_FMT_R11G11B10_UFLOAT = 0x00000057,
+ VK_FMT_R9G9B9E5_UFLOAT = 0x00000058,
+ VK_FMT_D16_UNORM = 0x00000059,
+ VK_FMT_D24_UNORM = 0x0000005A,
+ VK_FMT_D32_SFLOAT = 0x0000005B,
+ VK_FMT_S8_UINT = 0x0000005C,
+ VK_FMT_D16_UNORM_S8_UINT = 0x0000005D,
+ VK_FMT_D24_UNORM_S8_UINT = 0x0000005E,
+ VK_FMT_D32_SFLOAT_S8_UINT = 0x0000005F,
+ VK_FMT_BC1_RGB_UNORM = 0x00000060,
+ VK_FMT_BC1_RGB_SRGB = 0x00000061,
+ VK_FMT_BC1_RGBA_UNORM = 0x00000062,
+ VK_FMT_BC1_RGBA_SRGB = 0x00000063,
+ VK_FMT_BC2_UNORM = 0x00000064,
+ VK_FMT_BC2_SRGB = 0x00000065,
+ VK_FMT_BC3_UNORM = 0x00000066,
+ VK_FMT_BC3_SRGB = 0x00000067,
+ VK_FMT_BC4_UNORM = 0x00000068,
+ VK_FMT_BC4_SNORM = 0x00000069,
+ VK_FMT_BC5_UNORM = 0x0000006A,
+ VK_FMT_BC5_SNORM = 0x0000006B,
+ VK_FMT_BC6H_UFLOAT = 0x0000006C,
+ VK_FMT_BC6H_SFLOAT = 0x0000006D,
+ VK_FMT_BC7_UNORM = 0x0000006E,
+ VK_FMT_BC7_SRGB = 0x0000006F,
+ VK_FMT_ETC2_R8G8B8_UNORM = 0x00000070,
+ VK_FMT_ETC2_R8G8B8_SRGB = 0x00000071,
+ VK_FMT_ETC2_R8G8B8A1_UNORM = 0x00000072,
+ VK_FMT_ETC2_R8G8B8A1_SRGB = 0x00000073,
+ VK_FMT_ETC2_R8G8B8A8_UNORM = 0x00000074,
+ VK_FMT_ETC2_R8G8B8A8_SRGB = 0x00000075,
+ VK_FMT_EAC_R11_UNORM = 0x00000076,
+ VK_FMT_EAC_R11_SNORM = 0x00000077,
+ VK_FMT_EAC_R11G11_UNORM = 0x00000078,
+ VK_FMT_EAC_R11G11_SNORM = 0x00000079,
+ VK_FMT_ASTC_4x4_UNORM = 0x0000007A,
+ VK_FMT_ASTC_4x4_SRGB = 0x0000007B,
+ VK_FMT_ASTC_5x4_UNORM = 0x0000007C,
+ VK_FMT_ASTC_5x4_SRGB = 0x0000007D,
+ VK_FMT_ASTC_5x5_UNORM = 0x0000007E,
+ VK_FMT_ASTC_5x5_SRGB = 0x0000007F,
+ VK_FMT_ASTC_6x5_UNORM = 0x00000080,
+ VK_FMT_ASTC_6x5_SRGB = 0x00000081,
+ VK_FMT_ASTC_6x6_UNORM = 0x00000082,
+ VK_FMT_ASTC_6x6_SRGB = 0x00000083,
+ VK_FMT_ASTC_8x5_UNORM = 0x00000084,
+ VK_FMT_ASTC_8x5_SRGB = 0x00000085,
+ VK_FMT_ASTC_8x6_UNORM = 0x00000086,
+ VK_FMT_ASTC_8x6_SRGB = 0x00000087,
+ VK_FMT_ASTC_8x8_UNORM = 0x00000088,
+ VK_FMT_ASTC_8x8_SRGB = 0x00000089,
+ VK_FMT_ASTC_10x5_UNORM = 0x0000008A,
+ VK_FMT_ASTC_10x5_SRGB = 0x0000008B,
+ VK_FMT_ASTC_10x6_UNORM = 0x0000008C,
+ VK_FMT_ASTC_10x6_SRGB = 0x0000008D,
+ VK_FMT_ASTC_10x8_UNORM = 0x0000008E,
+ VK_FMT_ASTC_10x8_SRGB = 0x0000008F,
+ VK_FMT_ASTC_10x10_UNORM = 0x00000090,
+ VK_FMT_ASTC_10x10_SRGB = 0x00000091,
+ VK_FMT_ASTC_12x10_UNORM = 0x00000092,
+ VK_FMT_ASTC_12x10_SRGB = 0x00000093,
+ VK_FMT_ASTC_12x12_UNORM = 0x00000094,
+ VK_FMT_ASTC_12x12_SRGB = 0x00000095,
+ VK_FMT_B4G4R4A4_UNORM = 0x00000096,
+ VK_FMT_B5G5R5A1_UNORM = 0x00000097,
+ VK_FMT_B5G6R5_UNORM = 0x00000098,
+ VK_FMT_B5G6R5_USCALED = 0x00000099,
+ VK_FMT_B8G8R8_UNORM = 0x0000009A,
+ VK_FMT_B8G8R8_SNORM = 0x0000009B,
+ VK_FMT_B8G8R8_USCALED = 0x0000009C,
+ VK_FMT_B8G8R8_SSCALED = 0x0000009D,
+ VK_FMT_B8G8R8_UINT = 0x0000009E,
+ VK_FMT_B8G8R8_SINT = 0x0000009F,
+ VK_FMT_B8G8R8_SRGB = 0x000000A0,
+ VK_FMT_B8G8R8A8_UNORM = 0x000000A1,
+ VK_FMT_B8G8R8A8_SNORM = 0x000000A2,
+ VK_FMT_B8G8R8A8_USCALED = 0x000000A3,
+ VK_FMT_B8G8R8A8_SSCALED = 0x000000A4,
+ VK_FMT_B8G8R8A8_UINT = 0x000000A5,
+ VK_FMT_B8G8R8A8_SINT = 0x000000A6,
+ VK_FMT_B8G8R8A8_SRGB = 0x000000A7,
+ VK_FMT_B10G10R10A2_UNORM = 0x000000A8,
+ VK_FMT_B10G10R10A2_SNORM = 0x000000A9,
+ VK_FMT_B10G10R10A2_USCALED = 0x000000AA,
+ VK_FMT_B10G10R10A2_SSCALED = 0x000000AB,
+ VK_FMT_B10G10R10A2_UINT = 0x000000AC,
+ VK_FMT_B10G10R10A2_SINT = 0x000000AD,
+
+ VK_FMT_BEGIN_RANGE = VK_FMT_UNDEFINED,
+ VK_FMT_END_RANGE = VK_FMT_B10G10R10A2_SINT,
+ VK_NUM_FMT = (VK_FMT_END_RANGE - VK_FMT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_FORMAT)
+} VK_FORMAT;
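The `*_BEGIN_RANGE` / `*_END_RANGE` / `VK_NUM_FMT` trailer on the enum above exists so validation code can range-check an incoming value and size lookup tables. A minimal sketch, reproducing only the two boundary enumerants as a standalone assumption:

```c
#include <assert.h>

/* Hypothetical subset of the VK_FORMAT enum above, reproduced so this
 * sketch compiles on its own; the full list is in the header. */
typedef enum VK_FORMAT {
    VK_FMT_UNDEFINED = 0x00000000,
    VK_FMT_B10G10R10A2_SINT = 0x000000AD,

    VK_FMT_BEGIN_RANGE = VK_FMT_UNDEFINED,
    VK_FMT_END_RANGE = VK_FMT_B10G10R10A2_SINT,
    VK_NUM_FMT = (VK_FMT_END_RANGE - VK_FMT_BEGIN_RANGE + 1)
} VK_FORMAT;

/* Range-check an untrusted integer before treating it as a VK_FORMAT,
 * e.g. in a validation layer. VK_NUM_FMT also sizes per-format tables. */
static int vk_format_is_valid(int fmt)
{
    return fmt >= VK_FMT_BEGIN_RANGE && fmt <= VK_FMT_END_RANGE;
}
```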
// Shader stage enumerant
-typedef enum _XGL_PIPELINE_SHADER_STAGE
-{
- XGL_SHADER_STAGE_VERTEX = 0,
- XGL_SHADER_STAGE_TESS_CONTROL = 1,
- XGL_SHADER_STAGE_TESS_EVALUATION = 2,
- XGL_SHADER_STAGE_GEOMETRY = 3,
- XGL_SHADER_STAGE_FRAGMENT = 4,
- XGL_SHADER_STAGE_COMPUTE = 5,
-
- XGL_SHADER_STAGE_BEGIN_RANGE = XGL_SHADER_STAGE_VERTEX,
- XGL_SHADER_STAGE_END_RANGE = XGL_SHADER_STAGE_COMPUTE,
- XGL_NUM_SHADER_STAGE = (XGL_SHADER_STAGE_END_RANGE - XGL_SHADER_STAGE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_PIPELINE_SHADER_STAGE)
-} XGL_PIPELINE_SHADER_STAGE;
-
-typedef enum _XGL_SHADER_STAGE_FLAGS
-{
- XGL_SHADER_STAGE_FLAGS_VERTEX_BIT = 0x00000001,
- XGL_SHADER_STAGE_FLAGS_TESS_CONTROL_BIT = 0x00000002,
- XGL_SHADER_STAGE_FLAGS_TESS_EVALUATION_BIT = 0x00000004,
- XGL_SHADER_STAGE_FLAGS_GEOMETRY_BIT = 0x00000008,
- XGL_SHADER_STAGE_FLAGS_FRAGMENT_BIT = 0x00000010,
- XGL_SHADER_STAGE_FLAGS_COMPUTE_BIT = 0x00000020,
-
- XGL_SHADER_STAGE_FLAGS_ALL = 0x7FFFFFFF,
- XGL_MAX_ENUM(_XGL_SHADER_STAGE_FLAGS)
-} XGL_SHADER_STAGE_FLAGS;
+typedef enum _VK_PIPELINE_SHADER_STAGE
+{
+ VK_SHADER_STAGE_VERTEX = 0,
+ VK_SHADER_STAGE_TESS_CONTROL = 1,
+ VK_SHADER_STAGE_TESS_EVALUATION = 2,
+ VK_SHADER_STAGE_GEOMETRY = 3,
+ VK_SHADER_STAGE_FRAGMENT = 4,
+ VK_SHADER_STAGE_COMPUTE = 5,
+
+ VK_SHADER_STAGE_BEGIN_RANGE = VK_SHADER_STAGE_VERTEX,
+ VK_SHADER_STAGE_END_RANGE = VK_SHADER_STAGE_COMPUTE,
+ VK_NUM_SHADER_STAGE = (VK_SHADER_STAGE_END_RANGE - VK_SHADER_STAGE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_PIPELINE_SHADER_STAGE)
+} VK_PIPELINE_SHADER_STAGE;
+
+typedef enum _VK_SHADER_STAGE_FLAGS
+{
+ VK_SHADER_STAGE_FLAGS_VERTEX_BIT = 0x00000001,
+ VK_SHADER_STAGE_FLAGS_TESS_CONTROL_BIT = 0x00000002,
+ VK_SHADER_STAGE_FLAGS_TESS_EVALUATION_BIT = 0x00000004,
+ VK_SHADER_STAGE_FLAGS_GEOMETRY_BIT = 0x00000008,
+ VK_SHADER_STAGE_FLAGS_FRAGMENT_BIT = 0x00000010,
+ VK_SHADER_STAGE_FLAGS_COMPUTE_BIT = 0x00000020,
+
+ VK_SHADER_STAGE_FLAGS_ALL = 0x7FFFFFFF,
+ VK_MAX_ENUM(_VK_SHADER_STAGE_FLAGS)
+} VK_SHADER_STAGE_FLAGS;
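Note that each `VK_SHADER_STAGE_FLAGS_*_BIT` above is `1 << ` the matching `VK_PIPELINE_SHADER_STAGE` ordinal (vertex 0 → 0x1, compute 5 → 0x20), so converting a stage enum to its flag bit is a single shift. A standalone sketch mirroring a few of the enumerants:

```c
#include <assert.h>

/* Hypothetical mirror of three stage ordinals from the enum above. */
typedef enum VK_PIPELINE_SHADER_STAGE {
    VK_SHADER_STAGE_VERTEX = 0,
    VK_SHADER_STAGE_FRAGMENT = 4,
    VK_SHADER_STAGE_COMPUTE = 5
} VK_PIPELINE_SHADER_STAGE;

/* Stage ordinal -> stage flag bit, exploiting the 1 << ordinal layout. */
static unsigned vk_stage_to_flag(VK_PIPELINE_SHADER_STAGE stage)
{
    return 1u << stage;
}
```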
// Structure type enumerant
-typedef enum _XGL_STRUCTURE_TYPE
-{
- XGL_STRUCTURE_TYPE_APPLICATION_INFO = 0,
- XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO = 1,
- XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO = 2,
- XGL_STRUCTURE_TYPE_MEMORY_OPEN_INFO = 4,
- XGL_STRUCTURE_TYPE_PEER_MEMORY_OPEN_INFO = 5,
- XGL_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO = 6,
- XGL_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO = 7,
- XGL_STRUCTURE_TYPE_EVENT_WAIT_INFO = 8,
- XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO = 9,
- XGL_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO = 10,
- XGL_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO = 11,
- XGL_STRUCTURE_TYPE_SHADER_CREATE_INFO = 12,
- XGL_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO = 13,
- XGL_STRUCTURE_TYPE_SAMPLER_CREATE_INFO = 14,
- XGL_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO = 15,
- XGL_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO = 16,
- XGL_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO = 17,
- XGL_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO = 18,
- XGL_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO = 19,
- XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO = 20,
- XGL_STRUCTURE_TYPE_EVENT_CREATE_INFO = 21,
- XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO = 22,
- XGL_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO = 23,
- XGL_STRUCTURE_TYPE_SEMAPHORE_OPEN_INFO = 24,
- XGL_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO = 25,
- XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO = 26,
- XGL_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO = 27,
- XGL_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO = 28,
- XGL_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO = 29,
- XGL_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO = 30,
- XGL_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO = 31,
- XGL_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO = 32,
- XGL_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO = 33,
- XGL_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO = 34,
- XGL_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO = 35,
- XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO = 36,
- XGL_STRUCTURE_TYPE_BUFFER_CREATE_INFO = 37,
- XGL_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO = 38,
- XGL_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO = 39,
- XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO = 40,
- XGL_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO = 41,
- XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO = 42,
- XGL_STRUCTURE_TYPE_LAYER_CREATE_INFO = 43,
- XGL_STRUCTURE_TYPE_PIPELINE_BARRIER = 44,
- XGL_STRUCTURE_TYPE_MEMORY_BARRIER = 45,
- XGL_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER = 46,
- XGL_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER = 47,
- XGL_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO = 48,
- XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS = 49,
- XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES = 50,
- XGL_STRUCTURE_TYPE_UPDATE_IMAGES = 51,
- XGL_STRUCTURE_TYPE_UPDATE_BUFFERS = 52,
- XGL_STRUCTURE_TYPE_UPDATE_AS_COPY = 53,
- XGL_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO = 54,
- XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO = 55,
- XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO = 56,
- XGL_STRUCTURE_TYPE_BEGIN_RANGE = XGL_STRUCTURE_TYPE_APPLICATION_INFO,
- XGL_STRUCTURE_TYPE_END_RANGE = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
- XGL_NUM_STRUCTURE_TYPE = (XGL_STRUCTURE_TYPE_END_RANGE - XGL_STRUCTURE_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_STRUCTURE_TYPE)
-} XGL_STRUCTURE_TYPE;
+typedef enum _VK_STRUCTURE_TYPE
+{
+ VK_STRUCTURE_TYPE_APPLICATION_INFO = 0,
+ VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO = 1,
+ VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO = 2,
+ VK_STRUCTURE_TYPE_MEMORY_OPEN_INFO = 4,
+ VK_STRUCTURE_TYPE_PEER_MEMORY_OPEN_INFO = 5,
+ VK_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO = 6,
+ VK_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO = 7,
+ VK_STRUCTURE_TYPE_EVENT_WAIT_INFO = 8,
+ VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO = 9,
+ VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO = 10,
+ VK_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO = 11,
+ VK_STRUCTURE_TYPE_SHADER_CREATE_INFO = 12,
+ VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO = 13,
+ VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO = 14,
+ VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO = 15,
+ VK_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO = 16,
+ VK_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO = 17,
+ VK_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO = 18,
+ VK_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO = 19,
+ VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO = 20,
+ VK_STRUCTURE_TYPE_EVENT_CREATE_INFO = 21,
+ VK_STRUCTURE_TYPE_FENCE_CREATE_INFO = 22,
+ VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO = 23,
+ VK_STRUCTURE_TYPE_SEMAPHORE_OPEN_INFO = 24,
+ VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO = 25,
+ VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO = 26,
+ VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO = 27,
+ VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO = 28,
+ VK_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO = 29,
+ VK_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO = 30,
+ VK_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO = 31,
+ VK_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO = 32,
+ VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO = 33,
+ VK_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO = 34,
+ VK_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO = 35,
+ VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO = 36,
+ VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO = 37,
+ VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO = 38,
+ VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO = 39,
+ VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO = 40,
+ VK_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO = 41,
+ VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO = 42,
+ VK_STRUCTURE_TYPE_LAYER_CREATE_INFO = 43,
+ VK_STRUCTURE_TYPE_PIPELINE_BARRIER = 44,
+ VK_STRUCTURE_TYPE_MEMORY_BARRIER = 45,
+ VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER = 46,
+ VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER = 47,
+ VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO = 48,
+ VK_STRUCTURE_TYPE_UPDATE_SAMPLERS = 49,
+ VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES = 50,
+ VK_STRUCTURE_TYPE_UPDATE_IMAGES = 51,
+ VK_STRUCTURE_TYPE_UPDATE_BUFFERS = 52,
+ VK_STRUCTURE_TYPE_UPDATE_AS_COPY = 53,
+ VK_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO = 54,
+ VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO = 55,
+ VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO = 56,
+ VK_STRUCTURE_TYPE_BEGIN_RANGE = VK_STRUCTURE_TYPE_APPLICATION_INFO,
+ VK_STRUCTURE_TYPE_END_RANGE = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
+ VK_NUM_STRUCTURE_TYPE = (VK_STRUCTURE_TYPE_END_RANGE - VK_STRUCTURE_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_STRUCTURE_TYPE)
+} VK_STRUCTURE_TYPE;
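These structure-type enumerants serve the API's tagging convention: each info structure begins with an `sType` field so a layer can identify a structure handed to it through a `void` pointer. The struct layout below is illustrative only (the real create-info structs are not part of this hunk):

```c
#include <assert.h>

/* Hypothetical mirror of two enumerants from the list above. */
typedef enum VK_STRUCTURE_TYPE {
    VK_STRUCTURE_TYPE_APPLICATION_INFO = 0,
    VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO = 56
} VK_STRUCTURE_TYPE;

/* Illustrative common header: sType tags the struct, pNext chains
 * extension structures. Not the real VK_APPLICATION_INFO layout. */
typedef struct {
    VK_STRUCTURE_TYPE sType;
    const void *pNext;
} VK_BASE_HEADER;

/* Read the tag back out of an opaque pointer, as a layer would. */
static VK_STRUCTURE_TYPE vk_identify(const void *pStruct)
{
    return ((const VK_BASE_HEADER *)pStruct)->sType;
}

static int vk_identify_app_info(void)
{
    VK_BASE_HEADER hdr = { VK_STRUCTURE_TYPE_APPLICATION_INFO, 0 };
    return vk_identify(&hdr) == VK_STRUCTURE_TYPE_APPLICATION_INFO;
}
```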
// ------------------------------------------------------------------------------------------------
// Flags
// Device creation flags
-typedef enum _XGL_DEVICE_CREATE_FLAGS
+typedef enum _VK_DEVICE_CREATE_FLAGS
{
- XGL_DEVICE_CREATE_VALIDATION_BIT = 0x00000001,
- XGL_DEVICE_CREATE_MGPU_IQ_MATCH_BIT = 0x00000002,
- XGL_MAX_ENUM(_XGL_DEVICE_CREATE_FLAGS)
-} XGL_DEVICE_CREATE_FLAGS;
+ VK_DEVICE_CREATE_VALIDATION_BIT = 0x00000001,
+ VK_DEVICE_CREATE_MGPU_IQ_MATCH_BIT = 0x00000002,
+ VK_MAX_ENUM(_VK_DEVICE_CREATE_FLAGS)
+} VK_DEVICE_CREATE_FLAGS;
// Queue capabilities
-typedef enum _XGL_QUEUE_FLAGS
-{
- XGL_QUEUE_GRAPHICS_BIT = 0x00000001, // Queue supports graphics operations
- XGL_QUEUE_COMPUTE_BIT = 0x00000002, // Queue supports compute operations
- XGL_QUEUE_DMA_BIT = 0x00000004, // Queue supports DMA operations
- XGL_QUEUE_EXTENDED_BIT = 0x40000000, // Extended queue
- XGL_MAX_ENUM(_XGL_QUEUE_FLAGS)
-} XGL_QUEUE_FLAGS;
-
-// memory properties passed into xglAllocMemory().
-typedef enum _XGL_MEMORY_PROPERTY_FLAGS
-{
- XGL_MEMORY_PROPERTY_GPU_ONLY = 0x00000000, // If not set, then allocate memory on device (GPU)
- XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT = 0x00000001,
- XGL_MEMORY_PROPERTY_CPU_GPU_COHERENT_BIT = 0x00000002,
- XGL_MEMORY_PROPERTY_CPU_UNCACHED_BIT = 0x00000004,
- XGL_MEMORY_PROPERTY_CPU_WRITE_COMBINED_BIT = 0x00000008,
- XGL_MEMORY_PROPERTY_PREFER_CPU_LOCAL = 0x00000010, // all else being equal, prefer CPU access
- XGL_MEMORY_PROPERTY_SHAREABLE_BIT = 0x00000020,
- XGL_MAX_ENUM(_XGL_MEMORY_PROPERTY_FLAGS)
-} XGL_MEMORY_PROPERTY_FLAGS;
-
-typedef enum _XGL_MEMORY_TYPE
-{
- XGL_MEMORY_TYPE_OTHER = 0x00000000, // device memory that is not any of the others
- XGL_MEMORY_TYPE_BUFFER = 0x00000001, // memory for buffers and associated information
- XGL_MEMORY_TYPE_IMAGE = 0x00000002, // memory for images and associated information
-
- XGL_MEMORY_TYPE_BEGIN_RANGE = XGL_MEMORY_TYPE_OTHER,
- XGL_MEMORY_TYPE_END_RANGE = XGL_MEMORY_TYPE_IMAGE,
- XGL_NUM_MEMORY_TYPE = (XGL_MEMORY_TYPE_END_RANGE - XGL_MEMORY_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_MEMORY_TYPE)
-} XGL_MEMORY_TYPE;
+typedef enum _VK_QUEUE_FLAGS
+{
+ VK_QUEUE_GRAPHICS_BIT = 0x00000001, // Queue supports graphics operations
+ VK_QUEUE_COMPUTE_BIT = 0x00000002, // Queue supports compute operations
+ VK_QUEUE_DMA_BIT = 0x00000004, // Queue supports DMA operations
+ VK_QUEUE_EXTENDED_BIT = 0x40000000, // Extended queue
+ VK_MAX_ENUM(_VK_QUEUE_FLAGS)
+} VK_QUEUE_FLAGS;
+
+// memory properties passed into vkAllocMemory().
+typedef enum _VK_MEMORY_PROPERTY_FLAGS
+{
+ VK_MEMORY_PROPERTY_GPU_ONLY = 0x00000000, // If not set, then allocate memory on device (GPU)
+ VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT = 0x00000001,
+ VK_MEMORY_PROPERTY_CPU_GPU_COHERENT_BIT = 0x00000002,
+ VK_MEMORY_PROPERTY_CPU_UNCACHED_BIT = 0x00000004,
+ VK_MEMORY_PROPERTY_CPU_WRITE_COMBINED_BIT = 0x00000008,
+ VK_MEMORY_PROPERTY_PREFER_CPU_LOCAL = 0x00000010, // all else being equal, prefer CPU access
+ VK_MEMORY_PROPERTY_SHAREABLE_BIT = 0x00000020,
+ VK_MAX_ENUM(_VK_MEMORY_PROPERTY_FLAGS)
+} VK_MEMORY_PROPERTY_FLAGS;
+
+typedef enum _VK_MEMORY_TYPE
+{
+ VK_MEMORY_TYPE_OTHER = 0x00000000, // device memory that is not any of the others
+ VK_MEMORY_TYPE_BUFFER = 0x00000001, // memory for buffers and associated information
+ VK_MEMORY_TYPE_IMAGE = 0x00000002, // memory for images and associated information
+
+ VK_MEMORY_TYPE_BEGIN_RANGE = VK_MEMORY_TYPE_OTHER,
+ VK_MEMORY_TYPE_END_RANGE = VK_MEMORY_TYPE_IMAGE,
+ VK_NUM_MEMORY_TYPE = (VK_MEMORY_TYPE_END_RANGE - VK_MEMORY_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_MEMORY_TYPE)
+} VK_MEMORY_TYPE;
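One subtlety in the memory property flags above: `VK_MEMORY_PROPERTY_GPU_ONLY` is 0, i.e. the absence of any CPU bits, so code deciding whether an allocation can be mapped should test the `CPU_VISIBLE` bit rather than compare against `GPU_ONLY`. A standalone sketch under that assumption:

```c
#include <assert.h>

/* Hypothetical subset of VK_MEMORY_PROPERTY_FLAGS from the enum above. */
enum {
    VK_MEMORY_PROPERTY_GPU_ONLY = 0x00000000,
    VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT = 0x00000001,
    VK_MEMORY_PROPERTY_CPU_GPU_COHERENT_BIT = 0x00000002
};

/* An allocation is mappable iff the CPU_VISIBLE bit is set; without it
 * a map attempt would fail with VK_ERROR_NOT_MAPPABLE. */
static int vk_memory_is_mappable(unsigned props)
{
    return (props & VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT) != 0;
}
```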
// Buffer and buffer allocation usage flags
-typedef enum _XGL_BUFFER_USAGE_FLAGS
-{
- XGL_BUFFER_USAGE_GENERAL = 0x00000000, // no special usage
- XGL_BUFFER_USAGE_SHADER_ACCESS_READ_BIT = 0x00000001, // Shader read (e.g. TBO, image buffer, UBO, SSBO)
- XGL_BUFFER_USAGE_SHADER_ACCESS_WRITE_BIT = 0x00000002, // Shader write (e.g. image buffer, SSBO)
- XGL_BUFFER_USAGE_SHADER_ACCESS_ATOMIC_BIT = 0x00000004, // Shader atomic operations (e.g. image buffer, SSBO)
- XGL_BUFFER_USAGE_TRANSFER_SOURCE_BIT = 0x00000008, // used as a source for copies
- XGL_BUFFER_USAGE_TRANSFER_DESTINATION_BIT = 0x00000010, // used as a destination for copies
- XGL_BUFFER_USAGE_UNIFORM_READ_BIT = 0x00000020, // Uniform read (UBO)
- XGL_BUFFER_USAGE_INDEX_FETCH_BIT = 0x00000040, // Fixed function index fetch (index buffer)
- XGL_BUFFER_USAGE_VERTEX_FETCH_BIT = 0x00000080, // Fixed function vertex fetch (VBO)
- XGL_BUFFER_USAGE_SHADER_STORAGE_BIT = 0x00000100, // Shader storage buffer (SSBO)
- XGL_BUFFER_USAGE_INDIRECT_PARAMETER_FETCH_BIT = 0x00000200, // Can be the source of indirect parameters (e.g. indirect buffer, parameter buffer)
- XGL_BUFFER_USAGE_TEXTURE_BUFFER_BIT = 0x00000400, // texture buffer (TBO)
- XGL_BUFFER_USAGE_IMAGE_BUFFER_BIT = 0x00000800, // image buffer (load/store)
- XGL_MAX_ENUM(_XGL_BUFFER_USAGE_FLAGS)
-} XGL_BUFFER_USAGE_FLAGS;
+typedef enum _VK_BUFFER_USAGE_FLAGS
+{
+ VK_BUFFER_USAGE_GENERAL = 0x00000000, // no special usage
+ VK_BUFFER_USAGE_SHADER_ACCESS_READ_BIT = 0x00000001, // Shader read (e.g. TBO, image buffer, UBO, SSBO)
+ VK_BUFFER_USAGE_SHADER_ACCESS_WRITE_BIT = 0x00000002, // Shader write (e.g. image buffer, SSBO)
+ VK_BUFFER_USAGE_SHADER_ACCESS_ATOMIC_BIT = 0x00000004, // Shader atomic operations (e.g. image buffer, SSBO)
+ VK_BUFFER_USAGE_TRANSFER_SOURCE_BIT = 0x00000008, // used as a source for copies
+ VK_BUFFER_USAGE_TRANSFER_DESTINATION_BIT = 0x00000010, // used as a destination for copies
+ VK_BUFFER_USAGE_UNIFORM_READ_BIT = 0x00000020, // Uniform read (UBO)
+ VK_BUFFER_USAGE_INDEX_FETCH_BIT = 0x00000040, // Fixed function index fetch (index buffer)
+ VK_BUFFER_USAGE_VERTEX_FETCH_BIT = 0x00000080, // Fixed function vertex fetch (VBO)
+ VK_BUFFER_USAGE_SHADER_STORAGE_BIT = 0x00000100, // Shader storage buffer (SSBO)
+ VK_BUFFER_USAGE_INDIRECT_PARAMETER_FETCH_BIT = 0x00000200, // Can be the source of indirect parameters (e.g. indirect buffer, parameter buffer)
+ VK_BUFFER_USAGE_TEXTURE_BUFFER_BIT = 0x00000400, // texture buffer (TBO)
+ VK_BUFFER_USAGE_IMAGE_BUFFER_BIT = 0x00000800, // image buffer (load/store)
+ VK_MAX_ENUM(_VK_BUFFER_USAGE_FLAGS)
+} VK_BUFFER_USAGE_FLAGS;
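The buffer usage values above are independent bits, so a buffer intended for multiple roles simply ORs them together, e.g. a vertex buffer that is also the destination of a staging copy. A standalone sketch mirroring two of the bits:

```c
#include <assert.h>

/* Hypothetical subset of VK_BUFFER_USAGE_FLAGS from the enum above. */
enum {
    VK_BUFFER_USAGE_TRANSFER_DESTINATION_BIT = 0x00000010,
    VK_BUFFER_USAGE_VERTEX_FETCH_BIT = 0x00000080
};

/* Usage mask for a device-local vertex buffer filled via a copy. */
static unsigned staging_vertex_usage(void)
{
    return VK_BUFFER_USAGE_TRANSFER_DESTINATION_BIT
         | VK_BUFFER_USAGE_VERTEX_FETCH_BIT;
}
```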
// Buffer flags
-typedef enum _XGL_BUFFER_CREATE_FLAGS
+typedef enum _VK_BUFFER_CREATE_FLAGS
{
- XGL_BUFFER_CREATE_SHAREABLE_BIT = 0x00000001,
- XGL_BUFFER_CREATE_SPARSE_BIT = 0x00000002,
- XGL_MAX_ENUM(_XGL_BUFFER_CREATE_FLAGS)
-} XGL_BUFFER_CREATE_FLAGS;
+ VK_BUFFER_CREATE_SHAREABLE_BIT = 0x00000001,
+ VK_BUFFER_CREATE_SPARSE_BIT = 0x00000002,
+ VK_MAX_ENUM(_VK_BUFFER_CREATE_FLAGS)
+} VK_BUFFER_CREATE_FLAGS;
-typedef enum _XGL_BUFFER_VIEW_TYPE
+typedef enum _VK_BUFFER_VIEW_TYPE
{
- XGL_BUFFER_VIEW_RAW = 0x00000000, // Raw buffer without special structure (e.g. UBO, SSBO, indirect and parameter buffers)
- XGL_BUFFER_VIEW_TYPED = 0x00000001, // Typed buffer, format and channels are used (TBO, image buffer)
+ VK_BUFFER_VIEW_RAW = 0x00000000, // Raw buffer without special structure (e.g. UBO, SSBO, indirect and parameter buffers)
+ VK_BUFFER_VIEW_TYPED = 0x00000001, // Typed buffer, format and channels are used (TBO, image buffer)
- XGL_BUFFER_VIEW_TYPE_BEGIN_RANGE = XGL_BUFFER_VIEW_RAW,
- XGL_BUFFER_VIEW_TYPE_END_RANGE = XGL_BUFFER_VIEW_TYPED,
- XGL_NUM_BUFFER_VIEW_TYPE = (XGL_BUFFER_VIEW_TYPE_END_RANGE - XGL_BUFFER_VIEW_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_BUFFER_VIEW_TYPE)
-} XGL_BUFFER_VIEW_TYPE;
+ VK_BUFFER_VIEW_TYPE_BEGIN_RANGE = VK_BUFFER_VIEW_RAW,
+ VK_BUFFER_VIEW_TYPE_END_RANGE = VK_BUFFER_VIEW_TYPED,
+ VK_NUM_BUFFER_VIEW_TYPE = (VK_BUFFER_VIEW_TYPE_END_RANGE - VK_BUFFER_VIEW_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_BUFFER_VIEW_TYPE)
+} VK_BUFFER_VIEW_TYPE;
// Image memory allocations can be used for resources of a given format class.
-typedef enum _XGL_IMAGE_FORMAT_CLASS
-{
- XGL_IMAGE_FORMAT_CLASS_128_BITS = 1, // color formats
- XGL_IMAGE_FORMAT_CLASS_96_BITS = 2,
- XGL_IMAGE_FORMAT_CLASS_64_BITS = 3,
- XGL_IMAGE_FORMAT_CLASS_48_BITS = 4,
- XGL_IMAGE_FORMAT_CLASS_32_BITS = 5,
- XGL_IMAGE_FORMAT_CLASS_24_BITS = 6,
- XGL_IMAGE_FORMAT_CLASS_16_BITS = 7,
- XGL_IMAGE_FORMAT_CLASS_8_BITS = 8,
- XGL_IMAGE_FORMAT_CLASS_128_BIT_BLOCK = 9, // 128-bit block compressed formats
- XGL_IMAGE_FORMAT_CLASS_64_BIT_BLOCK = 10, // 64-bit block compressed formats
- XGL_IMAGE_FORMAT_CLASS_D32 = 11, // D32_SFLOAT
- XGL_IMAGE_FORMAT_CLASS_D24 = 12, // D24_UNORM
- XGL_IMAGE_FORMAT_CLASS_D16 = 13, // D16_UNORM
- XGL_IMAGE_FORMAT_CLASS_S8 = 14, // S8_UINT
- XGL_IMAGE_FORMAT_CLASS_D32S8 = 15, // D32_SFLOAT_S8_UINT
- XGL_IMAGE_FORMAT_CLASS_D24S8 = 16, // D24_UNORM_S8_UINT
- XGL_IMAGE_FORMAT_CLASS_D16S8 = 17, // D16_UNORM_S8_UINT
- XGL_IMAGE_FORMAT_CLASS_LINEAR = 18, // used for pitch-linear (transparent) textures
-
- XGL_IMAGE_FORMAT_CLASS_BEGIN_RANGE = XGL_IMAGE_FORMAT_CLASS_128_BITS,
- XGL_IMAGE_FORMAT_CLASS_END_RANGE = XGL_IMAGE_FORMAT_CLASS_LINEAR,
- XGL_NUM_IMAGE_FORMAT_CLASS = (XGL_IMAGE_FORMAT_CLASS_END_RANGE - XGL_IMAGE_FORMAT_CLASS_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_IMAGE_FORMAT_CLASS)
-} XGL_IMAGE_FORMAT_CLASS;
+typedef enum _VK_IMAGE_FORMAT_CLASS
+{
+ VK_IMAGE_FORMAT_CLASS_128_BITS = 1, // color formats
+ VK_IMAGE_FORMAT_CLASS_96_BITS = 2,
+ VK_IMAGE_FORMAT_CLASS_64_BITS = 3,
+ VK_IMAGE_FORMAT_CLASS_48_BITS = 4,
+ VK_IMAGE_FORMAT_CLASS_32_BITS = 5,
+ VK_IMAGE_FORMAT_CLASS_24_BITS = 6,
+ VK_IMAGE_FORMAT_CLASS_16_BITS = 7,
+ VK_IMAGE_FORMAT_CLASS_8_BITS = 8,
+ VK_IMAGE_FORMAT_CLASS_128_BIT_BLOCK = 9, // 128-bit block compressed formats
+ VK_IMAGE_FORMAT_CLASS_64_BIT_BLOCK = 10, // 64-bit block compressed formats
+ VK_IMAGE_FORMAT_CLASS_D32 = 11, // D32_SFLOAT
+ VK_IMAGE_FORMAT_CLASS_D24 = 12, // D24_UNORM
+ VK_IMAGE_FORMAT_CLASS_D16 = 13, // D16_UNORM
+ VK_IMAGE_FORMAT_CLASS_S8 = 14, // S8_UINT
+ VK_IMAGE_FORMAT_CLASS_D32S8 = 15, // D32_SFLOAT_S8_UINT
+ VK_IMAGE_FORMAT_CLASS_D24S8 = 16, // D24_UNORM_S8_UINT
+ VK_IMAGE_FORMAT_CLASS_D16S8 = 17, // D16_UNORM_S8_UINT
+ VK_IMAGE_FORMAT_CLASS_LINEAR = 18, // used for pitch-linear (transparent) textures
+
+ VK_IMAGE_FORMAT_CLASS_BEGIN_RANGE = VK_IMAGE_FORMAT_CLASS_128_BITS,
+ VK_IMAGE_FORMAT_CLASS_END_RANGE = VK_IMAGE_FORMAT_CLASS_LINEAR,
+ VK_NUM_IMAGE_FORMAT_CLASS = (VK_IMAGE_FORMAT_CLASS_END_RANGE - VK_IMAGE_FORMAT_CLASS_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_IMAGE_FORMAT_CLASS)
+} VK_IMAGE_FORMAT_CLASS;
// Image and image allocation usage flags
-typedef enum _XGL_IMAGE_USAGE_FLAGS
-{
- XGL_IMAGE_USAGE_GENERAL = 0x00000000, // no special usage
- XGL_IMAGE_USAGE_SHADER_ACCESS_READ_BIT = 0x00000001, // shader read (e.g. texture, image)
- XGL_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT = 0x00000002, // shader write (e.g. image)
- XGL_IMAGE_USAGE_SHADER_ACCESS_ATOMIC_BIT = 0x00000004, // shader atomic operations (e.g. image)
- XGL_IMAGE_USAGE_TRANSFER_SOURCE_BIT = 0x00000008, // used as a source for copies
- XGL_IMAGE_USAGE_TRANSFER_DESTINATION_BIT = 0x00000010, // used as a destination for copies
- XGL_IMAGE_USAGE_TEXTURE_BIT = 0x00000020, // opaque texture (2d, 3d, etc.)
- XGL_IMAGE_USAGE_IMAGE_BIT = 0x00000040, // opaque image (2d, 3d, etc.)
- XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT = 0x00000080, // framebuffer color attachment
- XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT = 0x00000100, // framebuffer depth/stencil
- XGL_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT = 0x00000200, // image data not needed outside of rendering.
- XGL_MAX_ENUM(_XGL_IMAGE_USAGE_FLAGS)
-} XGL_IMAGE_USAGE_FLAGS;
+typedef enum _VK_IMAGE_USAGE_FLAGS
+{
+ VK_IMAGE_USAGE_GENERAL = 0x00000000, // no special usage
+ VK_IMAGE_USAGE_SHADER_ACCESS_READ_BIT = 0x00000001, // shader read (e.g. texture, image)
+ VK_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT = 0x00000002, // shader write (e.g. image)
+ VK_IMAGE_USAGE_SHADER_ACCESS_ATOMIC_BIT = 0x00000004, // shader atomic operations (e.g. image)
+ VK_IMAGE_USAGE_TRANSFER_SOURCE_BIT = 0x00000008, // used as a source for copies
+ VK_IMAGE_USAGE_TRANSFER_DESTINATION_BIT = 0x00000010, // used as a destination for copies
+ VK_IMAGE_USAGE_TEXTURE_BIT = 0x00000020, // opaque texture (2d, 3d, etc.)
+ VK_IMAGE_USAGE_IMAGE_BIT = 0x00000040, // opaque image (2d, 3d, etc.)
+ VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT = 0x00000080, // framebuffer color attachment
+ VK_IMAGE_USAGE_DEPTH_STENCIL_BIT = 0x00000100, // framebuffer depth/stencil
+ VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT = 0x00000200, // image data not needed outside of rendering.
+ VK_MAX_ENUM(_VK_IMAGE_USAGE_FLAGS)
+} VK_IMAGE_USAGE_FLAGS;
// Image flags
-typedef enum _XGL_IMAGE_CREATE_FLAGS
+typedef enum _VK_IMAGE_CREATE_FLAGS
{
- XGL_IMAGE_CREATE_INVARIANT_DATA_BIT = 0x00000001,
- XGL_IMAGE_CREATE_CLONEABLE_BIT = 0x00000002,
- XGL_IMAGE_CREATE_SHAREABLE_BIT = 0x00000004,
- XGL_IMAGE_CREATE_SPARSE_BIT = 0x00000008,
- XGL_IMAGE_CREATE_MUTABLE_FORMAT_BIT = 0x00000010, // Allows image views to have different format than the base image
- XGL_MAX_ENUM(_XGL_IMAGE_CREATE_FLAGS)
-} XGL_IMAGE_CREATE_FLAGS;
+ VK_IMAGE_CREATE_INVARIANT_DATA_BIT = 0x00000001,
+ VK_IMAGE_CREATE_CLONEABLE_BIT = 0x00000002,
+ VK_IMAGE_CREATE_SHAREABLE_BIT = 0x00000004,
+ VK_IMAGE_CREATE_SPARSE_BIT = 0x00000008,
+ VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT = 0x00000010, // Allows image views to have a different format than the base image
+ VK_MAX_ENUM(_VK_IMAGE_CREATE_FLAGS)
+} VK_IMAGE_CREATE_FLAGS;
// Depth-stencil view creation flags
-typedef enum _XGL_DEPTH_STENCIL_VIEW_CREATE_FLAGS
+typedef enum _VK_DEPTH_STENCIL_VIEW_CREATE_FLAGS
{
- XGL_DEPTH_STENCIL_VIEW_CREATE_READ_ONLY_DEPTH_BIT = 0x00000001,
- XGL_DEPTH_STENCIL_VIEW_CREATE_READ_ONLY_STENCIL_BIT = 0x00000002,
- XGL_MAX_ENUM(_XGL_DEPTH_STENCIL_VIEW_CREATE_FLAGS)
-} XGL_DEPTH_STENCIL_VIEW_CREATE_FLAGS;
+ VK_DEPTH_STENCIL_VIEW_CREATE_READ_ONLY_DEPTH_BIT = 0x00000001,
+ VK_DEPTH_STENCIL_VIEW_CREATE_READ_ONLY_STENCIL_BIT = 0x00000002,
+ VK_MAX_ENUM(_VK_DEPTH_STENCIL_VIEW_CREATE_FLAGS)
+} VK_DEPTH_STENCIL_VIEW_CREATE_FLAGS;
// Pipeline creation flags
-typedef enum _XGL_PIPELINE_CREATE_FLAGS
+typedef enum _VK_PIPELINE_CREATE_FLAGS
{
- XGL_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT = 0x00000001,
- XGL_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT = 0x00000002,
- XGL_MAX_ENUM(_XGL_PIPELINE_CREATE_FLAGS)
-} XGL_PIPELINE_CREATE_FLAGS;
+ VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT = 0x00000001,
+ VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT = 0x00000002,
+ VK_MAX_ENUM(_VK_PIPELINE_CREATE_FLAGS)
+} VK_PIPELINE_CREATE_FLAGS;
// Fence creation flags
-typedef enum _XGL_FENCE_CREATE_FLAGS
+typedef enum _VK_FENCE_CREATE_FLAGS
{
- XGL_FENCE_CREATE_SIGNALED_BIT = 0x00000001,
- XGL_MAX_ENUM(_XGL_FENCE_CREATE_FLAGS)
-} XGL_FENCE_CREATE_FLAGS;
+ VK_FENCE_CREATE_SIGNALED_BIT = 0x00000001,
+ VK_MAX_ENUM(_VK_FENCE_CREATE_FLAGS)
+} VK_FENCE_CREATE_FLAGS;
// Semaphore creation flags
-typedef enum _XGL_SEMAPHORE_CREATE_FLAGS
+typedef enum _VK_SEMAPHORE_CREATE_FLAGS
{
- XGL_SEMAPHORE_CREATE_SHAREABLE_BIT = 0x00000001,
- XGL_MAX_ENUM(_XGL_SEMAPHORE_CREATE_FLAGS)
-} XGL_SEMAPHORE_CREATE_FLAGS;
+ VK_SEMAPHORE_CREATE_SHAREABLE_BIT = 0x00000001,
+ VK_MAX_ENUM(_VK_SEMAPHORE_CREATE_FLAGS)
+} VK_SEMAPHORE_CREATE_FLAGS;
// Format capability flags
-typedef enum _XGL_FORMAT_FEATURE_FLAGS
-{
- XGL_FORMAT_IMAGE_SHADER_READ_BIT = 0x00000001,
- XGL_FORMAT_IMAGE_SHADER_WRITE_BIT = 0x00000002,
- XGL_FORMAT_IMAGE_COPY_BIT = 0x00000004,
- XGL_FORMAT_MEMORY_SHADER_ACCESS_BIT = 0x00000008,
- XGL_FORMAT_COLOR_ATTACHMENT_WRITE_BIT = 0x00000010,
- XGL_FORMAT_COLOR_ATTACHMENT_BLEND_BIT = 0x00000020,
- XGL_FORMAT_DEPTH_ATTACHMENT_BIT = 0x00000040,
- XGL_FORMAT_STENCIL_ATTACHMENT_BIT = 0x00000080,
- XGL_FORMAT_MSAA_ATTACHMENT_BIT = 0x00000100,
- XGL_FORMAT_CONVERSION_BIT = 0x00000200,
- XGL_MAX_ENUM(_XGL_FORMAT_FEATURE_FLAGS)
-} XGL_FORMAT_FEATURE_FLAGS;
+typedef enum _VK_FORMAT_FEATURE_FLAGS
+{
+ VK_FORMAT_IMAGE_SHADER_READ_BIT = 0x00000001,
+ VK_FORMAT_IMAGE_SHADER_WRITE_BIT = 0x00000002,
+ VK_FORMAT_IMAGE_COPY_BIT = 0x00000004,
+ VK_FORMAT_MEMORY_SHADER_ACCESS_BIT = 0x00000008,
+ VK_FORMAT_COLOR_ATTACHMENT_WRITE_BIT = 0x00000010,
+ VK_FORMAT_COLOR_ATTACHMENT_BLEND_BIT = 0x00000020,
+ VK_FORMAT_DEPTH_ATTACHMENT_BIT = 0x00000040,
+ VK_FORMAT_STENCIL_ATTACHMENT_BIT = 0x00000080,
+ VK_FORMAT_MSAA_ATTACHMENT_BIT = 0x00000100,
+ VK_FORMAT_CONVERSION_BIT = 0x00000200,
+ VK_MAX_ENUM(_VK_FORMAT_FEATURE_FLAGS)
+} VK_FORMAT_FEATURE_FLAGS;
// Query flags
-typedef enum _XGL_QUERY_CONTROL_FLAGS
+typedef enum _VK_QUERY_CONTROL_FLAGS
{
- XGL_QUERY_IMPRECISE_DATA_BIT = 0x00000001,
- XGL_MAX_ENUM(_XGL_QUERY_CONTROL_FLAGS)
-} XGL_QUERY_CONTROL_FLAGS;
+ VK_QUERY_IMPRECISE_DATA_BIT = 0x00000001,
+ VK_MAX_ENUM(_VK_QUERY_CONTROL_FLAGS)
+} VK_QUERY_CONTROL_FLAGS;
// GPU compatibility flags
-typedef enum _XGL_GPU_COMPATIBILITY_FLAGS
-{
- XGL_GPU_COMPAT_ASIC_FEATURES_BIT = 0x00000001,
- XGL_GPU_COMPAT_IQ_MATCH_BIT = 0x00000002,
- XGL_GPU_COMPAT_PEER_TRANSFER_BIT = 0x00000004,
- XGL_GPU_COMPAT_SHARED_MEMORY_BIT = 0x00000008,
- XGL_GPU_COMPAT_SHARED_SYNC_BIT = 0x00000010,
- XGL_GPU_COMPAT_SHARED_GPU0_DISPLAY_BIT = 0x00000020,
- XGL_GPU_COMPAT_SHARED_GPU1_DISPLAY_BIT = 0x00000040,
- XGL_MAX_ENUM(_XGL_GPU_COMPATIBILITY_FLAGS)
-} XGL_GPU_COMPATIBILITY_FLAGS;
+typedef enum _VK_GPU_COMPATIBILITY_FLAGS
+{
+ VK_GPU_COMPAT_ASIC_FEATURES_BIT = 0x00000001,
+ VK_GPU_COMPAT_IQ_MATCH_BIT = 0x00000002,
+ VK_GPU_COMPAT_PEER_TRANSFER_BIT = 0x00000004,
+ VK_GPU_COMPAT_SHARED_MEMORY_BIT = 0x00000008,
+ VK_GPU_COMPAT_SHARED_SYNC_BIT = 0x00000010,
+ VK_GPU_COMPAT_SHARED_GPU0_DISPLAY_BIT = 0x00000020,
+ VK_GPU_COMPAT_SHARED_GPU1_DISPLAY_BIT = 0x00000040,
+ VK_MAX_ENUM(_VK_GPU_COMPATIBILITY_FLAGS)
+} VK_GPU_COMPATIBILITY_FLAGS;
// Command buffer building flags
-typedef enum _XGL_CMD_BUFFER_BUILD_FLAGS
+typedef enum _VK_CMD_BUFFER_BUILD_FLAGS
{
- XGL_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT = 0x00000001,
- XGL_CMD_BUFFER_OPTIMIZE_PIPELINE_SWITCH_BIT = 0x00000002,
- XGL_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT = 0x00000004,
- XGL_CMD_BUFFER_OPTIMIZE_DESCRIPTOR_SET_SWITCH_BIT = 0x00000008,
- XGL_MAX_ENUM(_XGL_CMD_BUFFER_BUILD_FLAGS)
-} XGL_CMD_BUFFER_BUILD_FLAGS;
+ VK_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT = 0x00000001,
+ VK_CMD_BUFFER_OPTIMIZE_PIPELINE_SWITCH_BIT = 0x00000002,
+ VK_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT = 0x00000004,
+ VK_CMD_BUFFER_OPTIMIZE_DESCRIPTOR_SET_SWITCH_BIT = 0x00000008,
+ VK_MAX_ENUM(_VK_CMD_BUFFER_BUILD_FLAGS)
+} VK_CMD_BUFFER_BUILD_FLAGS;
// ------------------------------------------------------------------------------------------------
-// XGL structures
+// VK structures
-typedef struct _XGL_OFFSET2D
+typedef struct _VK_OFFSET2D
{
int32_t x;
int32_t y;
-} XGL_OFFSET2D;
+} VK_OFFSET2D;
-typedef struct _XGL_OFFSET3D
+typedef struct _VK_OFFSET3D
{
int32_t x;
int32_t y;
int32_t z;
-} XGL_OFFSET3D;
+} VK_OFFSET3D;
-typedef struct _XGL_EXTENT2D
+typedef struct _VK_EXTENT2D
{
int32_t width;
int32_t height;
-} XGL_EXTENT2D;
+} VK_EXTENT2D;
-typedef struct _XGL_EXTENT3D
+typedef struct _VK_EXTENT3D
{
int32_t width;
int32_t height;
int32_t depth;
-} XGL_EXTENT3D;
+} VK_EXTENT3D;
-typedef struct _XGL_VIEWPORT
+typedef struct _VK_VIEWPORT
{
float originX;
float originY;
float width;
float height;
float minDepth;
float maxDepth;
-} XGL_VIEWPORT;
+} VK_VIEWPORT;
-typedef struct _XGL_RECT
+typedef struct _VK_RECT
{
- XGL_OFFSET2D offset;
- XGL_EXTENT2D extent;
-} XGL_RECT;
+ VK_OFFSET2D offset;
+ VK_EXTENT2D extent;
+} VK_RECT;
-typedef struct _XGL_CHANNEL_MAPPING
+typedef struct _VK_CHANNEL_MAPPING
{
- XGL_CHANNEL_SWIZZLE r;
- XGL_CHANNEL_SWIZZLE g;
- XGL_CHANNEL_SWIZZLE b;
- XGL_CHANNEL_SWIZZLE a;
-} XGL_CHANNEL_MAPPING;
+ VK_CHANNEL_SWIZZLE r;
+ VK_CHANNEL_SWIZZLE g;
+ VK_CHANNEL_SWIZZLE b;
+ VK_CHANNEL_SWIZZLE a;
+} VK_CHANNEL_MAPPING;
-typedef struct _XGL_PHYSICAL_GPU_PROPERTIES
+typedef struct _VK_PHYSICAL_GPU_PROPERTIES
{
uint32_t apiVersion;
uint32_t driverVersion;
uint32_t vendorId;
uint32_t deviceId;
- XGL_PHYSICAL_GPU_TYPE gpuType;
- char gpuName[XGL_MAX_PHYSICAL_GPU_NAME];
- XGL_GPU_SIZE maxInlineMemoryUpdateSize;
+ VK_PHYSICAL_GPU_TYPE gpuType;
+ char gpuName[VK_MAX_PHYSICAL_GPU_NAME];
+ VK_GPU_SIZE maxInlineMemoryUpdateSize;
uint32_t maxBoundDescriptorSets;
uint32_t maxThreadGroupSize;
uint64_t timestampFrequency;
uint32_t maxDescriptorSets; // at least 2?
uint32_t maxViewports; // at least 16?
uint32_t maxColorAttachments; // at least 8?
-} XGL_PHYSICAL_GPU_PROPERTIES;
+} VK_PHYSICAL_GPU_PROPERTIES;
-typedef struct _XGL_PHYSICAL_GPU_PERFORMANCE
+typedef struct _VK_PHYSICAL_GPU_PERFORMANCE
{
float maxGpuClock;
float aluPerClock;
float texPerClock;
float primsPerClock;
float pixelsPerClock;
-} XGL_PHYSICAL_GPU_PERFORMANCE;
+} VK_PHYSICAL_GPU_PERFORMANCE;
-typedef struct _XGL_GPU_COMPATIBILITY_INFO
+typedef struct _VK_GPU_COMPATIBILITY_INFO
{
- XGL_FLAGS compatibilityFlags; // XGL_GPU_COMPATIBILITY_FLAGS
-} XGL_GPU_COMPATIBILITY_INFO;
+ VK_FLAGS compatibilityFlags; // VK_GPU_COMPATIBILITY_FLAGS
+} VK_GPU_COMPATIBILITY_INFO;
-typedef struct _XGL_APPLICATION_INFO
+typedef struct _VK_APPLICATION_INFO
{
- XGL_STRUCTURE_TYPE sType; // Type of structure. Should be XGL_STRUCTURE_TYPE_APPLICATION_INFO
+ VK_STRUCTURE_TYPE sType; // Type of structure. Should be VK_STRUCTURE_TYPE_APPLICATION_INFO
const void* pNext; // Next structure in chain
const char* pAppName;
uint32_t appVersion;
const char* pEngineName;
uint32_t engineVersion;
uint32_t apiVersion;
-} XGL_APPLICATION_INFO;
+} VK_APPLICATION_INFO;
-typedef void* (XGLAPI *XGL_ALLOC_FUNCTION)(
+typedef void* (VKAPI *VK_ALLOC_FUNCTION)(
void* pUserData,
size_t size,
size_t alignment,
- XGL_SYSTEM_ALLOC_TYPE allocType);
+ VK_SYSTEM_ALLOC_TYPE allocType);
-typedef void (XGLAPI *XGL_FREE_FUNCTION)(
+typedef void (VKAPI *VK_FREE_FUNCTION)(
void* pUserData,
void* pMem);
-typedef struct _XGL_ALLOC_CALLBACKS
+typedef struct _VK_ALLOC_CALLBACKS
{
void* pUserData;
- XGL_ALLOC_FUNCTION pfnAlloc;
- XGL_FREE_FUNCTION pfnFree;
-} XGL_ALLOC_CALLBACKS;
+ VK_ALLOC_FUNCTION pfnAlloc;
+ VK_FREE_FUNCTION pfnFree;
+} VK_ALLOC_CALLBACKS;
-typedef struct _XGL_DEVICE_QUEUE_CREATE_INFO
+typedef struct _VK_DEVICE_QUEUE_CREATE_INFO
{
uint32_t queueNodeIndex;
uint32_t queueCount;
-} XGL_DEVICE_QUEUE_CREATE_INFO;
+} VK_DEVICE_QUEUE_CREATE_INFO;
-typedef struct _XGL_DEVICE_CREATE_INFO
+typedef struct _VK_DEVICE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Should be XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Should be VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t queueRecordCount;
- const XGL_DEVICE_QUEUE_CREATE_INFO* pRequestedQueues;
+ const VK_DEVICE_QUEUE_CREATE_INFO* pRequestedQueues;
uint32_t extensionCount;
const char*const* ppEnabledExtensionNames;
- XGL_VALIDATION_LEVEL maxValidationLevel;
- XGL_FLAGS flags; // XGL_DEVICE_CREATE_FLAGS
-} XGL_DEVICE_CREATE_INFO;
+ VK_VALIDATION_LEVEL maxValidationLevel;
+ VK_FLAGS flags; // VK_DEVICE_CREATE_FLAGS
+} VK_DEVICE_CREATE_INFO;
-typedef struct _XGL_INSTANCE_CREATE_INFO
+typedef struct _VK_INSTANCE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Should be XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Should be VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO
const void* pNext; // Pointer to next structure
- const XGL_APPLICATION_INFO* pAppInfo;
- const XGL_ALLOC_CALLBACKS* pAllocCb;
+ const VK_APPLICATION_INFO* pAppInfo;
+ const VK_ALLOC_CALLBACKS* pAllocCb;
uint32_t extensionCount;
const char*const* ppEnabledExtensionNames; // layer or extension name to be enabled
-} XGL_INSTANCE_CREATE_INFO;
+} VK_INSTANCE_CREATE_INFO;
-// can be added to XGL_DEVICE_CREATE_INFO or XGL_INSTANCE_CREATE_INFO via pNext
-typedef struct _XGL_LAYER_CREATE_INFO
+// can be added to VK_DEVICE_CREATE_INFO or VK_INSTANCE_CREATE_INFO via pNext
+typedef struct _VK_LAYER_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Should be XGL_STRUCTURE_TYPE_LAYER_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Should be VK_STRUCTURE_TYPE_LAYER_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t layerCount;
- const char *const* ppActiveLayerNames; // layer name from the layer's xglEnumerateLayers())
-} XGL_LAYER_CREATE_INFO;
+ const char *const* ppActiveLayerNames; // layer name from the layer's vkEnumerateLayers())
+} VK_LAYER_CREATE_INFO;
-typedef struct _XGL_PHYSICAL_GPU_QUEUE_PROPERTIES
+typedef struct _VK_PHYSICAL_GPU_QUEUE_PROPERTIES
{
- XGL_FLAGS queueFlags; // XGL_QUEUE_FLAGS
+ VK_FLAGS queueFlags; // VK_QUEUE_FLAGS
uint32_t queueCount;
uint32_t maxAtomicCounters;
bool32_t supportsTimestamps;
uint32_t maxMemReferences; // Tells how many memory references can be active for the given queue
-} XGL_PHYSICAL_GPU_QUEUE_PROPERTIES;
+} VK_PHYSICAL_GPU_QUEUE_PROPERTIES;
-typedef struct _XGL_PHYSICAL_GPU_MEMORY_PROPERTIES
+typedef struct _VK_PHYSICAL_GPU_MEMORY_PROPERTIES
{
bool32_t supportsMigration;
bool32_t supportsPinning;
-} XGL_PHYSICAL_GPU_MEMORY_PROPERTIES;
+} VK_PHYSICAL_GPU_MEMORY_PROPERTIES;
-typedef struct _XGL_MEMORY_ALLOC_INFO
+typedef struct _VK_MEMORY_ALLOC_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO
const void* pNext; // Pointer to next structure
- XGL_GPU_SIZE allocationSize; // Size of memory allocation
- XGL_FLAGS memProps; // XGL_MEMORY_PROPERTY_FLAGS
- XGL_MEMORY_TYPE memType;
- XGL_MEMORY_PRIORITY memPriority;
-} XGL_MEMORY_ALLOC_INFO;
+ VK_GPU_SIZE allocationSize; // Size of memory allocation
+ VK_FLAGS memProps; // VK_MEMORY_PROPERTY_FLAGS
+ VK_MEMORY_TYPE memType;
+ VK_MEMORY_PRIORITY memPriority;
+} VK_MEMORY_ALLOC_INFO;
-// This structure is included in the XGL_MEMORY_ALLOC_INFO chain
+// This structure is included in the VK_MEMORY_ALLOC_INFO chain
// for memory regions allocated for buffer usage.
-typedef struct _XGL_MEMORY_ALLOC_BUFFER_INFO
+typedef struct _VK_MEMORY_ALLOC_BUFFER_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO
const void* pNext; // Pointer to next structure
- XGL_FLAGS usage; // XGL_BUFFER_USAGE_FLAGS
-} XGL_MEMORY_ALLOC_BUFFER_INFO;
+ VK_FLAGS usage; // VK_BUFFER_USAGE_FLAGS
+} VK_MEMORY_ALLOC_BUFFER_INFO;
-// This structure is included in the XGL_MEMORY_ALLOC_INFO chain
+// This structure is included in the VK_MEMORY_ALLOC_INFO chain
// for memory regions allocated for image usage.
-typedef struct _XGL_MEMORY_ALLOC_IMAGE_INFO
+typedef struct _VK_MEMORY_ALLOC_IMAGE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO
const void* pNext; // Pointer to next structure
- XGL_FLAGS usage; // XGL_IMAGE_USAGE_FLAGS
- XGL_IMAGE_FORMAT_CLASS formatClass;
+ VK_FLAGS usage; // VK_IMAGE_USAGE_FLAGS
+ VK_IMAGE_FORMAT_CLASS formatClass;
uint32_t samples;
-} XGL_MEMORY_ALLOC_IMAGE_INFO;
+} VK_MEMORY_ALLOC_IMAGE_INFO;
-typedef struct _XGL_MEMORY_OPEN_INFO
+typedef struct _VK_MEMORY_OPEN_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_MEMORY_OPEN_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_MEMORY_OPEN_INFO
const void* pNext; // Pointer to next structure
- XGL_GPU_MEMORY sharedMem;
-} XGL_MEMORY_OPEN_INFO;
+ VK_GPU_MEMORY sharedMem;
+} VK_MEMORY_OPEN_INFO;
-typedef struct _XGL_PEER_MEMORY_OPEN_INFO
+typedef struct _VK_PEER_MEMORY_OPEN_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PEER_MEMORY_OPEN_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PEER_MEMORY_OPEN_INFO
const void* pNext; // Pointer to next structure
- XGL_GPU_MEMORY originalMem;
-} XGL_PEER_MEMORY_OPEN_INFO;
+ VK_GPU_MEMORY originalMem;
+} VK_PEER_MEMORY_OPEN_INFO;
-typedef struct _XGL_MEMORY_REQUIREMENTS
+typedef struct _VK_MEMORY_REQUIREMENTS
{
- XGL_GPU_SIZE size; // Specified in bytes
- XGL_GPU_SIZE alignment; // Specified in bytes
- XGL_GPU_SIZE granularity; // Granularity on which xglBindObjectMemoryRange can bind sub-ranges of memory specified in bytes (usually the page size)
- XGL_FLAGS memProps; // XGL_MEMORY_PROPERTY_FLAGS
- XGL_MEMORY_TYPE memType;
-} XGL_MEMORY_REQUIREMENTS;
+ VK_GPU_SIZE size; // Specified in bytes
+ VK_GPU_SIZE alignment; // Specified in bytes
+ VK_GPU_SIZE granularity; // Granularity on which vkBindObjectMemoryRange can bind sub-ranges of memory specified in bytes (usually the page size)
+ VK_FLAGS memProps; // VK_MEMORY_PROPERTY_FLAGS
+ VK_MEMORY_TYPE memType;
+} VK_MEMORY_REQUIREMENTS;
-typedef struct _XGL_BUFFER_MEMORY_REQUIREMENTS
+typedef struct _VK_BUFFER_MEMORY_REQUIREMENTS
{
- XGL_FLAGS usage; // XGL_BUFFER_USAGE_FLAGS
-} XGL_BUFFER_MEMORY_REQUIREMENTS;
+ VK_FLAGS usage; // VK_BUFFER_USAGE_FLAGS
+} VK_BUFFER_MEMORY_REQUIREMENTS;
-typedef struct _XGL_IMAGE_MEMORY_REQUIREMENTS
+typedef struct _VK_IMAGE_MEMORY_REQUIREMENTS
{
- XGL_FLAGS usage; // XGL_IMAGE_USAGE_FLAGS
- XGL_IMAGE_FORMAT_CLASS formatClass;
+ VK_FLAGS usage; // VK_IMAGE_USAGE_FLAGS
+ VK_IMAGE_FORMAT_CLASS formatClass;
uint32_t samples;
-} XGL_IMAGE_MEMORY_REQUIREMENTS;
+} VK_IMAGE_MEMORY_REQUIREMENTS;
-typedef struct _XGL_FORMAT_PROPERTIES
+typedef struct _VK_FORMAT_PROPERTIES
{
- XGL_FLAGS linearTilingFeatures; // XGL_FORMAT_FEATURE_FLAGS
- XGL_FLAGS optimalTilingFeatures; // XGL_FORMAT_FEATURE_FLAGS
-} XGL_FORMAT_PROPERTIES;
+ VK_FLAGS linearTilingFeatures; // VK_FORMAT_FEATURE_FLAGS
+ VK_FLAGS optimalTilingFeatures; // VK_FORMAT_FEATURE_FLAGS
+} VK_FORMAT_PROPERTIES;
-typedef struct _XGL_BUFFER_VIEW_ATTACH_INFO
+typedef struct _VK_BUFFER_VIEW_ATTACH_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO
const void* pNext; // Pointer to next structure
- XGL_BUFFER_VIEW view;
-} XGL_BUFFER_VIEW_ATTACH_INFO;
+ VK_BUFFER_VIEW view;
+} VK_BUFFER_VIEW_ATTACH_INFO;
-typedef struct _XGL_IMAGE_VIEW_ATTACH_INFO
+typedef struct _VK_IMAGE_VIEW_ATTACH_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO
const void* pNext; // Pointer to next structure
- XGL_IMAGE_VIEW view;
- XGL_IMAGE_LAYOUT layout;
-} XGL_IMAGE_VIEW_ATTACH_INFO;
+ VK_IMAGE_VIEW view;
+ VK_IMAGE_LAYOUT layout;
+} VK_IMAGE_VIEW_ATTACH_INFO;
-typedef struct _XGL_UPDATE_SAMPLERS
+typedef struct _VK_UPDATE_SAMPLERS
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_UPDATE_SAMPLERS
const void* pNext; // Pointer to next structure
uint32_t binding; // Binding of the sampler (array)
uint32_t arrayIndex; // First element of the array to update or zero otherwise
uint32_t count; // Number of elements to update
- const XGL_SAMPLER* pSamplers;
-} XGL_UPDATE_SAMPLERS;
+ const VK_SAMPLER* pSamplers;
+} VK_UPDATE_SAMPLERS;
-typedef struct _XGL_SAMPLER_IMAGE_VIEW_INFO
+typedef struct _VK_SAMPLER_IMAGE_VIEW_INFO
{
- XGL_SAMPLER sampler;
- const XGL_IMAGE_VIEW_ATTACH_INFO* pImageView;
-} XGL_SAMPLER_IMAGE_VIEW_INFO;
+ VK_SAMPLER sampler;
+ const VK_IMAGE_VIEW_ATTACH_INFO* pImageView;
+} VK_SAMPLER_IMAGE_VIEW_INFO;
-typedef struct _XGL_UPDATE_SAMPLER_TEXTURES
+typedef struct _VK_UPDATE_SAMPLER_TEXTURES
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES
const void* pNext; // Pointer to next structure
uint32_t binding; // Binding of the combined texture sampler (array)
uint32_t arrayIndex; // First element of the array to update or zero otherwise
uint32_t count; // Number of elements to update
- const XGL_SAMPLER_IMAGE_VIEW_INFO* pSamplerImageViews;
-} XGL_UPDATE_SAMPLER_TEXTURES;
+ const VK_SAMPLER_IMAGE_VIEW_INFO* pSamplerImageViews;
+} VK_UPDATE_SAMPLER_TEXTURES;
-typedef struct _XGL_UPDATE_IMAGES
+typedef struct _VK_UPDATE_IMAGES
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_UPDATE_IMAGES
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_UPDATE_IMAGES
const void* pNext; // Pointer to next structure
- XGL_DESCRIPTOR_TYPE descriptorType;
+ VK_DESCRIPTOR_TYPE descriptorType;
uint32_t binding; // Binding of the image (array)
uint32_t arrayIndex; // First element of the array to update or zero otherwise
uint32_t count; // Number of elements to update
- const XGL_IMAGE_VIEW_ATTACH_INFO* pImageViews;
-} XGL_UPDATE_IMAGES;
+ const VK_IMAGE_VIEW_ATTACH_INFO* pImageViews;
+} VK_UPDATE_IMAGES;
-typedef struct _XGL_UPDATE_BUFFERS
+typedef struct _VK_UPDATE_BUFFERS
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_UPDATE_BUFFERS
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_UPDATE_BUFFERS
const void* pNext; // Pointer to next structure
- XGL_DESCRIPTOR_TYPE descriptorType;
+ VK_DESCRIPTOR_TYPE descriptorType;
uint32_t binding; // Binding of the buffer (array)
uint32_t arrayIndex; // First element of the array to update or zero otherwise
uint32_t count; // Number of elements to update
- const XGL_BUFFER_VIEW_ATTACH_INFO* pBufferViews;
-} XGL_UPDATE_BUFFERS;
+ const VK_BUFFER_VIEW_ATTACH_INFO* pBufferViews;
+} VK_UPDATE_BUFFERS;
-typedef struct _XGL_UPDATE_AS_COPY
+typedef struct _VK_UPDATE_AS_COPY
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_UPDATE_AS_COPY
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_UPDATE_AS_COPY
const void* pNext; // Pointer to next structure
- XGL_DESCRIPTOR_TYPE descriptorType;
- XGL_DESCRIPTOR_SET descriptorSet;
+ VK_DESCRIPTOR_TYPE descriptorType;
+ VK_DESCRIPTOR_SET descriptorSet;
uint32_t binding;
uint32_t arrayElement;
uint32_t count;
-} XGL_UPDATE_AS_COPY;
+} VK_UPDATE_AS_COPY;
-typedef struct _XGL_BUFFER_CREATE_INFO
+typedef struct _VK_BUFFER_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_BUFFER_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO
const void* pNext; // Pointer to next structure.
- XGL_GPU_SIZE size; // Specified in bytes
- XGL_FLAGS usage; // XGL_BUFFER_USAGE_FLAGS
- XGL_FLAGS flags; // XGL_BUFFER_CREATE_FLAGS
-} XGL_BUFFER_CREATE_INFO;
+ VK_GPU_SIZE size; // Specified in bytes
+ VK_FLAGS usage; // VK_BUFFER_USAGE_FLAGS
+ VK_FLAGS flags; // VK_BUFFER_CREATE_FLAGS
+} VK_BUFFER_CREATE_INFO;
-typedef struct _XGL_BUFFER_VIEW_CREATE_INFO
+typedef struct _VK_BUFFER_VIEW_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO
const void* pNext; // Pointer to next structure.
- XGL_BUFFER buffer;
- XGL_BUFFER_VIEW_TYPE viewType;
- XGL_FORMAT format; // Optionally specifies format of elements
- XGL_GPU_SIZE offset; // Specified in bytes
- XGL_GPU_SIZE range; // View size specified in bytes
-} XGL_BUFFER_VIEW_CREATE_INFO;
+ VK_BUFFER buffer;
+ VK_BUFFER_VIEW_TYPE viewType;
+ VK_FORMAT format; // Optionally specifies format of elements
+ VK_GPU_SIZE offset; // Specified in bytes
+ VK_GPU_SIZE range; // View size specified in bytes
+} VK_BUFFER_VIEW_CREATE_INFO;
-typedef struct _XGL_IMAGE_SUBRESOURCE
+typedef struct _VK_IMAGE_SUBRESOURCE
{
- XGL_IMAGE_ASPECT aspect;
+ VK_IMAGE_ASPECT aspect;
uint32_t mipLevel;
uint32_t arraySlice;
-} XGL_IMAGE_SUBRESOURCE;
+} VK_IMAGE_SUBRESOURCE;
-typedef struct _XGL_IMAGE_SUBRESOURCE_RANGE
+typedef struct _VK_IMAGE_SUBRESOURCE_RANGE
{
- XGL_IMAGE_ASPECT aspect;
+ VK_IMAGE_ASPECT aspect;
uint32_t baseMipLevel;
uint32_t mipLevels;
uint32_t baseArraySlice;
uint32_t arraySize;
-} XGL_IMAGE_SUBRESOURCE_RANGE;
+} VK_IMAGE_SUBRESOURCE_RANGE;
-typedef struct _XGL_EVENT_WAIT_INFO
+typedef struct _VK_EVENT_WAIT_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_EVENT_WAIT_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_EVENT_WAIT_INFO
const void* pNext; // Pointer to next structure.
uint32_t eventCount; // Number of events to wait on
- const XGL_EVENT* pEvents; // Array of event objects to wait on
+ const VK_EVENT* pEvents; // Array of event objects to wait on
- XGL_WAIT_EVENT waitEvent; // Pipeline event where the wait should happen
+ VK_WAIT_EVENT waitEvent; // Pipeline event where the wait should happen
uint32_t memBarrierCount; // Number of memory barriers
- const void** ppMemBarriers; // Array of pointers to memory barriers (any of them can be either XGL_MEMORY_BARRIER, XGL_BUFFER_MEMORY_BARRIER, or XGL_IMAGE_MEMORY_BARRIER)
-} XGL_EVENT_WAIT_INFO;
+ const void** ppMemBarriers; // Array of pointers to memory barriers (any of them can be either VK_MEMORY_BARRIER, VK_BUFFER_MEMORY_BARRIER, or VK_IMAGE_MEMORY_BARRIER)
+} VK_EVENT_WAIT_INFO;
-typedef struct _XGL_PIPELINE_BARRIER
+typedef struct _VK_PIPELINE_BARRIER
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_BARRIER
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_BARRIER
const void* pNext; // Pointer to next structure.
uint32_t eventCount; // Number of events to wait on
- const XGL_PIPE_EVENT* pEvents; // Array of pipeline events to wait on
+ const VK_PIPE_EVENT* pEvents; // Array of pipeline events to wait on
- XGL_WAIT_EVENT waitEvent; // Pipeline event where the wait should happen
+ VK_WAIT_EVENT waitEvent; // Pipeline event where the wait should happen
uint32_t memBarrierCount; // Number of memory barriers
- const void** ppMemBarriers; // Array of pointers to memory barriers (any of them can be either XGL_MEMORY_BARRIER, XGL_BUFFER_MEMORY_BARRIER, or XGL_IMAGE_MEMORY_BARRIER)
-} XGL_PIPELINE_BARRIER;
+ const void** ppMemBarriers; // Array of pointers to memory barriers (any of them can be either VK_MEMORY_BARRIER, VK_BUFFER_MEMORY_BARRIER, or VK_IMAGE_MEMORY_BARRIER)
+} VK_PIPELINE_BARRIER;
-typedef struct _XGL_MEMORY_BARRIER
+typedef struct _VK_MEMORY_BARRIER
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_MEMORY_BARRIER
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_MEMORY_BARRIER
const void* pNext; // Pointer to next structure.
- XGL_FLAGS outputMask; // Outputs the barrier should sync (see XGL_MEMORY_OUTPUT_FLAGS)
- XGL_FLAGS inputMask; // Inputs the barrier should sync to (see XGL_MEMORY_INPUT_FLAGS)
-} XGL_MEMORY_BARRIER;
+ VK_FLAGS outputMask; // Outputs the barrier should sync (see VK_MEMORY_OUTPUT_FLAGS)
+ VK_FLAGS inputMask; // Inputs the barrier should sync to (see VK_MEMORY_INPUT_FLAGS)
+} VK_MEMORY_BARRIER;
-typedef struct _XGL_BUFFER_MEMORY_BARRIER
+typedef struct _VK_BUFFER_MEMORY_BARRIER
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER
const void* pNext; // Pointer to next structure.
- XGL_FLAGS outputMask; // Outputs the barrier should sync (see XGL_MEMORY_OUTPUT_FLAGS)
- XGL_FLAGS inputMask; // Inputs the barrier should sync to (see XGL_MEMORY_INPUT_FLAGS)
+ VK_FLAGS outputMask; // Outputs the barrier should sync (see VK_MEMORY_OUTPUT_FLAGS)
+ VK_FLAGS inputMask; // Inputs the barrier should sync to (see VK_MEMORY_INPUT_FLAGS)
- XGL_BUFFER buffer; // Buffer to sync
+ VK_BUFFER buffer; // Buffer to sync
- XGL_GPU_SIZE offset; // Offset within the buffer to sync
- XGL_GPU_SIZE size; // Amount of bytes to sync
-} XGL_BUFFER_MEMORY_BARRIER;
+ VK_GPU_SIZE offset; // Offset within the buffer to sync
+ VK_GPU_SIZE size; // Number of bytes to sync
+} VK_BUFFER_MEMORY_BARRIER;
-typedef struct _XGL_IMAGE_MEMORY_BARRIER
+typedef struct _VK_IMAGE_MEMORY_BARRIER
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER
const void* pNext; // Pointer to next structure.
- XGL_FLAGS outputMask; // Outputs the barrier should sync (see XGL_MEMORY_OUTPUT_FLAGS)
- XGL_FLAGS inputMask; // Inputs the barrier should sync to (see XGL_MEMORY_INPUT_FLAGS)
+ VK_FLAGS outputMask; // Outputs the barrier should sync (see VK_MEMORY_OUTPUT_FLAGS)
+ VK_FLAGS inputMask; // Inputs the barrier should sync to (see VK_MEMORY_INPUT_FLAGS)
- XGL_IMAGE_LAYOUT oldLayout; // Current layout of the image
- XGL_IMAGE_LAYOUT newLayout; // New layout to transition the image to
+ VK_IMAGE_LAYOUT oldLayout; // Current layout of the image
+ VK_IMAGE_LAYOUT newLayout; // New layout to transition the image to
- XGL_IMAGE image; // Image to sync
+ VK_IMAGE image; // Image to sync
- XGL_IMAGE_SUBRESOURCE_RANGE subresourceRange; // Subresource range to sync
-} XGL_IMAGE_MEMORY_BARRIER;
+ VK_IMAGE_SUBRESOURCE_RANGE subresourceRange; // Subresource range to sync
+} VK_IMAGE_MEMORY_BARRIER;
-typedef struct _XGL_IMAGE_CREATE_INFO
+typedef struct _VK_IMAGE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO
const void* pNext; // Pointer to next structure.
- XGL_IMAGE_TYPE imageType;
- XGL_FORMAT format;
- XGL_EXTENT3D extent;
+ VK_IMAGE_TYPE imageType;
+ VK_FORMAT format;
+ VK_EXTENT3D extent;
uint32_t mipLevels;
uint32_t arraySize;
uint32_t samples;
- XGL_IMAGE_TILING tiling;
- XGL_FLAGS usage; // XGL_IMAGE_USAGE_FLAGS
- XGL_FLAGS flags; // XGL_IMAGE_CREATE_FLAGS
-} XGL_IMAGE_CREATE_INFO;
+ VK_IMAGE_TILING tiling;
+ VK_FLAGS usage; // VK_IMAGE_USAGE_FLAGS
+ VK_FLAGS flags; // VK_IMAGE_CREATE_FLAGS
+} VK_IMAGE_CREATE_INFO;
-typedef struct _XGL_PEER_IMAGE_OPEN_INFO
+typedef struct _VK_PEER_IMAGE_OPEN_INFO
{
- XGL_IMAGE originalImage;
-} XGL_PEER_IMAGE_OPEN_INFO;
+ VK_IMAGE originalImage;
+} VK_PEER_IMAGE_OPEN_INFO;
-typedef struct _XGL_SUBRESOURCE_LAYOUT
+typedef struct _VK_SUBRESOURCE_LAYOUT
{
- XGL_GPU_SIZE offset; // Specified in bytes
- XGL_GPU_SIZE size; // Specified in bytes
- XGL_GPU_SIZE rowPitch; // Specified in bytes
- XGL_GPU_SIZE depthPitch; // Specified in bytes
-} XGL_SUBRESOURCE_LAYOUT;
+ VK_GPU_SIZE offset; // Specified in bytes
+ VK_GPU_SIZE size; // Specified in bytes
+ VK_GPU_SIZE rowPitch; // Specified in bytes
+ VK_GPU_SIZE depthPitch; // Specified in bytes
+} VK_SUBRESOURCE_LAYOUT;
-typedef struct _XGL_IMAGE_VIEW_CREATE_INFO
+typedef struct _VK_IMAGE_VIEW_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_IMAGE image;
- XGL_IMAGE_VIEW_TYPE viewType;
- XGL_FORMAT format;
- XGL_CHANNEL_MAPPING channels;
- XGL_IMAGE_SUBRESOURCE_RANGE subresourceRange;
+ VK_IMAGE image;
+ VK_IMAGE_VIEW_TYPE viewType;
+ VK_FORMAT format;
+ VK_CHANNEL_MAPPING channels;
+ VK_IMAGE_SUBRESOURCE_RANGE subresourceRange;
float minLod;
-} XGL_IMAGE_VIEW_CREATE_INFO;
+} VK_IMAGE_VIEW_CREATE_INFO;
-typedef struct _XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO
+typedef struct _VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_IMAGE image;
- XGL_FORMAT format;
+ VK_IMAGE image;
+ VK_FORMAT format;
uint32_t mipLevel;
uint32_t baseArraySlice;
uint32_t arraySize;
- XGL_IMAGE msaaResolveImage;
- XGL_IMAGE_SUBRESOURCE_RANGE msaaResolveSubResource;
-} XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO;
+ VK_IMAGE msaaResolveImage;
+ VK_IMAGE_SUBRESOURCE_RANGE msaaResolveSubResource;
+} VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO;
-typedef struct _XGL_DEPTH_STENCIL_VIEW_CREATE_INFO
+typedef struct _VK_DEPTH_STENCIL_VIEW_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_IMAGE image;
+ VK_IMAGE image;
uint32_t mipLevel;
uint32_t baseArraySlice;
uint32_t arraySize;
- XGL_IMAGE msaaResolveImage;
- XGL_IMAGE_SUBRESOURCE_RANGE msaaResolveSubResource;
- XGL_FLAGS flags; // XGL_DEPTH_STENCIL_VIEW_CREATE_FLAGS
-} XGL_DEPTH_STENCIL_VIEW_CREATE_INFO;
+ VK_IMAGE msaaResolveImage;
+ VK_IMAGE_SUBRESOURCE_RANGE msaaResolveSubResource;
+ VK_FLAGS flags; // VK_DEPTH_STENCIL_VIEW_CREATE_FLAGS
+} VK_DEPTH_STENCIL_VIEW_CREATE_INFO;
-typedef struct _XGL_COLOR_ATTACHMENT_BIND_INFO
+typedef struct _VK_COLOR_ATTACHMENT_BIND_INFO
{
- XGL_COLOR_ATTACHMENT_VIEW view;
- XGL_IMAGE_LAYOUT layout;
-} XGL_COLOR_ATTACHMENT_BIND_INFO;
+ VK_COLOR_ATTACHMENT_VIEW view;
+ VK_IMAGE_LAYOUT layout;
+} VK_COLOR_ATTACHMENT_BIND_INFO;
-typedef struct _XGL_DEPTH_STENCIL_BIND_INFO
+typedef struct _VK_DEPTH_STENCIL_BIND_INFO
{
- XGL_DEPTH_STENCIL_VIEW view;
- XGL_IMAGE_LAYOUT layout;
-} XGL_DEPTH_STENCIL_BIND_INFO;
+ VK_DEPTH_STENCIL_VIEW view;
+ VK_IMAGE_LAYOUT layout;
+} VK_DEPTH_STENCIL_BIND_INFO;
-typedef struct _XGL_BUFFER_COPY
+typedef struct _VK_BUFFER_COPY
{
- XGL_GPU_SIZE srcOffset; // Specified in bytes
- XGL_GPU_SIZE destOffset; // Specified in bytes
- XGL_GPU_SIZE copySize; // Specified in bytes
-} XGL_BUFFER_COPY;
+ VK_GPU_SIZE srcOffset; // Specified in bytes
+ VK_GPU_SIZE destOffset; // Specified in bytes
+ VK_GPU_SIZE copySize; // Specified in bytes
+} VK_BUFFER_COPY;
-typedef struct _XGL_IMAGE_MEMORY_BIND_INFO
+typedef struct _VK_IMAGE_MEMORY_BIND_INFO
{
- XGL_IMAGE_SUBRESOURCE subresource;
- XGL_OFFSET3D offset;
- XGL_EXTENT3D extent;
-} XGL_IMAGE_MEMORY_BIND_INFO;
+ VK_IMAGE_SUBRESOURCE subresource;
+ VK_OFFSET3D offset;
+ VK_EXTENT3D extent;
+} VK_IMAGE_MEMORY_BIND_INFO;
-typedef struct _XGL_IMAGE_COPY
+typedef struct _VK_IMAGE_COPY
{
- XGL_IMAGE_SUBRESOURCE srcSubresource;
- XGL_OFFSET3D srcOffset;
- XGL_IMAGE_SUBRESOURCE destSubresource;
- XGL_OFFSET3D destOffset;
- XGL_EXTENT3D extent;
-} XGL_IMAGE_COPY;
+ VK_IMAGE_SUBRESOURCE srcSubresource;
+ VK_OFFSET3D srcOffset;
+ VK_IMAGE_SUBRESOURCE destSubresource;
+ VK_OFFSET3D destOffset;
+ VK_EXTENT3D extent;
+} VK_IMAGE_COPY;
-typedef struct _XGL_IMAGE_BLIT
+typedef struct _VK_IMAGE_BLIT
{
- XGL_IMAGE_SUBRESOURCE srcSubresource;
- XGL_OFFSET3D srcOffset;
- XGL_EXTENT3D srcExtent;
- XGL_IMAGE_SUBRESOURCE destSubresource;
- XGL_OFFSET3D destOffset;
- XGL_EXTENT3D destExtent;
-} XGL_IMAGE_BLIT;
+ VK_IMAGE_SUBRESOURCE srcSubresource;
+ VK_OFFSET3D srcOffset;
+ VK_EXTENT3D srcExtent;
+ VK_IMAGE_SUBRESOURCE destSubresource;
+ VK_OFFSET3D destOffset;
+ VK_EXTENT3D destExtent;
+} VK_IMAGE_BLIT;
-typedef struct _XGL_BUFFER_IMAGE_COPY
+typedef struct _VK_BUFFER_IMAGE_COPY
{
- XGL_GPU_SIZE bufferOffset; // Specified in bytes
- XGL_IMAGE_SUBRESOURCE imageSubresource;
- XGL_OFFSET3D imageOffset;
- XGL_EXTENT3D imageExtent;
-} XGL_BUFFER_IMAGE_COPY;
+ VK_GPU_SIZE bufferOffset; // Specified in bytes
+ VK_IMAGE_SUBRESOURCE imageSubresource;
+ VK_OFFSET3D imageOffset;
+ VK_EXTENT3D imageExtent;
+} VK_BUFFER_IMAGE_COPY;
-typedef struct _XGL_IMAGE_RESOLVE
+typedef struct _VK_IMAGE_RESOLVE
{
- XGL_IMAGE_SUBRESOURCE srcSubresource;
- XGL_OFFSET2D srcOffset;
- XGL_IMAGE_SUBRESOURCE destSubresource;
- XGL_OFFSET2D destOffset;
- XGL_EXTENT2D extent;
-} XGL_IMAGE_RESOLVE;
+ VK_IMAGE_SUBRESOURCE srcSubresource;
+ VK_OFFSET2D srcOffset;
+ VK_IMAGE_SUBRESOURCE destSubresource;
+ VK_OFFSET2D destOffset;
+ VK_EXTENT2D extent;
+} VK_IMAGE_RESOLVE;
-typedef struct _XGL_SHADER_CREATE_INFO
+typedef struct _VK_SHADER_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_SHADER_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_SHADER_CREATE_INFO
const void* pNext; // Pointer to next structure
size_t codeSize; // Specified in bytes
const void* pCode;
- XGL_FLAGS flags; // Reserved
-} XGL_SHADER_CREATE_INFO;
+ VK_FLAGS flags; // Reserved
+} VK_SHADER_CREATE_INFO;
-typedef struct _XGL_DESCRIPTOR_SET_LAYOUT_BINDING
+typedef struct _VK_DESCRIPTOR_SET_LAYOUT_BINDING
{
- XGL_DESCRIPTOR_TYPE descriptorType;
+ VK_DESCRIPTOR_TYPE descriptorType;
uint32_t count;
- XGL_FLAGS stageFlags; // XGL_SHADER_STAGE_FLAGS
- const XGL_SAMPLER* pImmutableSamplers;
-} XGL_DESCRIPTOR_SET_LAYOUT_BINDING;
+ VK_FLAGS stageFlags; // VK_SHADER_STAGE_FLAGS
+ const VK_SAMPLER* pImmutableSamplers;
+} VK_DESCRIPTOR_SET_LAYOUT_BINDING;
-typedef struct _XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO
+typedef struct _VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t count; // Number of bindings in the descriptor set layout
- const XGL_DESCRIPTOR_SET_LAYOUT_BINDING* pBinding; // Array of descriptor set layout bindings
-} XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
+ const VK_DESCRIPTOR_SET_LAYOUT_BINDING* pBinding; // Array of descriptor set layout bindings
+} VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
-typedef struct _XGL_DESCRIPTOR_TYPE_COUNT
+typedef struct _VK_DESCRIPTOR_TYPE_COUNT
{
- XGL_DESCRIPTOR_TYPE type;
+ VK_DESCRIPTOR_TYPE type;
uint32_t count;
-} XGL_DESCRIPTOR_TYPE_COUNT;
+} VK_DESCRIPTOR_TYPE_COUNT;
-typedef struct _XGL_DESCRIPTOR_POOL_CREATE_INFO
+typedef struct _VK_DESCRIPTOR_POOL_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t count;
- const XGL_DESCRIPTOR_TYPE_COUNT* pTypeCount;
-} XGL_DESCRIPTOR_POOL_CREATE_INFO;
+ const VK_DESCRIPTOR_TYPE_COUNT* pTypeCount;
+} VK_DESCRIPTOR_POOL_CREATE_INFO;
-typedef struct _XGL_LINK_CONST_BUFFER
+typedef struct _VK_LINK_CONST_BUFFER
{
uint32_t bufferId;
size_t bufferSize;
const void* pBufferData;
-} XGL_LINK_CONST_BUFFER;
+} VK_LINK_CONST_BUFFER;
-typedef struct _XGL_SPECIALIZATION_MAP_ENTRY
+typedef struct _VK_SPECIALIZATION_MAP_ENTRY
{
uint32_t constantId; // The SpecConstant ID specified in the BIL
uint32_t offset; // Offset of the value in the data block
-} XGL_SPECIALIZATION_MAP_ENTRY;
+} VK_SPECIALIZATION_MAP_ENTRY;
-typedef struct _XGL_SPECIALIZATION_INFO
+typedef struct _VK_SPECIALIZATION_INFO
{
uint32_t mapEntryCount;
- const XGL_SPECIALIZATION_MAP_ENTRY* pMap; // mapEntryCount entries
+ const VK_SPECIALIZATION_MAP_ENTRY* pMap; // mapEntryCount entries
const void* pData;
-} XGL_SPECIALIZATION_INFO;
+} VK_SPECIALIZATION_INFO;
-typedef struct _XGL_PIPELINE_SHADER
+typedef struct _VK_PIPELINE_SHADER
{
- XGL_PIPELINE_SHADER_STAGE stage;
- XGL_SHADER shader;
+ VK_PIPELINE_SHADER_STAGE stage;
+ VK_SHADER shader;
uint32_t linkConstBufferCount;
- const XGL_LINK_CONST_BUFFER* pLinkConstBufferInfo;
- const XGL_SPECIALIZATION_INFO* pSpecializationInfo;
-} XGL_PIPELINE_SHADER;
+ const VK_LINK_CONST_BUFFER* pLinkConstBufferInfo;
+ const VK_SPECIALIZATION_INFO* pSpecializationInfo;
+} VK_PIPELINE_SHADER;
-typedef struct _XGL_COMPUTE_PIPELINE_CREATE_INFO
+typedef struct _VK_COMPUTE_PIPELINE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_PIPELINE_SHADER cs;
- XGL_FLAGS flags; // XGL_PIPELINE_CREATE_FLAGS
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN setLayoutChain; // For local size fields zero is treated an invalid value
+ VK_PIPELINE_SHADER cs;
+ VK_FLAGS flags; // VK_PIPELINE_CREATE_FLAGS
+    VK_DESCRIPTOR_SET_LAYOUT_CHAIN setLayoutChain; // For local size fields zero is treated as an invalid value
uint32_t localSizeX;
uint32_t localSizeY;
uint32_t localSizeZ;
-} XGL_COMPUTE_PIPELINE_CREATE_INFO;
+} VK_COMPUTE_PIPELINE_CREATE_INFO;
-typedef struct _XGL_VERTEX_INPUT_BINDING_DESCRIPTION
+typedef struct _VK_VERTEX_INPUT_BINDING_DESCRIPTION
{
uint32_t binding; // Vertex buffer binding id
uint32_t strideInBytes; // Distance between vertices in bytes (0 = no advancement)
- XGL_VERTEX_INPUT_STEP_RATE stepRate; // Rate at which binding is incremented
-} XGL_VERTEX_INPUT_BINDING_DESCRIPTION;
+ VK_VERTEX_INPUT_STEP_RATE stepRate; // Rate at which binding is incremented
+} VK_VERTEX_INPUT_BINDING_DESCRIPTION;
-typedef struct _XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION
+typedef struct _VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION
{
uint32_t location; // location of the shader vertex attrib
uint32_t binding; // Vertex buffer binding id
- XGL_FORMAT format; // format of source data
+ VK_FORMAT format; // format of source data
uint32_t offsetInBytes; // Offset of first element in bytes from base of vertex
-} XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION;
+} VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION;
-typedef struct _XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO
+typedef struct _VK_PIPELINE_VERTEX_INPUT_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Should be XGL_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Should be VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t bindingCount; // number of bindings
- const XGL_VERTEX_INPUT_BINDING_DESCRIPTION* pVertexBindingDescriptions;
+ const VK_VERTEX_INPUT_BINDING_DESCRIPTION* pVertexBindingDescriptions;
uint32_t attributeCount; // number of attributes
- const XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION* pVertexAttributeDescriptions;
-} XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO;
+ const VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION* pVertexAttributeDescriptions;
+} VK_PIPELINE_VERTEX_INPUT_CREATE_INFO;
-typedef struct _XGL_PIPELINE_IA_STATE_CREATE_INFO
+typedef struct _VK_PIPELINE_IA_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_PRIMITIVE_TOPOLOGY topology;
+ VK_PRIMITIVE_TOPOLOGY topology;
bool32_t disableVertexReuse; // optional
bool32_t primitiveRestartEnable;
uint32_t primitiveRestartIndex; // optional (GL45)
-} XGL_PIPELINE_IA_STATE_CREATE_INFO;
+} VK_PIPELINE_IA_STATE_CREATE_INFO;
-typedef struct _XGL_PIPELINE_TESS_STATE_CREATE_INFO
+typedef struct _VK_PIPELINE_TESS_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t patchControlPoints;
-} XGL_PIPELINE_TESS_STATE_CREATE_INFO;
+} VK_PIPELINE_TESS_STATE_CREATE_INFO;
-typedef struct _XGL_PIPELINE_VP_STATE_CREATE_INFO
+typedef struct _VK_PIPELINE_VP_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t numViewports;
- XGL_COORDINATE_ORIGIN clipOrigin; // optional (GL45)
- XGL_DEPTH_MODE depthMode; // optional (GL45)
-} XGL_PIPELINE_VP_STATE_CREATE_INFO;
+ VK_COORDINATE_ORIGIN clipOrigin; // optional (GL45)
+ VK_DEPTH_MODE depthMode; // optional (GL45)
+} VK_PIPELINE_VP_STATE_CREATE_INFO;
-typedef struct _XGL_PIPELINE_RS_STATE_CREATE_INFO
+typedef struct _VK_PIPELINE_RS_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
bool32_t depthClipEnable;
bool32_t rasterizerDiscardEnable;
bool32_t programPointSize; // optional (GL45)
- XGL_COORDINATE_ORIGIN pointOrigin; // optional (GL45)
- XGL_PROVOKING_VERTEX_CONVENTION provokingVertex; // optional (GL45)
- XGL_FILL_MODE fillMode; // optional (GL45)
- XGL_CULL_MODE cullMode;
- XGL_FACE_ORIENTATION frontFace;
-} XGL_PIPELINE_RS_STATE_CREATE_INFO;
+ VK_COORDINATE_ORIGIN pointOrigin; // optional (GL45)
+ VK_PROVOKING_VERTEX_CONVENTION provokingVertex; // optional (GL45)
+ VK_FILL_MODE fillMode; // optional (GL45)
+ VK_CULL_MODE cullMode;
+ VK_FACE_ORIENTATION frontFace;
+} VK_PIPELINE_RS_STATE_CREATE_INFO;
-typedef struct _XGL_PIPELINE_MS_STATE_CREATE_INFO
+typedef struct _VK_PIPELINE_MS_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t samples;
bool32_t multisampleEnable; // optional (GL45)
bool32_t sampleShadingEnable; // optional (GL45)
float minSampleShading; // optional (GL45)
- XGL_SAMPLE_MASK sampleMask;
-} XGL_PIPELINE_MS_STATE_CREATE_INFO;
+ VK_SAMPLE_MASK sampleMask;
+} VK_PIPELINE_MS_STATE_CREATE_INFO;
-typedef struct _XGL_PIPELINE_CB_ATTACHMENT_STATE
+typedef struct _VK_PIPELINE_CB_ATTACHMENT_STATE
{
bool32_t blendEnable;
- XGL_FORMAT format;
- XGL_BLEND srcBlendColor;
- XGL_BLEND destBlendColor;
- XGL_BLEND_FUNC blendFuncColor;
- XGL_BLEND srcBlendAlpha;
- XGL_BLEND destBlendAlpha;
- XGL_BLEND_FUNC blendFuncAlpha;
+ VK_FORMAT format;
+ VK_BLEND srcBlendColor;
+ VK_BLEND destBlendColor;
+ VK_BLEND_FUNC blendFuncColor;
+ VK_BLEND srcBlendAlpha;
+ VK_BLEND destBlendAlpha;
+ VK_BLEND_FUNC blendFuncAlpha;
uint8_t channelWriteMask;
-} XGL_PIPELINE_CB_ATTACHMENT_STATE;
+} VK_PIPELINE_CB_ATTACHMENT_STATE;
-typedef struct _XGL_PIPELINE_CB_STATE_CREATE_INFO
+typedef struct _VK_PIPELINE_CB_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
bool32_t alphaToCoverageEnable;
bool32_t logicOpEnable;
- XGL_LOGIC_OP logicOp;
+ VK_LOGIC_OP logicOp;
uint32_t attachmentCount; // # of pAttachments
- const XGL_PIPELINE_CB_ATTACHMENT_STATE* pAttachments;
-} XGL_PIPELINE_CB_STATE_CREATE_INFO;
+ const VK_PIPELINE_CB_ATTACHMENT_STATE* pAttachments;
+} VK_PIPELINE_CB_STATE_CREATE_INFO;
-typedef struct _XGL_STENCIL_OP_STATE
+typedef struct _VK_STENCIL_OP_STATE
{
- XGL_STENCIL_OP stencilFailOp;
- XGL_STENCIL_OP stencilPassOp;
- XGL_STENCIL_OP stencilDepthFailOp;
- XGL_COMPARE_FUNC stencilFunc;
-} XGL_STENCIL_OP_STATE;
+ VK_STENCIL_OP stencilFailOp;
+ VK_STENCIL_OP stencilPassOp;
+ VK_STENCIL_OP stencilDepthFailOp;
+ VK_COMPARE_FUNC stencilFunc;
+} VK_STENCIL_OP_STATE;
-typedef struct _XGL_PIPELINE_DS_STATE_CREATE_INFO
+typedef struct _VK_PIPELINE_DS_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_FORMAT format;
+ VK_FORMAT format;
bool32_t depthTestEnable;
bool32_t depthWriteEnable;
- XGL_COMPARE_FUNC depthFunc;
+ VK_COMPARE_FUNC depthFunc;
bool32_t depthBoundsEnable; // optional (depth_bounds_test)
bool32_t stencilTestEnable;
- XGL_STENCIL_OP_STATE front;
- XGL_STENCIL_OP_STATE back;
-} XGL_PIPELINE_DS_STATE_CREATE_INFO;
+ VK_STENCIL_OP_STATE front;
+ VK_STENCIL_OP_STATE back;
+} VK_PIPELINE_DS_STATE_CREATE_INFO;
-typedef struct _XGL_PIPELINE_SHADER_STAGE_CREATE_INFO
+typedef struct _VK_PIPELINE_SHADER_STAGE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_PIPELINE_SHADER shader;
-} XGL_PIPELINE_SHADER_STAGE_CREATE_INFO;
+ VK_PIPELINE_SHADER shader;
+} VK_PIPELINE_SHADER_STAGE_CREATE_INFO;
-typedef struct _XGL_GRAPHICS_PIPELINE_CREATE_INFO
+typedef struct _VK_GRAPHICS_PIPELINE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_FLAGS flags; // XGL_PIPELINE_CREATE_FLAGS
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN pSetLayoutChain;
-} XGL_GRAPHICS_PIPELINE_CREATE_INFO;
+ VK_FLAGS flags; // VK_PIPELINE_CREATE_FLAGS
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN pSetLayoutChain;
+} VK_GRAPHICS_PIPELINE_CREATE_INFO;
-typedef struct _XGL_SAMPLER_CREATE_INFO
+typedef struct _VK_SAMPLER_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_SAMPLER_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_TEX_FILTER magFilter; // Filter mode for magnification
- XGL_TEX_FILTER minFilter; // Filter mode for minifiation
- XGL_TEX_MIPMAP_MODE mipMode; // Mipmap selection mode
- XGL_TEX_ADDRESS addressU;
- XGL_TEX_ADDRESS addressV;
- XGL_TEX_ADDRESS addressW;
+ VK_TEX_FILTER magFilter; // Filter mode for magnification
+    VK_TEX_FILTER minFilter; // Filter mode for minification
+ VK_TEX_MIPMAP_MODE mipMode; // Mipmap selection mode
+ VK_TEX_ADDRESS addressU;
+ VK_TEX_ADDRESS addressV;
+ VK_TEX_ADDRESS addressW;
float mipLodBias;
uint32_t maxAnisotropy;
- XGL_COMPARE_FUNC compareFunc;
+ VK_COMPARE_FUNC compareFunc;
float minLod;
float maxLod;
- XGL_BORDER_COLOR_TYPE borderColorType;
-} XGL_SAMPLER_CREATE_INFO;
+ VK_BORDER_COLOR_TYPE borderColorType;
+} VK_SAMPLER_CREATE_INFO;
-typedef struct _XGL_DYNAMIC_VP_STATE_CREATE_INFO
+typedef struct _VK_DYNAMIC_VP_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t viewportAndScissorCount; // number of entries in pViewports and pScissors
- const XGL_VIEWPORT* pViewports;
- const XGL_RECT* pScissors;
-} XGL_DYNAMIC_VP_STATE_CREATE_INFO;
+ const VK_VIEWPORT* pViewports;
+ const VK_RECT* pScissors;
+} VK_DYNAMIC_VP_STATE_CREATE_INFO;
-typedef struct _XGL_DYNAMIC_RS_STATE_CREATE_INFO
+typedef struct _VK_DYNAMIC_RS_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
float depthBias;
float depthBiasClamp;
float pointSize; // optional (GL45) - Size of points
float pointFadeThreshold; // optional (GL45) - Size of point fade threshold
float lineWidth; // optional (GL45) - Width of lines
-} XGL_DYNAMIC_RS_STATE_CREATE_INFO;
+} VK_DYNAMIC_RS_STATE_CREATE_INFO;
-typedef struct _XGL_DYNAMIC_CB_STATE_CREATE_INFO
+typedef struct _VK_DYNAMIC_CB_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
float blendConst[4];
-} XGL_DYNAMIC_CB_STATE_CREATE_INFO;
+} VK_DYNAMIC_CB_STATE_CREATE_INFO;
-typedef struct _XGL_DYNAMIC_DS_STATE_CREATE_INFO
+typedef struct _VK_DYNAMIC_DS_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
float minDepth; // optional (depth_bounds_test)
float maxDepth; // optional (depth_bounds_test)
uint32_t stencilWriteMask;
uint32_t stencilFrontRef;
uint32_t stencilBackRef;
-} XGL_DYNAMIC_DS_STATE_CREATE_INFO;
+} VK_DYNAMIC_DS_STATE_CREATE_INFO;
-typedef struct _XGL_CMD_BUFFER_CREATE_INFO
+typedef struct _VK_CMD_BUFFER_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t queueNodeIndex;
- XGL_FLAGS flags;
-} XGL_CMD_BUFFER_CREATE_INFO;
+ VK_FLAGS flags;
+} VK_CMD_BUFFER_CREATE_INFO;
-typedef struct _XGL_CMD_BUFFER_BEGIN_INFO
+typedef struct _VK_CMD_BUFFER_BEGIN_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO
const void* pNext; // Pointer to next structure
- XGL_FLAGS flags; // XGL_CMD_BUFFER_BUILD_FLAGS
-} XGL_CMD_BUFFER_BEGIN_INFO;
+ VK_FLAGS flags; // VK_CMD_BUFFER_BUILD_FLAGS
+} VK_CMD_BUFFER_BEGIN_INFO;
-typedef struct _XGL_RENDER_PASS_BEGIN
+typedef struct _VK_RENDER_PASS_BEGIN
{
- XGL_RENDER_PASS renderPass;
- XGL_FRAMEBUFFER framebuffer;
-} XGL_RENDER_PASS_BEGIN;
+ VK_RENDER_PASS renderPass;
+ VK_FRAMEBUFFER framebuffer;
+} VK_RENDER_PASS_BEGIN;
-typedef struct _XGL_CMD_BUFFER_GRAPHICS_BEGIN_INFO
+typedef struct _VK_CMD_BUFFER_GRAPHICS_BEGIN_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO
const void* pNext; // Pointer to next structure
- XGL_RENDER_PASS_BEGIN renderPassContinue; // Only needed when a render pass is split across two command buffers
-} XGL_CMD_BUFFER_GRAPHICS_BEGIN_INFO;
+ VK_RENDER_PASS_BEGIN renderPassContinue; // Only needed when a render pass is split across two command buffers
+} VK_CMD_BUFFER_GRAPHICS_BEGIN_INFO;
// Union allowing specification of floating point or raw color data. Actual value selected is based on image being cleared.
-typedef union _XGL_CLEAR_COLOR_VALUE
+typedef union _VK_CLEAR_COLOR_VALUE
{
float floatColor[4];
uint32_t rawColor[4];
-} XGL_CLEAR_COLOR_VALUE;
+} VK_CLEAR_COLOR_VALUE;
-typedef struct _XGL_CLEAR_COLOR
+typedef struct _VK_CLEAR_COLOR
{
- XGL_CLEAR_COLOR_VALUE color;
+ VK_CLEAR_COLOR_VALUE color;
bool32_t useRawValue;
-} XGL_CLEAR_COLOR;
+} VK_CLEAR_COLOR;
-typedef struct _XGL_RENDER_PASS_CREATE_INFO
+typedef struct _VK_RENDER_PASS_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_RECT renderArea;
+ VK_RECT renderArea;
uint32_t colorAttachmentCount;
- XGL_EXTENT2D extent;
+ VK_EXTENT2D extent;
uint32_t sampleCount;
uint32_t layers;
- const XGL_FORMAT* pColorFormats;
- const XGL_IMAGE_LAYOUT* pColorLayouts;
- const XGL_ATTACHMENT_LOAD_OP* pColorLoadOps;
- const XGL_ATTACHMENT_STORE_OP* pColorStoreOps;
- const XGL_CLEAR_COLOR* pColorLoadClearValues;
- XGL_FORMAT depthStencilFormat;
- XGL_IMAGE_LAYOUT depthStencilLayout;
- XGL_ATTACHMENT_LOAD_OP depthLoadOp;
+ const VK_FORMAT* pColorFormats;
+ const VK_IMAGE_LAYOUT* pColorLayouts;
+ const VK_ATTACHMENT_LOAD_OP* pColorLoadOps;
+ const VK_ATTACHMENT_STORE_OP* pColorStoreOps;
+ const VK_CLEAR_COLOR* pColorLoadClearValues;
+ VK_FORMAT depthStencilFormat;
+ VK_IMAGE_LAYOUT depthStencilLayout;
+ VK_ATTACHMENT_LOAD_OP depthLoadOp;
float depthLoadClearValue;
- XGL_ATTACHMENT_STORE_OP depthStoreOp;
- XGL_ATTACHMENT_LOAD_OP stencilLoadOp;
+ VK_ATTACHMENT_STORE_OP depthStoreOp;
+ VK_ATTACHMENT_LOAD_OP stencilLoadOp;
uint32_t stencilLoadClearValue;
- XGL_ATTACHMENT_STORE_OP stencilStoreOp;
-} XGL_RENDER_PASS_CREATE_INFO;
+ VK_ATTACHMENT_STORE_OP stencilStoreOp;
+} VK_RENDER_PASS_CREATE_INFO;
-typedef struct _XGL_EVENT_CREATE_INFO
+typedef struct _VK_EVENT_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_EVENT_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_EVENT_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_FLAGS flags; // Reserved
-} XGL_EVENT_CREATE_INFO;
+ VK_FLAGS flags; // Reserved
+} VK_EVENT_CREATE_INFO;
-typedef struct _XGL_FENCE_CREATE_INFO
+typedef struct _VK_FENCE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_FENCE_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_FENCE_CREATE_FLAGS flags; // XGL_FENCE_CREATE_FLAGS
-} XGL_FENCE_CREATE_INFO;
+ VK_FENCE_CREATE_FLAGS flags; // VK_FENCE_CREATE_FLAGS
+} VK_FENCE_CREATE_INFO;
-typedef struct _XGL_SEMAPHORE_CREATE_INFO
+typedef struct _VK_SEMAPHORE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t initialCount;
- XGL_FLAGS flags; // XGL_SEMAPHORE_CREATE_FLAGS
-} XGL_SEMAPHORE_CREATE_INFO;
+ VK_FLAGS flags; // VK_SEMAPHORE_CREATE_FLAGS
+} VK_SEMAPHORE_CREATE_INFO;
-typedef struct _XGL_SEMAPHORE_OPEN_INFO
+typedef struct _VK_SEMAPHORE_OPEN_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_SEMAPHORE_OPEN_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_SEMAPHORE_OPEN_INFO
const void* pNext; // Pointer to next structure
- XGL_SEMAPHORE sharedSemaphore;
-} XGL_SEMAPHORE_OPEN_INFO;
+ VK_SEMAPHORE sharedSemaphore;
+} VK_SEMAPHORE_OPEN_INFO;
-typedef struct _XGL_PIPELINE_STATISTICS_DATA
+typedef struct _VK_PIPELINE_STATISTICS_DATA
{
uint64_t fsInvocations; // Fragment shader invocations
uint64_t cPrimitives; // Clipper primitives
uint64_t tcsInvocations; // Tessellation control shader invocations
uint64_t tesInvocations; // Tessellation evaluation shader invocations
uint64_t csInvocations; // Compute shader invocations
-} XGL_PIPELINE_STATISTICS_DATA;
+} VK_PIPELINE_STATISTICS_DATA;
-typedef struct _XGL_QUERY_POOL_CREATE_INFO
+typedef struct _VK_QUERY_POOL_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_QUERY_TYPE queryType;
+ VK_QUERY_TYPE queryType;
uint32_t slots;
-} XGL_QUERY_POOL_CREATE_INFO;
+} VK_QUERY_POOL_CREATE_INFO;
-typedef struct _XGL_FRAMEBUFFER_CREATE_INFO
+typedef struct _VK_FRAMEBUFFER_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t colorAttachmentCount;
- const XGL_COLOR_ATTACHMENT_BIND_INFO* pColorAttachments;
- const XGL_DEPTH_STENCIL_BIND_INFO* pDepthStencilAttachment;
+ const VK_COLOR_ATTACHMENT_BIND_INFO* pColorAttachments;
+ const VK_DEPTH_STENCIL_BIND_INFO* pDepthStencilAttachment;
uint32_t sampleCount;
uint32_t width;
uint32_t height;
uint32_t layers;
-} XGL_FRAMEBUFFER_CREATE_INFO;
+} VK_FRAMEBUFFER_CREATE_INFO;
-typedef struct _XGL_DRAW_INDIRECT_CMD
+typedef struct _VK_DRAW_INDIRECT_CMD
{
uint32_t vertexCount;
uint32_t instanceCount;
uint32_t firstVertex;
uint32_t firstInstance;
-} XGL_DRAW_INDIRECT_CMD;
+} VK_DRAW_INDIRECT_CMD;
-typedef struct _XGL_DRAW_INDEXED_INDIRECT_CMD
+typedef struct _VK_DRAW_INDEXED_INDIRECT_CMD
{
uint32_t indexCount;
uint32_t instanceCount;
uint32_t firstIndex;
int32_t vertexOffset;
uint32_t firstInstance;
-} XGL_DRAW_INDEXED_INDIRECT_CMD;
+} VK_DRAW_INDEXED_INDIRECT_CMD;
-typedef struct _XGL_DISPATCH_INDIRECT_CMD
+typedef struct _VK_DISPATCH_INDIRECT_CMD
{
uint32_t x;
uint32_t y;
uint32_t z;
-} XGL_DISPATCH_INDIRECT_CMD;
+} VK_DISPATCH_INDIRECT_CMD;
// ------------------------------------------------------------------------------------------------
// API functions
-typedef XGL_RESULT (XGLAPI *xglCreateInstanceType)(const XGL_INSTANCE_CREATE_INFO* pCreateInfo, XGL_INSTANCE* pInstance);
-typedef XGL_RESULT (XGLAPI *xglDestroyInstanceType)(XGL_INSTANCE instance);
-typedef XGL_RESULT (XGLAPI *xglEnumerateGpusType)(XGL_INSTANCE instance, uint32_t maxGpus, uint32_t* pGpuCount, XGL_PHYSICAL_GPU* pGpus);
-typedef XGL_RESULT (XGLAPI *xglGetGpuInfoType)(XGL_PHYSICAL_GPU gpu, XGL_PHYSICAL_GPU_INFO_TYPE infoType, size_t* pDataSize, void* pData);
-typedef void * (XGLAPI *xglGetProcAddrType)(XGL_PHYSICAL_GPU gpu, const char * pName);
-typedef XGL_RESULT (XGLAPI *xglCreateDeviceType)(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice);
-typedef XGL_RESULT (XGLAPI *xglDestroyDeviceType)(XGL_DEVICE device);
-typedef XGL_RESULT (XGLAPI *xglGetExtensionSupportType)(XGL_PHYSICAL_GPU gpu, const char* pExtName);
-typedef XGL_RESULT (XGLAPI *xglEnumerateLayersType)(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved);
-typedef XGL_RESULT (XGLAPI *xglGetDeviceQueueType)(XGL_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, XGL_QUEUE* pQueue);
-typedef XGL_RESULT (XGLAPI *xglQueueSubmitType)(XGL_QUEUE queue, uint32_t cmdBufferCount, const XGL_CMD_BUFFER* pCmdBuffers, XGL_FENCE fence);
-typedef XGL_RESULT (XGLAPI *xglQueueAddMemReferenceType)(XGL_QUEUE queue, XGL_GPU_MEMORY mem);
-typedef XGL_RESULT (XGLAPI *xglQueueRemoveMemReferenceType)(XGL_QUEUE queue, XGL_GPU_MEMORY mem);
-typedef XGL_RESULT (XGLAPI *xglQueueWaitIdleType)(XGL_QUEUE queue);
-typedef XGL_RESULT (XGLAPI *xglDeviceWaitIdleType)(XGL_DEVICE device);
-typedef XGL_RESULT (XGLAPI *xglAllocMemoryType)(XGL_DEVICE device, const XGL_MEMORY_ALLOC_INFO* pAllocInfo, XGL_GPU_MEMORY* pMem);
-typedef XGL_RESULT (XGLAPI *xglFreeMemoryType)(XGL_GPU_MEMORY mem);
-typedef XGL_RESULT (XGLAPI *xglSetMemoryPriorityType)(XGL_GPU_MEMORY mem, XGL_MEMORY_PRIORITY priority);
-typedef XGL_RESULT (XGLAPI *xglMapMemoryType)(XGL_GPU_MEMORY mem, XGL_FLAGS flags, void** ppData);
-typedef XGL_RESULT (XGLAPI *xglUnmapMemoryType)(XGL_GPU_MEMORY mem);
-typedef XGL_RESULT (XGLAPI *xglPinSystemMemoryType)(XGL_DEVICE device, const void* pSysMem, size_t memSize, XGL_GPU_MEMORY* pMem);
-typedef XGL_RESULT (XGLAPI *xglGetMultiGpuCompatibilityType)(XGL_PHYSICAL_GPU gpu0, XGL_PHYSICAL_GPU gpu1, XGL_GPU_COMPATIBILITY_INFO* pInfo);
-typedef XGL_RESULT (XGLAPI *xglOpenSharedMemoryType)(XGL_DEVICE device, const XGL_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem);
-typedef XGL_RESULT (XGLAPI *xglOpenSharedSemaphoreType)(XGL_DEVICE device, const XGL_SEMAPHORE_OPEN_INFO* pOpenInfo, XGL_SEMAPHORE* pSemaphore);
-typedef XGL_RESULT (XGLAPI *xglOpenPeerMemoryType)(XGL_DEVICE device, const XGL_PEER_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem);
-typedef XGL_RESULT (XGLAPI *xglOpenPeerImageType)(XGL_DEVICE device, const XGL_PEER_IMAGE_OPEN_INFO* pOpenInfo, XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem);
-typedef XGL_RESULT (XGLAPI *xglDestroyObjectType)(XGL_OBJECT object);
-typedef XGL_RESULT (XGLAPI *xglGetObjectInfoType)(XGL_BASE_OBJECT object, XGL_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData);
-typedef XGL_RESULT (XGLAPI *xglBindObjectMemoryType)(XGL_OBJECT object, uint32_t allocationIdx, XGL_GPU_MEMORY mem, XGL_GPU_SIZE offset);
-typedef XGL_RESULT (XGLAPI *xglBindObjectMemoryRangeType)(XGL_OBJECT object, uint32_t allocationIdx, XGL_GPU_SIZE rangeOffset,XGL_GPU_SIZE rangeSize, XGL_GPU_MEMORY mem, XGL_GPU_SIZE memOffset);
-typedef XGL_RESULT (XGLAPI *xglBindImageMemoryRangeType)(XGL_IMAGE image, uint32_t allocationIdx, const XGL_IMAGE_MEMORY_BIND_INFO* bindInfo, XGL_GPU_MEMORY mem, XGL_GPU_SIZE memOffset);
-typedef XGL_RESULT (XGLAPI *xglCreateFenceType)(XGL_DEVICE device, const XGL_FENCE_CREATE_INFO* pCreateInfo, XGL_FENCE* pFence);
-typedef XGL_RESULT (XGLAPI *xglResetFencesType)(XGL_DEVICE device, uint32_t fenceCount, XGL_FENCE* pFences);
-typedef XGL_RESULT (XGLAPI *xglGetFenceStatusType)(XGL_FENCE fence);
-typedef XGL_RESULT (XGLAPI *xglWaitForFencesType)(XGL_DEVICE device, uint32_t fenceCount, const XGL_FENCE* pFences, bool32_t waitAll, uint64_t timeout);
-typedef XGL_RESULT (XGLAPI *xglCreateSemaphoreType)(XGL_DEVICE device, const XGL_SEMAPHORE_CREATE_INFO* pCreateInfo, XGL_SEMAPHORE* pSemaphore);
-typedef XGL_RESULT (XGLAPI *xglQueueSignalSemaphoreType)(XGL_QUEUE queue, XGL_SEMAPHORE semaphore);
-typedef XGL_RESULT (XGLAPI *xglQueueWaitSemaphoreType)(XGL_QUEUE queue, XGL_SEMAPHORE semaphore);
-typedef XGL_RESULT (XGLAPI *xglCreateEventType)(XGL_DEVICE device, const XGL_EVENT_CREATE_INFO* pCreateInfo, XGL_EVENT* pEvent);
-typedef XGL_RESULT (XGLAPI *xglGetEventStatusType)(XGL_EVENT event);
-typedef XGL_RESULT (XGLAPI *xglSetEventType)(XGL_EVENT event);
-typedef XGL_RESULT (XGLAPI *xglResetEventType)(XGL_EVENT event);
-typedef XGL_RESULT (XGLAPI *xglCreateQueryPoolType)(XGL_DEVICE device, const XGL_QUERY_POOL_CREATE_INFO* pCreateInfo, XGL_QUERY_POOL* pQueryPool);
-typedef XGL_RESULT (XGLAPI *xglGetQueryPoolResultsType)(XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount, size_t* pDataSize, void* pData);
-typedef XGL_RESULT (XGLAPI *xglGetFormatInfoType)(XGL_DEVICE device, XGL_FORMAT format, XGL_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData);
-typedef XGL_RESULT (XGLAPI *xglCreateBufferType)(XGL_DEVICE device, const XGL_BUFFER_CREATE_INFO* pCreateInfo, XGL_BUFFER* pBuffer);
-typedef XGL_RESULT (XGLAPI *xglCreateBufferViewType)(XGL_DEVICE device, const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo, XGL_BUFFER_VIEW* pView);
-typedef XGL_RESULT (XGLAPI *xglCreateImageType)(XGL_DEVICE device, const XGL_IMAGE_CREATE_INFO* pCreateInfo, XGL_IMAGE* pImage);
-typedef XGL_RESULT (XGLAPI *xglGetImageSubresourceInfoType)(XGL_IMAGE image, const XGL_IMAGE_SUBRESOURCE* pSubresource, XGL_SUBRESOURCE_INFO_TYPE infoType, size_t* pDataSize, void* pData);
-typedef XGL_RESULT (XGLAPI *xglCreateImageViewType)(XGL_DEVICE device, const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo, XGL_IMAGE_VIEW* pView);
-typedef XGL_RESULT (XGLAPI *xglCreateColorAttachmentViewType)(XGL_DEVICE device, const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo, XGL_COLOR_ATTACHMENT_VIEW* pView);
-typedef XGL_RESULT (XGLAPI *xglCreateDepthStencilViewType)(XGL_DEVICE device, const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, XGL_DEPTH_STENCIL_VIEW* pView);
-typedef XGL_RESULT (XGLAPI *xglCreateShaderType)(XGL_DEVICE device, const XGL_SHADER_CREATE_INFO* pCreateInfo, XGL_SHADER* pShader);
-typedef XGL_RESULT (XGLAPI *xglCreateGraphicsPipelineType)(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline);
-typedef XGL_RESULT (XGLAPI *xglCreateGraphicsPipelineDerivativeType)(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE basePipeline, XGL_PIPELINE* pPipeline);
-typedef XGL_RESULT (XGLAPI *xglCreateComputePipelineType)(XGL_DEVICE device, const XGL_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline);
-typedef XGL_RESULT (XGLAPI *xglStorePipelineType)(XGL_PIPELINE pipeline, size_t* pDataSize, void* pData);
-typedef XGL_RESULT (XGLAPI *xglLoadPipelineType)(XGL_DEVICE device, size_t dataSize, const void* pData, XGL_PIPELINE* pPipeline);
-typedef XGL_RESULT (XGLAPI *xglLoadPipelineDerivativeType)(XGL_DEVICE device, size_t dataSize, const void* pData, XGL_PIPELINE basePipeline, XGL_PIPELINE* pPipeline);
-typedef XGL_RESULT (XGLAPI *xglCreateSamplerType)(XGL_DEVICE device, const XGL_SAMPLER_CREATE_INFO* pCreateInfo, XGL_SAMPLER* pSampler);
-typedef XGL_RESULT (XGLAPI *xglCreateDescriptorSetLayoutType)(XGL_DEVICE device, const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_SET_LAYOUT* pSetLayout);
-typedef XGL_RESULT (XGLAPI *xglCreateDescriptorSetLayoutChainType)(XGL_DEVICE device, uint32_t setLayoutArrayCount, const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray, XGL_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain);
-typedef XGL_RESULT (XGLAPI *xglBeginDescriptorPoolUpdateType)(XGL_DEVICE device, XGL_DESCRIPTOR_UPDATE_MODE updateMode);
-typedef XGL_RESULT (XGLAPI *xglEndDescriptorPoolUpdateType)(XGL_DEVICE device, XGL_CMD_BUFFER cmd);
-typedef XGL_RESULT (XGLAPI *xglCreateDescriptorPoolType)(XGL_DEVICE device, XGL_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const XGL_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_POOL* pDescriptorPool);
-typedef XGL_RESULT (XGLAPI *xglResetDescriptorPoolType)(XGL_DESCRIPTOR_POOL descriptorPool);
-typedef XGL_RESULT (XGLAPI *xglAllocDescriptorSetsType)(XGL_DESCRIPTOR_POOL descriptorPool, XGL_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayouts, XGL_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount);
-typedef void (XGLAPI *xglClearDescriptorSetsType)(XGL_DESCRIPTOR_POOL descriptorPool, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets);
-typedef void (XGLAPI *xglUpdateDescriptorsType)(XGL_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray);
-typedef XGL_RESULT (XGLAPI *xglCreateDynamicViewportStateType)(XGL_DEVICE device, const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_VP_STATE_OBJECT* pState);
-typedef XGL_RESULT (XGLAPI *xglCreateDynamicRasterStateType)(XGL_DEVICE device, const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_RS_STATE_OBJECT* pState);
-typedef XGL_RESULT (XGLAPI *xglCreateDynamicColorBlendStateType)(XGL_DEVICE device, const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_CB_STATE_OBJECT* pState);
-typedef XGL_RESULT (XGLAPI *xglCreateDynamicDepthStencilStateType)(XGL_DEVICE device, const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_DS_STATE_OBJECT* pState);
-typedef XGL_RESULT (XGLAPI *xglCreateCommandBufferType)(XGL_DEVICE device, const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo, XGL_CMD_BUFFER* pCmdBuffer);
-typedef XGL_RESULT (XGLAPI *xglBeginCommandBufferType)(XGL_CMD_BUFFER cmdBuffer, const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo);
-typedef XGL_RESULT (XGLAPI *xglEndCommandBufferType)(XGL_CMD_BUFFER cmdBuffer);
-typedef XGL_RESULT (XGLAPI *xglResetCommandBufferType)(XGL_CMD_BUFFER cmdBuffer);
-typedef void (XGLAPI *xglCmdBindPipelineType)(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_PIPELINE pipeline);
-typedef void (XGLAPI *xglCmdBindDynamicStateObjectType)(XGL_CMD_BUFFER cmdBuffer, XGL_STATE_BIND_POINT stateBindPoint, XGL_DYNAMIC_STATE_OBJECT state);
-typedef void (XGLAPI *xglCmdBindDescriptorSetsType)(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData);
-typedef void (XGLAPI *xglCmdBindIndexBufferType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, XGL_INDEX_TYPE indexType);
-typedef void (XGLAPI *xglCmdBindVertexBufferType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t binding);
-typedef void (XGLAPI *xglCmdDrawType)(XGL_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount);
-typedef void (XGLAPI *xglCmdDrawIndexedType)(XGL_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount);
-typedef void (XGLAPI *xglCmdDrawIndirectType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride);
-typedef void (XGLAPI *xglCmdDrawIndexedIndirectType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride);
-typedef void (XGLAPI *xglCmdDispatchType)(XGL_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z);
-typedef void (XGLAPI *xglCmdDispatchIndirectType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset);
-typedef void (XGLAPI *xglCmdCopyBufferType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_BUFFER destBuffer, uint32_t regionCount, const XGL_BUFFER_COPY* pRegions);
-typedef void (XGLAPI *xglCmdCopyImageType)(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_IMAGE_COPY* pRegions);
-typedef void (XGLAPI *xglCmdBlitImageType)(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_IMAGE_BLIT* pRegions);
-typedef void (XGLAPI *xglCmdCopyBufferToImageType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions);
-typedef void (XGLAPI *xglCmdCopyImageToBufferType)(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_BUFFER destBuffer, uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions);
-typedef void (XGLAPI *xglCmdCloneImageDataType)(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout);
-typedef void (XGLAPI *xglCmdUpdateBufferType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE dataSize, const uint32_t* pData);
-typedef void (XGLAPI *xglCmdFillBufferType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE fillSize, uint32_t data);
-typedef void (XGLAPI *xglCmdClearColorImageType)(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout, XGL_CLEAR_COLOR color, uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges);
-typedef void (XGLAPI *xglCmdClearDepthStencilType)(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout, float depth, uint32_t stencil, uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges);
-typedef void (XGLAPI *xglCmdResolveImageType)(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t rectCount, const XGL_IMAGE_RESOLVE* pRects);
-typedef void (XGLAPI *xglCmdSetEventType)(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent);
-typedef void (XGLAPI *xglCmdResetEventType)(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent);
-typedef void (XGLAPI *xglCmdWaitEventsType)(XGL_CMD_BUFFER cmdBuffer, const XGL_EVENT_WAIT_INFO* pWaitInfo);
-typedef void (XGLAPI *xglCmdPipelineBarrierType)(XGL_CMD_BUFFER cmdBuffer, const XGL_PIPELINE_BARRIER* pBarrier);
-typedef void (XGLAPI *xglCmdBeginQueryType)(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot, XGL_FLAGS flags);
-typedef void (XGLAPI *xglCmdEndQueryType)(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot);
-typedef void (XGLAPI *xglCmdResetQueryPoolType)(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount);
-typedef void (XGLAPI *xglCmdWriteTimestampType)(XGL_CMD_BUFFER cmdBuffer, XGL_TIMESTAMP_TYPE timestampType, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset);
-typedef void (XGLAPI *xglCmdInitAtomicCountersType)(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData);
-typedef void (XGLAPI *xglCmdLoadAtomicCountersType)(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER srcBuffer, XGL_GPU_SIZE srcOffset);
-typedef void (XGLAPI *xglCmdSaveAtomicCountersType)(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset);
-typedef XGL_RESULT (XGLAPI *xglCreateFramebufferType)(XGL_DEVICE device, const XGL_FRAMEBUFFER_CREATE_INFO* pCreateInfo, XGL_FRAMEBUFFER* pFramebuffer);
-typedef XGL_RESULT (XGLAPI *xglCreateRenderPassType)(XGL_DEVICE device, const XGL_RENDER_PASS_CREATE_INFO* pCreateInfo, XGL_RENDER_PASS* pRenderPass);
-typedef void (XGLAPI *xglCmdBeginRenderPassType)(XGL_CMD_BUFFER cmdBuffer, const XGL_RENDER_PASS_BEGIN* pRenderPassBegin);
-typedef void (XGLAPI *xglCmdEndRenderPassType)(XGL_CMD_BUFFER cmdBuffer, XGL_RENDER_PASS renderPass);
-
-#ifdef XGL_PROTOTYPES
+typedef VK_RESULT (VKAPI *vkCreateInstanceType)(const VK_INSTANCE_CREATE_INFO* pCreateInfo, VK_INSTANCE* pInstance);
+typedef VK_RESULT (VKAPI *vkDestroyInstanceType)(VK_INSTANCE instance);
+typedef VK_RESULT (VKAPI *vkEnumerateGpusType)(VK_INSTANCE instance, uint32_t maxGpus, uint32_t* pGpuCount, VK_PHYSICAL_GPU* pGpus);
+typedef VK_RESULT (VKAPI *vkGetGpuInfoType)(VK_PHYSICAL_GPU gpu, VK_PHYSICAL_GPU_INFO_TYPE infoType, size_t* pDataSize, void* pData);
+typedef void * (VKAPI *vkGetProcAddrType)(VK_PHYSICAL_GPU gpu, const char* pName);
+typedef VK_RESULT (VKAPI *vkCreateDeviceType)(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice);
+typedef VK_RESULT (VKAPI *vkDestroyDeviceType)(VK_DEVICE device);
+typedef VK_RESULT (VKAPI *vkGetExtensionSupportType)(VK_PHYSICAL_GPU gpu, const char* pExtName);
+typedef VK_RESULT (VKAPI *vkEnumerateLayersType)(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved);
+typedef VK_RESULT (VKAPI *vkGetDeviceQueueType)(VK_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, VK_QUEUE* pQueue);
+typedef VK_RESULT (VKAPI *vkQueueSubmitType)(VK_QUEUE queue, uint32_t cmdBufferCount, const VK_CMD_BUFFER* pCmdBuffers, VK_FENCE fence);
+typedef VK_RESULT (VKAPI *vkQueueAddMemReferenceType)(VK_QUEUE queue, VK_GPU_MEMORY mem);
+typedef VK_RESULT (VKAPI *vkQueueRemoveMemReferenceType)(VK_QUEUE queue, VK_GPU_MEMORY mem);
+typedef VK_RESULT (VKAPI *vkQueueWaitIdleType)(VK_QUEUE queue);
+typedef VK_RESULT (VKAPI *vkDeviceWaitIdleType)(VK_DEVICE device);
+typedef VK_RESULT (VKAPI *vkAllocMemoryType)(VK_DEVICE device, const VK_MEMORY_ALLOC_INFO* pAllocInfo, VK_GPU_MEMORY* pMem);
+typedef VK_RESULT (VKAPI *vkFreeMemoryType)(VK_GPU_MEMORY mem);
+typedef VK_RESULT (VKAPI *vkSetMemoryPriorityType)(VK_GPU_MEMORY mem, VK_MEMORY_PRIORITY priority);
+typedef VK_RESULT (VKAPI *vkMapMemoryType)(VK_GPU_MEMORY mem, VK_FLAGS flags, void** ppData);
+typedef VK_RESULT (VKAPI *vkUnmapMemoryType)(VK_GPU_MEMORY mem);
+typedef VK_RESULT (VKAPI *vkPinSystemMemoryType)(VK_DEVICE device, const void* pSysMem, size_t memSize, VK_GPU_MEMORY* pMem);
+typedef VK_RESULT (VKAPI *vkGetMultiGpuCompatibilityType)(VK_PHYSICAL_GPU gpu0, VK_PHYSICAL_GPU gpu1, VK_GPU_COMPATIBILITY_INFO* pInfo);
+typedef VK_RESULT (VKAPI *vkOpenSharedMemoryType)(VK_DEVICE device, const VK_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem);
+typedef VK_RESULT (VKAPI *vkOpenSharedSemaphoreType)(VK_DEVICE device, const VK_SEMAPHORE_OPEN_INFO* pOpenInfo, VK_SEMAPHORE* pSemaphore);
+typedef VK_RESULT (VKAPI *vkOpenPeerMemoryType)(VK_DEVICE device, const VK_PEER_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem);
+typedef VK_RESULT (VKAPI *vkOpenPeerImageType)(VK_DEVICE device, const VK_PEER_IMAGE_OPEN_INFO* pOpenInfo, VK_IMAGE* pImage, VK_GPU_MEMORY* pMem);
+typedef VK_RESULT (VKAPI *vkDestroyObjectType)(VK_OBJECT object);
+typedef VK_RESULT (VKAPI *vkGetObjectInfoType)(VK_BASE_OBJECT object, VK_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData);
+typedef VK_RESULT (VKAPI *vkBindObjectMemoryType)(VK_OBJECT object, uint32_t allocationIdx, VK_GPU_MEMORY mem, VK_GPU_SIZE offset);
+typedef VK_RESULT (VKAPI *vkBindObjectMemoryRangeType)(VK_OBJECT object, uint32_t allocationIdx, VK_GPU_SIZE rangeOffset, VK_GPU_SIZE rangeSize, VK_GPU_MEMORY mem, VK_GPU_SIZE memOffset);
+typedef VK_RESULT (VKAPI *vkBindImageMemoryRangeType)(VK_IMAGE image, uint32_t allocationIdx, const VK_IMAGE_MEMORY_BIND_INFO* bindInfo, VK_GPU_MEMORY mem, VK_GPU_SIZE memOffset);
+typedef VK_RESULT (VKAPI *vkCreateFenceType)(VK_DEVICE device, const VK_FENCE_CREATE_INFO* pCreateInfo, VK_FENCE* pFence);
+typedef VK_RESULT (VKAPI *vkResetFencesType)(VK_DEVICE device, uint32_t fenceCount, VK_FENCE* pFences);
+typedef VK_RESULT (VKAPI *vkGetFenceStatusType)(VK_FENCE fence);
+typedef VK_RESULT (VKAPI *vkWaitForFencesType)(VK_DEVICE device, uint32_t fenceCount, const VK_FENCE* pFences, bool32_t waitAll, uint64_t timeout);
+typedef VK_RESULT (VKAPI *vkCreateSemaphoreType)(VK_DEVICE device, const VK_SEMAPHORE_CREATE_INFO* pCreateInfo, VK_SEMAPHORE* pSemaphore);
+typedef VK_RESULT (VKAPI *vkQueueSignalSemaphoreType)(VK_QUEUE queue, VK_SEMAPHORE semaphore);
+typedef VK_RESULT (VKAPI *vkQueueWaitSemaphoreType)(VK_QUEUE queue, VK_SEMAPHORE semaphore);
+typedef VK_RESULT (VKAPI *vkCreateEventType)(VK_DEVICE device, const VK_EVENT_CREATE_INFO* pCreateInfo, VK_EVENT* pEvent);
+typedef VK_RESULT (VKAPI *vkGetEventStatusType)(VK_EVENT event);
+typedef VK_RESULT (VKAPI *vkSetEventType)(VK_EVENT event);
+typedef VK_RESULT (VKAPI *vkResetEventType)(VK_EVENT event);
+typedef VK_RESULT (VKAPI *vkCreateQueryPoolType)(VK_DEVICE device, const VK_QUERY_POOL_CREATE_INFO* pCreateInfo, VK_QUERY_POOL* pQueryPool);
+typedef VK_RESULT (VKAPI *vkGetQueryPoolResultsType)(VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount, size_t* pDataSize, void* pData);
+typedef VK_RESULT (VKAPI *vkGetFormatInfoType)(VK_DEVICE device, VK_FORMAT format, VK_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData);
+typedef VK_RESULT (VKAPI *vkCreateBufferType)(VK_DEVICE device, const VK_BUFFER_CREATE_INFO* pCreateInfo, VK_BUFFER* pBuffer);
+typedef VK_RESULT (VKAPI *vkCreateBufferViewType)(VK_DEVICE device, const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo, VK_BUFFER_VIEW* pView);
+typedef VK_RESULT (VKAPI *vkCreateImageType)(VK_DEVICE device, const VK_IMAGE_CREATE_INFO* pCreateInfo, VK_IMAGE* pImage);
+typedef VK_RESULT (VKAPI *vkGetImageSubresourceInfoType)(VK_IMAGE image, const VK_IMAGE_SUBRESOURCE* pSubresource, VK_SUBRESOURCE_INFO_TYPE infoType, size_t* pDataSize, void* pData);
+typedef VK_RESULT (VKAPI *vkCreateImageViewType)(VK_DEVICE device, const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo, VK_IMAGE_VIEW* pView);
+typedef VK_RESULT (VKAPI *vkCreateColorAttachmentViewType)(VK_DEVICE device, const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo, VK_COLOR_ATTACHMENT_VIEW* pView);
+typedef VK_RESULT (VKAPI *vkCreateDepthStencilViewType)(VK_DEVICE device, const VK_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, VK_DEPTH_STENCIL_VIEW* pView);
+typedef VK_RESULT (VKAPI *vkCreateShaderType)(VK_DEVICE device, const VK_SHADER_CREATE_INFO* pCreateInfo, VK_SHADER* pShader);
+typedef VK_RESULT (VKAPI *vkCreateGraphicsPipelineType)(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline);
+typedef VK_RESULT (VKAPI *vkCreateGraphicsPipelineDerivativeType)(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE basePipeline, VK_PIPELINE* pPipeline);
+typedef VK_RESULT (VKAPI *vkCreateComputePipelineType)(VK_DEVICE device, const VK_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline);
+typedef VK_RESULT (VKAPI *vkStorePipelineType)(VK_PIPELINE pipeline, size_t* pDataSize, void* pData);
+typedef VK_RESULT (VKAPI *vkLoadPipelineType)(VK_DEVICE device, size_t dataSize, const void* pData, VK_PIPELINE* pPipeline);
+typedef VK_RESULT (VKAPI *vkLoadPipelineDerivativeType)(VK_DEVICE device, size_t dataSize, const void* pData, VK_PIPELINE basePipeline, VK_PIPELINE* pPipeline);
+typedef VK_RESULT (VKAPI *vkCreateSamplerType)(VK_DEVICE device, const VK_SAMPLER_CREATE_INFO* pCreateInfo, VK_SAMPLER* pSampler);
+typedef VK_RESULT (VKAPI *vkCreateDescriptorSetLayoutType)(VK_DEVICE device, const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_SET_LAYOUT* pSetLayout);
+typedef VK_RESULT (VKAPI *vkCreateDescriptorSetLayoutChainType)(VK_DEVICE device, uint32_t setLayoutArrayCount, const VK_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray, VK_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain);
+typedef VK_RESULT (VKAPI *vkBeginDescriptorPoolUpdateType)(VK_DEVICE device, VK_DESCRIPTOR_UPDATE_MODE updateMode);
+typedef VK_RESULT (VKAPI *vkEndDescriptorPoolUpdateType)(VK_DEVICE device, VK_CMD_BUFFER cmd);
+typedef VK_RESULT (VKAPI *vkCreateDescriptorPoolType)(VK_DEVICE device, VK_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const VK_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_POOL* pDescriptorPool);
+typedef VK_RESULT (VKAPI *vkResetDescriptorPoolType)(VK_DESCRIPTOR_POOL descriptorPool);
+typedef VK_RESULT (VKAPI *vkAllocDescriptorSetsType)(VK_DESCRIPTOR_POOL descriptorPool, VK_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const VK_DESCRIPTOR_SET_LAYOUT* pSetLayouts, VK_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount);
+typedef void (VKAPI *vkClearDescriptorSetsType)(VK_DESCRIPTOR_POOL descriptorPool, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets);
+typedef void (VKAPI *vkUpdateDescriptorsType)(VK_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray);
+typedef VK_RESULT (VKAPI *vkCreateDynamicViewportStateType)(VK_DEVICE device, const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_VP_STATE_OBJECT* pState);
+typedef VK_RESULT (VKAPI *vkCreateDynamicRasterStateType)(VK_DEVICE device, const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_RS_STATE_OBJECT* pState);
+typedef VK_RESULT (VKAPI *vkCreateDynamicColorBlendStateType)(VK_DEVICE device, const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_CB_STATE_OBJECT* pState);
+typedef VK_RESULT (VKAPI *vkCreateDynamicDepthStencilStateType)(VK_DEVICE device, const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_DS_STATE_OBJECT* pState);
+typedef VK_RESULT (VKAPI *vkCreateCommandBufferType)(VK_DEVICE device, const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo, VK_CMD_BUFFER* pCmdBuffer);
+typedef VK_RESULT (VKAPI *vkBeginCommandBufferType)(VK_CMD_BUFFER cmdBuffer, const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo);
+typedef VK_RESULT (VKAPI *vkEndCommandBufferType)(VK_CMD_BUFFER cmdBuffer);
+typedef VK_RESULT (VKAPI *vkResetCommandBufferType)(VK_CMD_BUFFER cmdBuffer);
+typedef void (VKAPI *vkCmdBindPipelineType)(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_PIPELINE pipeline);
+typedef void (VKAPI *vkCmdBindDynamicStateObjectType)(VK_CMD_BUFFER cmdBuffer, VK_STATE_BIND_POINT stateBindPoint, VK_DYNAMIC_STATE_OBJECT state);
+typedef void (VKAPI *vkCmdBindDescriptorSetsType)(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData);
+typedef void (VKAPI *vkCmdBindIndexBufferType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, VK_INDEX_TYPE indexType);
+typedef void (VKAPI *vkCmdBindVertexBufferType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t binding);
+typedef void (VKAPI *vkCmdDrawType)(VK_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount);
+typedef void (VKAPI *vkCmdDrawIndexedType)(VK_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount);
+typedef void (VKAPI *vkCmdDrawIndirectType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride);
+typedef void (VKAPI *vkCmdDrawIndexedIndirectType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride);
+typedef void (VKAPI *vkCmdDispatchType)(VK_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z);
+typedef void (VKAPI *vkCmdDispatchIndirectType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset);
+typedef void (VKAPI *vkCmdCopyBufferType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_BUFFER destBuffer, uint32_t regionCount, const VK_BUFFER_COPY* pRegions);
+typedef void (VKAPI *vkCmdCopyImageType)(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_IMAGE_COPY* pRegions);
+typedef void (VKAPI *vkCmdBlitImageType)(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_IMAGE_BLIT* pRegions);
+typedef void (VKAPI *vkCmdCopyBufferToImageType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions);
+typedef void (VKAPI *vkCmdCopyImageToBufferType)(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_BUFFER destBuffer, uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions);
+typedef void (VKAPI *vkCmdCloneImageDataType)(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout);
+typedef void (VKAPI *vkCmdUpdateBufferType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE dataSize, const uint32_t* pData);
+typedef void (VKAPI *vkCmdFillBufferType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE fillSize, uint32_t data);
+typedef void (VKAPI *vkCmdClearColorImageType)(VK_CMD_BUFFER cmdBuffer, VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout, VK_CLEAR_COLOR color, uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges);
+typedef void (VKAPI *vkCmdClearDepthStencilType)(VK_CMD_BUFFER cmdBuffer, VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout, float depth, uint32_t stencil, uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges);
+typedef void (VKAPI *vkCmdResolveImageType)(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t rectCount, const VK_IMAGE_RESOLVE* pRects);
+typedef void (VKAPI *vkCmdSetEventType)(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent);
+typedef void (VKAPI *vkCmdResetEventType)(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent);
+typedef void (VKAPI *vkCmdWaitEventsType)(VK_CMD_BUFFER cmdBuffer, const VK_EVENT_WAIT_INFO* pWaitInfo);
+typedef void (VKAPI *vkCmdPipelineBarrierType)(VK_CMD_BUFFER cmdBuffer, const VK_PIPELINE_BARRIER* pBarrier);
+typedef void (VKAPI *vkCmdBeginQueryType)(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot, VK_FLAGS flags);
+typedef void (VKAPI *vkCmdEndQueryType)(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot);
+typedef void (VKAPI *vkCmdResetQueryPoolType)(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount);
+typedef void (VKAPI *vkCmdWriteTimestampType)(VK_CMD_BUFFER cmdBuffer, VK_TIMESTAMP_TYPE timestampType, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset);
+typedef void (VKAPI *vkCmdInitAtomicCountersType)(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData);
+typedef void (VKAPI *vkCmdLoadAtomicCountersType)(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER srcBuffer, VK_GPU_SIZE srcOffset);
+typedef void (VKAPI *vkCmdSaveAtomicCountersType)(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset);
+typedef VK_RESULT (VKAPI *vkCreateFramebufferType)(VK_DEVICE device, const VK_FRAMEBUFFER_CREATE_INFO* pCreateInfo, VK_FRAMEBUFFER* pFramebuffer);
+typedef VK_RESULT (VKAPI *vkCreateRenderPassType)(VK_DEVICE device, const VK_RENDER_PASS_CREATE_INFO* pCreateInfo, VK_RENDER_PASS* pRenderPass);
+typedef void (VKAPI *vkCmdBeginRenderPassType)(VK_CMD_BUFFER cmdBuffer, const VK_RENDER_PASS_BEGIN* pRenderPassBegin);
+typedef void (VKAPI *vkCmdEndRenderPassType)(VK_CMD_BUFFER cmdBuffer, VK_RENDER_PASS renderPass);
+
+#ifdef VK_PROTOTYPES
// GPU initialization
-XGL_RESULT XGLAPI xglCreateInstance(
- const XGL_INSTANCE_CREATE_INFO* pCreateInfo,
- XGL_INSTANCE* pInstance);
+VK_RESULT VKAPI vkCreateInstance(
+ const VK_INSTANCE_CREATE_INFO* pCreateInfo,
+ VK_INSTANCE* pInstance);
-XGL_RESULT XGLAPI xglDestroyInstance(
- XGL_INSTANCE instance);
+VK_RESULT VKAPI vkDestroyInstance(
+ VK_INSTANCE instance);
-XGL_RESULT XGLAPI xglEnumerateGpus(
- XGL_INSTANCE instance,
+VK_RESULT VKAPI vkEnumerateGpus(
+ VK_INSTANCE instance,
uint32_t maxGpus,
uint32_t* pGpuCount,
- XGL_PHYSICAL_GPU* pGpus);
+ VK_PHYSICAL_GPU* pGpus);
-XGL_RESULT XGLAPI xglGetGpuInfo(
- XGL_PHYSICAL_GPU gpu,
- XGL_PHYSICAL_GPU_INFO_TYPE infoType,
+VK_RESULT VKAPI vkGetGpuInfo(
+ VK_PHYSICAL_GPU gpu,
+ VK_PHYSICAL_GPU_INFO_TYPE infoType,
size_t* pDataSize,
void* pData);
-void * XGLAPI xglGetProcAddr(
- XGL_PHYSICAL_GPU gpu,
+void * VKAPI vkGetProcAddr(
+ VK_PHYSICAL_GPU gpu,
const char* pName);
// Device functions
-XGL_RESULT XGLAPI xglCreateDevice(
- XGL_PHYSICAL_GPU gpu,
- const XGL_DEVICE_CREATE_INFO* pCreateInfo,
- XGL_DEVICE* pDevice);
+VK_RESULT VKAPI vkCreateDevice(
+ VK_PHYSICAL_GPU gpu,
+ const VK_DEVICE_CREATE_INFO* pCreateInfo,
+ VK_DEVICE* pDevice);
-XGL_RESULT XGLAPI xglDestroyDevice(
- XGL_DEVICE device);
+VK_RESULT VKAPI vkDestroyDevice(
+ VK_DEVICE device);
// Extension discovery functions
-XGL_RESULT XGLAPI xglGetExtensionSupport(
- XGL_PHYSICAL_GPU gpu,
+VK_RESULT VKAPI vkGetExtensionSupport(
+ VK_PHYSICAL_GPU gpu,
const char* pExtName);
// Layer discovery functions
-XGL_RESULT XGLAPI xglEnumerateLayers(
- XGL_PHYSICAL_GPU gpu,
+VK_RESULT VKAPI vkEnumerateLayers(
+ VK_PHYSICAL_GPU gpu,
size_t maxLayerCount,
size_t maxStringSize,
    size_t* pOutLayerCount,
    char* const* pOutLayers,
    void* pReserved);
// Queue functions
-XGL_RESULT XGLAPI xglGetDeviceQueue(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkGetDeviceQueue(
+ VK_DEVICE device,
uint32_t queueNodeIndex,
uint32_t queueIndex,
- XGL_QUEUE* pQueue);
+ VK_QUEUE* pQueue);
-XGL_RESULT XGLAPI xglQueueSubmit(
- XGL_QUEUE queue,
+VK_RESULT VKAPI vkQueueSubmit(
+ VK_QUEUE queue,
uint32_t cmdBufferCount,
- const XGL_CMD_BUFFER* pCmdBuffers,
- XGL_FENCE fence);
+ const VK_CMD_BUFFER* pCmdBuffers,
+ VK_FENCE fence);
-XGL_RESULT XGLAPI xglQueueAddMemReference(
- XGL_QUEUE queue,
- XGL_GPU_MEMORY mem);
+VK_RESULT VKAPI vkQueueAddMemReference(
+ VK_QUEUE queue,
+ VK_GPU_MEMORY mem);
-XGL_RESULT XGLAPI xglQueueRemoveMemReference(
- XGL_QUEUE queue,
- XGL_GPU_MEMORY mem);
+VK_RESULT VKAPI vkQueueRemoveMemReference(
+ VK_QUEUE queue,
+ VK_GPU_MEMORY mem);
-XGL_RESULT XGLAPI xglQueueWaitIdle(
- XGL_QUEUE queue);
+VK_RESULT VKAPI vkQueueWaitIdle(
+ VK_QUEUE queue);
-XGL_RESULT XGLAPI xglDeviceWaitIdle(
- XGL_DEVICE device);
+VK_RESULT VKAPI vkDeviceWaitIdle(
+ VK_DEVICE device);
// Memory functions
-XGL_RESULT XGLAPI xglAllocMemory(
- XGL_DEVICE device,
- const XGL_MEMORY_ALLOC_INFO* pAllocInfo,
- XGL_GPU_MEMORY* pMem);
+VK_RESULT VKAPI vkAllocMemory(
+ VK_DEVICE device,
+ const VK_MEMORY_ALLOC_INFO* pAllocInfo,
+ VK_GPU_MEMORY* pMem);
-XGL_RESULT XGLAPI xglFreeMemory(
- XGL_GPU_MEMORY mem);
+VK_RESULT VKAPI vkFreeMemory(
+ VK_GPU_MEMORY mem);
-XGL_RESULT XGLAPI xglSetMemoryPriority(
- XGL_GPU_MEMORY mem,
- XGL_MEMORY_PRIORITY priority);
+VK_RESULT VKAPI vkSetMemoryPriority(
+ VK_GPU_MEMORY mem,
+ VK_MEMORY_PRIORITY priority);
-XGL_RESULT XGLAPI xglMapMemory(
- XGL_GPU_MEMORY mem,
- XGL_FLAGS flags, // Reserved
+VK_RESULT VKAPI vkMapMemory(
+ VK_GPU_MEMORY mem,
+ VK_FLAGS flags, // Reserved
void** ppData);
-XGL_RESULT XGLAPI xglUnmapMemory(
- XGL_GPU_MEMORY mem);
+VK_RESULT VKAPI vkUnmapMemory(
+ VK_GPU_MEMORY mem);
-XGL_RESULT XGLAPI xglPinSystemMemory(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkPinSystemMemory(
+ VK_DEVICE device,
const void* pSysMem,
size_t memSize,
- XGL_GPU_MEMORY* pMem);
+ VK_GPU_MEMORY* pMem);
// Multi-device functions
-XGL_RESULT XGLAPI xglGetMultiGpuCompatibility(
- XGL_PHYSICAL_GPU gpu0,
- XGL_PHYSICAL_GPU gpu1,
- XGL_GPU_COMPATIBILITY_INFO* pInfo);
+VK_RESULT VKAPI vkGetMultiGpuCompatibility(
+ VK_PHYSICAL_GPU gpu0,
+ VK_PHYSICAL_GPU gpu1,
+ VK_GPU_COMPATIBILITY_INFO* pInfo);
-XGL_RESULT XGLAPI xglOpenSharedMemory(
- XGL_DEVICE device,
- const XGL_MEMORY_OPEN_INFO* pOpenInfo,
- XGL_GPU_MEMORY* pMem);
+VK_RESULT VKAPI vkOpenSharedMemory(
+ VK_DEVICE device,
+ const VK_MEMORY_OPEN_INFO* pOpenInfo,
+ VK_GPU_MEMORY* pMem);
-XGL_RESULT XGLAPI xglOpenSharedSemaphore(
- XGL_DEVICE device,
- const XGL_SEMAPHORE_OPEN_INFO* pOpenInfo,
- XGL_SEMAPHORE* pSemaphore);
+VK_RESULT VKAPI vkOpenSharedSemaphore(
+ VK_DEVICE device,
+ const VK_SEMAPHORE_OPEN_INFO* pOpenInfo,
+ VK_SEMAPHORE* pSemaphore);
-XGL_RESULT XGLAPI xglOpenPeerMemory(
- XGL_DEVICE device,
- const XGL_PEER_MEMORY_OPEN_INFO* pOpenInfo,
- XGL_GPU_MEMORY* pMem);
+VK_RESULT VKAPI vkOpenPeerMemory(
+ VK_DEVICE device,
+ const VK_PEER_MEMORY_OPEN_INFO* pOpenInfo,
+ VK_GPU_MEMORY* pMem);
-XGL_RESULT XGLAPI xglOpenPeerImage(
- XGL_DEVICE device,
- const XGL_PEER_IMAGE_OPEN_INFO* pOpenInfo,
- XGL_IMAGE* pImage,
- XGL_GPU_MEMORY* pMem);
+VK_RESULT VKAPI vkOpenPeerImage(
+ VK_DEVICE device,
+ const VK_PEER_IMAGE_OPEN_INFO* pOpenInfo,
+ VK_IMAGE* pImage,
+ VK_GPU_MEMORY* pMem);
// Generic API object functions
-XGL_RESULT XGLAPI xglDestroyObject(
- XGL_OBJECT object);
+VK_RESULT VKAPI vkDestroyObject(
+ VK_OBJECT object);
-XGL_RESULT XGLAPI xglGetObjectInfo(
- XGL_BASE_OBJECT object,
- XGL_OBJECT_INFO_TYPE infoType,
+VK_RESULT VKAPI vkGetObjectInfo(
+ VK_BASE_OBJECT object,
+ VK_OBJECT_INFO_TYPE infoType,
size_t* pDataSize,
void* pData);
-XGL_RESULT XGLAPI xglBindObjectMemory(
- XGL_OBJECT object,
+VK_RESULT VKAPI vkBindObjectMemory(
+ VK_OBJECT object,
uint32_t allocationIdx,
- XGL_GPU_MEMORY mem,
- XGL_GPU_SIZE memOffset);
+ VK_GPU_MEMORY mem,
+ VK_GPU_SIZE memOffset);
-XGL_RESULT XGLAPI xglBindObjectMemoryRange(
- XGL_OBJECT object,
+VK_RESULT VKAPI vkBindObjectMemoryRange(
+ VK_OBJECT object,
uint32_t allocationIdx,
- XGL_GPU_SIZE rangeOffset,
- XGL_GPU_SIZE rangeSize,
- XGL_GPU_MEMORY mem,
- XGL_GPU_SIZE memOffset);
+ VK_GPU_SIZE rangeOffset,
+ VK_GPU_SIZE rangeSize,
+ VK_GPU_MEMORY mem,
+ VK_GPU_SIZE memOffset);
-XGL_RESULT XGLAPI xglBindImageMemoryRange(
- XGL_IMAGE image,
+VK_RESULT VKAPI vkBindImageMemoryRange(
+ VK_IMAGE image,
uint32_t allocationIdx,
- const XGL_IMAGE_MEMORY_BIND_INFO* bindInfo,
- XGL_GPU_MEMORY mem,
- XGL_GPU_SIZE memOffset);
+ const VK_IMAGE_MEMORY_BIND_INFO* bindInfo,
+ VK_GPU_MEMORY mem,
+ VK_GPU_SIZE memOffset);
// Fence functions
-XGL_RESULT XGLAPI xglCreateFence(
- XGL_DEVICE device,
- const XGL_FENCE_CREATE_INFO* pCreateInfo,
- XGL_FENCE* pFence);
+VK_RESULT VKAPI vkCreateFence(
+ VK_DEVICE device,
+ const VK_FENCE_CREATE_INFO* pCreateInfo,
+ VK_FENCE* pFence);
-XGL_RESULT XGLAPI xglResetFences(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkResetFences(
+ VK_DEVICE device,
uint32_t fenceCount,
- XGL_FENCE* pFences);
+ VK_FENCE* pFences);
-XGL_RESULT XGLAPI xglGetFenceStatus(
- XGL_FENCE fence);
+VK_RESULT VKAPI vkGetFenceStatus(
+ VK_FENCE fence);
-XGL_RESULT XGLAPI xglWaitForFences(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkWaitForFences(
+ VK_DEVICE device,
uint32_t fenceCount,
- const XGL_FENCE* pFences,
+ const VK_FENCE* pFences,
bool32_t waitAll,
uint64_t timeout); // timeout in nanoseconds
// Queue semaphore functions
-XGL_RESULT XGLAPI xglCreateSemaphore(
- XGL_DEVICE device,
- const XGL_SEMAPHORE_CREATE_INFO* pCreateInfo,
- XGL_SEMAPHORE* pSemaphore);
+VK_RESULT VKAPI vkCreateSemaphore(
+ VK_DEVICE device,
+ const VK_SEMAPHORE_CREATE_INFO* pCreateInfo,
+ VK_SEMAPHORE* pSemaphore);
-XGL_RESULT XGLAPI xglQueueSignalSemaphore(
- XGL_QUEUE queue,
- XGL_SEMAPHORE semaphore);
+VK_RESULT VKAPI vkQueueSignalSemaphore(
+ VK_QUEUE queue,
+ VK_SEMAPHORE semaphore);
-XGL_RESULT XGLAPI xglQueueWaitSemaphore(
- XGL_QUEUE queue,
- XGL_SEMAPHORE semaphore);
+VK_RESULT VKAPI vkQueueWaitSemaphore(
+ VK_QUEUE queue,
+ VK_SEMAPHORE semaphore);
// Event functions
-XGL_RESULT XGLAPI xglCreateEvent(
- XGL_DEVICE device,
- const XGL_EVENT_CREATE_INFO* pCreateInfo,
- XGL_EVENT* pEvent);
+VK_RESULT VKAPI vkCreateEvent(
+ VK_DEVICE device,
+ const VK_EVENT_CREATE_INFO* pCreateInfo,
+ VK_EVENT* pEvent);
-XGL_RESULT XGLAPI xglGetEventStatus(
- XGL_EVENT event);
+VK_RESULT VKAPI vkGetEventStatus(
+ VK_EVENT event);
-XGL_RESULT XGLAPI xglSetEvent(
- XGL_EVENT event);
+VK_RESULT VKAPI vkSetEvent(
+ VK_EVENT event);
-XGL_RESULT XGLAPI xglResetEvent(
- XGL_EVENT event);
+VK_RESULT VKAPI vkResetEvent(
+ VK_EVENT event);
// Query functions
-XGL_RESULT XGLAPI xglCreateQueryPool(
- XGL_DEVICE device,
- const XGL_QUERY_POOL_CREATE_INFO* pCreateInfo,
- XGL_QUERY_POOL* pQueryPool);
+VK_RESULT VKAPI vkCreateQueryPool(
+ VK_DEVICE device,
+ const VK_QUERY_POOL_CREATE_INFO* pCreateInfo,
+ VK_QUERY_POOL* pQueryPool);
-XGL_RESULT XGLAPI xglGetQueryPoolResults(
- XGL_QUERY_POOL queryPool,
+VK_RESULT VKAPI vkGetQueryPoolResults(
+ VK_QUERY_POOL queryPool,
uint32_t startQuery,
uint32_t queryCount,
size_t* pDataSize,
// Format capabilities
-XGL_RESULT XGLAPI xglGetFormatInfo(
- XGL_DEVICE device,
- XGL_FORMAT format,
- XGL_FORMAT_INFO_TYPE infoType,
+VK_RESULT VKAPI vkGetFormatInfo(
+ VK_DEVICE device,
+ VK_FORMAT format,
+ VK_FORMAT_INFO_TYPE infoType,
size_t* pDataSize,
void* pData);
// Buffer functions
-XGL_RESULT XGLAPI xglCreateBuffer(
- XGL_DEVICE device,
- const XGL_BUFFER_CREATE_INFO* pCreateInfo,
- XGL_BUFFER* pBuffer);
+VK_RESULT VKAPI vkCreateBuffer(
+ VK_DEVICE device,
+ const VK_BUFFER_CREATE_INFO* pCreateInfo,
+ VK_BUFFER* pBuffer);
// Buffer view functions
-XGL_RESULT XGLAPI xglCreateBufferView(
- XGL_DEVICE device,
- const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo,
- XGL_BUFFER_VIEW* pView);
+VK_RESULT VKAPI vkCreateBufferView(
+ VK_DEVICE device,
+ const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo,
+ VK_BUFFER_VIEW* pView);
// Image functions
-XGL_RESULT XGLAPI xglCreateImage(
- XGL_DEVICE device,
- const XGL_IMAGE_CREATE_INFO* pCreateInfo,
- XGL_IMAGE* pImage);
+VK_RESULT VKAPI vkCreateImage(
+ VK_DEVICE device,
+ const VK_IMAGE_CREATE_INFO* pCreateInfo,
+ VK_IMAGE* pImage);
-XGL_RESULT XGLAPI xglGetImageSubresourceInfo(
- XGL_IMAGE image,
- const XGL_IMAGE_SUBRESOURCE* pSubresource,
- XGL_SUBRESOURCE_INFO_TYPE infoType,
+VK_RESULT VKAPI vkGetImageSubresourceInfo(
+ VK_IMAGE image,
+ const VK_IMAGE_SUBRESOURCE* pSubresource,
+ VK_SUBRESOURCE_INFO_TYPE infoType,
size_t* pDataSize,
void* pData);
// Image view functions
-XGL_RESULT XGLAPI xglCreateImageView(
- XGL_DEVICE device,
- const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
- XGL_IMAGE_VIEW* pView);
+VK_RESULT VKAPI vkCreateImageView(
+ VK_DEVICE device,
+ const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
+ VK_IMAGE_VIEW* pView);
-XGL_RESULT XGLAPI xglCreateColorAttachmentView(
- XGL_DEVICE device,
- const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo,
- XGL_COLOR_ATTACHMENT_VIEW* pView);
+VK_RESULT VKAPI vkCreateColorAttachmentView(
+ VK_DEVICE device,
+ const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo,
+ VK_COLOR_ATTACHMENT_VIEW* pView);
-XGL_RESULT XGLAPI xglCreateDepthStencilView(
- XGL_DEVICE device,
- const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo,
- XGL_DEPTH_STENCIL_VIEW* pView);
+VK_RESULT VKAPI vkCreateDepthStencilView(
+ VK_DEVICE device,
+ const VK_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo,
+ VK_DEPTH_STENCIL_VIEW* pView);
// Shader functions
-XGL_RESULT XGLAPI xglCreateShader(
- XGL_DEVICE device,
- const XGL_SHADER_CREATE_INFO* pCreateInfo,
- XGL_SHADER* pShader);
+VK_RESULT VKAPI vkCreateShader(
+ VK_DEVICE device,
+ const VK_SHADER_CREATE_INFO* pCreateInfo,
+ VK_SHADER* pShader);
// Pipeline functions
-XGL_RESULT XGLAPI xglCreateGraphicsPipeline(
- XGL_DEVICE device,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE* pPipeline);
+VK_RESULT VKAPI vkCreateGraphicsPipeline(
+ VK_DEVICE device,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE* pPipeline);
-XGL_RESULT XGLAPI xglCreateGraphicsPipelineDerivative(
- XGL_DEVICE device,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE basePipeline,
- XGL_PIPELINE* pPipeline);
+VK_RESULT VKAPI vkCreateGraphicsPipelineDerivative(
+ VK_DEVICE device,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE basePipeline,
+ VK_PIPELINE* pPipeline);
-XGL_RESULT XGLAPI xglCreateComputePipeline(
- XGL_DEVICE device,
- const XGL_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE* pPipeline);
+VK_RESULT VKAPI vkCreateComputePipeline(
+ VK_DEVICE device,
+ const VK_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE* pPipeline);
-XGL_RESULT XGLAPI xglStorePipeline(
- XGL_PIPELINE pipeline,
+VK_RESULT VKAPI vkStorePipeline(
+ VK_PIPELINE pipeline,
size_t* pDataSize,
void* pData);
-XGL_RESULT XGLAPI xglLoadPipeline(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkLoadPipeline(
+ VK_DEVICE device,
size_t dataSize,
const void* pData,
- XGL_PIPELINE* pPipeline);
+ VK_PIPELINE* pPipeline);
-XGL_RESULT XGLAPI xglLoadPipelineDerivative(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkLoadPipelineDerivative(
+ VK_DEVICE device,
size_t dataSize,
const void* pData,
- XGL_PIPELINE basePipeline,
- XGL_PIPELINE* pPipeline);
+ VK_PIPELINE basePipeline,
+ VK_PIPELINE* pPipeline);
// Sampler functions
-XGL_RESULT XGLAPI xglCreateSampler(
- XGL_DEVICE device,
- const XGL_SAMPLER_CREATE_INFO* pCreateInfo,
- XGL_SAMPLER* pSampler);
+VK_RESULT VKAPI vkCreateSampler(
+ VK_DEVICE device,
+ const VK_SAMPLER_CREATE_INFO* pCreateInfo,
+ VK_SAMPLER* pSampler);
// Descriptor set functions
-XGL_RESULT XGLAPI xglCreateDescriptorSetLayout(
- XGL_DEVICE device,
- const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo,
- XGL_DESCRIPTOR_SET_LAYOUT* pSetLayout);
+VK_RESULT VKAPI vkCreateDescriptorSetLayout(
+ VK_DEVICE device,
+ const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo,
+ VK_DESCRIPTOR_SET_LAYOUT* pSetLayout);
-XGL_RESULT XGLAPI xglCreateDescriptorSetLayoutChain(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkCreateDescriptorSetLayoutChain(
+ VK_DEVICE device,
uint32_t setLayoutArrayCount,
- const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray,
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain);
+ const VK_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray,
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain);
-XGL_RESULT XGLAPI xglBeginDescriptorPoolUpdate(
- XGL_DEVICE device,
- XGL_DESCRIPTOR_UPDATE_MODE updateMode);
+VK_RESULT VKAPI vkBeginDescriptorPoolUpdate(
+ VK_DEVICE device,
+ VK_DESCRIPTOR_UPDATE_MODE updateMode);
-XGL_RESULT XGLAPI xglEndDescriptorPoolUpdate(
- XGL_DEVICE device,
- XGL_CMD_BUFFER cmd);
+VK_RESULT VKAPI vkEndDescriptorPoolUpdate(
+ VK_DEVICE device,
+ VK_CMD_BUFFER cmd);
-XGL_RESULT XGLAPI xglCreateDescriptorPool(
- XGL_DEVICE device,
- XGL_DESCRIPTOR_POOL_USAGE poolUsage,
+VK_RESULT VKAPI vkCreateDescriptorPool(
+ VK_DEVICE device,
+ VK_DESCRIPTOR_POOL_USAGE poolUsage,
uint32_t maxSets,
- const XGL_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo,
- XGL_DESCRIPTOR_POOL* pDescriptorPool);
+ const VK_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo,
+ VK_DESCRIPTOR_POOL* pDescriptorPool);
-XGL_RESULT XGLAPI xglResetDescriptorPool(
- XGL_DESCRIPTOR_POOL descriptorPool);
+VK_RESULT VKAPI vkResetDescriptorPool(
+ VK_DESCRIPTOR_POOL descriptorPool);
-XGL_RESULT XGLAPI xglAllocDescriptorSets(
- XGL_DESCRIPTOR_POOL descriptorPool,
- XGL_DESCRIPTOR_SET_USAGE setUsage,
+VK_RESULT VKAPI vkAllocDescriptorSets(
+ VK_DESCRIPTOR_POOL descriptorPool,
+ VK_DESCRIPTOR_SET_USAGE setUsage,
uint32_t count,
- const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayouts,
- XGL_DESCRIPTOR_SET* pDescriptorSets,
+ const VK_DESCRIPTOR_SET_LAYOUT* pSetLayouts,
+ VK_DESCRIPTOR_SET* pDescriptorSets,
uint32_t* pCount);
-void XGLAPI xglClearDescriptorSets(
- XGL_DESCRIPTOR_POOL descriptorPool,
+void VKAPI vkClearDescriptorSets(
+ VK_DESCRIPTOR_POOL descriptorPool,
uint32_t count,
- const XGL_DESCRIPTOR_SET* pDescriptorSets);
+ const VK_DESCRIPTOR_SET* pDescriptorSets);
-void XGLAPI xglUpdateDescriptors(
- XGL_DESCRIPTOR_SET descriptorSet,
+void VKAPI vkUpdateDescriptors(
+ VK_DESCRIPTOR_SET descriptorSet,
uint32_t updateCount,
const void** ppUpdateArray);
// State object functions
-XGL_RESULT XGLAPI xglCreateDynamicViewportState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_VP_STATE_OBJECT* pState);
+VK_RESULT VKAPI vkCreateDynamicViewportState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_VP_STATE_OBJECT* pState);
-XGL_RESULT XGLAPI xglCreateDynamicRasterState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_RS_STATE_OBJECT* pState);
+VK_RESULT VKAPI vkCreateDynamicRasterState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_RS_STATE_OBJECT* pState);
-XGL_RESULT XGLAPI xglCreateDynamicColorBlendState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_CB_STATE_OBJECT* pState);
+VK_RESULT VKAPI vkCreateDynamicColorBlendState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_CB_STATE_OBJECT* pState);
-XGL_RESULT XGLAPI xglCreateDynamicDepthStencilState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_DS_STATE_OBJECT* pState);
+VK_RESULT VKAPI vkCreateDynamicDepthStencilState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_DS_STATE_OBJECT* pState);
// Command buffer functions
-XGL_RESULT XGLAPI xglCreateCommandBuffer(
- XGL_DEVICE device,
- const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo,
- XGL_CMD_BUFFER* pCmdBuffer);
+VK_RESULT VKAPI vkCreateCommandBuffer(
+ VK_DEVICE device,
+ const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo,
+ VK_CMD_BUFFER* pCmdBuffer);
-XGL_RESULT XGLAPI xglBeginCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo);
+VK_RESULT VKAPI vkBeginCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo);
-XGL_RESULT XGLAPI xglEndCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer);
+VK_RESULT VKAPI vkEndCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer);
-XGL_RESULT XGLAPI xglResetCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer);
+VK_RESULT VKAPI vkResetCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer);
// Command buffer building functions
-void XGLAPI xglCmdBindPipeline(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
- XGL_PIPELINE pipeline);
+void VKAPI vkCmdBindPipeline(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
+ VK_PIPELINE pipeline);
-void XGLAPI xglCmdBindDynamicStateObject(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_STATE_BIND_POINT stateBindPoint,
- XGL_DYNAMIC_STATE_OBJECT dynamicState);
+void VKAPI vkCmdBindDynamicStateObject(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_STATE_BIND_POINT stateBindPoint,
+ VK_DYNAMIC_STATE_OBJECT dynamicState);
-void XGLAPI xglCmdBindDescriptorSets(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain,
+void VKAPI vkCmdBindDescriptorSets(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain,
uint32_t layoutChainSlot,
uint32_t count,
- const XGL_DESCRIPTOR_SET* pDescriptorSets,
+ const VK_DESCRIPTOR_SET* pDescriptorSets,
const uint32_t* pUserData);
-void XGLAPI xglCmdBindIndexBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
- XGL_INDEX_TYPE indexType);
+void VKAPI vkCmdBindIndexBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
+ VK_INDEX_TYPE indexType);
-void XGLAPI xglCmdBindVertexBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+void VKAPI vkCmdBindVertexBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t binding);
-void XGLAPI xglCmdDraw(
- XGL_CMD_BUFFER cmdBuffer,
+void VKAPI vkCmdDraw(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t firstVertex,
uint32_t vertexCount,
uint32_t firstInstance,
uint32_t instanceCount);
-void XGLAPI xglCmdDrawIndexed(
- XGL_CMD_BUFFER cmdBuffer,
+void VKAPI vkCmdDrawIndexed(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t firstIndex,
uint32_t indexCount,
int32_t vertexOffset,
uint32_t firstInstance,
uint32_t instanceCount);
-void XGLAPI xglCmdDrawIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+void VKAPI vkCmdDrawIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t count,
uint32_t stride);
-void XGLAPI xglCmdDrawIndexedIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+void VKAPI vkCmdDrawIndexedIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t count,
uint32_t stride);
-void XGLAPI xglCmdDispatch(
- XGL_CMD_BUFFER cmdBuffer,
+void VKAPI vkCmdDispatch(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t x,
uint32_t y,
uint32_t z);
-void XGLAPI xglCmdDispatchIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset);
+void VKAPI vkCmdDispatchIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset);
-void XGLAPI xglCmdCopyBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER srcBuffer,
- XGL_BUFFER destBuffer,
+void VKAPI vkCmdCopyBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER srcBuffer,
+ VK_BUFFER destBuffer,
uint32_t regionCount,
- const XGL_BUFFER_COPY* pRegions);
-
-void XGLAPI xglCmdCopyImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ const VK_BUFFER_COPY* pRegions);
+
+void VKAPI vkCmdCopyImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_IMAGE_COPY* pRegions);
-
-void XGLAPI xglCmdBlitImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ const VK_IMAGE_COPY* pRegions);
+
+void VKAPI vkCmdBlitImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_IMAGE_BLIT* pRegions);
+ const VK_IMAGE_BLIT* pRegions);
-void XGLAPI xglCmdCopyBufferToImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER srcBuffer,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+void VKAPI vkCmdCopyBufferToImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER srcBuffer,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_BUFFER_IMAGE_COPY* pRegions);
+ const VK_BUFFER_IMAGE_COPY* pRegions);
-void XGLAPI xglCmdCopyImageToBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_BUFFER destBuffer,
+void VKAPI vkCmdCopyImageToBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_BUFFER destBuffer,
uint32_t regionCount,
- const XGL_BUFFER_IMAGE_COPY* pRegions);
-
-void XGLAPI xglCmdCloneImageData(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout);
-
-void XGLAPI xglCmdUpdateBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset,
- XGL_GPU_SIZE dataSize,
+ const VK_BUFFER_IMAGE_COPY* pRegions);
+
+void VKAPI vkCmdCloneImageData(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout);
+
+void VKAPI vkCmdUpdateBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset,
+ VK_GPU_SIZE dataSize,
const uint32_t* pData);
-void XGLAPI xglCmdFillBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset,
- XGL_GPU_SIZE fillSize,
+void VKAPI vkCmdFillBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset,
+ VK_GPU_SIZE fillSize,
uint32_t data);
-void XGLAPI xglCmdClearColorImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image,
- XGL_IMAGE_LAYOUT imageLayout,
- XGL_CLEAR_COLOR color,
+void VKAPI vkCmdClearColorImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image,
+ VK_IMAGE_LAYOUT imageLayout,
+ VK_CLEAR_COLOR color,
uint32_t rangeCount,
- const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges);
+ const VK_IMAGE_SUBRESOURCE_RANGE* pRanges);
-void XGLAPI xglCmdClearDepthStencil(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image,
- XGL_IMAGE_LAYOUT imageLayout,
+void VKAPI vkCmdClearDepthStencil(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image,
+ VK_IMAGE_LAYOUT imageLayout,
float depth,
uint32_t stencil,
uint32_t rangeCount,
- const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges);
-
-void XGLAPI xglCmdResolveImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ const VK_IMAGE_SUBRESOURCE_RANGE* pRanges);
+
+void VKAPI vkCmdResolveImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t rectCount,
- const XGL_IMAGE_RESOLVE* pRects);
+ const VK_IMAGE_RESOLVE* pRects);
-void XGLAPI xglCmdSetEvent(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_EVENT event,
- XGL_PIPE_EVENT pipeEvent);
+void VKAPI vkCmdSetEvent(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_EVENT event,
+ VK_PIPE_EVENT pipeEvent);
-void XGLAPI xglCmdResetEvent(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_EVENT event,
- XGL_PIPE_EVENT pipeEvent);
+void VKAPI vkCmdResetEvent(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_EVENT event,
+ VK_PIPE_EVENT pipeEvent);
-void XGLAPI xglCmdWaitEvents(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_EVENT_WAIT_INFO* pWaitInfo);
+void VKAPI vkCmdWaitEvents(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_EVENT_WAIT_INFO* pWaitInfo);
-void XGLAPI xglCmdPipelineBarrier(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_PIPELINE_BARRIER* pBarrier);
+void VKAPI vkCmdPipelineBarrier(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_PIPELINE_BARRIER* pBarrier);
-void XGLAPI xglCmdBeginQuery(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+void VKAPI vkCmdBeginQuery(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t slot,
- XGL_FLAGS flags);
+ VK_FLAGS flags);
-void XGLAPI xglCmdEndQuery(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+void VKAPI vkCmdEndQuery(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t slot);
-void XGLAPI xglCmdResetQueryPool(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+void VKAPI vkCmdResetQueryPool(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t startQuery,
uint32_t queryCount);
-void XGLAPI xglCmdWriteTimestamp(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_TIMESTAMP_TYPE timestampType,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset);
+void VKAPI vkCmdWriteTimestamp(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_TIMESTAMP_TYPE timestampType,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset);
-void XGLAPI xglCmdInitAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+void VKAPI vkCmdInitAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
const uint32_t* pData);
-void XGLAPI xglCmdLoadAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+void VKAPI vkCmdLoadAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
- XGL_BUFFER srcBuffer,
- XGL_GPU_SIZE srcOffset);
+ VK_BUFFER srcBuffer,
+ VK_GPU_SIZE srcOffset);
-void XGLAPI xglCmdSaveAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+void VKAPI vkCmdSaveAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset);
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset);
-XGL_RESULT XGLAPI xglCreateFramebuffer(
- XGL_DEVICE device,
- const XGL_FRAMEBUFFER_CREATE_INFO* pCreateInfo,
- XGL_FRAMEBUFFER* pFramebuffer);
+VK_RESULT VKAPI vkCreateFramebuffer(
+ VK_DEVICE device,
+ const VK_FRAMEBUFFER_CREATE_INFO* pCreateInfo,
+ VK_FRAMEBUFFER* pFramebuffer);
-XGL_RESULT XGLAPI xglCreateRenderPass(
- XGL_DEVICE device,
- const XGL_RENDER_PASS_CREATE_INFO* pCreateInfo,
- XGL_RENDER_PASS* pRenderPass);
+VK_RESULT VKAPI vkCreateRenderPass(
+ VK_DEVICE device,
+ const VK_RENDER_PASS_CREATE_INFO* pCreateInfo,
+ VK_RENDER_PASS* pRenderPass);
-void XGLAPI xglCmdBeginRenderPass(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_RENDER_PASS_BEGIN* pRenderPassBegin);
+void VKAPI vkCmdBeginRenderPass(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_RENDER_PASS_BEGIN* pRenderPassBegin);
-void XGLAPI xglCmdEndRenderPass(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_RENDER_PASS renderPass);
+void VKAPI vkCmdEndRenderPass(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_RENDER_PASS renderPass);
-#endif // XGL_PROTOTYPES
+#endif // VK_PROTOTYPES
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
-#endif // __XGL_H__
+#endif // __VULKAN_H__
/******************************************************************************************
To incorporate transform feedback, we could create a new pipeline stage. This would
be injected into a PSO by including the following in the chain:
- typedef struct _XGL_XFB_CREATE_INFO
+ typedef struct _VK_XFB_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_XFB_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_XFB_CREATE_INFO
const void* pNext; // Pointer to next structure
// More XFB state, if any goes here
- } XGL_DEPTH_STENCIL_VIEW_CREATE_INFO;
+ } VK_XFB_CREATE_INFO;
We expect that only the shader-side configuration (via layout qualifiers or their IR
equivalent) is used to configure the data written to each stream. When transform
feedback is part of the pipeline, transform feedback binding would be available
through a new API bind point:
- xglCmdBindTransformFeedbackMemoryView(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint, // = GRAPHICS
+ vkCmdBindTransformFeedbackMemoryView(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint, // = GRAPHICS
uint32_t index,
- const XGL_MEMORY_VIEW_ATTACH_INFO* pMemView);
+ const VK_MEMORY_VIEW_ATTACH_INFO* pMemView);
2) "Bindless" + support for non-bindless hardware.
- XGL doesn't have bindless textures the way that GL does. It has resource descriptor
+ VK doesn't have bindless textures the way that GL does. It has resource descriptor
sets, or resource tables. Resource tables can be nested and hold references to more
resource tables. They are explicitly sized by the application and have no artificial
upper size limit. An application can still attach as many textures as they want to
-#ifndef __XGLDBG_H__
-#define __XGLDBG_H__
+#ifndef __VKDBG_H__
+#define __VKDBG_H__
-#include <xgl.h>
+#include <vulkan.h>
#ifdef __cplusplus
extern "C"
{
#endif // __cplusplus
-typedef enum _XGL_DBG_MSG_TYPE
+typedef enum _VK_DBG_MSG_TYPE
{
- XGL_DBG_MSG_UNKNOWN = 0x0,
- XGL_DBG_MSG_ERROR = 0x1,
- XGL_DBG_MSG_WARNING = 0x2,
- XGL_DBG_MSG_PERF_WARNING = 0x3,
+ VK_DBG_MSG_UNKNOWN = 0x0,
+ VK_DBG_MSG_ERROR = 0x1,
+ VK_DBG_MSG_WARNING = 0x2,
+ VK_DBG_MSG_PERF_WARNING = 0x3,
- XGL_DBG_MSG_TYPE_BEGIN_RANGE = XGL_DBG_MSG_UNKNOWN,
- XGL_DBG_MSG_TYPE_END_RANGE = XGL_DBG_MSG_PERF_WARNING,
- XGL_NUM_DBG_MSG_TYPE = (XGL_DBG_MSG_TYPE_END_RANGE - XGL_DBG_MSG_TYPE_BEGIN_RANGE + 1),
-} XGL_DBG_MSG_TYPE;
+ VK_DBG_MSG_TYPE_BEGIN_RANGE = VK_DBG_MSG_UNKNOWN,
+ VK_DBG_MSG_TYPE_END_RANGE = VK_DBG_MSG_PERF_WARNING,
+ VK_NUM_DBG_MSG_TYPE = (VK_DBG_MSG_TYPE_END_RANGE - VK_DBG_MSG_TYPE_BEGIN_RANGE + 1),
+} VK_DBG_MSG_TYPE;
-typedef enum _XGL_DBG_MSG_FILTER
+typedef enum _VK_DBG_MSG_FILTER
{
- XGL_DBG_MSG_FILTER_NONE = 0x0,
- XGL_DBG_MSG_FILTER_REPEATED = 0x1,
- XGL_DBG_MSG_FILTER_ALL = 0x2,
+ VK_DBG_MSG_FILTER_NONE = 0x0,
+ VK_DBG_MSG_FILTER_REPEATED = 0x1,
+ VK_DBG_MSG_FILTER_ALL = 0x2,
- XGL_DBG_MSG_FILTER_BEGIN_RANGE = XGL_DBG_MSG_FILTER_NONE,
- XGL_DBG_MSG_FILTER_END_RANGE = XGL_DBG_MSG_FILTER_ALL,
- XGL_NUM_DBG_MSG_FILTER = (XGL_DBG_MSG_FILTER_END_RANGE - XGL_DBG_MSG_FILTER_BEGIN_RANGE + 1),
-} XGL_DBG_MSG_FILTER;
+ VK_DBG_MSG_FILTER_BEGIN_RANGE = VK_DBG_MSG_FILTER_NONE,
+ VK_DBG_MSG_FILTER_END_RANGE = VK_DBG_MSG_FILTER_ALL,
+ VK_NUM_DBG_MSG_FILTER = (VK_DBG_MSG_FILTER_END_RANGE - VK_DBG_MSG_FILTER_BEGIN_RANGE + 1),
+} VK_DBG_MSG_FILTER;
-typedef enum _XGL_DBG_GLOBAL_OPTION
+typedef enum _VK_DBG_GLOBAL_OPTION
{
- XGL_DBG_OPTION_DEBUG_ECHO_ENABLE = 0x0,
- XGL_DBG_OPTION_BREAK_ON_ERROR = 0x1,
- XGL_DBG_OPTION_BREAK_ON_WARNING = 0x2,
+ VK_DBG_OPTION_DEBUG_ECHO_ENABLE = 0x0,
+ VK_DBG_OPTION_BREAK_ON_ERROR = 0x1,
+ VK_DBG_OPTION_BREAK_ON_WARNING = 0x2,
- XGL_DBG_GLOBAL_OPTION_BEGIN_RANGE = XGL_DBG_OPTION_DEBUG_ECHO_ENABLE,
- XGL_DBG_GLOBAL_OPTION_END_RANGE = XGL_DBG_OPTION_BREAK_ON_WARNING,
- XGL_NUM_DBG_GLOBAL_OPTION = (XGL_DBG_GLOBAL_OPTION_END_RANGE - XGL_DBG_GLOBAL_OPTION_BEGIN_RANGE + 1),
-} XGL_DBG_GLOBAL_OPTION;
+ VK_DBG_GLOBAL_OPTION_BEGIN_RANGE = VK_DBG_OPTION_DEBUG_ECHO_ENABLE,
+ VK_DBG_GLOBAL_OPTION_END_RANGE = VK_DBG_OPTION_BREAK_ON_WARNING,
+ VK_NUM_DBG_GLOBAL_OPTION = (VK_DBG_GLOBAL_OPTION_END_RANGE - VK_DBG_GLOBAL_OPTION_BEGIN_RANGE + 1),
+} VK_DBG_GLOBAL_OPTION;
-typedef enum _XGL_DBG_DEVICE_OPTION
+typedef enum _VK_DBG_DEVICE_OPTION
{
- XGL_DBG_OPTION_DISABLE_PIPELINE_LOADS = 0x0,
- XGL_DBG_OPTION_FORCE_OBJECT_MEMORY_REQS = 0x1,
- XGL_DBG_OPTION_FORCE_LARGE_IMAGE_ALIGNMENT = 0x2,
+ VK_DBG_OPTION_DISABLE_PIPELINE_LOADS = 0x0,
+ VK_DBG_OPTION_FORCE_OBJECT_MEMORY_REQS = 0x1,
+ VK_DBG_OPTION_FORCE_LARGE_IMAGE_ALIGNMENT = 0x2,
- XGL_DBG_DEVICE_OPTION_BEGIN_RANGE = XGL_DBG_OPTION_DISABLE_PIPELINE_LOADS,
- XGL_DBG_DEVICE_OPTION_END_RANGE = XGL_DBG_OPTION_FORCE_LARGE_IMAGE_ALIGNMENT,
- XGL_NUM_DBG_DEVICE_OPTION = (XGL_DBG_DEVICE_OPTION_END_RANGE - XGL_DBG_DEVICE_OPTION_BEGIN_RANGE + 1),
-} XGL_DBG_DEVICE_OPTION;
+ VK_DBG_DEVICE_OPTION_BEGIN_RANGE = VK_DBG_OPTION_DISABLE_PIPELINE_LOADS,
+ VK_DBG_DEVICE_OPTION_END_RANGE = VK_DBG_OPTION_FORCE_LARGE_IMAGE_ALIGNMENT,
+ VK_NUM_DBG_DEVICE_OPTION = (VK_DBG_DEVICE_OPTION_END_RANGE - VK_DBG_DEVICE_OPTION_BEGIN_RANGE + 1),
+} VK_DBG_DEVICE_OPTION;
-typedef enum _XGL_DBG_OBJECT_TYPE
+typedef enum _VK_DBG_OBJECT_TYPE
{
- XGL_DBG_OBJECT_UNKNOWN = 0x00,
- XGL_DBG_OBJECT_DEVICE = 0x01,
- XGL_DBG_OBJECT_QUEUE = 0x02,
- XGL_DBG_OBJECT_GPU_MEMORY = 0x03,
- XGL_DBG_OBJECT_IMAGE = 0x04,
- XGL_DBG_OBJECT_IMAGE_VIEW = 0x05,
- XGL_DBG_OBJECT_COLOR_TARGET_VIEW = 0x06,
- XGL_DBG_OBJECT_DEPTH_STENCIL_VIEW = 0x07,
- XGL_DBG_OBJECT_SHADER = 0x08,
- XGL_DBG_OBJECT_GRAPHICS_PIPELINE = 0x09,
- XGL_DBG_OBJECT_COMPUTE_PIPELINE = 0x0a,
- XGL_DBG_OBJECT_SAMPLER = 0x0b,
- XGL_DBG_OBJECT_DESCRIPTOR_SET = 0x0c,
- XGL_DBG_OBJECT_VIEWPORT_STATE = 0x0d,
- XGL_DBG_OBJECT_RASTER_STATE = 0x0e,
- XGL_DBG_OBJECT_MSAA_STATE = 0x0f,
- XGL_DBG_OBJECT_COLOR_BLEND_STATE = 0x10,
- XGL_DBG_OBJECT_DEPTH_STENCIL_STATE = 0x11,
- XGL_DBG_OBJECT_CMD_BUFFER = 0x12,
- XGL_DBG_OBJECT_FENCE = 0x13,
- XGL_DBG_OBJECT_SEMAPHORE = 0x14,
- XGL_DBG_OBJECT_EVENT = 0x15,
- XGL_DBG_OBJECT_QUERY_POOL = 0x16,
- XGL_DBG_OBJECT_SHARED_GPU_MEMORY = 0x17,
- XGL_DBG_OBJECT_SHARED_SEMAPHORE = 0x18,
- XGL_DBG_OBJECT_PEER_GPU_MEMORY = 0x19,
- XGL_DBG_OBJECT_PEER_IMAGE = 0x1a,
- XGL_DBG_OBJECT_PINNED_GPU_MEMORY = 0x1b,
- XGL_DBG_OBJECT_INTERNAL_GPU_MEMORY = 0x1c,
- XGL_DBG_OBJECT_FRAMEBUFFER = 0x1d,
- XGL_DBG_OBJECT_RENDER_PASS = 0x1e,
-
- XGL_DBG_OBJECT_INSTANCE,
- XGL_DBG_OBJECT_BUFFER,
- XGL_DBG_OBJECT_BUFFER_VIEW,
- XGL_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT,
- XGL_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT_CHAIN,
- XGL_DBG_OBJECT_DESCRIPTOR_POOL,
-
- XGL_DBG_OBJECT_TYPE_BEGIN_RANGE = XGL_DBG_OBJECT_UNKNOWN,
- XGL_DBG_OBJECT_TYPE_END_RANGE = XGL_DBG_OBJECT_DESCRIPTOR_POOL,
- XGL_NUM_DBG_OBJECT_TYPE = (XGL_DBG_OBJECT_TYPE_END_RANGE - XGL_DBG_OBJECT_TYPE_BEGIN_RANGE + 1),
-} XGL_DBG_OBJECT_TYPE;
-
-typedef void (XGLAPI *XGL_DBG_MSG_CALLBACK_FUNCTION)(
- XGL_DBG_MSG_TYPE msgType,
- XGL_VALIDATION_LEVEL validationLevel,
- XGL_BASE_OBJECT srcObject,
+ VK_DBG_OBJECT_UNKNOWN = 0x00,
+ VK_DBG_OBJECT_DEVICE = 0x01,
+ VK_DBG_OBJECT_QUEUE = 0x02,
+ VK_DBG_OBJECT_GPU_MEMORY = 0x03,
+ VK_DBG_OBJECT_IMAGE = 0x04,
+ VK_DBG_OBJECT_IMAGE_VIEW = 0x05,
+ VK_DBG_OBJECT_COLOR_TARGET_VIEW = 0x06,
+ VK_DBG_OBJECT_DEPTH_STENCIL_VIEW = 0x07,
+ VK_DBG_OBJECT_SHADER = 0x08,
+ VK_DBG_OBJECT_GRAPHICS_PIPELINE = 0x09,
+ VK_DBG_OBJECT_COMPUTE_PIPELINE = 0x0a,
+ VK_DBG_OBJECT_SAMPLER = 0x0b,
+ VK_DBG_OBJECT_DESCRIPTOR_SET = 0x0c,
+ VK_DBG_OBJECT_VIEWPORT_STATE = 0x0d,
+ VK_DBG_OBJECT_RASTER_STATE = 0x0e,
+ VK_DBG_OBJECT_MSAA_STATE = 0x0f,
+ VK_DBG_OBJECT_COLOR_BLEND_STATE = 0x10,
+ VK_DBG_OBJECT_DEPTH_STENCIL_STATE = 0x11,
+ VK_DBG_OBJECT_CMD_BUFFER = 0x12,
+ VK_DBG_OBJECT_FENCE = 0x13,
+ VK_DBG_OBJECT_SEMAPHORE = 0x14,
+ VK_DBG_OBJECT_EVENT = 0x15,
+ VK_DBG_OBJECT_QUERY_POOL = 0x16,
+ VK_DBG_OBJECT_SHARED_GPU_MEMORY = 0x17,
+ VK_DBG_OBJECT_SHARED_SEMAPHORE = 0x18,
+ VK_DBG_OBJECT_PEER_GPU_MEMORY = 0x19,
+ VK_DBG_OBJECT_PEER_IMAGE = 0x1a,
+ VK_DBG_OBJECT_PINNED_GPU_MEMORY = 0x1b,
+ VK_DBG_OBJECT_INTERNAL_GPU_MEMORY = 0x1c,
+ VK_DBG_OBJECT_FRAMEBUFFER = 0x1d,
+ VK_DBG_OBJECT_RENDER_PASS = 0x1e,
+
+ VK_DBG_OBJECT_INSTANCE,
+ VK_DBG_OBJECT_BUFFER,
+ VK_DBG_OBJECT_BUFFER_VIEW,
+ VK_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT,
+ VK_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT_CHAIN,
+ VK_DBG_OBJECT_DESCRIPTOR_POOL,
+
+ VK_DBG_OBJECT_TYPE_BEGIN_RANGE = VK_DBG_OBJECT_UNKNOWN,
+ VK_DBG_OBJECT_TYPE_END_RANGE = VK_DBG_OBJECT_DESCRIPTOR_POOL,
+ VK_NUM_DBG_OBJECT_TYPE = (VK_DBG_OBJECT_TYPE_END_RANGE - VK_DBG_OBJECT_TYPE_BEGIN_RANGE + 1),
+} VK_DBG_OBJECT_TYPE;
+
+typedef void (VKAPI *VK_DBG_MSG_CALLBACK_FUNCTION)(
+ VK_DBG_MSG_TYPE msgType,
+ VK_VALIDATION_LEVEL validationLevel,
+ VK_BASE_OBJECT srcObject,
size_t location,
int32_t msgCode,
const char* pMsg,
void* pUserData);
// Debug functions
-typedef XGL_RESULT (XGLAPI *xglDbgSetValidationLevelType)(XGL_DEVICE device, XGL_VALIDATION_LEVEL validationLevel);
-typedef XGL_RESULT (XGLAPI *xglDbgRegisterMsgCallbackType)(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData);
-typedef XGL_RESULT (XGLAPI *xglDbgUnregisterMsgCallbackType)(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback);
-typedef XGL_RESULT (XGLAPI *xglDbgSetMessageFilterType)(XGL_DEVICE device, int32_t msgCode, XGL_DBG_MSG_FILTER filter);
-typedef XGL_RESULT (XGLAPI *xglDbgSetObjectTagType)(XGL_BASE_OBJECT object, size_t tagSize, const void* pTag);
-typedef XGL_RESULT (XGLAPI *xglDbgSetGlobalOptionType)(XGL_INSTANCE instance, XGL_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData);
-typedef XGL_RESULT (XGLAPI *xglDbgSetDeviceOptionType)(XGL_DEVICE device, XGL_DBG_DEVICE_OPTION dbgOption, size_t dataSize, const void* pData);
-typedef void (XGLAPI *xglCmdDbgMarkerBeginType)(XGL_CMD_BUFFER cmdBuffer, const char* pMarker);
-typedef void (XGLAPI *xglCmdDbgMarkerEndType)(XGL_CMD_BUFFER cmdBuffer);
-
-#ifdef XGL_PROTOTYPES
-XGL_RESULT XGLAPI xglDbgSetValidationLevel(
- XGL_DEVICE device,
- XGL_VALIDATION_LEVEL validationLevel);
-
-XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(
- XGL_INSTANCE instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback,
+typedef VK_RESULT (VKAPI *vkDbgSetValidationLevelType)(VK_DEVICE device, VK_VALIDATION_LEVEL validationLevel);
+typedef VK_RESULT (VKAPI *vkDbgRegisterMsgCallbackType)(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData);
+typedef VK_RESULT (VKAPI *vkDbgUnregisterMsgCallbackType)(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback);
+typedef VK_RESULT (VKAPI *vkDbgSetMessageFilterType)(VK_DEVICE device, int32_t msgCode, VK_DBG_MSG_FILTER filter);
+typedef VK_RESULT (VKAPI *vkDbgSetObjectTagType)(VK_BASE_OBJECT object, size_t tagSize, const void* pTag);
+typedef VK_RESULT (VKAPI *vkDbgSetGlobalOptionType)(VK_INSTANCE instance, VK_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData);
+typedef VK_RESULT (VKAPI *vkDbgSetDeviceOptionType)(VK_DEVICE device, VK_DBG_DEVICE_OPTION dbgOption, size_t dataSize, const void* pData);
+typedef void (VKAPI *vkCmdDbgMarkerBeginType)(VK_CMD_BUFFER cmdBuffer, const char* pMarker);
+typedef void (VKAPI *vkCmdDbgMarkerEndType)(VK_CMD_BUFFER cmdBuffer);
+
+#ifdef VK_PROTOTYPES
+VK_RESULT VKAPI vkDbgSetValidationLevel(
+ VK_DEVICE device,
+ VK_VALIDATION_LEVEL validationLevel);
+
+VK_RESULT VKAPI vkDbgRegisterMsgCallback(
+ VK_INSTANCE instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback,
void* pUserData);
-XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(
- XGL_INSTANCE instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback);
+VK_RESULT VKAPI vkDbgUnregisterMsgCallback(
+ VK_INSTANCE instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback);
-XGL_RESULT XGLAPI xglDbgSetMessageFilter(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkDbgSetMessageFilter(
+ VK_DEVICE device,
int32_t msgCode,
- XGL_DBG_MSG_FILTER filter);
+ VK_DBG_MSG_FILTER filter);
-XGL_RESULT XGLAPI xglDbgSetObjectTag(
- XGL_BASE_OBJECT object,
+VK_RESULT VKAPI vkDbgSetObjectTag(
+ VK_BASE_OBJECT object,
size_t tagSize,
const void* pTag);
-XGL_RESULT XGLAPI xglDbgSetGlobalOption(
- XGL_INSTANCE instance,
- XGL_DBG_GLOBAL_OPTION dbgOption,
+VK_RESULT VKAPI vkDbgSetGlobalOption(
+ VK_INSTANCE instance,
+ VK_DBG_GLOBAL_OPTION dbgOption,
size_t dataSize,
const void* pData);
-XGL_RESULT XGLAPI xglDbgSetDeviceOption(
- XGL_DEVICE device,
- XGL_DBG_DEVICE_OPTION dbgOption,
+VK_RESULT VKAPI vkDbgSetDeviceOption(
+ VK_DEVICE device,
+ VK_DBG_DEVICE_OPTION dbgOption,
size_t dataSize,
const void* pData);
-void XGLAPI xglCmdDbgMarkerBegin(
- XGL_CMD_BUFFER cmdBuffer,
+void VKAPI vkCmdDbgMarkerBegin(
+ VK_CMD_BUFFER cmdBuffer,
const char* pMarker);
-void XGLAPI xglCmdDbgMarkerEnd(
- XGL_CMD_BUFFER cmdBuffer);
+void VKAPI vkCmdDbgMarkerEnd(
+ VK_CMD_BUFFER cmdBuffer);
-#endif // XGL_PROTOTYPES
+#endif // VK_PROTOTYPES
#ifdef __cplusplus
}; // extern "C"
#endif // __cplusplus
-#endif // __XGLDBG_H__
+#endif // __VKDBG_H__
-#ifndef XGLICD_H
-#define XGLICD_H
+#ifndef VKICD_H
+#define VKICD_H
#include <stdint.h>
#include <stdbool.h>
-#include "xglPlatform.h"
+#include "vkPlatform.h"
/*
* The ICD must reserve space for a pointer for the loader's dispatch
#define ICD_LOADER_MAGIC 0x01CDC0DE
-typedef union _XGL_LOADER_DATA {
+typedef union _VK_LOADER_DATA {
uint32_t loaderMagic;
void *loaderData;
-} XGL_LOADER_DATA;
+} VK_LOADER_DATA;
static inline void set_loader_magic_value(void *pNewObject) {
- XGL_LOADER_DATA *loader_info = (XGL_LOADER_DATA *) pNewObject;
+ VK_LOADER_DATA *loader_info = (VK_LOADER_DATA *) pNewObject;
loader_info->loaderMagic = ICD_LOADER_MAGIC;
}
static inline bool valid_loader_magic_value(void *pNewObject) {
- const XGL_LOADER_DATA *loader_info = (XGL_LOADER_DATA *) pNewObject;
+ const VK_LOADER_DATA *loader_info = (VK_LOADER_DATA *) pNewObject;
return loader_info->loaderMagic == ICD_LOADER_MAGIC;
}
-#endif // XGLICD_H
+#endif // VKICD_H
*/
#pragma once
-#include "xgl.h"
-#include "xglDbg.h"
+#include "vulkan.h"
+#include "vkDbg.h"
#if defined(__linux__) || defined(XCB_NVIDIA)
-#include "xglWsiX11Ext.h"
+#include "vkWsiX11Ext.h"
#endif
#if defined(__GNUC__) && __GNUC__ >= 4
-# define XGL_LAYER_EXPORT __attribute__((visibility("default")))
+# define VK_LAYER_EXPORT __attribute__((visibility("default")))
#elif defined(__SUNPRO_C) && (__SUNPRO_C >= 0x590)
-# define XGL_LAYER_EXPORT __attribute__((visibility("default")))
+# define VK_LAYER_EXPORT __attribute__((visibility("default")))
#else
-# define XGL_LAYER_EXPORT
+# define VK_LAYER_EXPORT
#endif
-typedef struct _XGL_BASE_LAYER_OBJECT
+typedef struct _VK_BASE_LAYER_OBJECT
{
- xglGetProcAddrType pGPA;
- XGL_BASE_OBJECT nextObject;
- XGL_BASE_OBJECT baseObject;
-} XGL_BASE_LAYER_OBJECT;
+ vkGetProcAddrType pGPA;
+ VK_BASE_OBJECT nextObject;
+ VK_BASE_OBJECT baseObject;
+} VK_BASE_LAYER_OBJECT;
-typedef struct _XGL_LAYER_DISPATCH_TABLE
+typedef struct _VK_LAYER_DISPATCH_TABLE
{
- xglGetProcAddrType GetProcAddr;
- xglCreateInstanceType CreateInstance;
- xglDestroyInstanceType DestroyInstance;
- xglEnumerateGpusType EnumerateGpus;
- xglGetGpuInfoType GetGpuInfo;
- xglCreateDeviceType CreateDevice;
- xglDestroyDeviceType DestroyDevice;
- xglGetExtensionSupportType GetExtensionSupport;
- xglEnumerateLayersType EnumerateLayers;
- xglGetDeviceQueueType GetDeviceQueue;
- xglQueueSubmitType QueueSubmit;
- xglQueueAddMemReferenceType QueueAddMemReference;
- xglQueueRemoveMemReferenceType QueueRemoveMemReference;
- xglQueueWaitIdleType QueueWaitIdle;
- xglDeviceWaitIdleType DeviceWaitIdle;
- xglAllocMemoryType AllocMemory;
- xglFreeMemoryType FreeMemory;
- xglSetMemoryPriorityType SetMemoryPriority;
- xglMapMemoryType MapMemory;
- xglUnmapMemoryType UnmapMemory;
- xglPinSystemMemoryType PinSystemMemory;
- xglGetMultiGpuCompatibilityType GetMultiGpuCompatibility;
- xglOpenSharedMemoryType OpenSharedMemory;
- xglOpenSharedSemaphoreType OpenSharedSemaphore;
- xglOpenPeerMemoryType OpenPeerMemory;
- xglOpenPeerImageType OpenPeerImage;
- xglDestroyObjectType DestroyObject;
- xglGetObjectInfoType GetObjectInfo;
- xglBindObjectMemoryType BindObjectMemory;
- xglBindObjectMemoryRangeType BindObjectMemoryRange;
- xglBindImageMemoryRangeType BindImageMemoryRange;
- xglCreateFenceType CreateFence;
- xglGetFenceStatusType GetFenceStatus;
- xglResetFencesType ResetFences;
- xglWaitForFencesType WaitForFences;
- xglCreateSemaphoreType CreateSemaphore;
- xglQueueSignalSemaphoreType QueueSignalSemaphore;
- xglQueueWaitSemaphoreType QueueWaitSemaphore;
- xglCreateEventType CreateEvent;
- xglGetEventStatusType GetEventStatus;
- xglSetEventType SetEvent;
- xglResetEventType ResetEvent;
- xglCreateQueryPoolType CreateQueryPool;
- xglGetQueryPoolResultsType GetQueryPoolResults;
- xglGetFormatInfoType GetFormatInfo;
- xglCreateBufferType CreateBuffer;
- xglCreateBufferViewType CreateBufferView;
- xglCreateImageType CreateImage;
- xglGetImageSubresourceInfoType GetImageSubresourceInfo;
- xglCreateImageViewType CreateImageView;
- xglCreateColorAttachmentViewType CreateColorAttachmentView;
- xglCreateDepthStencilViewType CreateDepthStencilView;
- xglCreateShaderType CreateShader;
- xglCreateGraphicsPipelineType CreateGraphicsPipeline;
- xglCreateGraphicsPipelineDerivativeType CreateGraphicsPipelineDerivative;
- xglCreateComputePipelineType CreateComputePipeline;
- xglStorePipelineType StorePipeline;
- xglLoadPipelineType LoadPipeline;
- xglLoadPipelineDerivativeType LoadPipelineDerivative;
- xglCreateSamplerType CreateSampler;
- xglCreateDescriptorSetLayoutType CreateDescriptorSetLayout;
- xglCreateDescriptorSetLayoutChainType CreateDescriptorSetLayoutChain;
- xglBeginDescriptorPoolUpdateType BeginDescriptorPoolUpdate;
- xglEndDescriptorPoolUpdateType EndDescriptorPoolUpdate;
- xglCreateDescriptorPoolType CreateDescriptorPool;
- xglResetDescriptorPoolType ResetDescriptorPool;
- xglAllocDescriptorSetsType AllocDescriptorSets;
- xglClearDescriptorSetsType ClearDescriptorSets;
- xglUpdateDescriptorsType UpdateDescriptors;
- xglCreateDynamicViewportStateType CreateDynamicViewportState;
- xglCreateDynamicRasterStateType CreateDynamicRasterState;
- xglCreateDynamicColorBlendStateType CreateDynamicColorBlendState;
- xglCreateDynamicDepthStencilStateType CreateDynamicDepthStencilState;
- xglCreateCommandBufferType CreateCommandBuffer;
- xglBeginCommandBufferType BeginCommandBuffer;
- xglEndCommandBufferType EndCommandBuffer;
- xglResetCommandBufferType ResetCommandBuffer;
- xglCmdBindPipelineType CmdBindPipeline;
- xglCmdBindDynamicStateObjectType CmdBindDynamicStateObject;
- xglCmdBindDescriptorSetsType CmdBindDescriptorSets;
- xglCmdBindVertexBufferType CmdBindVertexBuffer;
- xglCmdBindIndexBufferType CmdBindIndexBuffer;
- xglCmdDrawType CmdDraw;
- xglCmdDrawIndexedType CmdDrawIndexed;
- xglCmdDrawIndirectType CmdDrawIndirect;
- xglCmdDrawIndexedIndirectType CmdDrawIndexedIndirect;
- xglCmdDispatchType CmdDispatch;
- xglCmdDispatchIndirectType CmdDispatchIndirect;
- xglCmdCopyBufferType CmdCopyBuffer;
- xglCmdCopyImageType CmdCopyImage;
- xglCmdBlitImageType CmdBlitImage;
- xglCmdCopyBufferToImageType CmdCopyBufferToImage;
- xglCmdCopyImageToBufferType CmdCopyImageToBuffer;
- xglCmdCloneImageDataType CmdCloneImageData;
- xglCmdUpdateBufferType CmdUpdateBuffer;
- xglCmdFillBufferType CmdFillBuffer;
- xglCmdClearColorImageType CmdClearColorImage;
- xglCmdClearDepthStencilType CmdClearDepthStencil;
- xglCmdResolveImageType CmdResolveImage;
- xglCmdSetEventType CmdSetEvent;
- xglCmdResetEventType CmdResetEvent;
- xglCmdWaitEventsType CmdWaitEvents;
- xglCmdPipelineBarrierType CmdPipelineBarrier;
- xglCmdBeginQueryType CmdBeginQuery;
- xglCmdEndQueryType CmdEndQuery;
- xglCmdResetQueryPoolType CmdResetQueryPool;
- xglCmdWriteTimestampType CmdWriteTimestamp;
- xglCmdInitAtomicCountersType CmdInitAtomicCounters;
- xglCmdLoadAtomicCountersType CmdLoadAtomicCounters;
- xglCmdSaveAtomicCountersType CmdSaveAtomicCounters;
- xglCreateFramebufferType CreateFramebuffer;
- xglCreateRenderPassType CreateRenderPass;
- xglCmdBeginRenderPassType CmdBeginRenderPass;
- xglCmdEndRenderPassType CmdEndRenderPass;
- xglDbgSetValidationLevelType DbgSetValidationLevel;
- xglDbgRegisterMsgCallbackType DbgRegisterMsgCallback;
- xglDbgUnregisterMsgCallbackType DbgUnregisterMsgCallback;
- xglDbgSetMessageFilterType DbgSetMessageFilter;
- xglDbgSetObjectTagType DbgSetObjectTag;
- xglDbgSetGlobalOptionType DbgSetGlobalOption;
- xglDbgSetDeviceOptionType DbgSetDeviceOption;
- xglCmdDbgMarkerBeginType CmdDbgMarkerBegin;
- xglCmdDbgMarkerEndType CmdDbgMarkerEnd;
+ vkGetProcAddrType GetProcAddr;
+ vkCreateInstanceType CreateInstance;
+ vkDestroyInstanceType DestroyInstance;
+ vkEnumerateGpusType EnumerateGpus;
+ vkGetGpuInfoType GetGpuInfo;
+ vkCreateDeviceType CreateDevice;
+ vkDestroyDeviceType DestroyDevice;
+ vkGetExtensionSupportType GetExtensionSupport;
+ vkEnumerateLayersType EnumerateLayers;
+ vkGetDeviceQueueType GetDeviceQueue;
+ vkQueueSubmitType QueueSubmit;
+ vkQueueAddMemReferenceType QueueAddMemReference;
+ vkQueueRemoveMemReferenceType QueueRemoveMemReference;
+ vkQueueWaitIdleType QueueWaitIdle;
+ vkDeviceWaitIdleType DeviceWaitIdle;
+ vkAllocMemoryType AllocMemory;
+ vkFreeMemoryType FreeMemory;
+ vkSetMemoryPriorityType SetMemoryPriority;
+ vkMapMemoryType MapMemory;
+ vkUnmapMemoryType UnmapMemory;
+ vkPinSystemMemoryType PinSystemMemory;
+ vkGetMultiGpuCompatibilityType GetMultiGpuCompatibility;
+ vkOpenSharedMemoryType OpenSharedMemory;
+ vkOpenSharedSemaphoreType OpenSharedSemaphore;
+ vkOpenPeerMemoryType OpenPeerMemory;
+ vkOpenPeerImageType OpenPeerImage;
+ vkDestroyObjectType DestroyObject;
+ vkGetObjectInfoType GetObjectInfo;
+ vkBindObjectMemoryType BindObjectMemory;
+ vkBindObjectMemoryRangeType BindObjectMemoryRange;
+ vkBindImageMemoryRangeType BindImageMemoryRange;
+ vkCreateFenceType CreateFence;
+ vkGetFenceStatusType GetFenceStatus;
+ vkResetFencesType ResetFences;
+ vkWaitForFencesType WaitForFences;
+ vkCreateSemaphoreType CreateSemaphore;
+ vkQueueSignalSemaphoreType QueueSignalSemaphore;
+ vkQueueWaitSemaphoreType QueueWaitSemaphore;
+ vkCreateEventType CreateEvent;
+ vkGetEventStatusType GetEventStatus;
+ vkSetEventType SetEvent;
+ vkResetEventType ResetEvent;
+ vkCreateQueryPoolType CreateQueryPool;
+ vkGetQueryPoolResultsType GetQueryPoolResults;
+ vkGetFormatInfoType GetFormatInfo;
+ vkCreateBufferType CreateBuffer;
+ vkCreateBufferViewType CreateBufferView;
+ vkCreateImageType CreateImage;
+ vkGetImageSubresourceInfoType GetImageSubresourceInfo;
+ vkCreateImageViewType CreateImageView;
+ vkCreateColorAttachmentViewType CreateColorAttachmentView;
+ vkCreateDepthStencilViewType CreateDepthStencilView;
+ vkCreateShaderType CreateShader;
+ vkCreateGraphicsPipelineType CreateGraphicsPipeline;
+ vkCreateGraphicsPipelineDerivativeType CreateGraphicsPipelineDerivative;
+ vkCreateComputePipelineType CreateComputePipeline;
+ vkStorePipelineType StorePipeline;
+ vkLoadPipelineType LoadPipeline;
+ vkLoadPipelineDerivativeType LoadPipelineDerivative;
+ vkCreateSamplerType CreateSampler;
+ vkCreateDescriptorSetLayoutType CreateDescriptorSetLayout;
+ vkCreateDescriptorSetLayoutChainType CreateDescriptorSetLayoutChain;
+ vkBeginDescriptorPoolUpdateType BeginDescriptorPoolUpdate;
+ vkEndDescriptorPoolUpdateType EndDescriptorPoolUpdate;
+ vkCreateDescriptorPoolType CreateDescriptorPool;
+ vkResetDescriptorPoolType ResetDescriptorPool;
+ vkAllocDescriptorSetsType AllocDescriptorSets;
+ vkClearDescriptorSetsType ClearDescriptorSets;
+ vkUpdateDescriptorsType UpdateDescriptors;
+ vkCreateDynamicViewportStateType CreateDynamicViewportState;
+ vkCreateDynamicRasterStateType CreateDynamicRasterState;
+ vkCreateDynamicColorBlendStateType CreateDynamicColorBlendState;
+ vkCreateDynamicDepthStencilStateType CreateDynamicDepthStencilState;
+ vkCreateCommandBufferType CreateCommandBuffer;
+ vkBeginCommandBufferType BeginCommandBuffer;
+ vkEndCommandBufferType EndCommandBuffer;
+ vkResetCommandBufferType ResetCommandBuffer;
+ vkCmdBindPipelineType CmdBindPipeline;
+ vkCmdBindDynamicStateObjectType CmdBindDynamicStateObject;
+ vkCmdBindDescriptorSetsType CmdBindDescriptorSets;
+ vkCmdBindVertexBufferType CmdBindVertexBuffer;
+ vkCmdBindIndexBufferType CmdBindIndexBuffer;
+ vkCmdDrawType CmdDraw;
+ vkCmdDrawIndexedType CmdDrawIndexed;
+ vkCmdDrawIndirectType CmdDrawIndirect;
+ vkCmdDrawIndexedIndirectType CmdDrawIndexedIndirect;
+ vkCmdDispatchType CmdDispatch;
+ vkCmdDispatchIndirectType CmdDispatchIndirect;
+ vkCmdCopyBufferType CmdCopyBuffer;
+ vkCmdCopyImageType CmdCopyImage;
+ vkCmdBlitImageType CmdBlitImage;
+ vkCmdCopyBufferToImageType CmdCopyBufferToImage;
+ vkCmdCopyImageToBufferType CmdCopyImageToBuffer;
+ vkCmdCloneImageDataType CmdCloneImageData;
+ vkCmdUpdateBufferType CmdUpdateBuffer;
+ vkCmdFillBufferType CmdFillBuffer;
+ vkCmdClearColorImageType CmdClearColorImage;
+ vkCmdClearDepthStencilType CmdClearDepthStencil;
+ vkCmdResolveImageType CmdResolveImage;
+ vkCmdSetEventType CmdSetEvent;
+ vkCmdResetEventType CmdResetEvent;
+ vkCmdWaitEventsType CmdWaitEvents;
+ vkCmdPipelineBarrierType CmdPipelineBarrier;
+ vkCmdBeginQueryType CmdBeginQuery;
+ vkCmdEndQueryType CmdEndQuery;
+ vkCmdResetQueryPoolType CmdResetQueryPool;
+ vkCmdWriteTimestampType CmdWriteTimestamp;
+ vkCmdInitAtomicCountersType CmdInitAtomicCounters;
+ vkCmdLoadAtomicCountersType CmdLoadAtomicCounters;
+ vkCmdSaveAtomicCountersType CmdSaveAtomicCounters;
+ vkCreateFramebufferType CreateFramebuffer;
+ vkCreateRenderPassType CreateRenderPass;
+ vkCmdBeginRenderPassType CmdBeginRenderPass;
+ vkCmdEndRenderPassType CmdEndRenderPass;
+ vkDbgSetValidationLevelType DbgSetValidationLevel;
+ vkDbgRegisterMsgCallbackType DbgRegisterMsgCallback;
+ vkDbgUnregisterMsgCallbackType DbgUnregisterMsgCallback;
+ vkDbgSetMessageFilterType DbgSetMessageFilter;
+ vkDbgSetObjectTagType DbgSetObjectTag;
+ vkDbgSetGlobalOptionType DbgSetGlobalOption;
+ vkDbgSetDeviceOptionType DbgSetDeviceOption;
+ vkCmdDbgMarkerBeginType CmdDbgMarkerBegin;
+ vkCmdDbgMarkerEndType CmdDbgMarkerEnd;
#if defined(__linux__) || defined(XCB_NVIDIA)
- xglWsiX11AssociateConnectionType WsiX11AssociateConnection;
- xglWsiX11GetMSCType WsiX11GetMSC;
- xglWsiX11CreatePresentableImageType WsiX11CreatePresentableImage;
- xglWsiX11QueuePresentType WsiX11QueuePresent;
+ vkWsiX11AssociateConnectionType WsiX11AssociateConnection;
+ vkWsiX11GetMSCType WsiX11GetMSC;
+ vkWsiX11CreatePresentableImageType WsiX11CreatePresentableImage;
+ vkWsiX11QueuePresentType WsiX11QueuePresent;
#endif // defined(__linux__) || defined(XCB_NVIDIA)
-} XGL_LAYER_DISPATCH_TABLE;
+} VK_LAYER_DISPATCH_TABLE;
// LL node for tree of dbg callback functions
-typedef struct _XGL_LAYER_DBG_FUNCTION_NODE
+typedef struct _VK_LAYER_DBG_FUNCTION_NODE
{
- XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback;
+ VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback;
void *pUserData;
- struct _XGL_LAYER_DBG_FUNCTION_NODE *pNext;
-} XGL_LAYER_DBG_FUNCTION_NODE;
+ struct _VK_LAYER_DBG_FUNCTION_NODE *pNext;
+} VK_LAYER_DBG_FUNCTION_NODE;
-typedef enum _XGL_LAYER_DBG_ACTION
+typedef enum _VK_LAYER_DBG_ACTION
{
- XGL_DBG_LAYER_ACTION_IGNORE = 0x0,
- XGL_DBG_LAYER_ACTION_CALLBACK = 0x1,
- XGL_DBG_LAYER_ACTION_LOG_MSG = 0x2,
- XGL_DBG_LAYER_ACTION_BREAK = 0x4
-} XGL_LAYER_DBG_ACTION;
+ VK_DBG_LAYER_ACTION_IGNORE = 0x0,
+ VK_DBG_LAYER_ACTION_CALLBACK = 0x1,
+ VK_DBG_LAYER_ACTION_LOG_MSG = 0x2,
+ VK_DBG_LAYER_ACTION_BREAK = 0x4
+} VK_LAYER_DBG_ACTION;
-typedef enum _XGL_LAYER_DBG_REPORT_LEVEL
+typedef enum _VK_LAYER_DBG_REPORT_LEVEL
{
- XGL_DBG_LAYER_LEVEL_INFO = 0,
- XGL_DBG_LAYER_LEVEL_WARN,
- XGL_DBG_LAYER_LEVEL_PERF_WARN,
- XGL_DBG_LAYER_LEVEL_ERROR,
- XGL_DBG_LAYER_LEVEL_NONE,
-} XGL_LAYER_DBG_REPORT_LEVEL;
+ VK_DBG_LAYER_LEVEL_INFO = 0,
+ VK_DBG_LAYER_LEVEL_WARN,
+ VK_DBG_LAYER_LEVEL_PERF_WARN,
+ VK_DBG_LAYER_LEVEL_ERROR,
+ VK_DBG_LAYER_LEVEL_NONE,
+} VK_LAYER_DBG_REPORT_LEVEL;
// ------------------------------------------------------------------------------------------------
// API functions
//
-// File: xglPlatform.h
+// File: vkPlatform.h
//
/*
** Copyright (c) 2014 The Khronos Group Inc.
*/
-#ifndef __XGLPLATFORM_H__
-#define __XGLPLATFORM_H__
+#ifndef __VKPLATFORM_H__
+#define __VKPLATFORM_H__
#ifdef __cplusplus
extern "C"
// Ensure we don't pick up min/max macros from Windef.h
#define NOMINMAX
- // On Windows, XGLAPI should equate to the __stdcall convention
- #define XGLAPI __stdcall
+ // On Windows, VKAPI should equate to the __stdcall convention
+ #define VKAPI __stdcall
// C99:
#ifndef __cplusplus
#define inline __inline
#endif // __cplusplus
#elif defined(__GNUC__)
- // On other platforms using GCC, XGLAPI stays undefined
- #define XGLAPI
+ // On other platforms using GCC, VKAPI is defined as empty
+ #define VKAPI
#else
// Unsupported Platform!
#error "Unsupported OS Platform detected!"
#include <stddef.h>
-#if !defined(XGL_NO_STDINT_H)
+#if !defined(VK_NO_STDINT_H)
#if defined(_MSC_VER) && (_MSC_VER < 1600)
typedef signed __int8 int8_t;
typedef unsigned __int8 uint8_t;
#else
#include <stdint.h>
#endif
-#endif // !defined(XGL_NO_STDINT_H)
+#endif // !defined(VK_NO_STDINT_H)
-typedef uint64_t XGL_GPU_SIZE;
+typedef uint64_t VK_GPU_SIZE;
typedef uint32_t bool32_t;
-typedef uint32_t XGL_SAMPLE_MASK;
-typedef uint32_t XGL_FLAGS;
-typedef int32_t XGL_ENUM;
+typedef uint32_t VK_SAMPLE_MASK;
+typedef uint32_t VK_FLAGS;
+typedef int32_t VK_ENUM;
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
-#endif // __XGLPLATFORM_H__
+#endif // __VKPLATFORM_H__
/* IN DEVELOPMENT. DO NOT SHIP. */
-#ifndef __XGLWSIWINEXT_H__
-#define __XGLWSIWINEXT_H__
+#ifndef __VKWSIWINEXT_H__
+#define __VKWSIWINEXT_H__
// This is just to get Windows to build.
// Need to replace with the declarations for Windows wsi.
-typedef void XGL_WSI_X11_CONNECTION_INFO;
+typedef void VK_WSI_X11_CONNECTION_INFO;
typedef unsigned int xcb_window_t;
typedef unsigned int xcb_randr_crtc_t;
-typedef void XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO;
-typedef void XGL_WSI_X11_PRESENT_INFO;
+typedef void VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO;
+typedef void VK_WSI_X11_PRESENT_INFO;
-#endif // __XGLWSIWINEXT_H__
+#endif // __VKWSIWINEXT_H__
/* IN DEVELOPMENT. DO NOT SHIP. */
-#ifndef __XGLWSIX11EXT_H__
-#define __XGLWSIX11EXT_H__
+#ifndef __VKWSIX11EXT_H__
+#define __VKWSIX11EXT_H__
#include <xcb/xcb.h>
#include <xcb/randr.h>
-#include "xgl.h"
+#include "vulkan.h"
#ifdef __cplusplus
extern "C"
{
#endif // __cplusplus
-typedef struct _XGL_WSI_X11_CONNECTION_INFO {
+typedef struct _VK_WSI_X11_CONNECTION_INFO {
xcb_connection_t* pConnection;
xcb_window_t root;
xcb_randr_provider_t provider;
-} XGL_WSI_X11_CONNECTION_INFO;
+} VK_WSI_X11_CONNECTION_INFO;
-typedef struct _XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO
+typedef struct _VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO
{
- XGL_FORMAT format;
- XGL_FLAGS usage; // XGL_IMAGE_USAGE_FLAGS
- XGL_EXTENT2D extent;
- XGL_FLAGS flags;
-} XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO;
+ VK_FORMAT format;
+ VK_FLAGS usage; // VK_IMAGE_USAGE_FLAGS
+ VK_EXTENT2D extent;
+ VK_FLAGS flags;
+} VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO;
-typedef struct _XGL_WSI_X11_PRESENT_INFO
+typedef struct _VK_WSI_X11_PRESENT_INFO
{
/* which window to present to */
xcb_window_t destWindow;
- XGL_IMAGE srcImage;
+ VK_IMAGE srcImage;
/**
* After the command buffers in the queue have been completed, if the MSC
* }
*
* In other words, either set \p target_msc to an absolute value (require
- * xglWsiX11GetMSC(), potentially a round-trip to the server, to get the
+ * vkWsiX11GetMSC(), potentially a round-trip to the server, to get the
* current MSC first), or set \p target_msc to zero and set a "swap
* interval".
*
* be flipped to.
*/
bool32_t flip;
-} XGL_WSI_X11_PRESENT_INFO;
+} VK_WSI_X11_PRESENT_INFO;
-typedef XGL_RESULT (XGLAPI *xglWsiX11AssociateConnectionType)(XGL_PHYSICAL_GPU gpu, const XGL_WSI_X11_CONNECTION_INFO* pConnectionInfo);
-typedef XGL_RESULT (XGLAPI *xglWsiX11GetMSCType)(XGL_DEVICE device, xcb_window_t window, xcb_randr_crtc_t crtc, uint64_t* pMsc);
-typedef XGL_RESULT (XGLAPI *xglWsiX11CreatePresentableImageType)(XGL_DEVICE device, const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo, XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem);
-typedef XGL_RESULT (XGLAPI *xglWsiX11QueuePresentType)(XGL_QUEUE queue, const XGL_WSI_X11_PRESENT_INFO* pPresentInfo, XGL_FENCE fence);
+typedef VK_RESULT (VKAPI *vkWsiX11AssociateConnectionType)(VK_PHYSICAL_GPU gpu, const VK_WSI_X11_CONNECTION_INFO* pConnectionInfo);
+typedef VK_RESULT (VKAPI *vkWsiX11GetMSCType)(VK_DEVICE device, xcb_window_t window, xcb_randr_crtc_t crtc, uint64_t* pMsc);
+typedef VK_RESULT (VKAPI *vkWsiX11CreatePresentableImageType)(VK_DEVICE device, const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo, VK_IMAGE* pImage, VK_GPU_MEMORY* pMem);
+typedef VK_RESULT (VKAPI *vkWsiX11QueuePresentType)(VK_QUEUE queue, const VK_WSI_X11_PRESENT_INFO* pPresentInfo, VK_FENCE fence);
/**
* Associate an X11 connection with a GPU. This should be done before device
* creation. If the device is already created,
- * XGL_ERROR_DEVICE_ALREADY_CREATED is returned.
+ * VK_ERROR_DEVICE_ALREADY_CREATED is returned.
*
* Truth is, given a connection, we could find the associated GPU. But
* without having a GPU as the first parameter, the loader could not find the
* dispatch table.
*
- * This function is available when xglGetExtensionSupport says "XGL_WSI_X11"
+ * This function is available when vkGetExtensionSupport says "VK_WSI_X11"
* is supported.
*/
-XGL_RESULT XGLAPI xglWsiX11AssociateConnection(
- XGL_PHYSICAL_GPU gpu,
- const XGL_WSI_X11_CONNECTION_INFO* pConnectionInfo);
+VK_RESULT VKAPI vkWsiX11AssociateConnection(
+ VK_PHYSICAL_GPU gpu,
+ const VK_WSI_X11_CONNECTION_INFO* pConnectionInfo);
/**
* Return the current MSC (Media Stream Counter, incremented for each vblank)
* of \p crtc. If crtc is \p XCB_NONE, a suitable CRTC is picked based on \p
* win.
*/
-XGL_RESULT XGLAPI xglWsiX11GetMSC(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkWsiX11GetMSC(
+ VK_DEVICE device,
xcb_window_t window,
xcb_randr_crtc_t crtc,
uint64_t* pMsc);
/**
- * Create an XGL_IMAGE that can be presented. An XGL_GPU_MEMORY is created
+ * Create a VK_IMAGE that can be presented. A VK_GPU_MEMORY is created
* and bound automatically. The memory returned can only be used in
- * xglQueue[Add|Remove]MemReference. Destroying the memory or binding another memory to the
+ * vkQueue[Add|Remove]MemReference. Destroying the memory or binding another memory to the
* image is not allowed.
*/
-XGL_RESULT XGLAPI xglWsiX11CreatePresentableImage(
- XGL_DEVICE device,
- const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo,
- XGL_IMAGE* pImage,
- XGL_GPU_MEMORY* pMem);
+VK_RESULT VKAPI vkWsiX11CreatePresentableImage(
+ VK_DEVICE device,
+ const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo,
+ VK_IMAGE* pImage,
+ VK_GPU_MEMORY* pMem);
/**
* Present an image to an X11 window. The presentation always occurs after
* the command buffers in the queue have been completed, subject to other
- * parameters specified in XGL_WSI_X11_PRESENT_INFO.
+ * parameters specified in VK_WSI_X11_PRESENT_INFO.
*
 * The fence is signaled when the presentation occurs.
*/
-XGL_RESULT XGLAPI xglWsiX11QueuePresent(
- XGL_QUEUE queue,
- const XGL_WSI_X11_PRESENT_INFO* pPresentInfo,
- XGL_FENCE fence);
+VK_RESULT VKAPI vkWsiX11QueuePresent(
+ VK_QUEUE queue,
+ const VK_WSI_X11_PRESENT_INFO* pPresentInfo,
+ VK_FENCE fence);
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
-#endif // __XGLWSIX11EXT_H__
+#endif // __VKWSIX11EXT_H__
cmake_minimum_required (VERSION 2.8.11)
-macro(run_xgl_helper subcmd)
+macro(run_vk_helper subcmd)
add_custom_command(OUTPUT ${ARGN}
- COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl_helper.py --${subcmd} ${PROJECT_SOURCE_DIR}/include/xgl.h --abs_out_dir ${CMAKE_CURRENT_BINARY_DIR}
- DEPENDS ${PROJECT_SOURCE_DIR}/xgl_helper.py ${PROJECT_SOURCE_DIR}/include/xgl.h
+ COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/vk_helper.py --${subcmd} ${PROJECT_SOURCE_DIR}/include/vulkan.h --abs_out_dir ${CMAKE_CURRENT_BINARY_DIR}
+ DEPENDS ${PROJECT_SOURCE_DIR}/vk_helper.py ${PROJECT_SOURCE_DIR}/include/vulkan.h
)
endmacro()
-macro(run_xgl_layer_generate subcmd output)
+macro(run_vk_layer_generate subcmd output)
add_custom_command(OUTPUT ${output}
- COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-layer-generate.py ${subcmd} ${PROJECT_SOURCE_DIR}/include/xgl.h > ${output}
- DEPENDS ${PROJECT_SOURCE_DIR}/xgl-layer-generate.py ${PROJECT_SOURCE_DIR}/include/xgl.h ${PROJECT_SOURCE_DIR}/xgl.py
+ COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-layer-generate.py ${subcmd} ${PROJECT_SOURCE_DIR}/include/vulkan.h > ${output}
+ DEPENDS ${PROJECT_SOURCE_DIR}/xgl-layer-generate.py ${PROJECT_SOURCE_DIR}/include/vulkan.h ${PROJECT_SOURCE_DIR}/xgl.py
)
endmacro()
if (WIN32)
- macro(add_xgl_layer target)
- add_custom_command(OUTPUT XGLLayer${target}.def
- COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-generate.py win-def-file XGLLayer${target} layer > XGLLayer${target}.def
+ macro(add_vk_layer target)
+ add_custom_command(OUTPUT VKLayer${target}.def
+ COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-generate.py win-def-file VKLayer${target} layer > VKLayer${target}.def
DEPENDS ${PROJECT_SOURCE_DIR}/xgl-generate.py ${PROJECT_SOURCE_DIR}/xgl.py
)
- add_library(XGLLayer${target} SHARED ${ARGN} XGLLayer${target}.def)
- target_link_Libraries(XGLLayer${target} layer_utils)
- add_dependencies(XGLLayer${target} generate_xgl_layer_helpers)
- add_dependencies(XGLLayer${target} ${CMAKE_CURRENT_BINARY_DIR}/XGLLayer${target}.def)
- set_target_properties(XGLLayer${target} PROPERTIES LINK_FLAGS "/DEF:${CMAKE_CURRENT_BINARY_DIR}/XGLLayer${target}.def")
+ add_library(VKLayer${target} SHARED ${ARGN} VKLayer${target}.def)
+ target_link_libraries(VKLayer${target} layer_utils)
+ add_dependencies(VKLayer${target} generate_vk_layer_helpers)
+ add_dependencies(VKLayer${target} ${CMAKE_CURRENT_BINARY_DIR}/VKLayer${target}.def)
+ set_target_properties(VKLayer${target} PROPERTIES LINK_FLAGS "/DEF:${CMAKE_CURRENT_BINARY_DIR}/VKLayer${target}.def")
endmacro()
else()
- macro(add_xgl_layer target)
- add_library(XGLLayer${target} SHARED ${ARGN})
- target_link_Libraries(XGLLayer${target} layer_utils)
- add_dependencies(XGLLayer${target} generate_xgl_layer_helpers)
- set_target_properties(XGLLayer${target} PROPERTIES LINK_FLAGS "-Wl,-Bsymbolic")
+ macro(add_vk_layer target)
+ add_library(VKLayer${target} SHARED ${ARGN})
+ target_link_libraries(VKLayer${target} layer_utils)
+ add_dependencies(VKLayer${target} generate_vk_layer_helpers)
+ set_target_properties(VKLayer${target} PROPERTIES LINK_FLAGS "-Wl,-Bsymbolic")
endmacro()
endif()
if (WIN32)
- set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DXGL_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS")
- set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS")
+ set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DVK_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS")
+ set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS")
endif()
if (NOT WIN32)
set (CMAKE_CXX_FLAGS "-std=c++11")
- set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DXGL_PROTOTYPES -Wpointer-arith")
- set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES -Wpointer-arith")
+ set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DVK_PROTOTYPES -Wpointer-arith")
+ set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES -Wpointer-arith")
endif()
-add_custom_command(OUTPUT xgl_dispatch_table_helper.h
- COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-generate.py dispatch-table-ops layer > xgl_dispatch_table_helper.h
+add_custom_command(OUTPUT vk_dispatch_table_helper.h
+ COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-generate.py dispatch-table-ops layer > vk_dispatch_table_helper.h
DEPENDS ${PROJECT_SOURCE_DIR}/xgl-generate.py ${PROJECT_SOURCE_DIR}/xgl.py)
-add_custom_command(OUTPUT xgl_generic_intercept_proc_helper.h
- COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-generate.py layer-intercept-proc > xgl_generic_intercept_proc_helper.h
+add_custom_command(OUTPUT vk_generic_intercept_proc_helper.h
+ COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-generate.py layer-intercept-proc > vk_generic_intercept_proc_helper.h
DEPENDS ${PROJECT_SOURCE_DIR}/xgl-generate.py ${PROJECT_SOURCE_DIR}/xgl.py)
-run_xgl_helper(gen_enum_string_helper xgl_enum_string_helper.h)
-run_xgl_helper(gen_struct_wrappers
- xgl_struct_string_helper.h
- xgl_struct_string_helper_cpp.h
- xgl_struct_string_helper_no_addr.h
- xgl_struct_string_helper_no_addr_cpp.h
- xgl_struct_size_helper.h
- xgl_struct_size_helper.c
- xgl_struct_wrappers.h
- xgl_struct_wrappers.cpp
+run_vk_helper(gen_enum_string_helper vk_enum_string_helper.h)
+run_vk_helper(gen_struct_wrappers
+ vk_struct_string_helper.h
+ vk_struct_string_helper_cpp.h
+ vk_struct_string_helper_no_addr.h
+ vk_struct_string_helper_no_addr_cpp.h
+ vk_struct_size_helper.h
+ vk_struct_size_helper.c
+ vk_struct_wrappers.h
+ vk_struct_wrappers.cpp
)
-run_xgl_helper(gen_graphviz xgl_struct_graphviz_helper.h)
+run_vk_helper(gen_graphviz vk_struct_graphviz_helper.h)
-add_custom_target(generate_xgl_layer_helpers DEPENDS
- xgl_dispatch_table_helper.h
- xgl_generic_intercept_proc_helper.h
- xgl_enum_string_helper.h
- xgl_struct_string_helper.h
- xgl_struct_string_helper_no_addr.h
- xgl_struct_string_helper_cpp.h
- xgl_struct_string_helper_no_addr_cpp.h
- xgl_struct_size_helper.h
- xgl_struct_size_helper.c
- xgl_struct_wrappers.h
- xgl_struct_wrappers.cpp
- xgl_struct_graphviz_helper.h
+add_custom_target(generate_vk_layer_helpers DEPENDS
+ vk_dispatch_table_helper.h
+ vk_generic_intercept_proc_helper.h
+ vk_enum_string_helper.h
+ vk_struct_string_helper.h
+ vk_struct_string_helper_no_addr.h
+ vk_struct_string_helper_cpp.h
+ vk_struct_string_helper_no_addr_cpp.h
+ vk_struct_size_helper.h
+ vk_struct_size_helper.c
+ vk_struct_wrappers.h
+ vk_struct_wrappers.cpp
+ vk_struct_graphviz_helper.h
)
-run_xgl_layer_generate(Generic generic_layer.c)
-run_xgl_layer_generate(ApiDump api_dump.c)
-run_xgl_layer_generate(ApiDumpFile api_dump_file.c)
-run_xgl_layer_generate(ApiDumpNoAddr api_dump_no_addr.c)
-run_xgl_layer_generate(ApiDumpCpp api_dump.cpp)
-run_xgl_layer_generate(ApiDumpNoAddrCpp api_dump_no_addr.cpp)
-run_xgl_layer_generate(ObjectTracker object_track.c)
+run_vk_layer_generate(Generic generic_layer.c)
+run_vk_layer_generate(ApiDump api_dump.c)
+run_vk_layer_generate(ApiDumpFile api_dump_file.c)
+run_vk_layer_generate(ApiDumpNoAddr api_dump_no_addr.c)
+run_vk_layer_generate(ApiDumpCpp api_dump.cpp)
+run_vk_layer_generate(ApiDumpNoAddrCpp api_dump_no_addr.cpp)
+run_vk_layer_generate(ObjectTracker object_track.c)
add_library(layer_utils SHARED layers_config.cpp)
if (WIN32)
target_link_libraries(layer_utils)
endif()
-add_xgl_layer(Basic basic.cpp)
-add_xgl_layer(Multi multi.cpp)
-add_xgl_layer(DrawState draw_state.cpp)
-add_xgl_layer(MemTracker mem_tracker.cpp)
-add_xgl_layer(GlaveSnapshot glave_snapshot.c)
+add_vk_layer(Basic basic.cpp)
+add_vk_layer(Multi multi.cpp)
+add_vk_layer(DrawState draw_state.cpp)
+add_vk_layer(MemTracker mem_tracker.cpp)
+add_vk_layer(GlaveSnapshot glave_snapshot.c)
# generated
-add_xgl_layer(Generic generic_layer.c)
-add_xgl_layer(APIDump api_dump.c)
-add_xgl_layer(APIDumpFile api_dump_file.c)
-add_xgl_layer(APIDumpNoAddr api_dump_no_addr.c)
-add_xgl_layer(APIDumpCpp api_dump.cpp)
-add_xgl_layer(APIDumpNoAddrCpp api_dump_no_addr.cpp)
-add_xgl_layer(ObjectTracker object_track.c)
-add_xgl_layer(ParamChecker param_checker.cpp)
+add_vk_layer(Generic generic_layer.c)
+add_vk_layer(APIDump api_dump.c)
+add_vk_layer(APIDumpFile api_dump_file.c)
+add_vk_layer(APIDumpNoAddr api_dump_no_addr.c)
+add_vk_layer(APIDumpCpp api_dump.cpp)
+add_vk_layer(APIDumpNoAddrCpp api_dump_no_addr.cpp)
+add_vk_layer(ObjectTracker object_track.c)
+add_vk_layer(ParamChecker param_checker.cpp)
## Overview
-Layer libraries can be written to intercept or hook XGL entrypoints for various
-debug and validation purposes. One or more XGL entrypoints can be defined in your Layer
+Layer libraries can be written to intercept or hook VK entrypoints for various
+debug and validation purposes. One or more VK entrypoints can be defined in your Layer
library. Undefined entrypoints in the Layer library are passed on to the next Layer, which
may be the driver. Multiple layer libraries can be chained together (the chain is actually a hierarchy).
-xglEnumerateLayer can be called to list the available layer libraries. xglGetProcAddr is
+vkEnumerateLayer can be called to list the available layer libraries. vkGetProcAddr is
used internally by the Layers and ICD Loader to initialize dispatch tables. Layers are
-activated at xglCreateDevice time. xglCreateDevice createInfo struct is extended to allow
+activated at vkCreateDevice time. vkCreateDevice createInfo struct is extended to allow
a list of layers to be activated. Layer libraries can alternatively be LD\_PRELOADed depending
upon how they are implemented.
Note that some layers are code-generated and will therefore exist in the directory (build_dir)/layers.
--include/xglLayer.h - header file for layer code.
+-include/vkLayer.h - header file for layer code.
### Templates
layer/Basic.cpp (name=Basic) - simple example wrapping a few entrypoints. Shows layer features:
- Multiple dispatch tables for supporting multiple GPUs.
- Example layer extension function shown.
-- Layer extension advertised by xglGetExtension().
-- xglEnumerateLayers() supports loader layer name queries and call interception
+- Layer extension advertised by vkGetExtension().
+- vkEnumerateLayers() supports loader layer name queries and call interception
- Can be LD\_PRELOADed individually
layer/Multi.cpp (name=multi1:multi2) - simple example showing multiple layers per library
-(build dir)/layer/generic_layer.c (name=Generic) - auto generated example wrapping all XGL entrypoints. Single global dispatch table. Can be LD\_PRELOADed.
+(build dir)/layer/generic_layer.c (name=Generic) - auto generated example wrapping all VK entrypoints. Single global dispatch table. Can be LD\_PRELOADed.
### Print API Calls and Parameter Values
(build dir)/layer/api_dump.c (name=APIDump) - print out API calls along with parameter values
(build dir)/layer/api_dump.cpp (name=APIDumpCpp) - same as above, but uses C++ strings and I/O streams
-(build dir)/layer/api\_dump\_file.c (name=APIDumpFile) - Write API calls along with parameter values to xgl\_apidump.txt file.
+(build dir)/layer/api\_dump\_file.c (name=APIDumpFile) - Write API calls along with parameter values to vk\_apidump.txt file.
(build dir)/layer/api\_dump\_no\_addr.c (name=APIDumpNoAddr) - print out API calls along with parameter values but replace any variable addresses with the static string "addr".
(build dir)/layer/api\_dump\_no\_addr.cpp (name=APIDumpNoAddrCpp) - same as above, but uses C++ strings and I/O streams
### Print Object Stats
-(build dir>/layer/object_track.c (name=ObjectTracker) - Print object CREATE/USE/DESTROY stats. Individually track objects by category. XGL\_OBJECT\_TYPE enum defined in object_track.h. If a Dbg callback function is registered, this layer will use callback function(s) for reporting, otherwise uses stdout. Provides custom interface to query number of live objects of given type "XGL\_UINT64 objTrackGetObjectCount(XGL\_OBJECT\_TYPE type)" and a secondary call to return an array of those objects "XGL\_RESULT objTrackGetObjects(XGL\_OBJECT\_TYPE type, XGL\_UINT64 objCount, OBJTRACK\_NODE* pObjNodeArray)".
+(build dir)/layer/object_track.c (name=ObjectTracker) - Print object CREATE/USE/DESTROY stats. Individually track objects by category. VK\_OBJECT\_TYPE enum defined in object_track.h. If a Dbg callback function is registered, this layer will use the callback function(s) for reporting; otherwise it uses stdout. Provides a custom interface to query the number of live objects of a given type, "VK\_UINT64 objTrackGetObjectCount(VK\_OBJECT\_TYPE type)", and a secondary call to return an array of those objects, "VK\_RESULT objTrackGetObjects(VK\_OBJECT\_TYPE type, VK\_UINT64 objCount, OBJTRACK\_NODE* pObjNodeArray)".
### Report Draw State
layer/draw\_state.c (name=DrawState) - DrawState reports the Descriptor Set, Pipeline State, and dynamic state at each Draw call. The DrawState layer performs a number of validation checks on this state. Of primary interest is making sure that the resources bound to Descriptor Sets correctly align with the layout specified for the Set. If a Dbg callback function is registered, this layer will use the callback function(s) for reporting; otherwise it uses stdout.
## Using Layers
-1. Build XGL loader and i965 icd driver using normal steps (cmake and make)
-2. Place libXGLLayer<name>.so in the same directory as your XGL test or app:
+1. Build the VK loader and i965 ICD driver using the normal steps (cmake and make)
+2. Place libVKLayer<name>.so in the same directory as your VK test or app:
- cp build/layer/libXGLLayerBasic.so build/layer/libXGLLayerGeneric.so build/tests
+ cp build/layer/libVKLayerBasic.so build/layer/libVKLayerGeneric.so build/tests
- This is required for the Icd loader to be able to scan and enumerate your library. Alternatively, use the LIBXGL\_LAYERS\_PATH environment variable to specify where the layer libraries reside.
+ This is required for the ICD loader to be able to scan and enumerate your library. Alternatively, use the LIBVK\_LAYERS\_PATH environment variable to specify where the layer libraries reside.
3. Specify which Layers to activate by using
-xglCreateDevice XGL\_LAYER\_CREATE\_INFO struct or environment variable LIBXGL\_LAYER\_NAMES
+the vkCreateDevice VK\_LAYER\_CREATE\_INFO struct or the environment variable LIBVK\_LAYER\_NAMES
- export LIBXGL\_LAYER\_NAMES=Basic:Generic
- cd build/tests; ./xglinfo
+ export LIBVK\_LAYER\_NAMES=Basic:Generic
+ cd build/tests; ./vkinfo
## Tips for writing new layers
-1. Must implement xglGetProcAddr() (aka GPA);
-2. Must have a local dispatch table to call next layer (see xglLayer.h);
-3. Should implement xglEnumerateLayers() returning layer name when gpu == NULL; otherwise layer name is extracted from library filename by the Loader;
+1. Must implement vkGetProcAddr() (aka GPA);
+2. Must have a local dispatch table to call next layer (see vkLayer.h);
+3. Should implement vkEnumerateLayers() returning layer name when gpu == NULL; otherwise layer name is extracted from library filename by the Loader;
4. gpu objects must be unwrapped (gpu->nextObject) when passed to the next layer;
5. the next layer's GPA can be found in the wrapped gpu object;
6. the Loader calls a layer's GPA first, so initialization should occur there;
7. all entrypoints can be wrapped but will only be called after the layer is activated
- via the first xglCreatDevice;
-8. entrypoint names can be any name as specified by the layers xglGetProcAddr
- implementation; exceptions are xglGetProcAddr and xglEnumerateLayers,
+ via the first vkCreateDevice;
+8. entrypoint names can be any name as specified by the layer's vkGetProcAddr
+ implementation; exceptions are vkGetProcAddr and vkEnumerateLayers,
which must have the correct name since the Loader calls these entrypoints;
-9. entrypoint names must be exported to the dynamic loader with XGL\_LAYER\_EXPORT;
-10. For LD\_PRELOAD support: a)entrypoint names should be offical xgl names and
+9. entrypoint names must be exported to the dynamic loader with VK\_LAYER\_EXPORT;
+10. For LD\_PRELOAD support: a) entrypoint names should be official vk names and
b) initialization should occur on any call with a gpu object (Loader type
- initialization must be done if implementing xglInitAndEnumerateGpus).
-11. Implement xglGetExtension() if you want to advertise a layer extension
+ initialization must be done if implementing vkInitAndEnumerateGpus).
+11. Implement vkGetExtension() if you want to advertise a layer extension
(only available after the layer is activated);
-12. Layer naming convention is camel case same name as in library: libXGLLayer<name>.so
+12. Layer naming convention is camel case, matching the name in the library file: libVKLayer<name>.so
13. Libraries with multiple layers should implement a separate GetProcAddr for each
 layer and export them to the dynamic loader; the function name is <layerName>GetProcAddr().
- Main xglGetProcAddr() should also be implemented.
+ Main vkGetProcAddr() should also be implemented.
## Status
### Current Features
-- scanning of available Layers during xglInitAndEnumerateGpus;
-- layer names retrieved via xglEnumerateLayers();
-- xglEnumerateLayers and xglGetProcAddr supported APIs in xgl.h, ICD loader and i965 driver;
+- scanning of available Layers during vkInitAndEnumerateGpus;
+- layer names retrieved via vkEnumerateLayers();
+- vkEnumerateLayers and vkGetProcAddr are supported APIs in vulkan.h, the ICD loader, and the i965 driver;
- multiple layers in a hierarchy supported;
- layer enumeration supported per GPU;
- layers activated per gpu and per icd driver: separate dispatch table and layer library list in loader for each gpu or icd driver;
-- activation via xglCreateDevice extension struct in CreateInfo or via env var (LIBXGL\_LAYER\_NAMES);
+- activation via vkCreateDevice extension struct in CreateInfo or via env var (LIBVK\_LAYER\_NAMES);
- layer libraries can be LD\_PRELOADed if implemented correctly;
### Current known issues
- Layers with multiple threads are not well tested and some layers are likely to have issues. The APIDump family of layers should be thread-safe.
- layer libraries (except Basic) don't support multiple dispatch tables for multiple GPUs;
-- layer libraries not yet include loader init functionality for full LD\_PRELOAD of entire API including xglInitAndEnumerateGpus;
-- Since Layers aren't activated until xglCreateDevice, any calls to xglGetExtension() will not report layer extensions unless implemented in the layer;
-- layer extensions do NOT need to be enabled in xglCreateDevice to be available;
+- layer libraries do not yet include loader init functionality for full LD\_PRELOAD of the entire API, including vkInitAndEnumerateGpus;
+- Since Layers aren't activated until vkCreateDevice, any calls to vkGetExtension() will not report layer extensions unless implemented in the layer;
+- layer extensions do NOT need to be enabled in vkCreateDevice to be available;
- no support for apps registering layers; layers must be discovered via the initial scan
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <assert.h>
#include <unordered_map>
#include "loader_platform.h"
-#include "xgl_dispatch_table_helper.h"
-#include "xglLayer.h"
+#include "vk_dispatch_table_helper.h"
+#include "vkLayer.h"
// The following is #included again to catch certain OS-specific functions
// being used:
#include "loader_platform.h"
-static std::unordered_map<void *, XGL_LAYER_DISPATCH_TABLE *> tableMap;
+static std::unordered_map<void *, VK_LAYER_DISPATCH_TABLE *> tableMap;
-static XGL_LAYER_DISPATCH_TABLE * initLayerTable(const XGL_BASE_LAYER_OBJECT *gpuw)
+static VK_LAYER_DISPATCH_TABLE * initLayerTable(const VK_BASE_LAYER_OBJECT *gpuw)
{
- XGL_LAYER_DISPATCH_TABLE *pTable;
+ VK_LAYER_DISPATCH_TABLE *pTable;
assert(gpuw);
- std::unordered_map<void *, XGL_LAYER_DISPATCH_TABLE *>::const_iterator it = tableMap.find((void *) gpuw);
+ std::unordered_map<void *, VK_LAYER_DISPATCH_TABLE *>::const_iterator it = tableMap.find((void *) gpuw);
if (it == tableMap.end())
{
- pTable = new XGL_LAYER_DISPATCH_TABLE;
+ pTable = new VK_LAYER_DISPATCH_TABLE;
tableMap[(void *) gpuw] = pTable;
} else
{
return it->second;
}
- layer_initialize_dispatch_table(pTable, gpuw->pGPA, (XGL_PHYSICAL_GPU) gpuw->nextObject);
+ layer_initialize_dispatch_table(pTable, gpuw->pGPA, (VK_PHYSICAL_GPU) gpuw->nextObject);
return pTable;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglLayerExtension1(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkLayerExtension1(VK_DEVICE device)
{
- printf("In xglLayerExtension1() call w/ device: %p\n", (void*)device);
- printf("xglLayerExtension1 returning SUCCESS\n");
- return XGL_SUCCESS;
+ printf("In vkLayerExtension1() call w/ device: %p\n", (void*)device);
+ printf("vkLayerExtension1 returning SUCCESS\n");
+ return VK_SUCCESS;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char* pExtName)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char* pExtName)
{
- XGL_RESULT result;
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_RESULT result;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
/* This entrypoint is NOT going to init its own dispatch table since the loader calls here early */
- if (!strncmp(pExtName, "xglLayerExtension1", strlen("xglLayerExtension1")))
+ if (!strncmp(pExtName, "vkLayerExtension1", strlen("vkLayerExtension1")))
{
- result = XGL_SUCCESS;
+ result = VK_SUCCESS;
} else if (!strncmp(pExtName, "Basic", strlen("Basic")))
{
- result = XGL_SUCCESS;
+ result = VK_SUCCESS;
} else if (!tableMap.empty() && (tableMap.find(gpuw) != tableMap.end()))
{
- printf("At start of wrapped xglGetExtensionSupport() call w/ gpu: %p\n", (void*)gpu);
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap[gpuw];
- result = pTable->GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);
- printf("Completed wrapped xglGetExtensionSupport() call w/ gpu: %p\n", (void*)gpu);
+ printf("At start of wrapped vkGetExtensionSupport() call w/ gpu: %p\n", (void*)gpu);
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap[gpuw];
+ result = pTable->GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);
+ printf("Completed wrapped vkGetExtensionSupport() call w/ gpu: %p\n", (void*)gpu);
} else
{
- result = XGL_ERROR_INVALID_EXTENSION;
+ result = VK_ERROR_INVALID_EXTENSION;
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap[gpuw];
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap[gpuw];
- printf("At start of wrapped xglCreateDevice() call w/ gpu: %p\n", (void*)gpu);
- XGL_RESULT result = pTable->CreateDevice((XGL_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
+ printf("At start of wrapped vkCreateDevice() call w/ gpu: %p\n", (void*)gpu);
+ VK_RESULT result = pTable->CreateDevice((VK_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
// create a mapping for the device object into the dispatch table
tableMap.emplace(*pDevice, pTable);
- printf("Completed wrapped xglCreateDevice() call w/ pDevice, Device %p: %p\n", (void*)pDevice, (void *) *pDevice);
+ printf("Completed wrapped vkCreateDevice() call w/ pDevice, Device %p: %p\n", (void*)pDevice, (void *) *pDevice);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetFormatInfo(XGL_DEVICE device, XGL_FORMAT format, XGL_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetFormatInfo(VK_DEVICE device, VK_FORMAT format, VK_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap[device];
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap[device];
- printf("At start of wrapped xglGetFormatInfo() call w/ device: %p\n", (void*)device);
- XGL_RESULT result = pTable->GetFormatInfo(device, format, infoType, pDataSize, pData);
- printf("Completed wrapped xglGetFormatInfo() call w/ device: %p\n", (void*)device);
+ printf("At start of wrapped vkGetFormatInfo() call w/ device: %p\n", (void*)device);
+ VK_RESULT result = pTable->GetFormatInfo(device, format, infoType, pDataSize, pData);
+ printf("Completed wrapped vkGetFormatInfo() call w/ device: %p\n", (void*)device);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
{
if (gpu != NULL)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_LAYER_DISPATCH_TABLE* pTable = initLayerTable(gpuw);
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_LAYER_DISPATCH_TABLE* pTable = initLayerTable(gpuw);
- printf("At start of wrapped xglEnumerateLayers() call w/ gpu: %p\n", gpu);
- XGL_RESULT result = pTable->EnumerateLayers((XGL_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ printf("At start of wrapped vkEnumerateLayers() call w/ gpu: %p\n", gpu);
+ VK_RESULT result = pTable->EnumerateLayers((VK_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
return result;
} else
{
if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL || pReserved == NULL)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
// Example of a layer that is only compatible with Intel's GPUs
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT*) pReserved;
- xglGetGpuInfoType fpGetGpuInfo;
- XGL_PHYSICAL_GPU_PROPERTIES gpuProps;
- size_t dataSize = sizeof(XGL_PHYSICAL_GPU_PROPERTIES);
- fpGetGpuInfo = (xglGetGpuInfoType) gpuw->pGPA((XGL_PHYSICAL_GPU) gpuw->nextObject, "xglGetGpuInfo");
- fpGetGpuInfo((XGL_PHYSICAL_GPU) gpuw->nextObject, XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES, &dataSize, &gpuProps);
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT*) pReserved;
+ vkGetGpuInfoType fpGetGpuInfo;
+ VK_PHYSICAL_GPU_PROPERTIES gpuProps;
+ size_t dataSize = sizeof(VK_PHYSICAL_GPU_PROPERTIES);
+ fpGetGpuInfo = (vkGetGpuInfoType) gpuw->pGPA((VK_PHYSICAL_GPU) gpuw->nextObject, "vkGetGpuInfo");
+ fpGetGpuInfo((VK_PHYSICAL_GPU) gpuw->nextObject, VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES, &dataSize, &gpuProps);
if (gpuProps.vendorId == 0x8086)
{
*pOutLayerCount = 1;
strncpy((char *) pOutLayers[0], "Basic", maxStringSize);
} else
{
*pOutLayerCount = 0;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
}
-XGL_LAYER_EXPORT void * XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char* pName)
+VK_LAYER_EXPORT void * VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char* pName)
{
if (gpu == NULL)
return NULL;
- initLayerTable((const XGL_BASE_LAYER_OBJECT *) gpu);
-
- if (!strncmp("xglGetProcAddr", pName, sizeof("xglGetProcAddr")))
- return (void *) xglGetProcAddr;
- else if (!strncmp("xglCreateDevice", pName, sizeof ("xglCreateDevice")))
- return (void *) xglCreateDevice;
- else if (!strncmp("xglGetExtensionSupport", pName, sizeof ("xglGetExtensionSupport")))
- return (void *) xglGetExtensionSupport;
- else if (!strncmp("xglEnumerateLayers", pName, sizeof ("xglEnumerateLayers")))
- return (void *) xglEnumerateLayers;
- else if (!strncmp("xglGetFormatInfo", pName, sizeof ("xglGetFormatInfo")))
- return (void *) xglGetFormatInfo;
- else if (!strncmp("xglLayerExtension1", pName, sizeof("xglLayerExtension1")))
- return (void *) xglLayerExtension1;
+ initLayerTable((const VK_BASE_LAYER_OBJECT *) gpu);
+
+ if (!strncmp("vkGetProcAddr", pName, sizeof("vkGetProcAddr")))
+ return (void *) vkGetProcAddr;
+ else if (!strncmp("vkCreateDevice", pName, sizeof ("vkCreateDevice")))
+ return (void *) vkCreateDevice;
+ else if (!strncmp("vkGetExtensionSupport", pName, sizeof ("vkGetExtensionSupport")))
+ return (void *) vkGetExtensionSupport;
+ else if (!strncmp("vkEnumerateLayers", pName, sizeof ("vkEnumerateLayers")))
+ return (void *) vkEnumerateLayers;
+ else if (!strncmp("vkGetFormatInfo", pName, sizeof ("vkGetFormatInfo")))
+ return (void *) vkGetFormatInfo;
+ else if (!strncmp("vkLayerExtension1", pName, sizeof("vkLayerExtension1")))
+ return (void *) vkLayerExtension1;
else {
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
if (gpuw->pGPA == NULL)
return NULL;
- return gpuw->pGPA((XGL_PHYSICAL_GPU) gpuw->nextObject, pName);
+ return gpuw->pGPA((VK_PHYSICAL_GPU) gpuw->nextObject, pName);
}
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <unordered_map>
#include "loader_platform.h"
-#include "xgl_dispatch_table_helper.h"
-#include "xgl_struct_string_helper_cpp.h"
+#include "vk_dispatch_table_helper.h"
+#include "vk_struct_string_helper_cpp.h"
#pragma GCC diagnostic ignored "-Wwrite-strings"
-#include "xgl_struct_graphviz_helper.h"
+#include "vk_struct_graphviz_helper.h"
#pragma GCC diagnostic warning "-Wwrite-strings"
-#include "xgl_struct_size_helper.h"
+#include "vk_struct_size_helper.h"
#include "draw_state.h"
#include "layers_config.h"
// The following is #included again to catch certain OS-specific functions
#include "loader_platform.h"
#include "layers_msg.h"
-unordered_map<XGL_SAMPLER, SAMPLER_NODE*> sampleMap;
-unordered_map<XGL_IMAGE_VIEW, IMAGE_NODE*> imageMap;
-unordered_map<XGL_BUFFER_VIEW, BUFFER_NODE*> bufferMap;
-unordered_map<XGL_DYNAMIC_STATE_OBJECT, DYNAMIC_STATE_NODE*> dynamicStateMap;
-unordered_map<XGL_PIPELINE, PIPELINE_NODE*> pipelineMap;
-unordered_map<XGL_DESCRIPTOR_POOL, POOL_NODE*> poolMap;
-unordered_map<XGL_DESCRIPTOR_SET, SET_NODE*> setMap;
-unordered_map<XGL_DESCRIPTOR_SET_LAYOUT, LAYOUT_NODE*> layoutMap;
+unordered_map<VK_SAMPLER, SAMPLER_NODE*> sampleMap;
+unordered_map<VK_IMAGE_VIEW, IMAGE_NODE*> imageMap;
+unordered_map<VK_BUFFER_VIEW, BUFFER_NODE*> bufferMap;
+unordered_map<VK_DYNAMIC_STATE_OBJECT, DYNAMIC_STATE_NODE*> dynamicStateMap;
+unordered_map<VK_PIPELINE, PIPELINE_NODE*> pipelineMap;
+unordered_map<VK_DESCRIPTOR_POOL, POOL_NODE*> poolMap;
+unordered_map<VK_DESCRIPTOR_SET, SET_NODE*> setMap;
+unordered_map<VK_DESCRIPTOR_SET_LAYOUT, LAYOUT_NODE*> layoutMap;
// Map for layout chains
-unordered_map<XGL_CMD_BUFFER, GLOBAL_CB_NODE*> cmdBufferMap;
-unordered_map<XGL_RENDER_PASS, XGL_RENDER_PASS_CREATE_INFO*> renderPassMap;
-unordered_map<XGL_FRAMEBUFFER, XGL_FRAMEBUFFER_CREATE_INFO*> frameBufferMap;
+unordered_map<VK_CMD_BUFFER, GLOBAL_CB_NODE*> cmdBufferMap;
+unordered_map<VK_RENDER_PASS, VK_RENDER_PASS_CREATE_INFO*> renderPassMap;
+unordered_map<VK_FRAMEBUFFER, VK_FRAMEBUFFER_CREATE_INFO*> frameBufferMap;
-static XGL_LAYER_DISPATCH_TABLE nextTable;
-static XGL_BASE_LAYER_OBJECT *pCurObj;
+static VK_LAYER_DISPATCH_TABLE nextTable;
+static VK_BASE_LAYER_OBJECT *pCurObj;
static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(g_initOnce);
// TODO : This can be much smarter, using separate locks for separate global data
static int globalLockInitialized = 0;
}
// Block of code at start here for managing/tracking Pipeline state that this layer cares about
// Just track 2 shaders for now
-#define XGL_NUM_GRAPHICS_SHADERS XGL_SHADER_STAGE_COMPUTE
+#define VK_NUM_GRAPHICS_SHADERS VK_SHADER_STAGE_COMPUTE
#define MAX_SLOTS 2048
#define NUM_COMMAND_BUFFERS_TO_DISPLAY 10
// Then need to synchronize the accesses based on cmd buffer so that if I'm reading state on one cmd buffer, updates
// to that same cmd buffer by separate thread are not changing state from underneath us
// Track the last cmd buffer touched by this thread
-static XGL_CMD_BUFFER g_lastCmdBuffer[MAX_TID] = {NULL};
+static VK_CMD_BUFFER g_lastCmdBuffer[MAX_TID] = {NULL};
// Track the last group of CBs touched for displaying to dot file
static GLOBAL_CB_NODE* g_pLastTouchedCB[NUM_COMMAND_BUFFERS_TO_DISPLAY] = {NULL};
static uint32_t g_lastTouchedCBIndex = 0;
// Track the last global DrawState of interest touched by any thread
static GLOBAL_CB_NODE* g_lastGlobalCB = NULL;
static PIPELINE_NODE* g_lastBoundPipeline = NULL;
-static DYNAMIC_STATE_NODE* g_lastBoundDynamicState[XGL_NUM_STATE_BIND_POINT] = {NULL};
-static XGL_DESCRIPTOR_SET g_lastBoundDescriptorSet = NULL;
+static DYNAMIC_STATE_NODE* g_lastBoundDynamicState[VK_NUM_STATE_BIND_POINT] = {NULL};
+static VK_DESCRIPTOR_SET g_lastBoundDescriptorSet = NULL;
#define MAX_BINDING 0xFFFFFFFF // Default vtxBinding value in CB Node to identify if no vtxBinding set
-//static DYNAMIC_STATE_NODE* g_pDynamicStateHead[XGL_NUM_STATE_BIND_POINT] = {0};
+//static DYNAMIC_STATE_NODE* g_pDynamicStateHead[VK_NUM_STATE_BIND_POINT] = {0};
-static void insertDynamicState(const XGL_DYNAMIC_STATE_OBJECT state, const GENERIC_HEADER* pCreateInfo, XGL_STATE_BIND_POINT bindPoint)
+static void insertDynamicState(const VK_DYNAMIC_STATE_OBJECT state, const GENERIC_HEADER* pCreateInfo, VK_STATE_BIND_POINT bindPoint)
{
- XGL_DYNAMIC_VP_STATE_CREATE_INFO* pVPCI = NULL;
+ VK_DYNAMIC_VP_STATE_CREATE_INFO* pVPCI = NULL;
size_t scSize = 0;
size_t vpSize = 0;
loader_platform_thread_lock_mutex(&globalLock);
DYNAMIC_STATE_NODE* pStateNode = new DYNAMIC_STATE_NODE;
pStateNode->stateObj = state;
switch (pCreateInfo->sType) {
- case XGL_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO:
- memcpy(&pStateNode->create_info, pCreateInfo, sizeof(XGL_DYNAMIC_VP_STATE_CREATE_INFO));
- pVPCI = (XGL_DYNAMIC_VP_STATE_CREATE_INFO*)pCreateInfo;
- pStateNode->create_info.vpci.pScissors = new XGL_RECT[pStateNode->create_info.vpci.viewportAndScissorCount];
- pStateNode->create_info.vpci.pViewports = new XGL_VIEWPORT[pStateNode->create_info.vpci.viewportAndScissorCount];
- scSize = pVPCI->viewportAndScissorCount * sizeof(XGL_RECT);
- vpSize = pVPCI->viewportAndScissorCount * sizeof(XGL_VIEWPORT);
+ case VK_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO:
+ memcpy(&pStateNode->create_info, pCreateInfo, sizeof(VK_DYNAMIC_VP_STATE_CREATE_INFO));
+ pVPCI = (VK_DYNAMIC_VP_STATE_CREATE_INFO*)pCreateInfo;
+ pStateNode->create_info.vpci.pScissors = new VK_RECT[pStateNode->create_info.vpci.viewportAndScissorCount];
+ pStateNode->create_info.vpci.pViewports = new VK_VIEWPORT[pStateNode->create_info.vpci.viewportAndScissorCount];
+ scSize = pVPCI->viewportAndScissorCount * sizeof(VK_RECT);
+ vpSize = pVPCI->viewportAndScissorCount * sizeof(VK_VIEWPORT);
memcpy((void*)pStateNode->create_info.vpci.pScissors, pVPCI->pScissors, scSize);
memcpy((void*)pStateNode->create_info.vpci.pViewports, pVPCI->pViewports, vpSize);
break;
- case XGL_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO:
- memcpy(&pStateNode->create_info, pCreateInfo, sizeof(XGL_DYNAMIC_RS_STATE_CREATE_INFO));
+ case VK_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO:
+ memcpy(&pStateNode->create_info, pCreateInfo, sizeof(VK_DYNAMIC_RS_STATE_CREATE_INFO));
break;
- case XGL_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO:
- memcpy(&pStateNode->create_info, pCreateInfo, sizeof(XGL_DYNAMIC_CB_STATE_CREATE_INFO));
+ case VK_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO:
+ memcpy(&pStateNode->create_info, pCreateInfo, sizeof(VK_DYNAMIC_CB_STATE_CREATE_INFO));
break;
- case XGL_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO:
- memcpy(&pStateNode->create_info, pCreateInfo, sizeof(XGL_DYNAMIC_DS_STATE_CREATE_INFO));
+ case VK_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO:
+ memcpy(&pStateNode->create_info, pCreateInfo, sizeof(VK_DYNAMIC_DS_STATE_CREATE_INFO));
break;
default:
assert(0);
// Free all allocated nodes for Dynamic State objs
static void freeDynamicState()
{
- for (unordered_map<XGL_DYNAMIC_STATE_OBJECT, DYNAMIC_STATE_NODE*>::iterator ii=dynamicStateMap.begin(); ii!=dynamicStateMap.end(); ++ii) {
- if (XGL_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO == (*ii).second->create_info.vpci.sType) {
+ for (unordered_map<VK_DYNAMIC_STATE_OBJECT, DYNAMIC_STATE_NODE*>::iterator ii=dynamicStateMap.begin(); ii!=dynamicStateMap.end(); ++ii) {
+ if (VK_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO == (*ii).second->create_info.vpci.sType) {
delete[] (*ii).second->create_info.vpci.pScissors;
delete[] (*ii).second->create_info.vpci.pViewports;
}
// Free all sampler nodes
static void freeSamplers()
{
- for (unordered_map<XGL_SAMPLER, SAMPLER_NODE*>::iterator ii=sampleMap.begin(); ii!=sampleMap.end(); ++ii) {
+ for (unordered_map<VK_SAMPLER, SAMPLER_NODE*>::iterator ii=sampleMap.begin(); ii!=sampleMap.end(); ++ii) {
delete (*ii).second;
}
}
-static XGL_IMAGE_VIEW_CREATE_INFO* getImageViewCreateInfo(XGL_IMAGE_VIEW view)
+static VK_IMAGE_VIEW_CREATE_INFO* getImageViewCreateInfo(VK_IMAGE_VIEW view)
{
loader_platform_thread_lock_mutex(&globalLock);
if (imageMap.find(view) == imageMap.end()) {
// Free all image nodes
static void freeImages()
{
- for (unordered_map<XGL_IMAGE_VIEW, IMAGE_NODE*>::iterator ii=imageMap.begin(); ii!=imageMap.end(); ++ii) {
+ for (unordered_map<VK_IMAGE_VIEW, IMAGE_NODE*>::iterator ii=imageMap.begin(); ii!=imageMap.end(); ++ii) {
delete (*ii).second;
}
}
-static XGL_BUFFER_VIEW_CREATE_INFO* getBufferViewCreateInfo(XGL_BUFFER_VIEW view)
+static VK_BUFFER_VIEW_CREATE_INFO* getBufferViewCreateInfo(VK_BUFFER_VIEW view)
{
loader_platform_thread_lock_mutex(&globalLock);
if (bufferMap.find(view) == bufferMap.end()) {
// Free all buffer nodes
static void freeBuffers()
{
- for (unordered_map<XGL_BUFFER_VIEW, BUFFER_NODE*>::iterator ii=bufferMap.begin(); ii!=bufferMap.end(); ++ii) {
+ for (unordered_map<VK_BUFFER_VIEW, BUFFER_NODE*>::iterator ii=bufferMap.begin(); ii!=bufferMap.end(); ++ii) {
delete (*ii).second;
}
}
-static GLOBAL_CB_NODE* getCBNode(XGL_CMD_BUFFER cb);
+static GLOBAL_CB_NODE* getCBNode(VK_CMD_BUFFER cb);
-static void updateCBTracking(XGL_CMD_BUFFER cb)
+static void updateCBTracking(VK_CMD_BUFFER cb)
{
g_lastCmdBuffer[getTIDIndex()] = cb;
GLOBAL_CB_NODE* pCB = getCBNode(cb);
}
// Print the last bound dynamic state
-static void printDynamicState(const XGL_CMD_BUFFER cb)
+static void printDynamicState(const VK_CMD_BUFFER cb)
{
GLOBAL_CB_NODE* pCB = getCBNode(cb);
if (pCB) {
loader_platform_thread_lock_mutex(&globalLock);
char str[4*1024];
- for (uint32_t i = 0; i < XGL_NUM_STATE_BIND_POINT; i++) {
+ for (uint32_t i = 0; i < VK_NUM_STATE_BIND_POINT; i++) {
if (pCB->lastBoundDynamicState[i]) {
- sprintf(str, "Reporting CreateInfo for currently bound %s object %p", string_XGL_STATE_BIND_POINT((XGL_STATE_BIND_POINT)i), pCB->lastBoundDynamicState[i]->stateObj);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pCB->lastBoundDynamicState[i]->stateObj, 0, DRAWSTATE_NONE, "DS", str);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pCB->lastBoundDynamicState[i]->stateObj, 0, DRAWSTATE_NONE, "DS", dynamic_display(pCB->lastBoundDynamicState[i]->pCreateInfo, " ").c_str());
+ sprintf(str, "Reporting CreateInfo for currently bound %s object %p", string_VK_STATE_BIND_POINT((VK_STATE_BIND_POINT)i), pCB->lastBoundDynamicState[i]->stateObj);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pCB->lastBoundDynamicState[i]->stateObj, 0, DRAWSTATE_NONE, "DS", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pCB->lastBoundDynamicState[i]->stateObj, 0, DRAWSTATE_NONE, "DS", dynamic_display(pCB->lastBoundDynamicState[i]->pCreateInfo, " ").c_str());
break;
}
else {
- sprintf(str, "No dynamic state of type %s bound", string_XGL_STATE_BIND_POINT((XGL_STATE_BIND_POINT)i));
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", str);
+ sprintf(str, "No dynamic state of type %s bound", string_VK_STATE_BIND_POINT((VK_STATE_BIND_POINT)i));
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", str);
}
}
loader_platform_thread_unlock_mutex(&globalLock);
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cb);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cb, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cb, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
}
// Retrieve pipeline node ptr for given pipeline object
-static PIPELINE_NODE* getPipeline(XGL_PIPELINE pipeline)
+static PIPELINE_NODE* getPipeline(VK_PIPELINE pipeline)
{
loader_platform_thread_lock_mutex(&globalLock);
if (pipelineMap.find(pipeline) == pipelineMap.end()) {
}
// For given sampler, return a ptr to its Create Info struct, or NULL if sampler not found
-static XGL_SAMPLER_CREATE_INFO* getSamplerCreateInfo(const XGL_SAMPLER sampler)
+static VK_SAMPLER_CREATE_INFO* getSamplerCreateInfo(const VK_SAMPLER sampler)
{
loader_platform_thread_lock_mutex(&globalLock);
if (sampleMap.find(sampler) == sampleMap.end()) {
// Init the pipeline mapping info based on pipeline create info LL tree
// Threading note : Calls to this function should wrapped in mutex
-static void initPipeline(PIPELINE_NODE* pPipeline, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo)
+static void initPipeline(PIPELINE_NODE* pPipeline, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo)
{
// First init create info, we'll shadow the structs as we go down the tree
// TODO : Validate that no create info is incorrectly replicated
- memcpy(&pPipeline->graphicsPipelineCI, pCreateInfo, sizeof(XGL_GRAPHICS_PIPELINE_CREATE_INFO));
+ memcpy(&pPipeline->graphicsPipelineCI, pCreateInfo, sizeof(VK_GRAPHICS_PIPELINE_CREATE_INFO));
GENERIC_HEADER* pTrav = (GENERIC_HEADER*)pCreateInfo->pNext;
GENERIC_HEADER* pPrev = (GENERIC_HEADER*)&pPipeline->graphicsPipelineCI; // Hold prev ptr to tie chain of structs together
size_t bufferSize = 0;
- XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO* pVICI = NULL;
- XGL_PIPELINE_CB_STATE_CREATE_INFO* pCBCI = NULL;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO* pTmpPSSCI = NULL;
+ VK_PIPELINE_VERTEX_INPUT_CREATE_INFO* pVICI = NULL;
+ VK_PIPELINE_CB_STATE_CREATE_INFO* pCBCI = NULL;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO* pTmpPSSCI = NULL;
while (pTrav) {
switch (pTrav->sType) {
- case XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO:
- pTmpPSSCI = (XGL_PIPELINE_SHADER_STAGE_CREATE_INFO*)pTrav;
+ case VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO:
+ pTmpPSSCI = (VK_PIPELINE_SHADER_STAGE_CREATE_INFO*)pTrav;
switch (pTmpPSSCI->shader.stage) {
- case XGL_SHADER_STAGE_VERTEX:
+ case VK_SHADER_STAGE_VERTEX:
pPrev->pNext = &pPipeline->vsCI;
pPrev = (GENERIC_HEADER*)&pPipeline->vsCI;
- memcpy(&pPipeline->vsCI, pTmpPSSCI, sizeof(XGL_PIPELINE_SHADER_STAGE_CREATE_INFO));
+ memcpy(&pPipeline->vsCI, pTmpPSSCI, sizeof(VK_PIPELINE_SHADER_STAGE_CREATE_INFO));
break;
- case XGL_SHADER_STAGE_TESS_CONTROL:
+ case VK_SHADER_STAGE_TESS_CONTROL:
pPrev->pNext = &pPipeline->tcsCI;
pPrev = (GENERIC_HEADER*)&pPipeline->tcsCI;
- memcpy(&pPipeline->tcsCI, pTmpPSSCI, sizeof(XGL_PIPELINE_SHADER_STAGE_CREATE_INFO));
+ memcpy(&pPipeline->tcsCI, pTmpPSSCI, sizeof(VK_PIPELINE_SHADER_STAGE_CREATE_INFO));
break;
- case XGL_SHADER_STAGE_TESS_EVALUATION:
+ case VK_SHADER_STAGE_TESS_EVALUATION:
pPrev->pNext = &pPipeline->tesCI;
pPrev = (GENERIC_HEADER*)&pPipeline->tesCI;
- memcpy(&pPipeline->tesCI, pTmpPSSCI, sizeof(XGL_PIPELINE_SHADER_STAGE_CREATE_INFO));
+ memcpy(&pPipeline->tesCI, pTmpPSSCI, sizeof(VK_PIPELINE_SHADER_STAGE_CREATE_INFO));
break;
- case XGL_SHADER_STAGE_GEOMETRY:
+ case VK_SHADER_STAGE_GEOMETRY:
pPrev->pNext = &pPipeline->gsCI;
pPrev = (GENERIC_HEADER*)&pPipeline->gsCI;
- memcpy(&pPipeline->gsCI, pTmpPSSCI, sizeof(XGL_PIPELINE_SHADER_STAGE_CREATE_INFO));
+ memcpy(&pPipeline->gsCI, pTmpPSSCI, sizeof(VK_PIPELINE_SHADER_STAGE_CREATE_INFO));
break;
- case XGL_SHADER_STAGE_FRAGMENT:
+ case VK_SHADER_STAGE_FRAGMENT:
pPrev->pNext = &pPipeline->fsCI;
pPrev = (GENERIC_HEADER*)&pPipeline->fsCI;
- memcpy(&pPipeline->fsCI, pTmpPSSCI, sizeof(XGL_PIPELINE_SHADER_STAGE_CREATE_INFO));
+ memcpy(&pPipeline->fsCI, pTmpPSSCI, sizeof(VK_PIPELINE_SHADER_STAGE_CREATE_INFO));
break;
- case XGL_SHADER_STAGE_COMPUTE:
- // TODO : Flag error, CS is specified through XGL_COMPUTE_PIPELINE_CREATE_INFO
+ case VK_SHADER_STAGE_COMPUTE:
+ // TODO : Flag error, CS is specified through VK_COMPUTE_PIPELINE_CREATE_INFO
break;
default:
// TODO : Flag error
break;
}
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO:
pPrev->pNext = &pPipeline->vertexInputCI;
pPrev = (GENERIC_HEADER*)&pPipeline->vertexInputCI;
- memcpy((void*)&pPipeline->vertexInputCI, pTrav, sizeof(XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO));
+ memcpy((void*)&pPipeline->vertexInputCI, pTrav, sizeof(VK_PIPELINE_VERTEX_INPUT_CREATE_INFO));
// Copy embedded ptrs
- pVICI = (XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO*)pTrav;
+ pVICI = (VK_PIPELINE_VERTEX_INPUT_CREATE_INFO*)pTrav;
pPipeline->vtxBindingCount = pVICI->bindingCount;
if (pPipeline->vtxBindingCount) {
- pPipeline->pVertexBindingDescriptions = new XGL_VERTEX_INPUT_BINDING_DESCRIPTION[pPipeline->vtxBindingCount];
- bufferSize = pPipeline->vtxBindingCount * sizeof(XGL_VERTEX_INPUT_BINDING_DESCRIPTION);
- memcpy((void*)pPipeline->pVertexBindingDescriptions, ((XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO*)pTrav)->pVertexAttributeDescriptions, bufferSize);
+ pPipeline->pVertexBindingDescriptions = new VK_VERTEX_INPUT_BINDING_DESCRIPTION[pPipeline->vtxBindingCount];
+ bufferSize = pPipeline->vtxBindingCount * sizeof(VK_VERTEX_INPUT_BINDING_DESCRIPTION);
+ memcpy((void*)pPipeline->pVertexBindingDescriptions, ((VK_PIPELINE_VERTEX_INPUT_CREATE_INFO*)pTrav)->pVertexBindingDescriptions, bufferSize);
}
pPipeline->vtxAttributeCount = pVICI->attributeCount;
if (pPipeline->vtxAttributeCount) {
- pPipeline->pVertexAttributeDescriptions = new XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION[pPipeline->vtxAttributeCount];
- bufferSize = pPipeline->vtxAttributeCount * sizeof(XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION);
- memcpy((void*)pPipeline->pVertexAttributeDescriptions, ((XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO*)pTrav)->pVertexAttributeDescriptions, bufferSize);
+ pPipeline->pVertexAttributeDescriptions = new VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION[pPipeline->vtxAttributeCount];
+ bufferSize = pPipeline->vtxAttributeCount * sizeof(VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION);
+ memcpy((void*)pPipeline->pVertexAttributeDescriptions, ((VK_PIPELINE_VERTEX_INPUT_CREATE_INFO*)pTrav)->pVertexAttributeDescriptions, bufferSize);
}
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO:
pPrev->pNext = &pPipeline->iaStateCI;
pPrev = (GENERIC_HEADER*)&pPipeline->iaStateCI;
- memcpy((void*)&pPipeline->iaStateCI, pTrav, sizeof(XGL_PIPELINE_IA_STATE_CREATE_INFO));
+ memcpy((void*)&pPipeline->iaStateCI, pTrav, sizeof(VK_PIPELINE_IA_STATE_CREATE_INFO));
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO:
pPrev->pNext = &pPipeline->tessStateCI;
pPrev = (GENERIC_HEADER*)&pPipeline->tessStateCI;
- memcpy((void*)&pPipeline->tessStateCI, pTrav, sizeof(XGL_PIPELINE_TESS_STATE_CREATE_INFO));
+ memcpy((void*)&pPipeline->tessStateCI, pTrav, sizeof(VK_PIPELINE_TESS_STATE_CREATE_INFO));
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO:
pPrev->pNext = &pPipeline->vpStateCI;
pPrev = (GENERIC_HEADER*)&pPipeline->vpStateCI;
- memcpy((void*)&pPipeline->vpStateCI, pTrav, sizeof(XGL_PIPELINE_VP_STATE_CREATE_INFO));
+ memcpy((void*)&pPipeline->vpStateCI, pTrav, sizeof(VK_PIPELINE_VP_STATE_CREATE_INFO));
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO:
pPrev->pNext = &pPipeline->rsStateCI;
pPrev = (GENERIC_HEADER*)&pPipeline->rsStateCI;
- memcpy((void*)&pPipeline->rsStateCI, pTrav, sizeof(XGL_PIPELINE_RS_STATE_CREATE_INFO));
+ memcpy((void*)&pPipeline->rsStateCI, pTrav, sizeof(VK_PIPELINE_RS_STATE_CREATE_INFO));
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO:
pPrev->pNext = &pPipeline->msStateCI;
pPrev = (GENERIC_HEADER*)&pPipeline->msStateCI;
- memcpy((void*)&pPipeline->msStateCI, pTrav, sizeof(XGL_PIPELINE_MS_STATE_CREATE_INFO));
+ memcpy((void*)&pPipeline->msStateCI, pTrav, sizeof(VK_PIPELINE_MS_STATE_CREATE_INFO));
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO:
pPrev->pNext = &pPipeline->cbStateCI;
pPrev = (GENERIC_HEADER*)&pPipeline->cbStateCI;
- memcpy((void*)&pPipeline->cbStateCI, pTrav, sizeof(XGL_PIPELINE_CB_STATE_CREATE_INFO));
+ memcpy((void*)&pPipeline->cbStateCI, pTrav, sizeof(VK_PIPELINE_CB_STATE_CREATE_INFO));
// Copy embedded ptrs
- pCBCI = (XGL_PIPELINE_CB_STATE_CREATE_INFO*)pTrav;
+ pCBCI = (VK_PIPELINE_CB_STATE_CREATE_INFO*)pTrav;
pPipeline->attachmentCount = pCBCI->attachmentCount;
if (pPipeline->attachmentCount) {
- pPipeline->pAttachments = new XGL_PIPELINE_CB_ATTACHMENT_STATE[pPipeline->attachmentCount];
- bufferSize = pPipeline->attachmentCount * sizeof(XGL_PIPELINE_CB_ATTACHMENT_STATE);
- memcpy((void*)pPipeline->pAttachments, ((XGL_PIPELINE_CB_STATE_CREATE_INFO*)pTrav)->pAttachments, bufferSize);
+ pPipeline->pAttachments = new VK_PIPELINE_CB_ATTACHMENT_STATE[pPipeline->attachmentCount];
+ bufferSize = pPipeline->attachmentCount * sizeof(VK_PIPELINE_CB_ATTACHMENT_STATE);
+ memcpy((void*)pPipeline->pAttachments, ((VK_PIPELINE_CB_STATE_CREATE_INFO*)pTrav)->pAttachments, bufferSize);
}
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO:
pPrev->pNext = &pPipeline->dsStateCI;
pPrev = (GENERIC_HEADER*)&pPipeline->dsStateCI;
- memcpy((void*)&pPipeline->dsStateCI, pTrav, sizeof(XGL_PIPELINE_DS_STATE_CREATE_INFO));
+ memcpy((void*)&pPipeline->dsStateCI, pTrav, sizeof(VK_PIPELINE_DS_STATE_CREATE_INFO));
break;
default:
assert(0);
// Free the Pipeline nodes
static void freePipelines()
{
- for (unordered_map<XGL_PIPELINE, PIPELINE_NODE*>::iterator ii=pipelineMap.begin(); ii!=pipelineMap.end(); ++ii) {
+ for (unordered_map<VK_PIPELINE, PIPELINE_NODE*>::iterator ii=pipelineMap.begin(); ii!=pipelineMap.end(); ++ii) {
if ((*ii).second->pVertexBindingDescriptions) {
delete[] (*ii).second->pVertexBindingDescriptions;
}
}
}
// For given pipeline, return number of MSAA samples, or one if MSAA disabled
-static uint32_t getNumSamples(const XGL_PIPELINE pipeline)
+static uint32_t getNumSamples(const VK_PIPELINE pipeline)
{
PIPELINE_NODE* pPipe = pipelineMap[pipeline];
- if (XGL_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO == pPipe->msStateCI.sType) {
+ if (VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO == pPipe->msStateCI.sType) {
if (pPipe->msStateCI.multisampleEnable)
return pPipe->msStateCI.samples;
}
return 1;
}
// Validate state related to the PSO
-static void validatePipelineState(const GLOBAL_CB_NODE* pCB, const XGL_PIPELINE_BIND_POINT pipelineBindPoint, const XGL_PIPELINE pipeline)
+static void validatePipelineState(const GLOBAL_CB_NODE* pCB, const VK_PIPELINE_BIND_POINT pipelineBindPoint, const VK_PIPELINE pipeline)
{
- if (XGL_PIPELINE_BIND_POINT_GRAPHICS == pipelineBindPoint) {
+ if (VK_PIPELINE_BIND_POINT_GRAPHICS == pipelineBindPoint) {
// Verify that any MSAA request in PSO matches sample# in bound FB
uint32_t psoNumSamples = getNumSamples(pipeline);
if (pCB->activeRenderPass) {
- XGL_RENDER_PASS_CREATE_INFO* pRPCI = renderPassMap[pCB->activeRenderPass];
- XGL_FRAMEBUFFER_CREATE_INFO* pFBCI = frameBufferMap[pCB->framebuffer];
+ VK_RENDER_PASS_CREATE_INFO* pRPCI = renderPassMap[pCB->activeRenderPass];
+ VK_FRAMEBUFFER_CREATE_INFO* pFBCI = frameBufferMap[pCB->framebuffer];
if (psoNumSamples != pFBCI->sampleCount) {
char str[1024];
sprintf(str, "Num samples mismatch! Binding PSO (%p) with %u samples while current RenderPass (%p) uses FB (%p) with %u samples!", (void*)pipeline, psoNumSamples, (void*)pCB->activeRenderPass, (void*)pCB->framebuffer, pFBCI->sampleCount);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pipeline, 0, DRAWSTATE_NUM_SAMPLES_MISMATCH, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pipeline, 0, DRAWSTATE_NUM_SAMPLES_MISMATCH, "DS", str);
}
} else {
// TODO : I believe it's an error if we reach this point and don't have an activeRenderPass
// Block of code at start here specifically for managing/tracking DSs
// Return Pool node ptr for specified pool or else NULL
-static POOL_NODE* getPoolNode(XGL_DESCRIPTOR_POOL pool)
+static POOL_NODE* getPoolNode(VK_DESCRIPTOR_POOL pool)
{
loader_platform_thread_lock_mutex(&globalLock);
if (poolMap.find(pool) == poolMap.end()) {
return poolMap[pool];
}
// Return Set node ptr for specified set or else NULL
-static SET_NODE* getSetNode(XGL_DESCRIPTOR_SET set)
+static SET_NODE* getSetNode(VK_DESCRIPTOR_SET set)
{
loader_platform_thread_lock_mutex(&globalLock);
if (setMap.find(set) == setMap.end()) {
return setMap[set];
}
-// Return XGL_TRUE if DS Exists and is within an xglBeginDescriptorPoolUpdate() call sequence, otherwise XGL_FALSE
-static bool32_t dsUpdateActive(XGL_DESCRIPTOR_SET ds)
+// Return VK_TRUE if DS exists and is within a vkBeginDescriptorPoolUpdate() call sequence, otherwise VK_FALSE
+static bool32_t dsUpdateActive(VK_DESCRIPTOR_SET ds)
{
// Note, both "get" functions use global mutex so this guy does not
SET_NODE* pTrav = getSetNode(ds);
return pPool->updateActive;
}
}
- return XGL_FALSE;
+ return VK_FALSE;
}
-static LAYOUT_NODE* getLayoutNode(const XGL_DESCRIPTOR_SET_LAYOUT layout) {
+static LAYOUT_NODE* getLayoutNode(const VK_DESCRIPTOR_SET_LAYOUT layout) {
loader_platform_thread_lock_mutex(&globalLock);
if (layoutMap.find(layout) == layoutMap.end()) {
loader_platform_thread_unlock_mutex(&globalLock);
{
switch (pUpdateStruct->sType)
{
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
- return ((XGL_UPDATE_SAMPLERS*)pUpdateStruct)->binding;
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
- return ((XGL_UPDATE_SAMPLER_TEXTURES*)pUpdateStruct)->binding;
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
- return ((XGL_UPDATE_IMAGES*)pUpdateStruct)->binding;
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
- return ((XGL_UPDATE_BUFFERS*)pUpdateStruct)->binding;
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
- return ((XGL_UPDATE_AS_COPY*)pUpdateStruct)->binding;
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ return ((VK_UPDATE_SAMPLERS*)pUpdateStruct)->binding;
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ return ((VK_UPDATE_SAMPLER_TEXTURES*)pUpdateStruct)->binding;
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
+ return ((VK_UPDATE_IMAGES*)pUpdateStruct)->binding;
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ return ((VK_UPDATE_BUFFERS*)pUpdateStruct)->binding;
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ return ((VK_UPDATE_AS_COPY*)pUpdateStruct)->binding;
default:
// TODO : Flag specific error for this case
assert(0);
{
switch (pUpdateStruct->sType)
{
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
- return (((XGL_UPDATE_SAMPLERS*)pUpdateStruct)->arrayIndex);
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
- return (((XGL_UPDATE_SAMPLER_TEXTURES*)pUpdateStruct)->arrayIndex);
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
- return (((XGL_UPDATE_IMAGES*)pUpdateStruct)->arrayIndex);
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
- return (((XGL_UPDATE_BUFFERS*)pUpdateStruct)->arrayIndex);
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ return (((VK_UPDATE_SAMPLERS*)pUpdateStruct)->arrayIndex);
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ return (((VK_UPDATE_SAMPLER_TEXTURES*)pUpdateStruct)->arrayIndex);
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
+ return (((VK_UPDATE_IMAGES*)pUpdateStruct)->arrayIndex);
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ return (((VK_UPDATE_BUFFERS*)pUpdateStruct)->arrayIndex);
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
// TODO : Need to understand this case better and make sure code is correct
- return (((XGL_UPDATE_AS_COPY*)pUpdateStruct)->arrayElement);
+ return (((VK_UPDATE_AS_COPY*)pUpdateStruct)->arrayElement);
default:
// TODO : Flag specific error for this case
assert(0);
{
switch (pUpdateStruct->sType)
{
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
- return (((XGL_UPDATE_SAMPLERS*)pUpdateStruct)->count);
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
- return (((XGL_UPDATE_SAMPLER_TEXTURES*)pUpdateStruct)->count);
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
- return (((XGL_UPDATE_IMAGES*)pUpdateStruct)->count);
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
- return (((XGL_UPDATE_BUFFERS*)pUpdateStruct)->count);
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ return (((VK_UPDATE_SAMPLERS*)pUpdateStruct)->count);
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ return (((VK_UPDATE_SAMPLER_TEXTURES*)pUpdateStruct)->count);
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
+ return (((VK_UPDATE_IMAGES*)pUpdateStruct)->count);
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ return (((VK_UPDATE_BUFFERS*)pUpdateStruct)->count);
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
// TODO : Need to understand this case better and make sure code is correct
- return (((XGL_UPDATE_AS_COPY*)pUpdateStruct)->count);
+ return (((VK_UPDATE_AS_COPY*)pUpdateStruct)->count);
default:
// TODO : Flag specific error for this case
assert(0);
static bool32_t validateUpdateType(const LAYOUT_NODE* pLayout, const GENERIC_HEADER* pUpdateStruct)
{
// First get actual type of update
- XGL_DESCRIPTOR_TYPE actualType;
+ VK_DESCRIPTOR_TYPE actualType;
uint32_t i = 0;
switch (pUpdateStruct->sType)
{
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
- actualType = XGL_DESCRIPTOR_TYPE_SAMPLER;
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ actualType = VK_DESCRIPTOR_TYPE_SAMPLER;
break;
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
- actualType = XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE;
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ actualType = VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE;
break;
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
- actualType = ((XGL_UPDATE_IMAGES*)pUpdateStruct)->descriptorType;
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
+ actualType = ((VK_UPDATE_IMAGES*)pUpdateStruct)->descriptorType;
break;
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
- actualType = ((XGL_UPDATE_BUFFERS*)pUpdateStruct)->descriptorType;
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ actualType = ((VK_UPDATE_BUFFERS*)pUpdateStruct)->descriptorType;
break;
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
- actualType = ((XGL_UPDATE_AS_COPY*)pUpdateStruct)->descriptorType;
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ actualType = ((VK_UPDATE_AS_COPY*)pUpdateStruct)->descriptorType;
break;
default:
// TODO : Flag specific error for this case
size_t base_array_size = 0;
size_t total_array_size = 0;
size_t baseBuffAddr = 0;
- XGL_UPDATE_BUFFERS* pUBCI;
- XGL_UPDATE_IMAGES* pUICI;
- XGL_IMAGE_VIEW_ATTACH_INFO** ppLocalImageViews = NULL;
- XGL_BUFFER_VIEW_ATTACH_INFO** ppLocalBufferViews = NULL;
+ VK_UPDATE_BUFFERS* pUBCI;
+ VK_UPDATE_IMAGES* pUICI;
+ VK_IMAGE_VIEW_ATTACH_INFO** ppLocalImageViews = NULL;
+ VK_BUFFER_VIEW_ATTACH_INFO** ppLocalBufferViews = NULL;
char str[1024];
switch (pUpdate->sType)
{
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
- pNewNode = (GENERIC_HEADER*)malloc(sizeof(XGL_UPDATE_SAMPLERS));
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ pNewNode = (GENERIC_HEADER*)malloc(sizeof(VK_UPDATE_SAMPLERS));
#if ALLOC_DEBUG
printf("Alloc10 #%lu pNewNode addr(%p)\n", ++g_alloc_count, (void*)pNewNode);
#endif
- memcpy(pNewNode, pUpdate, sizeof(XGL_UPDATE_SAMPLERS));
- array_size = sizeof(XGL_SAMPLER) * ((XGL_UPDATE_SAMPLERS*)pNewNode)->count;
- ((XGL_UPDATE_SAMPLERS*)pNewNode)->pSamplers = (XGL_SAMPLER*)malloc(array_size);
+ memcpy(pNewNode, pUpdate, sizeof(VK_UPDATE_SAMPLERS));
+ array_size = sizeof(VK_SAMPLER) * ((VK_UPDATE_SAMPLERS*)pNewNode)->count;
+ ((VK_UPDATE_SAMPLERS*)pNewNode)->pSamplers = (VK_SAMPLER*)malloc(array_size);
#if ALLOC_DEBUG
- printf("Alloc11 #%lu pNewNode->pSamplers addr(%p)\n", ++g_alloc_count, (void*)((XGL_UPDATE_SAMPLERS*)pNewNode)->pSamplers);
+ printf("Alloc11 #%lu pNewNode->pSamplers addr(%p)\n", ++g_alloc_count, (void*)((VK_UPDATE_SAMPLERS*)pNewNode)->pSamplers);
#endif
- memcpy((XGL_SAMPLER*)((XGL_UPDATE_SAMPLERS*)pNewNode)->pSamplers, ((XGL_UPDATE_SAMPLERS*)pUpdate)->pSamplers, array_size);
+ memcpy((VK_SAMPLER*)((VK_UPDATE_SAMPLERS*)pNewNode)->pSamplers, ((VK_UPDATE_SAMPLERS*)pUpdate)->pSamplers, array_size);
break;
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
- pNewNode = (GENERIC_HEADER*)malloc(sizeof(XGL_UPDATE_SAMPLER_TEXTURES));
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ pNewNode = (GENERIC_HEADER*)malloc(sizeof(VK_UPDATE_SAMPLER_TEXTURES));
#if ALLOC_DEBUG
printf("Alloc12 #%lu pNewNode addr(%p)\n", ++g_alloc_count, (void*)pNewNode);
#endif
- memcpy(pNewNode, pUpdate, sizeof(XGL_UPDATE_SAMPLER_TEXTURES));
- array_size = sizeof(XGL_SAMPLER_IMAGE_VIEW_INFO) * ((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->count;
- ((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews = (XGL_SAMPLER_IMAGE_VIEW_INFO*)malloc(array_size);
+ memcpy(pNewNode, pUpdate, sizeof(VK_UPDATE_SAMPLER_TEXTURES));
+ array_size = sizeof(VK_SAMPLER_IMAGE_VIEW_INFO) * ((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->count;
+ ((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews = (VK_SAMPLER_IMAGE_VIEW_INFO*)malloc(array_size);
#if ALLOC_DEBUG
- printf("Alloc13 #%lu pNewNode->pSamplerImageViews addr(%p)\n", ++g_alloc_count, (void*)((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews);
+ printf("Alloc13 #%lu pNewNode->pSamplerImageViews addr(%p)\n", ++g_alloc_count, (void*)((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews);
#endif
- for (uint32_t i = 0; i < ((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->count; i++) {
- memcpy((XGL_SAMPLER_IMAGE_VIEW_INFO*)&((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews[i], &((XGL_UPDATE_SAMPLER_TEXTURES*)pUpdate)->pSamplerImageViews[i], sizeof(XGL_SAMPLER_IMAGE_VIEW_INFO));
- ((XGL_SAMPLER_IMAGE_VIEW_INFO*)((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews)[i].pImageView = (XGL_IMAGE_VIEW_ATTACH_INFO*)malloc(sizeof(XGL_IMAGE_VIEW_ATTACH_INFO));
+ for (uint32_t i = 0; i < ((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->count; i++) {
+ memcpy((VK_SAMPLER_IMAGE_VIEW_INFO*)&((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews[i], &((VK_UPDATE_SAMPLER_TEXTURES*)pUpdate)->pSamplerImageViews[i], sizeof(VK_SAMPLER_IMAGE_VIEW_INFO));
+ ((VK_SAMPLER_IMAGE_VIEW_INFO*)((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews)[i].pImageView = (VK_IMAGE_VIEW_ATTACH_INFO*)malloc(sizeof(VK_IMAGE_VIEW_ATTACH_INFO));
#if ALLOC_DEBUG
- printf("Alloc14 #%lu pSamplerImageViews)[%u].pImageView addr(%p)\n", ++g_alloc_count, i, (void*)((XGL_SAMPLER_IMAGE_VIEW_INFO*)((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews)[i].pImageView);
+ printf("Alloc14 #%lu pSamplerImageViews)[%u].pImageView addr(%p)\n", ++g_alloc_count, i, (void*)((VK_SAMPLER_IMAGE_VIEW_INFO*)((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews)[i].pImageView);
#endif
- memcpy((XGL_IMAGE_VIEW_ATTACH_INFO*)((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews[i].pImageView, ((XGL_UPDATE_SAMPLER_TEXTURES*)pUpdate)->pSamplerImageViews[i].pImageView, sizeof(XGL_IMAGE_VIEW_ATTACH_INFO));
+ memcpy((VK_IMAGE_VIEW_ATTACH_INFO*)((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews[i].pImageView, ((VK_UPDATE_SAMPLER_TEXTURES*)pUpdate)->pSamplerImageViews[i].pImageView, sizeof(VK_IMAGE_VIEW_ATTACH_INFO));
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
- pUICI = (XGL_UPDATE_IMAGES*)pUpdate;
- pNewNode = (GENERIC_HEADER*)malloc(sizeof(XGL_UPDATE_IMAGES));
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
+ pUICI = (VK_UPDATE_IMAGES*)pUpdate;
+ pNewNode = (GENERIC_HEADER*)malloc(sizeof(VK_UPDATE_IMAGES));
#if ALLOC_DEBUG
printf("Alloc15 #%lu pNewNode addr(%p)\n", ++g_alloc_count, (void*)pNewNode);
#endif
- memcpy(pNewNode, pUpdate, sizeof(XGL_UPDATE_IMAGES));
- total_array_size = (sizeof(XGL_IMAGE_VIEW_ATTACH_INFO) * ((XGL_UPDATE_IMAGES*)pNewNode)->count);
- ppLocalImageViews = (XGL_IMAGE_VIEW_ATTACH_INFO**)&(((XGL_UPDATE_IMAGES*)pNewNode)->pImageViews);
- *ppLocalImageViews = (XGL_IMAGE_VIEW_ATTACH_INFO*)malloc(total_array_size);
+ memcpy(pNewNode, pUpdate, sizeof(VK_UPDATE_IMAGES));
+ total_array_size = (sizeof(VK_IMAGE_VIEW_ATTACH_INFO) * ((VK_UPDATE_IMAGES*)pNewNode)->count);
+ ppLocalImageViews = (VK_IMAGE_VIEW_ATTACH_INFO**)&(((VK_UPDATE_IMAGES*)pNewNode)->pImageViews);
+ *ppLocalImageViews = (VK_IMAGE_VIEW_ATTACH_INFO*)malloc(total_array_size);
#if ALLOC_DEBUG
printf("Alloc16 #%lu *pppLocalImageViews addr(%p)\n", ++g_alloc_count, (void*)*ppLocalImageViews);
#endif
memcpy((void*)*ppLocalImageViews, pUICI->pImageViews, total_array_size);
break;
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
- pUBCI = (XGL_UPDATE_BUFFERS*)pUpdate;
- pNewNode = (GENERIC_HEADER*)malloc(sizeof(XGL_UPDATE_BUFFERS));
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ pUBCI = (VK_UPDATE_BUFFERS*)pUpdate;
+ pNewNode = (GENERIC_HEADER*)malloc(sizeof(VK_UPDATE_BUFFERS));
#if ALLOC_DEBUG
printf("Alloc17 #%lu pNewNode addr(%p)\n", ++g_alloc_count, (void*)pNewNode);
#endif
- memcpy(pNewNode, pUpdate, sizeof(XGL_UPDATE_BUFFERS));
- total_array_size = (sizeof(XGL_BUFFER_VIEW_ATTACH_INFO) * pUBCI->count);
- ppLocalBufferViews = (XGL_BUFFER_VIEW_ATTACH_INFO**)&(((XGL_UPDATE_BUFFERS*)pNewNode)->pBufferViews);
- *ppLocalBufferViews = (XGL_BUFFER_VIEW_ATTACH_INFO*)malloc(total_array_size);
+ memcpy(pNewNode, pUpdate, sizeof(VK_UPDATE_BUFFERS));
+ total_array_size = (sizeof(VK_BUFFER_VIEW_ATTACH_INFO) * pUBCI->count);
+ ppLocalBufferViews = (VK_BUFFER_VIEW_ATTACH_INFO**)&(((VK_UPDATE_BUFFERS*)pNewNode)->pBufferViews);
+ *ppLocalBufferViews = (VK_BUFFER_VIEW_ATTACH_INFO*)malloc(total_array_size);
#if ALLOC_DEBUG
printf("Alloc18 #%lu *pppLocalBufferViews addr(%p)\n", ++g_alloc_count, (void*)*ppLocalBufferViews);
#endif
memcpy((void*)*ppLocalBufferViews, pUBCI->pBufferViews, total_array_size);
break;
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
- pNewNode = (GENERIC_HEADER*)malloc(sizeof(XGL_UPDATE_AS_COPY));
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ pNewNode = (GENERIC_HEADER*)malloc(sizeof(VK_UPDATE_AS_COPY));
#if ALLOC_DEBUG
printf("Alloc19 #%lu pNewNode addr(%p)\n", ++g_alloc_count, (void*)pNewNode);
#endif
- memcpy(pNewNode, pUpdate, sizeof(XGL_UPDATE_AS_COPY));
+ memcpy(pNewNode, pUpdate, sizeof(VK_UPDATE_AS_COPY));
break;
default:
- sprintf(str, "Unexpected UPDATE struct of type %s (value %u) in xglUpdateDescriptors() struct tree", string_XGL_STRUCTURE_TYPE(pUpdate->sType), pUpdate->sType);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_INVALID_UPDATE_STRUCT, "DS", str);
+ sprintf(str, "Unexpected UPDATE struct of type %s (value %u) in vkUpdateDescriptors() struct tree", string_VK_STRUCTURE_TYPE(pUpdate->sType), pUpdate->sType);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_INVALID_UPDATE_STRUCT, "DS", str);
return NULL;
}
// Make sure that pNext for the end of shadow copy is NULL
return pNewNode;
}
// For given ds, update its mapping based on ppUpdateArray
-static void dsUpdate(XGL_DESCRIPTOR_SET ds, uint32_t updateCount, const void** ppUpdateArray)
+static void dsUpdate(VK_DESCRIPTOR_SET ds, uint32_t updateCount, const void** ppUpdateArray)
{
SET_NODE* pSet = getSetNode(ds);
loader_platform_thread_lock_mutex(&globalLock);
g_lastBoundDescriptorSet = pSet->set;
LAYOUT_NODE* pLayout = NULL;
- XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pLayoutCI = NULL;
+ VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pLayoutCI = NULL;
// TODO : If pCIList is NULL, flag error
// Perform all updates
for (uint32_t i = 0; i < updateCount; i++) {
// Make sure that binding is within bounds
if (pLayout->createInfo.count < getUpdateBinding(pUpdate)) {
char str[1024];
- sprintf(str, "Descriptor Set %p does not have binding to match update binding %u for update type %s!", ds, getUpdateBinding(pUpdate), string_XGL_STRUCTURE_TYPE(pUpdate->sType));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_INVALID_UPDATE_INDEX, "DS", str);
+ sprintf(str, "Descriptor Set %p does not have binding to match update binding %u for update type %s!", ds, getUpdateBinding(pUpdate), string_VK_STRUCTURE_TYPE(pUpdate->sType));
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_INVALID_UPDATE_INDEX, "DS", str);
}
else {
// Next verify that update falls within size of given binding
if (getBindingEndIndex(pLayout, getUpdateBinding(pUpdate)) < getUpdateEndIndex(pLayout, pUpdate)) {
char str[48*1024]; // TODO : Keep count of layout CI structs and size this string dynamically based on that count
pLayoutCI = &pLayout->createInfo;
- string DSstr = xgl_print_xgl_descriptor_set_layout_create_info(pLayoutCI, "{DS} ");
- sprintf(str, "Descriptor update type of %s is out of bounds for matching binding %u in Layout w/ CI:\n%s!", string_XGL_STRUCTURE_TYPE(pUpdate->sType), getUpdateBinding(pUpdate), DSstr.c_str());
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, "DS", str);
+ string DSstr = vk_print_vk_descriptor_set_layout_create_info(pLayoutCI, "{DS} ");
+ sprintf(str, "Descriptor update type of %s is out of bounds for matching binding %u in Layout w/ CI:\n%s!", string_VK_STRUCTURE_TYPE(pUpdate->sType), getUpdateBinding(pUpdate), DSstr.c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, "DS", str);
}
else { // TODO : should we skip update on a type mismatch or force it?
// Layout bindings match w/ update ok, now verify that update is of the right type
if (!validateUpdateType(pLayout, pUpdate)) {
char str[1024];
- sprintf(str, "Descriptor update type of %s does not match overlapping binding type!", string_XGL_STRUCTURE_TYPE(pUpdate->sType));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH, "DS", str);
+ sprintf(str, "Descriptor update type of %s does not match overlapping binding type!", string_VK_STRUCTURE_TYPE(pUpdate->sType));
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH, "DS", str);
}
else {
// Save the update info
GENERIC_HEADER* pNewNode = shadowUpdateNode(pUpdate);
if (NULL == pNewNode) {
char str[1024];
- sprintf(str, "Out of memory while attempting to allocate UPDATE struct in xglUpdateDescriptors()");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
+ sprintf(str, "Out of memory while attempting to allocate UPDATE struct in vkUpdateDescriptors()");
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
}
else {
// Insert shadow node into LL of updates for this set
pFreeUpdate = pShadowUpdate;
pShadowUpdate = (GENERIC_HEADER*)pShadowUpdate->pNext;
uint32_t index = 0;
- XGL_UPDATE_SAMPLERS* pUS = NULL;
- XGL_UPDATE_SAMPLER_TEXTURES* pUST = NULL;
- XGL_UPDATE_IMAGES* pUI = NULL;
- XGL_UPDATE_BUFFERS* pUB = NULL;
+ VK_UPDATE_SAMPLERS* pUS = NULL;
+ VK_UPDATE_SAMPLER_TEXTURES* pUST = NULL;
+ VK_UPDATE_IMAGES* pUI = NULL;
+ VK_UPDATE_BUFFERS* pUB = NULL;
void** ppToFree = NULL;
switch (pFreeUpdate->sType)
{
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
- pUS = (XGL_UPDATE_SAMPLERS*)pFreeUpdate;
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ pUS = (VK_UPDATE_SAMPLERS*)pFreeUpdate;
if (pUS->pSamplers) {
ppToFree = (void**)&pUS->pSamplers;
#if ALLOC_DEBUG
free(*ppToFree);
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
- pUST = (XGL_UPDATE_SAMPLER_TEXTURES*)pFreeUpdate;
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ pUST = (VK_UPDATE_SAMPLER_TEXTURES*)pFreeUpdate;
for (index = 0; index < pUST->count; index++) {
if (pUST->pSamplerImageViews[index].pImageView) {
ppToFree = (void**)&pUST->pSamplerImageViews[index].pImageView;
#endif
free(*ppToFree);
break;
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
- pUI = (XGL_UPDATE_IMAGES*)pFreeUpdate;
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
+ pUI = (VK_UPDATE_IMAGES*)pFreeUpdate;
if (pUI->pImageViews) {
ppToFree = (void**)&pUI->pImageViews;
#if ALLOC_DEBUG
free(*ppToFree);
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
- pUB = (XGL_UPDATE_BUFFERS*)pFreeUpdate;
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ pUB = (VK_UPDATE_BUFFERS*)pFreeUpdate;
if (pUB->pBufferViews) {
ppToFree = (void**)&pUB->pBufferViews;
#if ALLOC_DEBUG
free(*ppToFree);
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
break;
default:
assert(0);
// NOTE : Calls to this function should be wrapped in mutex
static void freePools()
{
- for (unordered_map<XGL_DESCRIPTOR_POOL, POOL_NODE*>::iterator ii=poolMap.begin(); ii!=poolMap.end(); ++ii) {
+ for (unordered_map<VK_DESCRIPTOR_POOL, POOL_NODE*>::iterator ii=poolMap.begin(); ii!=poolMap.end(); ++ii) {
SET_NODE* pSet = (*ii).second->pSets;
SET_NODE* pFreeSet = pSet;
while (pSet) {
// NOTE : Calls to this function should be wrapped in mutex
static void freeLayouts()
{
- for (unordered_map<XGL_DESCRIPTOR_SET_LAYOUT, LAYOUT_NODE*>::iterator ii=layoutMap.begin(); ii!=layoutMap.end(); ++ii) {
+ for (unordered_map<VK_DESCRIPTOR_SET_LAYOUT, LAYOUT_NODE*>::iterator ii=layoutMap.begin(); ii!=layoutMap.end(); ++ii) {
LAYOUT_NODE* pLayout = (*ii).second;
if (pLayout->pTypes) {
            delete[] pLayout->pTypes;
}
// Currently clearing a set is removing all previous updates to that set
// TODO : Validate if this is correct clearing behavior
-static void clearDescriptorSet(XGL_DESCRIPTOR_SET set)
+static void clearDescriptorSet(VK_DESCRIPTOR_SET set)
{
SET_NODE* pSet = getSetNode(set);
if (!pSet) {
}
}
-static void clearDescriptorPool(XGL_DESCRIPTOR_POOL pool)
+static void clearDescriptorPool(VK_DESCRIPTOR_POOL pool)
{
POOL_NODE* pPool = getPoolNode(pool);
if (!pPool) {
char str[1024];
- sprintf(str, "Unable to find pool node for pool %p specified in xglClearDescriptorPool() call", (void*)pool);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pool, 0, DRAWSTATE_INVALID_POOL, "DS", str);
+ sprintf(str, "Unable to find pool node for pool %p specified in vkClearDescriptorPool() call", (void*)pool);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pool, 0, DRAWSTATE_INVALID_POOL, "DS", str);
}
else
{
}
}
// Code here to manage the Cmd buffer LL
-static GLOBAL_CB_NODE* getCBNode(XGL_CMD_BUFFER cb)
+static GLOBAL_CB_NODE* getCBNode(VK_CMD_BUFFER cb)
{
loader_platform_thread_lock_mutex(&globalLock);
if (cmdBufferMap.find(cb) == cmdBufferMap.end()) {
// NOTE : Calls to this function should be wrapped in mutex
static void freeCmdBuffers()
{
- for (unordered_map<XGL_CMD_BUFFER, GLOBAL_CB_NODE*>::iterator ii=cmdBufferMap.begin(); ii!=cmdBufferMap.end(); ++ii) {
+ for (unordered_map<VK_CMD_BUFFER, GLOBAL_CB_NODE*>::iterator ii=cmdBufferMap.begin(); ii!=cmdBufferMap.end(); ++ii) {
while (!(*ii).second->pCmds.empty()) {
delete (*ii).second->pCmds.back();
(*ii).second->pCmds.pop_back();
else {
char str[1024];
sprintf(str, "Out of memory while attempting to allocate new CMD_NODE for cmdBuffer %p", (void*)pCB->cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pCB->cmdBuffer, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pCB->cmdBuffer, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
}
}
-static void resetCB(const XGL_CMD_BUFFER cb)
+static void resetCB(const VK_CMD_BUFFER cb)
{
GLOBAL_CB_NODE* pCB = getCBNode(cb);
if (pCB) {
pCB->pCmds.pop_back();
}
// Reset CB state
- XGL_FLAGS saveFlags = pCB->flags;
+ VK_FLAGS saveFlags = pCB->flags;
uint32_t saveQueueNodeIndex = pCB->queueNodeIndex;
memset(pCB, 0, sizeof(GLOBAL_CB_NODE));
pCB->cmdBuffer = cb;
}
// Set the last bound dynamic state of given type
// TODO : Need to track this per cmdBuffer and correlate cmdBuffer for Draw w/ last bound for that cmdBuffer?
-static void setLastBoundDynamicState(const XGL_CMD_BUFFER cmdBuffer, const XGL_DYNAMIC_STATE_OBJECT state, const XGL_STATE_BIND_POINT sType)
+static void setLastBoundDynamicState(const VK_CMD_BUFFER cmdBuffer, const VK_DYNAMIC_STATE_OBJECT state, const VK_STATE_BIND_POINT sType)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
if (dynamicStateMap.find(state) == dynamicStateMap.end()) {
char str[1024];
sprintf(str, "Unable to find dynamic state object %p, was it ever created?", (void*)state);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, state, 0, DRAWSTATE_INVALID_DYNAMIC_STATE_OBJECT, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, state, 0, DRAWSTATE_INVALID_DYNAMIC_STATE_OBJECT, "DS", str);
}
else {
pCB->lastBoundDynamicState[sType] = dynamicStateMap[state];
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
}
// Print the last bound Gfx Pipeline
-static void printPipeline(const XGL_CMD_BUFFER cb)
+static void printPipeline(const VK_CMD_BUFFER cb)
{
GLOBAL_CB_NODE* pCB = getCBNode(cb);
if (pCB) {
// nothing to print
}
else {
- string pipeStr = xgl_print_xgl_graphics_pipeline_create_info(&pPipeTrav->graphicsPipelineCI, "{DS}").c_str();
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", pipeStr.c_str());
+ string pipeStr = vk_print_vk_graphics_pipeline_create_info(&pPipeTrav->graphicsPipelineCI, "{DS}").c_str();
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", pipeStr.c_str());
}
}
}
// Common Dot dumping code
-static void dsCoreDumpDot(const XGL_DESCRIPTOR_SET ds, FILE* pOutFile)
+static void dsCoreDumpDot(const VK_DESCRIPTOR_SET ds, FILE* pOutFile)
{
SET_NODE* pSet = getSetNode(ds);
if (pSet) {
char tmp_str[4*1024];
fprintf(pOutFile, "subgraph cluster_DescriptorPool\n{\nlabel=\"Descriptor Pool\"\n");
sprintf(tmp_str, "Pool (%p)", pPool->pool);
- char* pGVstr = xgl_gv_print_xgl_descriptor_pool_create_info(&pPool->createInfo, tmp_str);
+ char* pGVstr = vk_gv_print_vk_descriptor_pool_create_info(&pPool->createInfo, tmp_str);
fprintf(pOutFile, "%s", pGVstr);
free(pGVstr);
fprintf(pOutFile, "subgraph cluster_DescriptorSet\n{\nlabel=\"Descriptor Set (%p)\"\n", pSet->set);
uint32_t layout_index = 0;
++layout_index;
sprintf(tmp_str, "LAYOUT%u", layout_index);
- pGVstr = xgl_gv_print_xgl_descriptor_set_layout_create_info(&pLayout->createInfo, tmp_str);
+ pGVstr = vk_gv_print_vk_descriptor_set_layout_create_info(&pLayout->createInfo, tmp_str);
fprintf(pOutFile, "%s", pGVstr);
free(pGVstr);
if (pSet->pUpdateStructs) {
uint32_t i = 0;
for (i=0; i < pSet->descriptorCount; i++) {
if (pSet->ppDescriptors[i]) {
- fprintf(pOutFile, "<TR><TD PORT=\"slot%u\">slot%u</TD><TD>%s</TD></TR>", i, i, string_XGL_STRUCTURE_TYPE(pSet->ppDescriptors[i]->sType));
+ fprintf(pOutFile, "<TR><TD PORT=\"slot%u\">slot%u</TD><TD>%s</TD></TR>", i, i, string_VK_STRUCTURE_TYPE(pSet->ppDescriptors[i]->sType));
}
}
#define NUM_COLORS 7
uint32_t colorIdx = 0;
fprintf(pOutFile, "</TABLE>>\n];\n");
// Now add the views that are mapped to active descriptors
- XGL_UPDATE_SAMPLERS* pUS = NULL;
- XGL_UPDATE_SAMPLER_TEXTURES* pUST = NULL;
- XGL_UPDATE_IMAGES* pUI = NULL;
- XGL_UPDATE_BUFFERS* pUB = NULL;
- XGL_UPDATE_AS_COPY* pUAC = NULL;
- XGL_SAMPLER_CREATE_INFO* pSCI = NULL;
- XGL_IMAGE_VIEW_CREATE_INFO* pIVCI = NULL;
- XGL_BUFFER_VIEW_CREATE_INFO* pBVCI = NULL;
+ VK_UPDATE_SAMPLERS* pUS = NULL;
+ VK_UPDATE_SAMPLER_TEXTURES* pUST = NULL;
+ VK_UPDATE_IMAGES* pUI = NULL;
+ VK_UPDATE_BUFFERS* pUB = NULL;
+ VK_UPDATE_AS_COPY* pUAC = NULL;
+ VK_SAMPLER_CREATE_INFO* pSCI = NULL;
+ VK_IMAGE_VIEW_CREATE_INFO* pIVCI = NULL;
+ VK_BUFFER_VIEW_CREATE_INFO* pBVCI = NULL;
void** ppNextPtr = NULL;
void* pSaveNext = NULL;
for (i=0; i < pSet->descriptorCount; i++) {
if (pSet->ppDescriptors[i]) {
switch (pSet->ppDescriptors[i]->sType)
{
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
- pUS = (XGL_UPDATE_SAMPLERS*)pSet->ppDescriptors[i];
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ pUS = (VK_UPDATE_SAMPLERS*)pSet->ppDescriptors[i];
pSCI = getSamplerCreateInfo(pUS->pSamplers[i-pUS->arrayIndex]);
if (pSCI) {
sprintf(tmp_str, "SAMPLER%u", i);
- fprintf(pOutFile, "%s", xgl_gv_print_xgl_sampler_create_info(pSCI, tmp_str));
+ fprintf(pOutFile, "%s", vk_gv_print_vk_sampler_create_info(pSCI, tmp_str));
fprintf(pOutFile, "\"DESCRIPTORS\":slot%u -> \"%s\" [color=\"#%s\"];\n", i, tmp_str, edgeColors[colorIdx].c_str());
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
- pUST = (XGL_UPDATE_SAMPLER_TEXTURES*)pSet->ppDescriptors[i];
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ pUST = (VK_UPDATE_SAMPLER_TEXTURES*)pSet->ppDescriptors[i];
pSCI = getSamplerCreateInfo(pUST->pSamplerImageViews[i-pUST->arrayIndex].sampler);
if (pSCI) {
sprintf(tmp_str, "SAMPLER%u", i);
- fprintf(pOutFile, "%s", xgl_gv_print_xgl_sampler_create_info(pSCI, tmp_str));
+ fprintf(pOutFile, "%s", vk_gv_print_vk_sampler_create_info(pSCI, tmp_str));
fprintf(pOutFile, "\"DESCRIPTORS\":slot%u -> \"%s\" [color=\"#%s\"];\n", i, tmp_str, edgeColors[colorIdx].c_str());
}
pIVCI = getImageViewCreateInfo(pUST->pSamplerImageViews[i-pUST->arrayIndex].pImageView->view);
if (pIVCI) {
sprintf(tmp_str, "IMAGE_VIEW%u", i);
- fprintf(pOutFile, "%s", xgl_gv_print_xgl_image_view_create_info(pIVCI, tmp_str));
+ fprintf(pOutFile, "%s", vk_gv_print_vk_image_view_create_info(pIVCI, tmp_str));
fprintf(pOutFile, "\"DESCRIPTORS\":slot%u -> \"%s\" [color=\"#%s\"];\n", i, tmp_str, edgeColors[colorIdx].c_str());
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
- pUI = (XGL_UPDATE_IMAGES*)pSet->ppDescriptors[i];
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
+ pUI = (VK_UPDATE_IMAGES*)pSet->ppDescriptors[i];
pIVCI = getImageViewCreateInfo(pUI->pImageViews[i-pUI->arrayIndex].view);
if (pIVCI) {
sprintf(tmp_str, "IMAGE_VIEW%u", i);
- fprintf(pOutFile, "%s", xgl_gv_print_xgl_image_view_create_info(pIVCI, tmp_str));
+ fprintf(pOutFile, "%s", vk_gv_print_vk_image_view_create_info(pIVCI, tmp_str));
fprintf(pOutFile, "\"DESCRIPTORS\":slot%u -> \"%s\" [color=\"#%s\"];\n", i, tmp_str, edgeColors[colorIdx].c_str());
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
- pUB = (XGL_UPDATE_BUFFERS*)pSet->ppDescriptors[i];
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ pUB = (VK_UPDATE_BUFFERS*)pSet->ppDescriptors[i];
pBVCI = getBufferViewCreateInfo(pUB->pBufferViews[i-pUB->arrayIndex].view);
if (pBVCI) {
sprintf(tmp_str, "BUFFER_VIEW%u", i);
- fprintf(pOutFile, "%s", xgl_gv_print_xgl_buffer_view_create_info(pBVCI, tmp_str));
+ fprintf(pOutFile, "%s", vk_gv_print_vk_buffer_view_create_info(pBVCI, tmp_str));
fprintf(pOutFile, "\"DESCRIPTORS\":slot%u -> \"%s\" [color=\"#%s\"];\n", i, tmp_str, edgeColors[colorIdx].c_str());
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
- pUAC = (XGL_UPDATE_AS_COPY*)pSet->ppDescriptors[i];
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ pUAC = (VK_UPDATE_AS_COPY*)pSet->ppDescriptors[i];
// TODO : Need to validate this code
// Save off pNext and set to NULL while printing this struct, then restore it
ppNextPtr = (void**)&pUAC->pNext;
pSaveNext = *ppNextPtr;
*ppNextPtr = NULL;
sprintf(tmp_str, "UPDATE_AS_COPY%u", i);
- fprintf(pOutFile, "%s", xgl_gv_print_xgl_update_as_copy(pUAC, tmp_str));
+ fprintf(pOutFile, "%s", vk_gv_print_vk_update_as_copy(pUAC, tmp_str));
fprintf(pOutFile, "\"DESCRIPTORS\":slot%u -> \"%s\" [color=\"#%s\"];\n", i, tmp_str, edgeColors[colorIdx].c_str());
// Restore next ptr
*ppNextPtr = pSaveNext;
}
}
// Dump subgraph w/ DS info
-static void dsDumpDot(const XGL_CMD_BUFFER cb, FILE* pOutFile)
+static void dsDumpDot(const VK_CMD_BUFFER cb, FILE* pOutFile)
{
GLOBAL_CB_NODE* pCB = getCBNode(cb);
if (pCB && pCB->lastBoundDescriptorSet) {
fprintf(pOutFile, "digraph g {\ngraph [\nrankdir = \"TB\"\n];\nnode [\nfontsize = \"16\"\nshape = \"plaintext\"\n];\nedge [\n];\n");
fprintf(pOutFile, "subgraph cluster_dynamicState\n{\nlabel=\"Dynamic State\"\n");
char* pGVstr = NULL;
- for (uint32_t i = 0; i < XGL_NUM_STATE_BIND_POINT; i++) {
+ for (uint32_t i = 0; i < VK_NUM_STATE_BIND_POINT; i++) {
if (g_lastBoundDynamicState[i] && g_lastBoundDynamicState[i]->pCreateInfo) {
- pGVstr = dynamic_gv_display(g_lastBoundDynamicState[i]->pCreateInfo, string_XGL_STATE_BIND_POINT((XGL_STATE_BIND_POINT)i));
+ pGVstr = dynamic_gv_display(g_lastBoundDynamicState[i]->pCreateInfo, string_VK_STATE_BIND_POINT((VK_STATE_BIND_POINT)i));
fprintf(pOutFile, "%s", pGVstr);
free(pGVstr);
}
}
fprintf(pOutFile, "}\n"); // close dynamicState subgraph
fprintf(pOutFile, "subgraph cluster_PipelineStateObject\n{\nlabel=\"Pipeline State Object\"\n");
- pGVstr = xgl_gv_print_xgl_graphics_pipeline_create_info(&pPipeTrav->graphicsPipelineCI, "PSO HEAD");
+ pGVstr = vk_gv_print_vk_graphics_pipeline_create_info(&pPipeTrav->graphicsPipelineCI, "PSO HEAD");
fprintf(pOutFile, "%s", pGVstr);
free(pGVstr);
fprintf(pOutFile, "}\n");
}
}
// Dump a GraphViz dot file showing the pipeline for a given CB
-static void dumpDotFile(const XGL_CMD_BUFFER cb, string outFileName)
+static void dumpDotFile(const VK_CMD_BUFFER cb, string outFileName)
{
GLOBAL_CB_NODE* pCB = getCBNode(cb);
if (pCB) {
fprintf(pOutFile, "digraph g {\ngraph [\nrankdir = \"TB\"\n];\nnode [\nfontsize = \"16\"\nshape = \"plaintext\"\n];\nedge [\n];\n");
fprintf(pOutFile, "subgraph cluster_dynamicState\n{\nlabel=\"Dynamic State\"\n");
char* pGVstr = NULL;
- for (uint32_t i = 0; i < XGL_NUM_STATE_BIND_POINT; i++) {
+ for (uint32_t i = 0; i < VK_NUM_STATE_BIND_POINT; i++) {
if (pCB->lastBoundDynamicState[i] && pCB->lastBoundDynamicState[i]->pCreateInfo) {
- pGVstr = dynamic_gv_display(pCB->lastBoundDynamicState[i]->pCreateInfo, string_XGL_STATE_BIND_POINT((XGL_STATE_BIND_POINT)i));
+ pGVstr = dynamic_gv_display(pCB->lastBoundDynamicState[i]->pCreateInfo, string_VK_STATE_BIND_POINT((VK_STATE_BIND_POINT)i));
fprintf(pOutFile, "%s", pGVstr);
free(pGVstr);
}
}
fprintf(pOutFile, "}\n"); // close dynamicState subgraph
fprintf(pOutFile, "subgraph cluster_PipelineStateObject\n{\nlabel=\"Pipeline State Object\"\n");
- pGVstr = xgl_gv_print_xgl_graphics_pipeline_create_info(&pPipeTrav->graphicsPipelineCI, "PSO HEAD");
+ pGVstr = vk_gv_print_vk_graphics_pipeline_create_info(&pPipeTrav->graphicsPipelineCI, "PSO HEAD");
fprintf(pOutFile, "%s", pGVstr);
free(pGVstr);
fprintf(pOutFile, "}\n");
}
}
// Verify VB Buffer binding
-static void validateVBBinding(const XGL_CMD_BUFFER cb)
+static void validateVBBinding(const VK_CMD_BUFFER cb)
{
GLOBAL_CB_NODE* pCB = getCBNode(cb);
if (pCB && pCB->lastBoundPipeline) {
char str[1024];
if (!pPipeTrav) {
sprintf(str, "Can't find last bound Pipeline %p!", (void*)pCB->lastBoundPipeline);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NO_PIPELINE_BOUND, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NO_PIPELINE_BOUND, "DS", str);
}
else {
// Verify Vtx binding
if (pCB->lastVtxBinding >= pPipeTrav->vtxBindingCount) {
if (0 == pPipeTrav->vtxBindingCount) {
sprintf(str, "Vtx Buffer Index %u was bound, but no vtx buffers are attached to PSO.", pCB->lastVtxBinding);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, "DS", str);
}
else {
sprintf(str, "Vtx binding Index of %u exceeds PSO pVertexBindingDescriptions max array index of %u.", pCB->lastVtxBinding, (pPipeTrav->vtxBindingCount - 1));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, "DS", str);
}
}
else {
- string tmpStr = xgl_print_xgl_vertex_input_binding_description(&pPipeTrav->pVertexBindingDescriptions[pCB->lastVtxBinding], "{DS}INFO : ").c_str();
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmpStr.c_str());
+ string tmpStr = vk_print_vk_vertex_input_binding_description(&pPipeTrav->pVertexBindingDescriptions[pCB->lastVtxBinding], "{DS}INFO : ").c_str();
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmpStr.c_str());
}
}
}
}
}
// Print details of DS config to stdout
-static void printDSConfig(const XGL_CMD_BUFFER cb)
+static void printDSConfig(const VK_CMD_BUFFER cb)
{
char tmp_str[1024];
char ds_config_str[1024*256] = {0}; // TODO : Currently making this buffer HUGE w/o overrun protection. Need to be smarter, start smaller, and grow as needed.
POOL_NODE* pPool = getPoolNode(pSet->pool);
// Print out pool details
sprintf(tmp_str, "Details for pool %p.", (void*)pPool->pool);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
- string poolStr = xgl_print_xgl_descriptor_pool_create_info(&pPool->createInfo, " ");
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
+ string poolStr = vk_print_vk_descriptor_pool_create_info(&pPool->createInfo, " ");
sprintf(ds_config_str, "%s", poolStr.c_str());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", ds_config_str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", ds_config_str);
// Print out set details
char prefix[10];
uint32_t index = 0;
sprintf(tmp_str, "Details for descriptor set %p.", (void*)pSet->set);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
LAYOUT_NODE* pLayout = pSet->pLayout;
// Print layout details
sprintf(tmp_str, "Layout #%u, (object %p) for DS %p.", index+1, (void*)pLayout->layout, (void*)pSet->set);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
sprintf(prefix, " [L%u] ", index);
- string DSLstr = xgl_print_xgl_descriptor_set_layout_create_info(&pLayout->createInfo, prefix).c_str();
+ string DSLstr = vk_print_vk_descriptor_set_layout_create_info(&pLayout->createInfo, prefix).c_str();
sprintf(ds_config_str, "%s", DSLstr.c_str());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", ds_config_str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", ds_config_str);
index++;
GENERIC_HEADER* pUpdate = pSet->pUpdateStructs;
if (pUpdate) {
sprintf(tmp_str, "Update Chain [UC] for descriptor set %p:", (void*)pSet->set);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
sprintf(prefix, " [UC] ");
sprintf(ds_config_str, "%s", dynamic_display(pUpdate, prefix).c_str());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", ds_config_str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", ds_config_str);
// TODO : If there is a "view" associated with this update, print CI for that view
}
else {
- sprintf(tmp_str, "No Update Chain for descriptor set %p (xglUpdateDescriptors has not been called)", (void*)pSet->set);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
+ sprintf(tmp_str, "No Update Chain for descriptor set %p (vkUpdateDescriptors has not been called)", (void*)pSet->set);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
}
}
}
-static void printCB(const XGL_CMD_BUFFER cb)
+static void printCB(const VK_CMD_BUFFER cb)
{
GLOBAL_CB_NODE* pCB = getCBNode(cb);
if (pCB) {
char str[1024];
sprintf(str, "Cmds in CB %p", (void*)cb);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", str);
for (vector<CMD_NODE*>::iterator ii=pCB->pCmds.begin(); ii!=pCB->pCmds.end(); ++ii) {
sprintf(str, " CMD#%lu: %s", (*ii)->cmdNumber, cmdTypeToString((*ii)->type).c_str());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, cb, 0, DRAWSTATE_NONE, "DS", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, cb, 0, DRAWSTATE_NONE, "DS", str);
}
}
else {
}
-static void synchAndPrintDSConfig(const XGL_CMD_BUFFER cb)
+static void synchAndPrintDSConfig(const VK_CMD_BUFFER cb)
{
printDSConfig(cb);
printPipeline(cb);
getLayerOptionEnum("DrawStateReportLevel", (uint32_t *) &g_reportingLevel);
g_actionIsDefault = getLayerOptionEnum("DrawStateDebugAction", (uint32_t *) &g_debugAction);
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)
{
strOpt = getLayerOption("DrawStateLogFilename");
if (strOpt)
}
// initialize Layer dispatch table
// TODO handle multiple GPUs
- xglGetProcAddrType fpNextGPA;
+ vkGetProcAddrType fpNextGPA;
fpNextGPA = pCurObj->pGPA;
assert(fpNextGPA);
- layer_initialize_dispatch_table(&nextTable, fpNextGPA, (XGL_PHYSICAL_GPU) pCurObj->nextObject);
+ layer_initialize_dispatch_table(&nextTable, fpNextGPA, (VK_PHYSICAL_GPU) pCurObj->nextObject);
- xglGetProcAddrType fpGetProcAddr = (xglGetProcAddrType)fpNextGPA((XGL_PHYSICAL_GPU) pCurObj->nextObject, (char *) "xglGetProcAddr");
+ vkGetProcAddrType fpGetProcAddr = (vkGetProcAddrType)fpNextGPA((VK_PHYSICAL_GPU) pCurObj->nextObject, (char *) "vkGetProcAddr");
nextTable.GetProcAddr = fpGetProcAddr;
if (!globalLockInitialized)
{
// TODO/TBD: Need to delete this mutex sometime. How??? One
- // suggestion is to call this during xglCreateInstance(), and then we
- // can clean it up during xglDestroyInstance(). However, that requires
+ // suggestion is to call this during vkCreateInstance(), and then we
+ // can clean it up during vkDestroyInstance(). However, that requires
// that the layer have per-instance locks. We need to come back and
// address this soon.
loader_platform_thread_create_mutex(&globalLock);
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&g_initOnce, initDrawState);
- XGL_RESULT result = nextTable.CreateDevice((XGL_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
+ VK_RESULT result = nextTable.CreateDevice((VK_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyDevice(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyDevice(VK_DEVICE device)
{
// Free all the memory
loader_platform_thread_lock_mutex(&globalLock);
freePools();
freeLayouts();
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.DestroyDevice(device);
+ VK_RESULT result = nextTable.DestroyDevice(device);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char* pExtName)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char* pExtName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_RESULT result;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_RESULT result;
/* This entrypoint is NOT going to init its own dispatch table since loader calls here early */
if (!strcmp(pExtName, "DrawState") || !strcmp(pExtName, "drawStateDumpDotFile") ||
!strcmp(pExtName, "drawStateDumpCommandBufferDotFile") || !strcmp(pExtName, "drawStateDumpPngFile"))
{
- result = XGL_SUCCESS;
+ result = VK_SUCCESS;
} else if (nextTable.GetExtensionSupport != NULL)
{
- result = nextTable.GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);
+ result = nextTable.GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);
} else
{
- result = XGL_ERROR_INVALID_EXTENSION;
+ result = VK_ERROR_INVALID_EXTENSION;
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
{
if (gpu != NULL)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&g_initOnce, initDrawState);
- XGL_RESULT result = nextTable.EnumerateLayers((XGL_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ VK_RESULT result = nextTable.EnumerateLayers((VK_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
return result;
} else
{
if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
// This layer compatible with all GPUs
*pOutLayerCount = 1;
strncpy((char *) pOutLayers[0], "DrawState", maxStringSize);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueSubmit(XGL_QUEUE queue, uint32_t cmdBufferCount, const XGL_CMD_BUFFER* pCmdBuffers, XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueSubmit(VK_QUEUE queue, uint32_t cmdBufferCount, const VK_CMD_BUFFER* pCmdBuffers, VK_FENCE fence)
{
for (uint32_t i=0; i < cmdBufferCount; i++) {
// Validate that cmd buffers have been updated
}
- XGL_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, fence);
+ VK_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, fence);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyObject(XGL_OBJECT object)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyObject(VK_OBJECT object)
{
// TODO : When wrapped objects (such as dynamic state) are destroyed, need to clean up memory
- XGL_RESULT result = nextTable.DestroyObject(object);
+ VK_RESULT result = nextTable.DestroyObject(object);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateBufferView(XGL_DEVICE device, const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo, XGL_BUFFER_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateBufferView(VK_DEVICE device, const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo, VK_BUFFER_VIEW* pView)
{
- XGL_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
BUFFER_NODE* pNewNode = new BUFFER_NODE;
pNewNode->buffer = *pView;
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateImageView(XGL_DEVICE device, const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo, XGL_IMAGE_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateImageView(VK_DEVICE device, const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo, VK_IMAGE_VIEW* pView)
{
- XGL_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
IMAGE_NODE *pNewNode = new IMAGE_NODE;
pNewNode->image = *pView;
return result;
}
-static void track_pipeline(const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+static void track_pipeline(const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
+ // Create LL HEAD for this Pipeline
+ loader_platform_thread_lock_mutex(&globalLock);
PIPELINE_NODE* pPipeNode = new PIPELINE_NODE;
memset((void*)pPipeNode, 0, sizeof(PIPELINE_NODE));
pPipeNode->pipeline = *pPipeline;
initPipeline(pPipeNode, pCreateInfo);
+ loader_platform_thread_unlock_mutex(&globalLock);
}
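The hunk above moves the globalLock acquisition inside track_pipeline, so callers must no longer wrap the call in their own lock/unlock pair. A toy lock (hypothetical names, a plain int instead of the loader_platform mutex) shows why a leftover unlock at a call site becomes a bug once the helper locks internally:

```c
#include <assert.h>

/* Toy non-recursive lock: a double-lock or stray unlock trips an assert,
 * mirroring the undefined behavior of unlocking an unheld mutex. */
static int lock_held = 0;
static void take_lock(void)    { assert(!lock_held); lock_held = 1; }
static void release_lock(void) { assert(lock_held);  lock_held = 0; }

static int tracked_pipelines = 0;

/* hypothetical stand-in for the refactored track_pipeline() */
static void track_pipeline_node(void)
{
    take_lock();            /* helper now owns the lock... */
    tracked_pipelines++;
    release_lock();         /* ...and releases it before returning */
}
```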
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipeline(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipeline(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
- XGL_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
+ VK_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
-    // Create LL HEAD for this Pipeline
char str[1024];
sprintf(str, "Created Gfx Pipeline %p", (void*)*pPipeline);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, (XGL_BASE_OBJECT)pPipeline, 0, DRAWSTATE_NONE, "DS", str);
- loader_platform_thread_lock_mutex(&globalLock);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, *pPipeline, 0, DRAWSTATE_NONE, "DS", str);
track_pipeline(pCreateInfo, pPipeline);
- loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipelineDerivative(
- XGL_DEVICE device,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE basePipeline,
- XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipelineDerivative(
+ VK_DEVICE device,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE basePipeline,
+ VK_PIPELINE* pPipeline)
{
- XGL_RESULT result = nextTable.CreateGraphicsPipelineDerivative(device, pCreateInfo, basePipeline, pPipeline);
+ VK_RESULT result = nextTable.CreateGraphicsPipelineDerivative(device, pCreateInfo, basePipeline, pPipeline);
-    // Create LL HEAD for this Pipeline
char str[1024];
    sprintf(str, "Created Gfx Pipeline %p (derived from pipeline %p)", (void*)*pPipeline, (void*)basePipeline);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, (XGL_BASE_OBJECT)pPipeline, 0, DRAWSTATE_NONE, "DS", str);
- loader_platform_thread_lock_mutex(&globalLock);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, *pPipeline, 0, DRAWSTATE_NONE, "DS", str);
track_pipeline(pCreateInfo, pPipeline);
-    loader_platform_thread_unlock_mutex(&globalLock);
+    return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateSampler(XGL_DEVICE device, const XGL_SAMPLER_CREATE_INFO* pCreateInfo, XGL_SAMPLER* pSampler)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateSampler(VK_DEVICE device, const VK_SAMPLER_CREATE_INFO* pCreateInfo, VK_SAMPLER* pSampler)
{
- XGL_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
SAMPLER_NODE* pNewNode = new SAMPLER_NODE;
pNewNode->sampler = *pSampler;
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorSetLayout(XGL_DEVICE device, const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_SET_LAYOUT* pSetLayout)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDescriptorSetLayout(VK_DEVICE device, const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_SET_LAYOUT* pSetLayout)
{
- XGL_RESULT result = nextTable.CreateDescriptorSetLayout(device, pCreateInfo, pSetLayout);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateDescriptorSetLayout(device, pCreateInfo, pSetLayout);
+ if (VK_SUCCESS == result) {
LAYOUT_NODE* pNewNode = new LAYOUT_NODE;
if (NULL == pNewNode) {
char str[1024];
- sprintf(str, "Out of memory while attempting to allocate LAYOUT_NODE in xglCreateDescriptorSetLayout()");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, *pSetLayout, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
+ sprintf(str, "Out of memory while attempting to allocate LAYOUT_NODE in vkCreateDescriptorSetLayout()");
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, *pSetLayout, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
}
memset(pNewNode, 0, sizeof(LAYOUT_NODE));
- memcpy((void*)&pNewNode->createInfo, pCreateInfo, sizeof(XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO));
- pNewNode->createInfo.pBinding = new XGL_DESCRIPTOR_SET_LAYOUT_BINDING[pCreateInfo->count];
- memcpy((void*)pNewNode->createInfo.pBinding, pCreateInfo->pBinding, sizeof(XGL_DESCRIPTOR_SET_LAYOUT_BINDING)*pCreateInfo->count);
+ memcpy((void*)&pNewNode->createInfo, pCreateInfo, sizeof(VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO));
+ pNewNode->createInfo.pBinding = new VK_DESCRIPTOR_SET_LAYOUT_BINDING[pCreateInfo->count];
+ memcpy((void*)pNewNode->createInfo.pBinding, pCreateInfo->pBinding, sizeof(VK_DESCRIPTOR_SET_LAYOUT_BINDING)*pCreateInfo->count);
uint32_t totalCount = 0;
for (uint32_t i=0; i<pCreateInfo->count; i++) {
totalCount += pCreateInfo->pBinding[i].count;
if (pCreateInfo->pBinding[i].pImmutableSamplers) {
void** ppImmutableSamplers = (void**)&pNewNode->createInfo.pBinding[i].pImmutableSamplers;
- *ppImmutableSamplers = malloc(sizeof(XGL_SAMPLER)*pCreateInfo->pBinding[i].count);
- memcpy(*ppImmutableSamplers, pCreateInfo->pBinding[i].pImmutableSamplers, pCreateInfo->pBinding[i].count*sizeof(XGL_SAMPLER));
+ *ppImmutableSamplers = malloc(sizeof(VK_SAMPLER)*pCreateInfo->pBinding[i].count);
+ memcpy(*ppImmutableSamplers, pCreateInfo->pBinding[i].pImmutableSamplers, pCreateInfo->pBinding[i].count*sizeof(VK_SAMPLER));
}
}
if (totalCount > 0) {
- pNewNode->pTypes = new XGL_DESCRIPTOR_TYPE[totalCount];
+ pNewNode->pTypes = new VK_DESCRIPTOR_TYPE[totalCount];
uint32_t offset = 0;
uint32_t j = 0;
for (uint32_t i=0; i<pCreateInfo->count; i++) {
return result;
}
-XGL_RESULT XGLAPI xglCreateDescriptorSetLayoutChain(XGL_DEVICE device, uint32_t setLayoutArrayCount, const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray, XGL_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain)
+VK_RESULT VKAPI vkCreateDescriptorSetLayoutChain(VK_DEVICE device, uint32_t setLayoutArrayCount, const VK_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray, VK_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain)
{
- XGL_RESULT result = nextTable.CreateDescriptorSetLayoutChain(device, setLayoutArrayCount, pSetLayoutArray, pLayoutChain);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateDescriptorSetLayoutChain(device, setLayoutArrayCount, pSetLayoutArray, pLayoutChain);
+ if (VK_SUCCESS == result) {
// TODO : Need to capture the layout chains
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBeginDescriptorPoolUpdate(XGL_DEVICE device, XGL_DESCRIPTOR_UPDATE_MODE updateMode)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBeginDescriptorPoolUpdate(VK_DEVICE device, VK_DESCRIPTOR_UPDATE_MODE updateMode)
{
- XGL_RESULT result = nextTable.BeginDescriptorPoolUpdate(device, updateMode);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.BeginDescriptorPoolUpdate(device, updateMode);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
POOL_NODE* pPoolNode = poolMap.begin()->second;
if (!pPoolNode) {
char str[1024];
sprintf(str, "Unable to find pool node");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_INTERNAL_ERROR, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_INTERNAL_ERROR, "DS", str);
}
else {
pPoolNode->updateActive = 1;
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEndDescriptorPoolUpdate(XGL_DEVICE device, XGL_CMD_BUFFER cmd)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEndDescriptorPoolUpdate(VK_DEVICE device, VK_CMD_BUFFER cmd)
{
- XGL_RESULT result = nextTable.EndDescriptorPoolUpdate(device, cmd);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.EndDescriptorPoolUpdate(device, cmd);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
POOL_NODE* pPoolNode = poolMap.begin()->second;
if (!pPoolNode) {
char str[1024];
sprintf(str, "Unable to find pool node");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_INTERNAL_ERROR, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_INTERNAL_ERROR, "DS", str);
}
else {
if (!pPoolNode->updateActive) {
char str[1024];
- sprintf(str, "You must call xglBeginDescriptorPoolUpdate() before this call to xglEndDescriptorPoolUpdate()!");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_DS_END_WITHOUT_BEGIN, "DS", str);
+ sprintf(str, "You must call vkBeginDescriptorPoolUpdate() before this call to vkEndDescriptorPoolUpdate()!");
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_DS_END_WITHOUT_BEGIN, "DS", str);
}
else {
pPoolNode->updateActive = 0;
return result;
}
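The Begin/End pair above is a simple bracket check: End without a prior Begin raises DRAWSTATE_DS_END_WITHOUT_BEGIN. A minimal sketch of that updateActive flag (hypothetical names, error counting in place of layerCbMsg):

```c
#include <assert.h>

/* Bracket-check sketch: update_active mirrors the POOL_NODE field above. */
static int update_active = 0;
static int bracket_errors = 0;      /* stands in for layerCbMsg() reports */

static void begin_pool_update(void) { update_active = 1; }

static void end_pool_update(void)
{
    if (!update_active)
        bracket_errors++;           /* DRAWSTATE_DS_END_WITHOUT_BEGIN */
    else
        update_active = 0;
}
```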
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorPool(XGL_DEVICE device, XGL_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const XGL_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_POOL* pDescriptorPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDescriptorPool(VK_DEVICE device, VK_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const VK_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_POOL* pDescriptorPool)
{
- XGL_RESULT result = nextTable.CreateDescriptorPool(device, poolUsage, maxSets, pCreateInfo, pDescriptorPool);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateDescriptorPool(device, poolUsage, maxSets, pCreateInfo, pDescriptorPool);
+ if (VK_SUCCESS == result) {
// Insert this pool into Global Pool LL at head
char str[1024];
sprintf(str, "Created Descriptor Pool %p", (void*)*pDescriptorPool);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, (XGL_BASE_OBJECT)pDescriptorPool, 0, DRAWSTATE_NONE, "DS", str);
+        layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, (VK_BASE_OBJECT)*pDescriptorPool, 0, DRAWSTATE_NONE, "DS", str);
loader_platform_thread_lock_mutex(&globalLock);
POOL_NODE* pNewNode = new POOL_NODE;
if (NULL == pNewNode) {
char str[1024];
- sprintf(str, "Out of memory while attempting to allocate POOL_NODE in xglCreateDescriptorPool()");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, (XGL_BASE_OBJECT)*pDescriptorPool, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
+ sprintf(str, "Out of memory while attempting to allocate POOL_NODE in vkCreateDescriptorPool()");
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, (VK_BASE_OBJECT)*pDescriptorPool, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
}
else {
memset(pNewNode, 0, sizeof(POOL_NODE));
- XGL_DESCRIPTOR_POOL_CREATE_INFO* pCI = (XGL_DESCRIPTOR_POOL_CREATE_INFO*)&pNewNode->createInfo;
- memcpy((void*)pCI, pCreateInfo, sizeof(XGL_DESCRIPTOR_POOL_CREATE_INFO));
+ VK_DESCRIPTOR_POOL_CREATE_INFO* pCI = (VK_DESCRIPTOR_POOL_CREATE_INFO*)&pNewNode->createInfo;
+ memcpy((void*)pCI, pCreateInfo, sizeof(VK_DESCRIPTOR_POOL_CREATE_INFO));
if (pNewNode->createInfo.count) {
- size_t typeCountSize = pNewNode->createInfo.count * sizeof(XGL_DESCRIPTOR_TYPE_COUNT);
- pNewNode->createInfo.pTypeCount = new XGL_DESCRIPTOR_TYPE_COUNT[typeCountSize];
+ size_t typeCountSize = pNewNode->createInfo.count * sizeof(VK_DESCRIPTOR_TYPE_COUNT);
+            pNewNode->createInfo.pTypeCount = new VK_DESCRIPTOR_TYPE_COUNT[pNewNode->createInfo.count]; // typeCountSize is a byte count, not an element count
memcpy((void*)pNewNode->createInfo.pTypeCount, pCreateInfo->pTypeCount, typeCountSize);
}
pNewNode->poolUsage = poolUsage;
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetDescriptorPool(XGL_DESCRIPTOR_POOL descriptorPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetDescriptorPool(VK_DESCRIPTOR_POOL descriptorPool)
{
- XGL_RESULT result = nextTable.ResetDescriptorPool(descriptorPool);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.ResetDescriptorPool(descriptorPool);
+ if (VK_SUCCESS == result) {
clearDescriptorPool(descriptorPool);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglAllocDescriptorSets(XGL_DESCRIPTOR_POOL descriptorPool, XGL_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayouts, XGL_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkAllocDescriptorSets(VK_DESCRIPTOR_POOL descriptorPool, VK_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const VK_DESCRIPTOR_SET_LAYOUT* pSetLayouts, VK_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount)
{
- XGL_RESULT result = nextTable.AllocDescriptorSets(descriptorPool, setUsage, count, pSetLayouts, pDescriptorSets, pCount);
- if ((XGL_SUCCESS == result) || (*pCount > 0)) {
+ VK_RESULT result = nextTable.AllocDescriptorSets(descriptorPool, setUsage, count, pSetLayouts, pDescriptorSets, pCount);
+ if ((VK_SUCCESS == result) || (*pCount > 0)) {
POOL_NODE *pPoolNode = getPoolNode(descriptorPool);
if (!pPoolNode) {
char str[1024];
- sprintf(str, "Unable to find pool node for pool %p specified in xglAllocDescriptorSets() call", (void*)descriptorPool);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, descriptorPool, 0, DRAWSTATE_INVALID_POOL, "DS", str);
+ sprintf(str, "Unable to find pool node for pool %p specified in vkAllocDescriptorSets() call", (void*)descriptorPool);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, descriptorPool, 0, DRAWSTATE_INVALID_POOL, "DS", str);
}
else {
for (uint32_t i = 0; i < *pCount; i++) {
char str[1024];
sprintf(str, "Created Descriptor Set %p", (void*)pDescriptorSets[i]);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_NONE, "DS", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_NONE, "DS", str);
// Create new set node and add to head of pool nodes
SET_NODE* pNewNode = new SET_NODE;
if (NULL == pNewNode) {
char str[1024];
- sprintf(str, "Out of memory while attempting to allocate SET_NODE in xglAllocDescriptorSets()");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
+ sprintf(str, "Out of memory while attempting to allocate SET_NODE in vkAllocDescriptorSets()");
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
}
else {
memset(pNewNode, 0, sizeof(SET_NODE));
LAYOUT_NODE* pLayout = getLayoutNode(pSetLayouts[i]);
if (NULL == pLayout) {
char str[1024];
- sprintf(str, "Unable to find set layout node for layout %p specified in xglAllocDescriptorSets() call", (void*)pSetLayouts[i]);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pSetLayouts[i], 0, DRAWSTATE_INVALID_LAYOUT, "DS", str);
+ sprintf(str, "Unable to find set layout node for layout %p specified in vkAllocDescriptorSets() call", (void*)pSetLayouts[i]);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pSetLayouts[i], 0, DRAWSTATE_INVALID_LAYOUT, "DS", str);
}
pNewNode->pLayout = pLayout;
pNewNode->pool = descriptorPool;
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglClearDescriptorSets(XGL_DESCRIPTOR_POOL descriptorPool, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets)
+VK_LAYER_EXPORT void VKAPI vkClearDescriptorSets(VK_DESCRIPTOR_POOL descriptorPool, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets)
{
for (uint32_t i = 0; i < count; i++) {
clearDescriptorSet(pDescriptorSets[i]);
nextTable.ClearDescriptorSets(descriptorPool, count, pDescriptorSets);
}
-XGL_LAYER_EXPORT void XGLAPI xglUpdateDescriptors(XGL_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray)
+VK_LAYER_EXPORT void VKAPI vkUpdateDescriptors(VK_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray)
{
SET_NODE* pSet = getSetNode(descriptorSet);
if (!dsUpdateActive(descriptorSet)) {
char str[1024];
- sprintf(str, "You must call xglBeginDescriptorPoolUpdate() before this call to xglUpdateDescriptors()!");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pSet->pool, 0, DRAWSTATE_UPDATE_WITHOUT_BEGIN, "DS", str);
+ sprintf(str, "You must call vkBeginDescriptorPoolUpdate() before this call to vkUpdateDescriptors()!");
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pSet->pool, 0, DRAWSTATE_UPDATE_WITHOUT_BEGIN, "DS", str);
}
else {
- // pUpdateChain is a Linked-list of XGL_UPDATE_* structures defining the mappings for the descriptors
+ // pUpdateChain is a Linked-list of VK_UPDATE_* structures defining the mappings for the descriptors
dsUpdate(descriptorSet, updateCount, ppUpdateArray);
}
nextTable.UpdateDescriptors(descriptorSet, updateCount, ppUpdateArray);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicViewportState(XGL_DEVICE device, const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_VP_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicViewportState(VK_DEVICE device, const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_VP_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
- insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, XGL_STATE_BIND_VIEWPORT);
+ VK_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
+ insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, VK_STATE_BIND_VIEWPORT);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicRasterState(XGL_DEVICE device, const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_RS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicRasterState(VK_DEVICE device, const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_RS_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
- insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, XGL_STATE_BIND_RASTER);
+ VK_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
+ insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, VK_STATE_BIND_RASTER);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicColorBlendState(XGL_DEVICE device, const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_CB_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicColorBlendState(VK_DEVICE device, const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_CB_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
- insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, XGL_STATE_BIND_COLOR_BLEND);
+ VK_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
+ insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, VK_STATE_BIND_COLOR_BLEND);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicDepthStencilState(XGL_DEVICE device, const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_DS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicDepthStencilState(VK_DEVICE device, const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_DS_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
- insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, XGL_STATE_BIND_DEPTH_STENCIL);
+ VK_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
+ insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, VK_STATE_BIND_DEPTH_STENCIL);
return result;
}
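The four vkCreateDynamic*State wrappers above differ only in the bind-point constant they pass to insertDynamicState. A sketch of that bookkeeping as one slot per bind point (hypothetical layout; the real layer also keys a map by object handle):

```c
#include <assert.h>
#include <stddef.h>

/* One slot per bind point, remembering the most recently created state
 * object, in the spirit of insertDynamicState() above. */
enum bind_point {
    BIND_VIEWPORT, BIND_RASTER, BIND_COLOR_BLEND, BIND_DEPTH_STENCIL,
    BIND_POINT_COUNT
};

static const void *dynamic_state[BIND_POINT_COUNT];

static void insert_dynamic_state(const void *state, enum bind_point bp)
{
    dynamic_state[bp] = state;   /* later draws validate against these */
}
```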
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateCommandBuffer(XGL_DEVICE device, const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo, XGL_CMD_BUFFER* pCmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateCommandBuffer(VK_DEVICE device, const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo, VK_CMD_BUFFER* pCmdBuffer)
{
- XGL_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
GLOBAL_CB_NODE* pCB = new GLOBAL_CB_NODE;
memset(pCB, 0, sizeof(GLOBAL_CB_NODE));
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBeginCommandBuffer(XGL_CMD_BUFFER cmdBuffer, const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBeginCommandBuffer(VK_CMD_BUFFER cmdBuffer, const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
{
- XGL_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
+ if (VK_SUCCESS == result) {
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
if (CB_NEW != pCB->state)
resetCB(cmdBuffer);
pCB->state = CB_UPDATE_ACTIVE;
if (pBeginInfo->pNext) {
- XGL_CMD_BUFFER_GRAPHICS_BEGIN_INFO* pCbGfxBI = (XGL_CMD_BUFFER_GRAPHICS_BEGIN_INFO*)pBeginInfo->pNext;
- if (XGL_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO == pCbGfxBI->sType) {
+ VK_CMD_BUFFER_GRAPHICS_BEGIN_INFO* pCbGfxBI = (VK_CMD_BUFFER_GRAPHICS_BEGIN_INFO*)pBeginInfo->pNext;
+ if (VK_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO == pCbGfxBI->sType) {
pCB->activeRenderPass = pCbGfxBI->renderPassContinue.renderPass;
}
}
}
else {
char str[1024];
- sprintf(str, "In xglBeginCommandBuffer() and unable to find CmdBuffer Node for CB %p!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ sprintf(str, "In vkBeginCommandBuffer() and unable to find CmdBuffer Node for CB %p!", (void*)cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
updateCBTracking(cmdBuffer);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEndCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEndCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
- XGL_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
+ if (VK_SUCCESS == result) {
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
pCB->state = CB_UPDATE_COMPLETE;
}
else {
char str[1024];
- sprintf(str, "In xglEndCommandBuffer() and unable to find CmdBuffer Node for CB %p!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ sprintf(str, "In vkEndCommandBuffer() and unable to find CmdBuffer Node for CB %p!", (void*)cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
updateCBTracking(cmdBuffer);
//cbDumpDotFile("cb_dump.dot");
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
- XGL_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
+ if (VK_SUCCESS == result) {
resetCB(cmdBuffer);
updateCBTracking(cmdBuffer);
}
return result;
}
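Begin/End/Reset above drive a small per-command-buffer state machine (CB_NEW, CB_UPDATE_ACTIVE, CB_UPDATE_COMPLETE, with Reset returning to CB_NEW and Begin implicitly resetting a reused buffer). Sketched with hypothetical names for a single buffer:

```c
#include <assert.h>

/* Lifecycle sketch matching the pCB->state transitions above. */
enum cb_state { CB_NEW, CB_UPDATE_ACTIVE, CB_UPDATE_COMPLETE };
static enum cb_state cb = CB_NEW;

static void reset_cb(void) { cb = CB_NEW; }

static void begin_cb(void)
{
    if (cb != CB_NEW)       /* reused buffer: reset its tracking first */
        reset_cb();
    cb = CB_UPDATE_ACTIVE;  /* now recording */
}

static void end_cb(void) { cb = CB_UPDATE_COMPLETE; }
```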
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindPipeline(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_PIPELINE pipeline)
+VK_LAYER_EXPORT void VKAPI vkCmdBindPipeline(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_PIPELINE pipeline)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to bind Pipeline %p that doesn't exist!", (void*)pipeline);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pipeline, 0, DRAWSTATE_INVALID_PIPELINE, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pipeline, 0, DRAWSTATE_INVALID_PIPELINE, "DS", str);
}
}
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdBindPipeline(cmdBuffer, pipelineBindPoint, pipeline);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDynamicStateObject(XGL_CMD_BUFFER cmdBuffer, XGL_STATE_BIND_POINT stateBindPoint, XGL_DYNAMIC_STATE_OBJECT state)
+VK_LAYER_EXPORT void VKAPI vkCmdBindDynamicStateObject(VK_CMD_BUFFER cmdBuffer, VK_STATE_BIND_POINT stateBindPoint, VK_DYNAMIC_STATE_OBJECT state)
{
setLastBoundDynamicState(cmdBuffer, state, stateBindPoint);
nextTable.CmdBindDynamicStateObject(cmdBuffer, stateBindPoint, state);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDescriptorSets(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData)
+VK_LAYER_EXPORT void VKAPI vkCmdBindDescriptorSets(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
// TODO : This check here needs to be made at QueueSubmit time
/*
char str[1024];
- sprintf(str, "You must call xglEndDescriptorPoolUpdate(%p) before this call to xglCmdBindDescriptorSet()!", (void*)descriptorSet);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, descriptorSet, 0, DRAWSTATE_BINDING_DS_NO_END_UPDATE, "DS", str);
+ sprintf(str, "You must call vkEndDescriptorPoolUpdate(%p) before this call to vkCmdBindDescriptorSet()!", (void*)descriptorSet);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, descriptorSet, 0, DRAWSTATE_BINDING_DS_NO_END_UPDATE, "DS", str);
*/
}
loader_platform_thread_lock_mutex(&globalLock);
g_lastBoundDescriptorSet = pDescriptorSets[i];
loader_platform_thread_unlock_mutex(&globalLock);
char str[1024];
- sprintf(str, "DS %p bound on pipeline %s", (void*)pDescriptorSets[i], string_XGL_PIPELINE_BIND_POINT(pipelineBindPoint));
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_NONE, "DS", str);
+ sprintf(str, "DS %p bound on pipeline %s", (void*)pDescriptorSets[i], string_VK_PIPELINE_BIND_POINT(pipelineBindPoint));
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_NONE, "DS", str);
synchAndPrintDSConfig(cmdBuffer);
}
else {
char str[1024];
sprintf(str, "Attempt to bind DS %p that doesn't exist!", (void*)pDescriptorSets[i]);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_INVALID_SET, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_INVALID_SET, "DS", str);
}
}
}
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdBindDescriptorSets(cmdBuffer, pipelineBindPoint, layoutChain, layoutChainSlot, count, pDescriptorSets, pUserData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindIndexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, XGL_INDEX_TYPE indexType)
+VK_LAYER_EXPORT void VKAPI vkCmdBindIndexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, VK_INDEX_TYPE indexType)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdBindIndexBuffer(cmdBuffer, buffer, offset, indexType);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindVertexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t binding)
+VK_LAYER_EXPORT void VKAPI vkCmdBindVertexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t binding)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdBindVertexBuffer(cmdBuffer, buffer, offset, binding);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDraw(XGL_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount)
+VK_LAYER_EXPORT void VKAPI vkCmdDraw(VK_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
addCmd(pCB, CMD_DRAW);
pCB->drawCount[DRAW]++;
char str[1024];
- sprintf(str, "xglCmdDraw() call #%lu, reporting DS state:", g_drawCount[DRAW]++);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
+ sprintf(str, "vkCmdDraw() call #%lu, reporting DS state:", g_drawCount[DRAW]++);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
synchAndPrintDSConfig(cmdBuffer);
}
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDraw(cmdBuffer, firstVertex, vertexCount, firstInstance, instanceCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndexed(XGL_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndexed(VK_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
addCmd(pCB, CMD_DRAWINDEXED);
pCB->drawCount[DRAW_INDEXED]++;
char str[1024];
- sprintf(str, "xglCmdDrawIndexed() call #%lu, reporting DS state:", g_drawCount[DRAW_INDEXED]++);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
+ sprintf(str, "vkCmdDrawIndexed() call #%lu, reporting DS state:", g_drawCount[DRAW_INDEXED]++);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
synchAndPrintDSConfig(cmdBuffer);
}
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDrawIndexed(cmdBuffer, firstIndex, indexCount, vertexOffset, firstInstance, instanceCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
addCmd(pCB, CMD_DRAWINDIRECT);
pCB->drawCount[DRAW_INDIRECT]++;
char str[1024];
- sprintf(str, "xglCmdDrawIndirect() call #%lu, reporting DS state:", g_drawCount[DRAW_INDIRECT]++);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
+ sprintf(str, "vkCmdDrawIndirect() call #%lu, reporting DS state:", g_drawCount[DRAW_INDIRECT]++);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
synchAndPrintDSConfig(cmdBuffer);
}
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDrawIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndexedIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndexedIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
addCmd(pCB, CMD_DRAWINDEXEDINDIRECT);
pCB->drawCount[DRAW_INDEXED_INDIRECT]++;
char str[1024];
- sprintf(str, "xglCmdDrawIndexedIndirect() call #%lu, reporting DS state:", g_drawCount[DRAW_INDEXED_INDIRECT]++);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
+ sprintf(str, "vkCmdDrawIndexedIndirect() call #%lu, reporting DS state:", g_drawCount[DRAW_INDEXED_INDIRECT]++);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
synchAndPrintDSConfig(cmdBuffer);
}
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDrawIndexedIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDispatch(XGL_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z)
+VK_LAYER_EXPORT void VKAPI vkCmdDispatch(VK_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDispatch(cmdBuffer, x, y, z);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDispatchIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset)
+VK_LAYER_EXPORT void VKAPI vkCmdDispatchIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDispatchIndirect(cmdBuffer, buffer, offset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_BUFFER destBuffer, uint32_t regionCount, const XGL_BUFFER_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_BUFFER destBuffer, uint32_t regionCount, const VK_BUFFER_COPY* pRegions)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdCopyBuffer(cmdBuffer, srcBuffer, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t regionCount, const XGL_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t regionCount, const VK_IMAGE_COPY* pRegions)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdCopyImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBlitImage(XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t regionCount, const XGL_IMAGE_BLIT* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdBlitImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t regionCount, const VK_IMAGE_BLIT* pRegions)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdBlitImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBufferToImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER srcBuffer,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBufferToImage(VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER srcBuffer,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdCopyBufferToImage(cmdBuffer, srcBuffer, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImageToBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_BUFFER destBuffer,
- uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImageToBuffer(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_BUFFER destBuffer,
+ uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdCopyImageToBuffer(cmdBuffer, srcImage, srcImageLayout, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCloneImageData(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout)
+VK_LAYER_EXPORT void VKAPI vkCmdCloneImageData(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdCloneImageData(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdUpdateBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE dataSize, const uint32_t* pData)
+VK_LAYER_EXPORT void VKAPI vkCmdUpdateBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE dataSize, const uint32_t* pData)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdUpdateBuffer(cmdBuffer, destBuffer, destOffset, dataSize, pData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdFillBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE fillSize, uint32_t data)
+VK_LAYER_EXPORT void VKAPI vkCmdFillBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE fillSize, uint32_t data)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdFillBuffer(cmdBuffer, destBuffer, destOffset, fillSize, data);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearColorImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout,
- XGL_CLEAR_COLOR color,
- uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+VK_LAYER_EXPORT void VKAPI vkCmdClearColorImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout,
+ VK_CLEAR_COLOR color,
+ uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdClearColorImage(cmdBuffer, image, imageLayout, color, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearDepthStencil(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout,
- float depth, uint32_t stencil,
- uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+VK_LAYER_EXPORT void VKAPI vkCmdClearDepthStencil(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout,
+ float depth, uint32_t stencil,
+ uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdClearDepthStencil(cmdBuffer, image, imageLayout, depth, stencil, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResolveImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t rectCount, const XGL_IMAGE_RESOLVE* pRects)
+VK_LAYER_EXPORT void VKAPI vkCmdResolveImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t rectCount, const VK_IMAGE_RESOLVE* pRects)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdResolveImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, rectCount, pRects);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdSetEvent(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent)
+VK_LAYER_EXPORT void VKAPI vkCmdSetEvent(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdSetEvent(cmdBuffer, event, pipeEvent);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResetEvent(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent)
+VK_LAYER_EXPORT void VKAPI vkCmdResetEvent(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdResetEvent(cmdBuffer, event, pipeEvent);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdWaitEvents(XGL_CMD_BUFFER cmdBuffer, const XGL_EVENT_WAIT_INFO* pWaitInfo)
+VK_LAYER_EXPORT void VKAPI vkCmdWaitEvents(VK_CMD_BUFFER cmdBuffer, const VK_EVENT_WAIT_INFO* pWaitInfo)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdWaitEvents(cmdBuffer, pWaitInfo);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdPipelineBarrier(XGL_CMD_BUFFER cmdBuffer, const XGL_PIPELINE_BARRIER* pBarrier)
+VK_LAYER_EXPORT void VKAPI vkCmdPipelineBarrier(VK_CMD_BUFFER cmdBuffer, const VK_PIPELINE_BARRIER* pBarrier)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdPipelineBarrier(cmdBuffer, pBarrier);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBeginQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot, XGL_FLAGS flags)
+VK_LAYER_EXPORT void VKAPI vkCmdBeginQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot, VK_FLAGS flags)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdBeginQuery(cmdBuffer, queryPool, slot, flags);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdEndQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot)
+VK_LAYER_EXPORT void VKAPI vkCmdEndQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdEndQuery(cmdBuffer, queryPool, slot);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResetQueryPool(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
+VK_LAYER_EXPORT void VKAPI vkCmdResetQueryPool(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdResetQueryPool(cmdBuffer, queryPool, startQuery, queryCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdWriteTimestamp(XGL_CMD_BUFFER cmdBuffer, XGL_TIMESTAMP_TYPE timestampType, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdWriteTimestamp(VK_CMD_BUFFER cmdBuffer, VK_TIMESTAMP_TYPE timestampType, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdWriteTimestamp(cmdBuffer, timestampType, destBuffer, destOffset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdInitAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData)
+VK_LAYER_EXPORT void VKAPI vkCmdInitAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdInitAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, pData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdLoadAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER srcBuffer, XGL_GPU_SIZE srcOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdLoadAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER srcBuffer, VK_GPU_SIZE srcOffset)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdLoadAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, srcBuffer, srcOffset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdSaveAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdSaveAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdSaveAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, destBuffer, destOffset);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateFramebuffer(XGL_DEVICE device, const XGL_FRAMEBUFFER_CREATE_INFO* pCreateInfo, XGL_FRAMEBUFFER* pFramebuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateFramebuffer(VK_DEVICE device, const VK_FRAMEBUFFER_CREATE_INFO* pCreateInfo, VK_FRAMEBUFFER* pFramebuffer)
{
- XGL_RESULT result = nextTable.CreateFramebuffer(device, pCreateInfo, pFramebuffer);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateFramebuffer(device, pCreateInfo, pFramebuffer);
+ if (VK_SUCCESS == result) {
// Shadow create info and store in map
- XGL_FRAMEBUFFER_CREATE_INFO* localFBCI = new XGL_FRAMEBUFFER_CREATE_INFO(*pCreateInfo);
+ VK_FRAMEBUFFER_CREATE_INFO* localFBCI = new VK_FRAMEBUFFER_CREATE_INFO(*pCreateInfo);
if (pCreateInfo->pColorAttachments) {
- localFBCI->pColorAttachments = new XGL_COLOR_ATTACHMENT_BIND_INFO[localFBCI->colorAttachmentCount];
- memcpy((void*)localFBCI->pColorAttachments, pCreateInfo->pColorAttachments, localFBCI->colorAttachmentCount*sizeof(XGL_COLOR_ATTACHMENT_BIND_INFO));
+ localFBCI->pColorAttachments = new VK_COLOR_ATTACHMENT_BIND_INFO[localFBCI->colorAttachmentCount];
+ memcpy((void*)localFBCI->pColorAttachments, pCreateInfo->pColorAttachments, localFBCI->colorAttachmentCount*sizeof(VK_COLOR_ATTACHMENT_BIND_INFO));
}
if (pCreateInfo->pDepthStencilAttachment) {
- localFBCI->pDepthStencilAttachment = new XGL_DEPTH_STENCIL_BIND_INFO[localFBCI->colorAttachmentCount];
- memcpy((void*)localFBCI->pDepthStencilAttachment, pCreateInfo->pDepthStencilAttachment, localFBCI->colorAttachmentCount*sizeof(XGL_DEPTH_STENCIL_BIND_INFO));
+ localFBCI->pDepthStencilAttachment = new VK_DEPTH_STENCIL_BIND_INFO[localFBCI->colorAttachmentCount];
+ memcpy((void*)localFBCI->pDepthStencilAttachment, pCreateInfo->pDepthStencilAttachment, localFBCI->colorAttachmentCount*sizeof(VK_DEPTH_STENCIL_BIND_INFO));
}
frameBufferMap[*pFramebuffer] = localFBCI;
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateRenderPass(XGL_DEVICE device, const XGL_RENDER_PASS_CREATE_INFO* pCreateInfo, XGL_RENDER_PASS* pRenderPass)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateRenderPass(VK_DEVICE device, const VK_RENDER_PASS_CREATE_INFO* pCreateInfo, VK_RENDER_PASS* pRenderPass)
{
- XGL_RESULT result = nextTable.CreateRenderPass(device, pCreateInfo, pRenderPass);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateRenderPass(device, pCreateInfo, pRenderPass);
+ if (VK_SUCCESS == result) {
// Shadow create info and store in map
- XGL_RENDER_PASS_CREATE_INFO* localRPCI = new XGL_RENDER_PASS_CREATE_INFO(*pCreateInfo);
+ VK_RENDER_PASS_CREATE_INFO* localRPCI = new VK_RENDER_PASS_CREATE_INFO(*pCreateInfo);
if (pCreateInfo->pColorLoadOps) {
- localRPCI->pColorLoadOps = new XGL_ATTACHMENT_LOAD_OP[localRPCI->colorAttachmentCount];
- memcpy((void*)localRPCI->pColorLoadOps, pCreateInfo->pColorLoadOps, localRPCI->colorAttachmentCount*sizeof(XGL_ATTACHMENT_LOAD_OP));
+ localRPCI->pColorLoadOps = new VK_ATTACHMENT_LOAD_OP[localRPCI->colorAttachmentCount];
+ memcpy((void*)localRPCI->pColorLoadOps, pCreateInfo->pColorLoadOps, localRPCI->colorAttachmentCount*sizeof(VK_ATTACHMENT_LOAD_OP));
}
if (pCreateInfo->pColorStoreOps) {
- localRPCI->pColorStoreOps = new XGL_ATTACHMENT_STORE_OP[localRPCI->colorAttachmentCount];
- memcpy((void*)localRPCI->pColorStoreOps, pCreateInfo->pColorStoreOps, localRPCI->colorAttachmentCount*sizeof(XGL_ATTACHMENT_STORE_OP));
+ localRPCI->pColorStoreOps = new VK_ATTACHMENT_STORE_OP[localRPCI->colorAttachmentCount];
+ memcpy((void*)localRPCI->pColorStoreOps, pCreateInfo->pColorStoreOps, localRPCI->colorAttachmentCount*sizeof(VK_ATTACHMENT_STORE_OP));
}
if (pCreateInfo->pColorLoadClearValues) {
- localRPCI->pColorLoadClearValues = new XGL_CLEAR_COLOR[localRPCI->colorAttachmentCount];
- memcpy((void*)localRPCI->pColorLoadClearValues, pCreateInfo->pColorLoadClearValues, localRPCI->colorAttachmentCount*sizeof(XGL_CLEAR_COLOR));
+ localRPCI->pColorLoadClearValues = new VK_CLEAR_COLOR[localRPCI->colorAttachmentCount];
+ memcpy((void*)localRPCI->pColorLoadClearValues, pCreateInfo->pColorLoadClearValues, localRPCI->colorAttachmentCount*sizeof(VK_CLEAR_COLOR));
}
renderPassMap[*pRenderPass] = localRPCI;
}
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBeginRenderPass(XGL_CMD_BUFFER cmdBuffer, const XGL_RENDER_PASS_BEGIN *pRenderPassBegin)
+VK_LAYER_EXPORT void VKAPI vkCmdBeginRenderPass(VK_CMD_BUFFER cmdBuffer, const VK_RENDER_PASS_BEGIN *pRenderPassBegin)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
pCB->activeRenderPass = pRenderPassBegin->renderPass;
pCB->framebuffer = pRenderPassBegin->framebuffer;
if (pCB->lastBoundPipeline) {
- validatePipelineState(pCB, XGL_PIPELINE_BIND_POINT_GRAPHICS, pCB->lastBoundPipeline);
+ validatePipelineState(pCB, VK_PIPELINE_BIND_POINT_GRAPHICS, pCB->lastBoundPipeline);
}
} else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdBeginRenderPass(cmdBuffer, pRenderPassBegin);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdEndRenderPass(XGL_CMD_BUFFER cmdBuffer, XGL_RENDER_PASS renderPass)
+VK_LAYER_EXPORT void VKAPI vkCmdEndRenderPass(VK_CMD_BUFFER cmdBuffer, VK_RENDER_PASS renderPass)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdEndRenderPass(cmdBuffer, renderPass);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
{
// This layer intercepts callbacks
- XGL_LAYER_DBG_FUNCTION_NODE* pNewDbgFuncNode = (XGL_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(XGL_LAYER_DBG_FUNCTION_NODE));
+ VK_LAYER_DBG_FUNCTION_NODE* pNewDbgFuncNode = (VK_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(VK_LAYER_DBG_FUNCTION_NODE));
#if ALLOC_DEBUG
printf("Alloc34 #%lu pNewDbgFuncNode addr(%p)\n", ++g_alloc_count, (void*)pNewDbgFuncNode);
#endif
if (!pNewDbgFuncNode)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
pNewDbgFuncNode->pfnMsgCallback = pfnMsgCallback;
pNewDbgFuncNode->pUserData = pUserData;
pNewDbgFuncNode->pNext = g_pDbgFunctionHead;
g_pDbgFunctionHead = pNewDbgFuncNode;
// force callbacks if DebugAction hasn't been set already other than initial value
if (g_actionIsDefault) {
- g_debugAction = XGL_DBG_LAYER_ACTION_CALLBACK;
+ g_debugAction = VK_DBG_LAYER_ACTION_CALLBACK;
}
- XGL_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
+ VK_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
{
- XGL_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
- XGL_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;
+ VK_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
+ VK_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;
while (pTrav) {
if (pTrav->pfnMsgCallback == pfnMsgCallback) {
pPrev->pNext = pTrav->pNext;
if (g_pDbgFunctionHead == NULL)
{
if (g_actionIsDefault)
- g_debugAction = XGL_DBG_LAYER_ACTION_LOG_MSG;
+ g_debugAction = VK_DBG_LAYER_ACTION_LOG_MSG;
else
- g_debugAction = (XGL_LAYER_DBG_ACTION)(g_debugAction & ~((uint32_t)XGL_DBG_LAYER_ACTION_CALLBACK));
+ g_debugAction = (VK_LAYER_DBG_ACTION)(g_debugAction & ~((uint32_t)VK_DBG_LAYER_ACTION_CALLBACK));
}
- XGL_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
+ VK_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDbgMarkerBegin(XGL_CMD_BUFFER cmdBuffer, const char* pMarker)
+VK_LAYER_EXPORT void VKAPI vkCmdDbgMarkerBegin(VK_CMD_BUFFER cmdBuffer, const char* pMarker)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDbgMarkerBegin(cmdBuffer, pMarker);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDbgMarkerEnd(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT void VKAPI vkCmdDbgMarkerEnd(VK_CMD_BUFFER cmdBuffer)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDbgMarkerEnd(cmdBuffer);
}
// FIXME: NEED WINDOWS EQUIVALENT
char str[1024];
sprintf(str, "Cannot execute dot program yet on Windows.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_MISSING_DOT_PROGRAM, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_MISSING_DOT_PROGRAM, "DS", str);
#else // WIN32
char dotExe[32] = "/usr/bin/dot";
if( access(dotExe, X_OK) != -1) {
else {
char str[1024];
sprintf(str, "Cannot execute dot program at (%s) to dump requested %s file.", dotExe, outFileName);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_MISSING_DOT_PROGRAM, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_MISSING_DOT_PROGRAM, "DS", str);
}
#endif // WIN32
}
-XGL_LAYER_EXPORT void* XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char* funcName)
+VK_LAYER_EXPORT void* VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char* funcName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
if (gpu == NULL)
return NULL;
pCurObj = gpuw;
loader_platform_thread_once(&g_initOnce, initDrawState);
- if (!strcmp(funcName, "xglGetProcAddr"))
- return (void *) xglGetProcAddr;
- if (!strcmp(funcName, "xglCreateDevice"))
- return (void*) xglCreateDevice;
- if (!strcmp(funcName, "xglDestroyDevice"))
- return (void*) xglDestroyDevice;
- if (!strcmp(funcName, "xglGetExtensionSupport"))
- return (void*) xglGetExtensionSupport;
- if (!strcmp(funcName, "xglEnumerateLayers"))
- return (void*) xglEnumerateLayers;
- if (!strcmp(funcName, "xglQueueSubmit"))
- return (void*) xglQueueSubmit;
- if (!strcmp(funcName, "xglDestroyObject"))
- return (void*) xglDestroyObject;
- if (!strcmp(funcName, "xglCreateBufferView"))
- return (void*) xglCreateBufferView;
- if (!strcmp(funcName, "xglCreateImageView"))
- return (void*) xglCreateImageView;
- if (!strcmp(funcName, "xglCreateGraphicsPipeline"))
- return (void*) xglCreateGraphicsPipeline;
- if (!strcmp(funcName, "xglCreateGraphicsPipelineDerivative"))
- return (void*) xglCreateGraphicsPipelineDerivative;
- if (!strcmp(funcName, "xglCreateSampler"))
- return (void*) xglCreateSampler;
- if (!strcmp(funcName, "xglCreateDescriptorSetLayout"))
- return (void*) xglCreateDescriptorSetLayout;
- if (!strcmp(funcName, "xglCreateDescriptorSetLayoutChain"))
- return (void*) xglCreateDescriptorSetLayoutChain;
- if (!strcmp(funcName, "xglBeginDescriptorPoolUpdate"))
- return (void*) xglBeginDescriptorPoolUpdate;
- if (!strcmp(funcName, "xglEndDescriptorPoolUpdate"))
- return (void*) xglEndDescriptorPoolUpdate;
- if (!strcmp(funcName, "xglCreateDescriptorPool"))
- return (void*) xglCreateDescriptorPool;
- if (!strcmp(funcName, "xglResetDescriptorPool"))
- return (void*) xglResetDescriptorPool;
- if (!strcmp(funcName, "xglAllocDescriptorSets"))
- return (void*) xglAllocDescriptorSets;
- if (!strcmp(funcName, "xglClearDescriptorSets"))
- return (void*) xglClearDescriptorSets;
- if (!strcmp(funcName, "xglUpdateDescriptors"))
- return (void*) xglUpdateDescriptors;
- if (!strcmp(funcName, "xglCreateDynamicViewportState"))
- return (void*) xglCreateDynamicViewportState;
- if (!strcmp(funcName, "xglCreateDynamicRasterState"))
- return (void*) xglCreateDynamicRasterState;
- if (!strcmp(funcName, "xglCreateDynamicColorBlendState"))
- return (void*) xglCreateDynamicColorBlendState;
- if (!strcmp(funcName, "xglCreateDynamicDepthStencilState"))
- return (void*) xglCreateDynamicDepthStencilState;
- if (!strcmp(funcName, "xglCreateCommandBuffer"))
- return (void*) xglCreateCommandBuffer;
- if (!strcmp(funcName, "xglBeginCommandBuffer"))
- return (void*) xglBeginCommandBuffer;
- if (!strcmp(funcName, "xglEndCommandBuffer"))
- return (void*) xglEndCommandBuffer;
- if (!strcmp(funcName, "xglResetCommandBuffer"))
- return (void*) xglResetCommandBuffer;
- if (!strcmp(funcName, "xglCmdBindPipeline"))
- return (void*) xglCmdBindPipeline;
- if (!strcmp(funcName, "xglCmdBindDynamicStateObject"))
- return (void*) xglCmdBindDynamicStateObject;
- if (!strcmp(funcName, "xglCmdBindDescriptorSets"))
- return (void*) xglCmdBindDescriptorSets;
- if (!strcmp(funcName, "xglCmdBindVertexBuffer"))
- return (void*) xglCmdBindVertexBuffer;
- if (!strcmp(funcName, "xglCmdBindIndexBuffer"))
- return (void*) xglCmdBindIndexBuffer;
- if (!strcmp(funcName, "xglCmdDraw"))
- return (void*) xglCmdDraw;
- if (!strcmp(funcName, "xglCmdDrawIndexed"))
- return (void*) xglCmdDrawIndexed;
- if (!strcmp(funcName, "xglCmdDrawIndirect"))
- return (void*) xglCmdDrawIndirect;
- if (!strcmp(funcName, "xglCmdDrawIndexedIndirect"))
- return (void*) xglCmdDrawIndexedIndirect;
- if (!strcmp(funcName, "xglCmdDispatch"))
- return (void*) xglCmdDispatch;
- if (!strcmp(funcName, "xglCmdDispatchIndirect"))
- return (void*) xglCmdDispatchIndirect;
- if (!strcmp(funcName, "xglCmdCopyBuffer"))
- return (void*) xglCmdCopyBuffer;
- if (!strcmp(funcName, "xglCmdCopyImage"))
- return (void*) xglCmdCopyImage;
- if (!strcmp(funcName, "xglCmdCopyBufferToImage"))
- return (void*) xglCmdCopyBufferToImage;
- if (!strcmp(funcName, "xglCmdCopyImageToBuffer"))
- return (void*) xglCmdCopyImageToBuffer;
- if (!strcmp(funcName, "xglCmdCloneImageData"))
- return (void*) xglCmdCloneImageData;
- if (!strcmp(funcName, "xglCmdUpdateBuffer"))
- return (void*) xglCmdUpdateBuffer;
- if (!strcmp(funcName, "xglCmdFillBuffer"))
- return (void*) xglCmdFillBuffer;
- if (!strcmp(funcName, "xglCmdClearColorImage"))
- return (void*) xglCmdClearColorImage;
- if (!strcmp(funcName, "xglCmdClearDepthStencil"))
- return (void*) xglCmdClearDepthStencil;
- if (!strcmp(funcName, "xglCmdResolveImage"))
- return (void*) xglCmdResolveImage;
- if (!strcmp(funcName, "xglCmdSetEvent"))
- return (void*) xglCmdSetEvent;
- if (!strcmp(funcName, "xglCmdResetEvent"))
- return (void*) xglCmdResetEvent;
- if (!strcmp(funcName, "xglCmdWaitEvents"))
- return (void*) xglCmdWaitEvents;
- if (!strcmp(funcName, "xglCmdPipelineBarrier"))
- return (void*) xglCmdPipelineBarrier;
- if (!strcmp(funcName, "xglCmdBeginQuery"))
- return (void*) xglCmdBeginQuery;
- if (!strcmp(funcName, "xglCmdEndQuery"))
- return (void*) xglCmdEndQuery;
- if (!strcmp(funcName, "xglCmdResetQueryPool"))
- return (void*) xglCmdResetQueryPool;
- if (!strcmp(funcName, "xglCmdWriteTimestamp"))
- return (void*) xglCmdWriteTimestamp;
- if (!strcmp(funcName, "xglCmdInitAtomicCounters"))
- return (void*) xglCmdInitAtomicCounters;
- if (!strcmp(funcName, "xglCmdLoadAtomicCounters"))
- return (void*) xglCmdLoadAtomicCounters;
- if (!strcmp(funcName, "xglCmdSaveAtomicCounters"))
- return (void*) xglCmdSaveAtomicCounters;
- if (!strcmp(funcName, "xglCreateFramebuffer"))
- return (void*) xglCreateFramebuffer;
- if (!strcmp(funcName, "xglCreateRenderPass"))
- return (void*) xglCreateRenderPass;
- if (!strcmp(funcName, "xglCmdBeginRenderPass"))
- return (void*) xglCmdBeginRenderPass;
- if (!strcmp(funcName, "xglCmdEndRenderPass"))
- return (void*) xglCmdEndRenderPass;
- if (!strcmp(funcName, "xglDbgRegisterMsgCallback"))
- return (void*) xglDbgRegisterMsgCallback;
- if (!strcmp(funcName, "xglDbgUnregisterMsgCallback"))
- return (void*) xglDbgUnregisterMsgCallback;
- if (!strcmp(funcName, "xglCmdDbgMarkerBegin"))
- return (void*) xglCmdDbgMarkerBegin;
- if (!strcmp(funcName, "xglCmdDbgMarkerEnd"))
- return (void*) xglCmdDbgMarkerEnd;
+ if (!strcmp(funcName, "vkGetProcAddr"))
+ return (void *) vkGetProcAddr;
+ if (!strcmp(funcName, "vkCreateDevice"))
+ return (void*) vkCreateDevice;
+ if (!strcmp(funcName, "vkDestroyDevice"))
+ return (void*) vkDestroyDevice;
+ if (!strcmp(funcName, "vkGetExtensionSupport"))
+ return (void*) vkGetExtensionSupport;
+ if (!strcmp(funcName, "vkEnumerateLayers"))
+ return (void*) vkEnumerateLayers;
+ if (!strcmp(funcName, "vkQueueSubmit"))
+ return (void*) vkQueueSubmit;
+ if (!strcmp(funcName, "vkDestroyObject"))
+ return (void*) vkDestroyObject;
+ if (!strcmp(funcName, "vkCreateBufferView"))
+ return (void*) vkCreateBufferView;
+ if (!strcmp(funcName, "vkCreateImageView"))
+ return (void*) vkCreateImageView;
+ if (!strcmp(funcName, "vkCreateGraphicsPipeline"))
+ return (void*) vkCreateGraphicsPipeline;
+ if (!strcmp(funcName, "vkCreateGraphicsPipelineDerivative"))
+ return (void*) vkCreateGraphicsPipelineDerivative;
+ if (!strcmp(funcName, "vkCreateSampler"))
+ return (void*) vkCreateSampler;
+ if (!strcmp(funcName, "vkCreateDescriptorSetLayout"))
+ return (void*) vkCreateDescriptorSetLayout;
+ if (!strcmp(funcName, "vkCreateDescriptorSetLayoutChain"))
+ return (void*) vkCreateDescriptorSetLayoutChain;
+ if (!strcmp(funcName, "vkBeginDescriptorPoolUpdate"))
+ return (void*) vkBeginDescriptorPoolUpdate;
+ if (!strcmp(funcName, "vkEndDescriptorPoolUpdate"))
+ return (void*) vkEndDescriptorPoolUpdate;
+ if (!strcmp(funcName, "vkCreateDescriptorPool"))
+ return (void*) vkCreateDescriptorPool;
+ if (!strcmp(funcName, "vkResetDescriptorPool"))
+ return (void*) vkResetDescriptorPool;
+ if (!strcmp(funcName, "vkAllocDescriptorSets"))
+ return (void*) vkAllocDescriptorSets;
+ if (!strcmp(funcName, "vkClearDescriptorSets"))
+ return (void*) vkClearDescriptorSets;
+ if (!strcmp(funcName, "vkUpdateDescriptors"))
+ return (void*) vkUpdateDescriptors;
+ if (!strcmp(funcName, "vkCreateDynamicViewportState"))
+ return (void*) vkCreateDynamicViewportState;
+ if (!strcmp(funcName, "vkCreateDynamicRasterState"))
+ return (void*) vkCreateDynamicRasterState;
+ if (!strcmp(funcName, "vkCreateDynamicColorBlendState"))
+ return (void*) vkCreateDynamicColorBlendState;
+ if (!strcmp(funcName, "vkCreateDynamicDepthStencilState"))
+ return (void*) vkCreateDynamicDepthStencilState;
+ if (!strcmp(funcName, "vkCreateCommandBuffer"))
+ return (void*) vkCreateCommandBuffer;
+ if (!strcmp(funcName, "vkBeginCommandBuffer"))
+ return (void*) vkBeginCommandBuffer;
+ if (!strcmp(funcName, "vkEndCommandBuffer"))
+ return (void*) vkEndCommandBuffer;
+ if (!strcmp(funcName, "vkResetCommandBuffer"))
+ return (void*) vkResetCommandBuffer;
+ if (!strcmp(funcName, "vkCmdBindPipeline"))
+ return (void*) vkCmdBindPipeline;
+ if (!strcmp(funcName, "vkCmdBindDynamicStateObject"))
+ return (void*) vkCmdBindDynamicStateObject;
+ if (!strcmp(funcName, "vkCmdBindDescriptorSets"))
+ return (void*) vkCmdBindDescriptorSets;
+ if (!strcmp(funcName, "vkCmdBindVertexBuffer"))
+ return (void*) vkCmdBindVertexBuffer;
+ if (!strcmp(funcName, "vkCmdBindIndexBuffer"))
+ return (void*) vkCmdBindIndexBuffer;
+ if (!strcmp(funcName, "vkCmdDraw"))
+ return (void*) vkCmdDraw;
+ if (!strcmp(funcName, "vkCmdDrawIndexed"))
+ return (void*) vkCmdDrawIndexed;
+ if (!strcmp(funcName, "vkCmdDrawIndirect"))
+ return (void*) vkCmdDrawIndirect;
+ if (!strcmp(funcName, "vkCmdDrawIndexedIndirect"))
+ return (void*) vkCmdDrawIndexedIndirect;
+ if (!strcmp(funcName, "vkCmdDispatch"))
+ return (void*) vkCmdDispatch;
+ if (!strcmp(funcName, "vkCmdDispatchIndirect"))
+ return (void*) vkCmdDispatchIndirect;
+ if (!strcmp(funcName, "vkCmdCopyBuffer"))
+ return (void*) vkCmdCopyBuffer;
+ if (!strcmp(funcName, "vkCmdCopyImage"))
+ return (void*) vkCmdCopyImage;
+ if (!strcmp(funcName, "vkCmdCopyBufferToImage"))
+ return (void*) vkCmdCopyBufferToImage;
+ if (!strcmp(funcName, "vkCmdCopyImageToBuffer"))
+ return (void*) vkCmdCopyImageToBuffer;
+ if (!strcmp(funcName, "vkCmdCloneImageData"))
+ return (void*) vkCmdCloneImageData;
+ if (!strcmp(funcName, "vkCmdUpdateBuffer"))
+ return (void*) vkCmdUpdateBuffer;
+ if (!strcmp(funcName, "vkCmdFillBuffer"))
+ return (void*) vkCmdFillBuffer;
+ if (!strcmp(funcName, "vkCmdClearColorImage"))
+ return (void*) vkCmdClearColorImage;
+ if (!strcmp(funcName, "vkCmdClearDepthStencil"))
+ return (void*) vkCmdClearDepthStencil;
+ if (!strcmp(funcName, "vkCmdResolveImage"))
+ return (void*) vkCmdResolveImage;
+ if (!strcmp(funcName, "vkCmdSetEvent"))
+ return (void*) vkCmdSetEvent;
+ if (!strcmp(funcName, "vkCmdResetEvent"))
+ return (void*) vkCmdResetEvent;
+ if (!strcmp(funcName, "vkCmdWaitEvents"))
+ return (void*) vkCmdWaitEvents;
+ if (!strcmp(funcName, "vkCmdPipelineBarrier"))
+ return (void*) vkCmdPipelineBarrier;
+ if (!strcmp(funcName, "vkCmdBeginQuery"))
+ return (void*) vkCmdBeginQuery;
+ if (!strcmp(funcName, "vkCmdEndQuery"))
+ return (void*) vkCmdEndQuery;
+ if (!strcmp(funcName, "vkCmdResetQueryPool"))
+ return (void*) vkCmdResetQueryPool;
+ if (!strcmp(funcName, "vkCmdWriteTimestamp"))
+ return (void*) vkCmdWriteTimestamp;
+ if (!strcmp(funcName, "vkCmdInitAtomicCounters"))
+ return (void*) vkCmdInitAtomicCounters;
+ if (!strcmp(funcName, "vkCmdLoadAtomicCounters"))
+ return (void*) vkCmdLoadAtomicCounters;
+ if (!strcmp(funcName, "vkCmdSaveAtomicCounters"))
+ return (void*) vkCmdSaveAtomicCounters;
+ if (!strcmp(funcName, "vkCreateFramebuffer"))
+ return (void*) vkCreateFramebuffer;
+ if (!strcmp(funcName, "vkCreateRenderPass"))
+ return (void*) vkCreateRenderPass;
+ if (!strcmp(funcName, "vkCmdBeginRenderPass"))
+ return (void*) vkCmdBeginRenderPass;
+ if (!strcmp(funcName, "vkCmdEndRenderPass"))
+ return (void*) vkCmdEndRenderPass;
+ if (!strcmp(funcName, "vkDbgRegisterMsgCallback"))
+ return (void*) vkDbgRegisterMsgCallback;
+ if (!strcmp(funcName, "vkDbgUnregisterMsgCallback"))
+ return (void*) vkDbgUnregisterMsgCallback;
+ if (!strcmp(funcName, "vkCmdDbgMarkerBegin"))
+ return (void*) vkCmdDbgMarkerBegin;
+ if (!strcmp(funcName, "vkCmdDbgMarkerEnd"))
+ return (void*) vkCmdDbgMarkerEnd;
if (!strcmp("drawStateDumpDotFile", funcName))
return (void*) drawStateDumpDotFile;
if (!strcmp("drawStateDumpCommandBufferDotFile", funcName))
else {
if (gpuw->pGPA == NULL)
return NULL;
- return gpuw->pGPA((XGL_PHYSICAL_GPU)gpuw->nextObject, funcName);
+ return gpuw->pGPA((VK_PHYSICAL_GPU)gpuw->nextObject, funcName);
}
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
-#include "xglLayer.h"
+#include "vkLayer.h"
#include <vector>
using namespace std;
DRAWSTATE_UPDATE_WITHOUT_BEGIN, // Attempt to update descriptors w/o calling BeginDescriptorPoolUpdate
DRAWSTATE_INVALID_PIPELINE, // Invalid Pipeline referenced
DRAWSTATE_INVALID_CMD_BUFFER, // Invalid CmdBuffer referenced
- DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, // binding in xglCmdBindVertexData() too large for PSO's pVertexBindingDescriptions array
+ DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, // binding in vkCmdBindVertexData() too large for PSO's pVertexBindingDescriptions array
DRAWSTATE_INVALID_DYNAMIC_STATE_OBJECT, // Invalid dyn state object
DRAWSTATE_MISSING_DOT_PROGRAM, // No "dot" program in order to generate png image
- DRAWSTATE_BINDING_DS_NO_END_UPDATE, // DS bound to CmdBuffer w/o call to xglEndDescriptorSetUpdate())
+ DRAWSTATE_BINDING_DS_NO_END_UPDATE, // DS bound to CmdBuffer w/o call to vkEndDescriptorSetUpdate()
DRAWSTATE_NO_DS_POOL, // No DS Pool is available
DRAWSTATE_OUT_OF_MEMORY, // malloc failed
DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH, // Type in layout vs. update are not the same
typedef struct _SHADER_DS_MAPPING {
uint32_t slotCount;
- XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pShaderMappingSlot;
+ VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pShaderMappingSlot;
} SHADER_DS_MAPPING;
typedef struct _GENERIC_HEADER {
- XGL_STRUCTURE_TYPE sType;
+ VK_STRUCTURE_TYPE sType;
const void* pNext;
} GENERIC_HEADER;
typedef struct _PIPELINE_NODE {
- XGL_PIPELINE pipeline;
-
- XGL_GRAPHICS_PIPELINE_CREATE_INFO graphicsPipelineCI;
- XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO vertexInputCI;
- XGL_PIPELINE_IA_STATE_CREATE_INFO iaStateCI;
- XGL_PIPELINE_TESS_STATE_CREATE_INFO tessStateCI;
- XGL_PIPELINE_VP_STATE_CREATE_INFO vpStateCI;
- XGL_PIPELINE_RS_STATE_CREATE_INFO rsStateCI;
- XGL_PIPELINE_MS_STATE_CREATE_INFO msStateCI;
- XGL_PIPELINE_CB_STATE_CREATE_INFO cbStateCI;
- XGL_PIPELINE_DS_STATE_CREATE_INFO dsStateCI;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO vsCI;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO tcsCI;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO tesCI;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO gsCI;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO fsCI;
- // Compute shader is include in XGL_COMPUTE_PIPELINE_CREATE_INFO
- XGL_COMPUTE_PIPELINE_CREATE_INFO computePipelineCI;
-
- XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateTree; // Ptr to shadow of data in create tree
+ VK_PIPELINE pipeline;
+
+ VK_GRAPHICS_PIPELINE_CREATE_INFO graphicsPipelineCI;
+ VK_PIPELINE_VERTEX_INPUT_CREATE_INFO vertexInputCI;
+ VK_PIPELINE_IA_STATE_CREATE_INFO iaStateCI;
+ VK_PIPELINE_TESS_STATE_CREATE_INFO tessStateCI;
+ VK_PIPELINE_VP_STATE_CREATE_INFO vpStateCI;
+ VK_PIPELINE_RS_STATE_CREATE_INFO rsStateCI;
+ VK_PIPELINE_MS_STATE_CREATE_INFO msStateCI;
+ VK_PIPELINE_CB_STATE_CREATE_INFO cbStateCI;
+ VK_PIPELINE_DS_STATE_CREATE_INFO dsStateCI;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO vsCI;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO tcsCI;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO tesCI;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO gsCI;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO fsCI;
+ // Compute shader is included in VK_COMPUTE_PIPELINE_CREATE_INFO
+ VK_COMPUTE_PIPELINE_CREATE_INFO computePipelineCI;
+
+ VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateTree; // Ptr to shadow of data in create tree
// Vtx input info (if any)
uint32_t vtxBindingCount; // number of bindings
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION* pVertexBindingDescriptions;
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION* pVertexBindingDescriptions;
uint32_t vtxAttributeCount; // number of attributes
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION* pVertexAttributeDescriptions;
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION* pVertexAttributeDescriptions;
uint32_t attachmentCount; // number of CB attachments
- XGL_PIPELINE_CB_ATTACHMENT_STATE* pAttachments;
+ VK_PIPELINE_CB_ATTACHMENT_STATE* pAttachments;
} PIPELINE_NODE;
typedef struct _SAMPLER_NODE {
- XGL_SAMPLER sampler;
- XGL_SAMPLER_CREATE_INFO createInfo;
+ VK_SAMPLER sampler;
+ VK_SAMPLER_CREATE_INFO createInfo;
} SAMPLER_NODE;
typedef struct _IMAGE_NODE {
- XGL_IMAGE_VIEW image;
- XGL_IMAGE_VIEW_CREATE_INFO createInfo;
- XGL_IMAGE_VIEW_ATTACH_INFO attachInfo;
+ VK_IMAGE_VIEW image;
+ VK_IMAGE_VIEW_CREATE_INFO createInfo;
+ VK_IMAGE_VIEW_ATTACH_INFO attachInfo;
} IMAGE_NODE;
typedef struct _BUFFER_NODE {
- XGL_BUFFER_VIEW buffer;
- XGL_BUFFER_VIEW_CREATE_INFO createInfo;
- XGL_BUFFER_VIEW_ATTACH_INFO attachInfo;
+ VK_BUFFER_VIEW buffer;
+ VK_BUFFER_VIEW_CREATE_INFO createInfo;
+ VK_BUFFER_VIEW_ATTACH_INFO attachInfo;
} BUFFER_NODE;
typedef struct _DYNAMIC_STATE_NODE {
- XGL_DYNAMIC_STATE_OBJECT stateObj;
+ VK_DYNAMIC_STATE_OBJECT stateObj;
GENERIC_HEADER* pCreateInfo;
union {
- XGL_DYNAMIC_VP_STATE_CREATE_INFO vpci;
- XGL_DYNAMIC_RS_STATE_CREATE_INFO rsci;
- XGL_DYNAMIC_CB_STATE_CREATE_INFO cbci;
- XGL_DYNAMIC_DS_STATE_CREATE_INFO dsci;
+ VK_DYNAMIC_VP_STATE_CREATE_INFO vpci;
+ VK_DYNAMIC_RS_STATE_CREATE_INFO rsci;
+ VK_DYNAMIC_CB_STATE_CREATE_INFO cbci;
+ VK_DYNAMIC_DS_STATE_CREATE_INFO dsci;
} create_info;
} DYNAMIC_STATE_NODE;
// Descriptor Data structures
// Layout Node has the core layout data
typedef struct _LAYOUT_NODE {
- XGL_DESCRIPTOR_SET_LAYOUT layout;
- XGL_DESCRIPTOR_TYPE* pTypes; // Dynamic array that will be created to verify descriptor types
- XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO createInfo;
+ VK_DESCRIPTOR_SET_LAYOUT layout;
+ VK_DESCRIPTOR_TYPE* pTypes; // Dynamic array that will be created to verify descriptor types
+ VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO createInfo;
uint32_t startIndex; // 1st index of this layout
uint32_t endIndex; // last index of this layout
} LAYOUT_NODE;
typedef struct _SET_NODE {
- XGL_DESCRIPTOR_SET set;
- XGL_DESCRIPTOR_POOL pool;
- XGL_DESCRIPTOR_SET_USAGE setUsage;
+ VK_DESCRIPTOR_SET set;
+ VK_DESCRIPTOR_POOL pool;
+ VK_DESCRIPTOR_SET_USAGE setUsage;
// Head of LL of all Update structs for this set
GENERIC_HEADER* pUpdateStructs;
// Total num of descriptors in this set (count of its layout plus all prior layouts)
} SET_NODE;
typedef struct _POOL_NODE {
- XGL_DESCRIPTOR_POOL pool;
- XGL_DESCRIPTOR_POOL_USAGE poolUsage;
+ VK_DESCRIPTOR_POOL pool;
+ VK_DESCRIPTOR_POOL_USAGE poolUsage;
uint32_t maxSets;
- XGL_DESCRIPTOR_POOL_CREATE_INFO createInfo;
+ VK_DESCRIPTOR_POOL_CREATE_INFO createInfo;
bool32_t updateActive; // Track if Pool is in an update block
SET_NODE* pSets; // Head of LL of sets for this Pool
} POOL_NODE;
} CB_STATE;
// Cmd Buffer Wrapper Struct
typedef struct _GLOBAL_CB_NODE {
- XGL_CMD_BUFFER cmdBuffer;
+ VK_CMD_BUFFER cmdBuffer;
uint32_t queueNodeIndex;
- XGL_FLAGS flags;
- XGL_FENCE fence; // fence tracking this cmd buffer
+ VK_FLAGS flags;
+ VK_FENCE fence; // fence tracking this cmd buffer
uint64_t numCmds; // number of cmds in this CB
uint64_t drawCount[NUM_DRAW_TYPES]; // Count of each type of draw in this CB
CB_STATE state; // Track if cmd buffer update status
// Currently storing "lastBound" objects on per-CB basis
// long-term may want to create caches of "lastBound" states and could have
// each individual CMD_NODE referencing its own "lastBound" state
- XGL_PIPELINE lastBoundPipeline;
+ VK_PIPELINE lastBoundPipeline;
uint32_t lastVtxBinding;
- DYNAMIC_STATE_NODE* lastBoundDynamicState[XGL_NUM_STATE_BIND_POINT];
- XGL_DESCRIPTOR_SET lastBoundDescriptorSet;
- XGL_RENDER_PASS activeRenderPass;
- XGL_FRAMEBUFFER framebuffer;
+ DYNAMIC_STATE_NODE* lastBoundDynamicState[VK_NUM_STATE_BIND_POINT];
+ VK_DESCRIPTOR_SET lastBoundDescriptorSet;
+ VK_RENDER_PASS activeRenderPass;
+ VK_FRAMEBUFFER framebuffer;
} GLOBAL_CB_NODE;
//prototypes for extension functions
#include <string.h>
#include "loader_platform.h"
#include "glave_snapshot.h"
-#include "xgl_struct_string_helper.h"
+#include "vk_struct_string_helper.h"
#define LAYER_NAME_STR "GlaveSnapshot"
#define LAYER_ABBREV_STR "GLVSnap"
-static XGL_LAYER_DISPATCH_TABLE nextTable;
-static XGL_BASE_LAYER_OBJECT *pCurObj;
+static VK_LAYER_DISPATCH_TABLE nextTable;
+static VK_BASE_LAYER_OBJECT *pCurObj;
// The following is #included again to catch certain OS-specific functions being used:
#include "loader_platform.h"
memcpy(*ppDest, pSrc, size);
}
-XGL_DEVICE_CREATE_INFO* glv_deepcopy_xgl_device_create_info(const XGL_DEVICE_CREATE_INFO* pSrcCreateInfo)
+VK_DEVICE_CREATE_INFO* glv_deepcopy_VK_DEVICE_CREATE_INFO(const VK_DEVICE_CREATE_INFO* pSrcCreateInfo)
{
- XGL_DEVICE_CREATE_INFO* pDestCreateInfo;
+ VK_DEVICE_CREATE_INFO* pDestCreateInfo;
- // NOTE: partially duplicated code from add_XGL_DEVICE_CREATE_INFO_to_packet(...)
+ // NOTE: partially duplicates code from add_VK_DEVICE_CREATE_INFO_to_packet(...)
{
uint32_t i;
- glv_vk_malloc_and_copy((void**)&pDestCreateInfo, sizeof(XGL_DEVICE_CREATE_INFO), pSrcCreateInfo);
- glv_vk_malloc_and_copy((void**)&pDestCreateInfo->pRequestedQueues, pSrcCreateInfo->queueRecordCount*sizeof(XGL_DEVICE_QUEUE_CREATE_INFO), pSrcCreateInfo->pRequestedQueues);
+ glv_vk_malloc_and_copy((void**)&pDestCreateInfo, sizeof(VK_DEVICE_CREATE_INFO), pSrcCreateInfo);
+ glv_vk_malloc_and_copy((void**)&pDestCreateInfo->pRequestedQueues, pSrcCreateInfo->queueRecordCount*sizeof(VK_DEVICE_QUEUE_CREATE_INFO), pSrcCreateInfo->pRequestedQueues);
if (pSrcCreateInfo->extensionCount > 0)
{
glv_vk_malloc_and_copy((void**)&pDestCreateInfo->ppEnabledExtensionNames[i], strlen(pSrcCreateInfo->ppEnabledExtensionNames[i]) + 1, pSrcCreateInfo->ppEnabledExtensionNames[i]);
}
}
- XGL_LAYER_CREATE_INFO *pSrcNext = ( XGL_LAYER_CREATE_INFO *) pSrcCreateInfo->pNext;
- XGL_LAYER_CREATE_INFO **ppDstNext = ( XGL_LAYER_CREATE_INFO **) &pDestCreateInfo->pNext;
+ VK_LAYER_CREATE_INFO *pSrcNext = ( VK_LAYER_CREATE_INFO *) pSrcCreateInfo->pNext;
+ VK_LAYER_CREATE_INFO **ppDstNext = ( VK_LAYER_CREATE_INFO **) &pDestCreateInfo->pNext;
while (pSrcNext != NULL)
{
- if ((pSrcNext->sType == XGL_STRUCTURE_TYPE_LAYER_CREATE_INFO) && pSrcNext->layerCount > 0)
+ if ((pSrcNext->sType == VK_STRUCTURE_TYPE_LAYER_CREATE_INFO) && pSrcNext->layerCount > 0)
{
- glv_vk_malloc_and_copy((void**)ppDstNext, sizeof(XGL_LAYER_CREATE_INFO), pSrcNext);
+ glv_vk_malloc_and_copy((void**)ppDstNext, sizeof(VK_LAYER_CREATE_INFO), pSrcNext);
glv_vk_malloc_and_copy((void**)&(*ppDstNext)->ppActiveLayerNames, pSrcNext->layerCount * sizeof(char*), pSrcNext->ppActiveLayerNames);
for (i = 0; i < pSrcNext->layerCount; i++)
{
glv_vk_malloc_and_copy((void**)&(*ppDstNext)->ppActiveLayerNames[i], strlen(pSrcNext->ppActiveLayerNames[i]) + 1, pSrcNext->ppActiveLayerNames[i]);
}
- ppDstNext = (XGL_LAYER_CREATE_INFO**) &(*ppDstNext)->pNext;
+ ppDstNext = (VK_LAYER_CREATE_INFO**) &(*ppDstNext)->pNext;
}
- pSrcNext = (XGL_LAYER_CREATE_INFO*) pSrcNext->pNext;
+ pSrcNext = (VK_LAYER_CREATE_INFO*) pSrcNext->pNext;
}
}
return pDestCreateInfo;
}
-void glv_deepfree_xgl_device_create_info(XGL_DEVICE_CREATE_INFO* pCreateInfo)
+void glv_deepfree_VK_DEVICE_CREATE_INFO(VK_DEVICE_CREATE_INFO* pCreateInfo)
{
uint32_t i;
if (pCreateInfo->pRequestedQueues != NULL)
free((void*)pCreateInfo->ppEnabledExtensionNames);
}
- XGL_LAYER_CREATE_INFO *pSrcNext = (XGL_LAYER_CREATE_INFO*)pCreateInfo->pNext;
+ VK_LAYER_CREATE_INFO *pSrcNext = (VK_LAYER_CREATE_INFO*)pCreateInfo->pNext;
while (pSrcNext != NULL)
{
- XGL_LAYER_CREATE_INFO* pTmp = (XGL_LAYER_CREATE_INFO*)pSrcNext->pNext;
- if ((pSrcNext->sType == XGL_STRUCTURE_TYPE_LAYER_CREATE_INFO) && pSrcNext->layerCount > 0)
+ VK_LAYER_CREATE_INFO* pTmp = (VK_LAYER_CREATE_INFO*)pSrcNext->pNext;
+ if ((pSrcNext->sType == VK_STRUCTURE_TYPE_LAYER_CREATE_INFO) && pSrcNext->layerCount > 0)
{
for (i = 0; i < pSrcNext->layerCount; i++)
{
free(pCreateInfo);
}
-void glv_vk_snapshot_copy_createdevice_params(GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS* pDest, XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice)
+void glv_vk_snapshot_copy_createdevice_params(GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS* pDest, VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice)
{
pDest->gpu = gpu;
- pDest->pCreateInfo = glv_deepcopy_xgl_device_create_info(pCreateInfo);
+ pDest->pCreateInfo = glv_deepcopy_VK_DEVICE_CREATE_INFO(pCreateInfo);
- pDest->pDevice = (XGL_DEVICE*)malloc(sizeof(XGL_DEVICE));
+ pDest->pDevice = (VK_DEVICE*)malloc(sizeof(VK_DEVICE));
*pDest->pDevice = *pDevice;
}
void glv_vk_snapshot_destroy_createdevice_params(GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS* pSrc)
{
- memset(&pSrc->gpu, 0, sizeof(XGL_PHYSICAL_GPU));
+ memset(&pSrc->gpu, 0, sizeof(VK_PHYSICAL_GPU));
- glv_deepfree_xgl_device_create_info(pSrc->pCreateInfo);
+ glv_deepfree_VK_DEVICE_CREATE_INFO(pSrc->pCreateInfo);
pSrc->pCreateInfo = NULL;
free(pSrc->pDevice);
// add a new node to the global and object lists, then return it so the caller can populate the object information.
-static GLV_VK_SNAPSHOT_LL_NODE* snapshot_insert_object(GLV_VK_SNAPSHOT* pSnapshot, void* pObject, XGL_OBJECT_TYPE type)
+static GLV_VK_SNAPSHOT_LL_NODE* snapshot_insert_object(GLV_VK_SNAPSHOT* pSnapshot, void* pObject, VK_OBJECT_TYPE type)
{
// Create a new node
GLV_VK_SNAPSHOT_LL_NODE* pNewObjNode = (GLV_VK_SNAPSHOT_LL_NODE*)malloc(sizeof(GLV_VK_SNAPSHOT_LL_NODE));
}
// This is just a helper function to snapshot_remove_object(..). It is not intended for this to be called directly.
-static void snapshot_remove_obj_type(GLV_VK_SNAPSHOT* pSnapshot, void* pObj, XGL_OBJECT_TYPE objType) {
+static void snapshot_remove_obj_type(GLV_VK_SNAPSHOT* pSnapshot, void* pObj, VK_OBJECT_TYPE objType) {
GLV_VK_SNAPSHOT_LL_NODE *pTrav = pSnapshot->pObjectHead[objType];
GLV_VK_SNAPSHOT_LL_NODE *pPrev = pSnapshot->pObjectHead[objType];
while (pTrav) {
pTrav = pTrav->pNextObj;
}
char str[1024];
- sprintf(str, "OBJ INTERNAL ERROR : Obj %p was in global list but not in %s list", pObj, string_XGL_OBJECT_TYPE(objType));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_INTERNAL_ERROR, LAYER_ABBREV_STR, str);
+ sprintf(str, "OBJ INTERNAL ERROR : Obj %p was in global list but not in %s list", pObj, string_VK_OBJECT_TYPE(objType));
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_INTERNAL_ERROR, LAYER_ABBREV_STR, str);
}
// Search global list to find object,
// Object not found.
char str[1024];
sprintf(str, "Object %p was not found in the created object list. It should be added as a deleted object.", pObject);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pObject, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pObject, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
return NULL;
}
// Add a new deleted object node to the list
-static void snapshot_insert_deleted_object(GLV_VK_SNAPSHOT* pSnapshot, void* pObject, XGL_OBJECT_TYPE type)
+static void snapshot_insert_deleted_object(GLV_VK_SNAPSHOT* pSnapshot, void* pObject, VK_OBJECT_TYPE type)
{
// Create a new node
GLV_VK_SNAPSHOT_DELETED_OBJ_NODE* pNewObjNode = (GLV_VK_SNAPSHOT_DELETED_OBJ_NODE*)malloc(sizeof(GLV_VK_SNAPSHOT_DELETED_OBJ_NODE));
}
// Note: the parameters after pSnapshot match the order of vkCreateDevice(..)
-static void snapshot_insert_device(GLV_VK_SNAPSHOT* pSnapshot, XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice)
+static void snapshot_insert_device(GLV_VK_SNAPSHOT* pSnapshot, VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice)
{
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(pSnapshot, *pDevice, XGL_OBJECT_TYPE_DEVICE);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(pSnapshot, *pDevice, VK_OBJECT_TYPE_DEVICE);
pNode->obj.pStruct = malloc(sizeof(GLV_VK_SNAPSHOT_DEVICE_NODE));
GLV_VK_SNAPSHOT_DEVICE_NODE* pDevNode = (GLV_VK_SNAPSHOT_DEVICE_NODE*)pNode->obj.pStruct;
pSnapshot->deviceCount++;
}
-static void snapshot_remove_device(GLV_VK_SNAPSHOT* pSnapshot, XGL_DEVICE device)
+static void snapshot_remove_device(GLV_VK_SNAPSHOT* pSnapshot, VK_DEVICE device)
{
GLV_VK_SNAPSHOT_LL_NODE* pFoundObject = snapshot_remove_object(pSnapshot, device);
// If the code got here, then the device wasn't in the devices list.
// That means we should add this device to the deleted items list.
- snapshot_insert_deleted_object(&s_delta, device, XGL_OBJECT_TYPE_DEVICE);
+ snapshot_insert_deleted_object(&s_delta, device, VK_OBJECT_TYPE_DEVICE);
}
// Traverse global list and return type for given object
-static XGL_OBJECT_TYPE ll_get_obj_type(XGL_OBJECT object) {
+static VK_OBJECT_TYPE ll_get_obj_type(VK_OBJECT object) {
GLV_VK_SNAPSHOT_LL_NODE *pTrav = s_delta.pGlobalObjs;
while (pTrav) {
if (pTrav->obj.pVkObject == object)
}
char str[1024];
sprintf(str, "Attempting look-up on obj %p but it is NOT in the global list!", (void*)object);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, GLVSNAPSHOT_MISSING_OBJECT, LAYER_ABBREV_STR, str);
- return XGL_OBJECT_TYPE_UNKNOWN;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, GLVSNAPSHOT_MISSING_OBJECT, LAYER_ABBREV_STR, str);
+ return VK_OBJECT_TYPE_UNKNOWN;
}
-static void ll_increment_use_count(void* pObj, XGL_OBJECT_TYPE objType) {
+static void ll_increment_use_count(void* pObj, VK_OBJECT_TYPE objType) {
GLV_VK_SNAPSHOT_LL_NODE *pTrav = s_delta.pObjectHead[objType];
while (pTrav) {
if (pTrav->obj.pVkObject == pObj) {
// Instead, we need to make a list of referenced objects. When the delta is merged with a snapshot, we'll need
// to confirm that the referenced objects actually exist in the snapshot; otherwise I guess the merge should fail.
char str[1024];
- sprintf(str, "Unable to increment count for obj %p, will add to list as %s type and increment count", pObj, string_XGL_OBJECT_TYPE(objType));
- layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
+ sprintf(str, "Unable to increment count for obj %p, will add to list as %s type and increment count", pObj, string_VK_OBJECT_TYPE(objType));
+ layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
// ll_insert_obj(pObj, objType);
// ll_increment_use_count(pObj, objType);
}
// Set selected flag state for an object node
-static void set_status(void* pObj, XGL_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {
+static void set_status(void* pObj, VK_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {
if (pObj != NULL) {
GLV_VK_SNAPSHOT_LL_NODE *pTrav = s_delta.pObjectHead[objType];
while (pTrav) {
// If we do not find it print an error
char str[1024];
- sprintf(str, "Unable to set status for non-existent object %p of %s type", pObj, string_XGL_OBJECT_TYPE(objType));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
+ sprintf(str, "Unable to set status for non-existent object %p of %s type", pObj, string_VK_OBJECT_TYPE(objType));
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
}
}
// Track selected state for an object node
-static void track_object_status(void* pObj, XGL_STATE_BIND_POINT stateBindPoint) {
- GLV_VK_SNAPSHOT_LL_NODE *pTrav = s_delta.pObjectHead[XGL_OBJECT_TYPE_CMD_BUFFER];
+static void track_object_status(void* pObj, VK_STATE_BIND_POINT stateBindPoint) {
+ GLV_VK_SNAPSHOT_LL_NODE *pTrav = s_delta.pObjectHead[VK_OBJECT_TYPE_CMD_BUFFER];
while (pTrav) {
if (pTrav->obj.pVkObject == pObj) {
- if (stateBindPoint == XGL_STATE_BIND_VIEWPORT) {
+ if (stateBindPoint == VK_STATE_BIND_VIEWPORT) {
pTrav->obj.status |= OBJSTATUS_VIEWPORT_BOUND;
- } else if (stateBindPoint == XGL_STATE_BIND_RASTER) {
+ } else if (stateBindPoint == VK_STATE_BIND_RASTER) {
pTrav->obj.status |= OBJSTATUS_RASTER_BOUND;
- } else if (stateBindPoint == XGL_STATE_BIND_COLOR_BLEND) {
+ } else if (stateBindPoint == VK_STATE_BIND_COLOR_BLEND) {
pTrav->obj.status |= OBJSTATUS_COLOR_BLEND_BOUND;
- } else if (stateBindPoint == XGL_STATE_BIND_DEPTH_STENCIL) {
+ } else if (stateBindPoint == VK_STATE_BIND_DEPTH_STENCIL) {
pTrav->obj.status |= OBJSTATUS_DEPTH_STENCIL_BOUND;
}
return;
// If we do not find it print an error
char str[1024];
sprintf(str, "Unable to track status for non-existent Command Buffer object %p", pObj);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
}
// Reset selected flag state for an object node
-static void reset_status(void* pObj, XGL_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {
+static void reset_status(void* pObj, VK_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {
GLV_VK_SNAPSHOT_LL_NODE *pTrav = s_delta.pObjectHead[objType];
while (pTrav) {
if (pTrav->obj.pVkObject == pObj) {
// If we do not find it print an error
char str[1024];
- sprintf(str, "Unable to reset status for non-existent object %p of %s type", pObj, string_XGL_OBJECT_TYPE(objType));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
+ sprintf(str, "Unable to reset status for non-existent object %p of %s type", pObj, string_VK_OBJECT_TYPE(objType));
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
}
-#include "xgl_dispatch_table_helper.h"
+#include "vk_dispatch_table_helper.h"
static void initGlaveSnapshot(void)
{
const char *strOpt;
getLayerOptionEnum(LAYER_NAME_STR "ReportLevel", (uint32_t *) &g_reportingLevel);
g_actionIsDefault = getLayerOptionEnum(LAYER_NAME_STR "DebugAction", (uint32_t *) &g_debugAction);
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)
{
strOpt = getLayerOption(LAYER_NAME_STR "LogFilename");
if (strOpt)
g_logFile = stdout;
}
- xglGetProcAddrType fpNextGPA;
+ vkGetProcAddrType fpNextGPA;
fpNextGPA = pCurObj->pGPA;
assert(fpNextGPA);
- layer_initialize_dispatch_table(&nextTable, fpNextGPA, (XGL_PHYSICAL_GPU) pCurObj->nextObject);
+ layer_initialize_dispatch_table(&nextTable, fpNextGPA, (VK_PHYSICAL_GPU) pCurObj->nextObject);
if (!objLockInitialized)
{
// TODO/TBD: Need to delete this mutex sometime. How???
//=============================================================================
// vulkan entrypoints
//=============================================================================
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateInstance(const XGL_INSTANCE_CREATE_INFO* pCreateInfo, XGL_INSTANCE* pInstance)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateInstance(const VK_INSTANCE_CREATE_INFO* pCreateInfo, VK_INSTANCE* pInstance)
{
- XGL_RESULT result = nextTable.CreateInstance(pCreateInfo, pInstance);
+ VK_RESULT result = nextTable.CreateInstance(pCreateInfo, pInstance);
loader_platform_thread_lock_mutex(&objLock);
- snapshot_insert_object(&s_delta, *pInstance, XGL_OBJECT_TYPE_INSTANCE);
+ snapshot_insert_object(&s_delta, *pInstance, VK_OBJECT_TYPE_INSTANCE);
loader_platform_thread_unlock_mutex(&objLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyInstance(XGL_INSTANCE instance)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyInstance(VK_INSTANCE instance)
{
- XGL_RESULT result = nextTable.DestroyInstance(instance);
+ VK_RESULT result = nextTable.DestroyInstance(instance);
loader_platform_thread_lock_mutex(&objLock);
snapshot_remove_object(&s_delta, (void*)instance);
loader_platform_thread_unlock_mutex(&objLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateGpus(XGL_INSTANCE instance, uint32_t maxGpus, uint32_t* pGpuCount, XGL_PHYSICAL_GPU* pGpus)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateGpus(VK_INSTANCE instance, uint32_t maxGpus, uint32_t* pGpuCount, VK_PHYSICAL_GPU* pGpus)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)instance, XGL_OBJECT_TYPE_INSTANCE);
+ ll_increment_use_count((void*)instance, VK_OBJECT_TYPE_INSTANCE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.EnumerateGpus(instance, maxGpus, pGpuCount, pGpus);
+ VK_RESULT result = nextTable.EnumerateGpus(instance, maxGpus, pGpuCount, pGpus);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetGpuInfo(XGL_PHYSICAL_GPU gpu, XGL_PHYSICAL_GPU_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetGpuInfo(VK_PHYSICAL_GPU gpu, VK_PHYSICAL_GPU_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initGlaveSnapshot);
- XGL_RESULT result = nextTable.GetGpuInfo((XGL_PHYSICAL_GPU)gpuw->nextObject, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetGpuInfo((VK_PHYSICAL_GPU)gpuw->nextObject, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initGlaveSnapshot);
- XGL_RESULT result = nextTable.CreateDevice((XGL_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDevice((VK_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
snapshot_insert_device(&s_delta, gpu, pCreateInfo, pDevice);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyDevice(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyDevice(VK_DEVICE device)
{
- XGL_RESULT result = nextTable.DestroyDevice(device);
+ VK_RESULT result = nextTable.DestroyDevice(device);
loader_platform_thread_lock_mutex(&objLock);
snapshot_remove_device(&s_delta, device);
loader_platform_thread_unlock_mutex(&objLock);
GLV_VK_SNAPSHOT_LL_NODE *pTrav = s_delta.pGlobalObjs;
while (pTrav != NULL)
{
- if (pTrav->obj.objType == XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY)
+ if (pTrav->obj.objType == VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY)
{
GLV_VK_SNAPSHOT_LL_NODE *pDel = pTrav;
pTrav = pTrav->pNextGlobal;
snapshot_remove_object(&s_delta, (void*)(pDel->obj.pVkObject));
} else {
char str[1024];
- sprintf(str, "OBJ ERROR : %s object %p has not been destroyed (was used %lu times).", string_XGL_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pVkObject, pTrav->obj.numUses);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, device, 0, GLVSNAPSHOT_OBJECT_LEAK, LAYER_ABBREV_STR, str);
+ sprintf(str, "OBJ ERROR : %s object %p has not been destroyed (was used %lu times).", string_VK_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pVkObject, pTrav->obj.numUses);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, device, 0, GLVSNAPSHOT_OBJECT_LEAK, LAYER_ABBREV_STR, str);
pTrav = pTrav->pNextGlobal;
}
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char* pExtName)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char* pExtName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)gpu, XGL_OBJECT_TYPE_PHYSICAL_GPU);
+ ll_increment_use_count((void*)gpu, VK_OBJECT_TYPE_PHYSICAL_GPU);
loader_platform_thread_unlock_mutex(&objLock);
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initGlaveSnapshot);
- XGL_RESULT result = nextTable.GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);
+ VK_RESULT result = nextTable.GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
{
if (gpu != NULL) {
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)gpu, XGL_OBJECT_TYPE_PHYSICAL_GPU);
+ ll_increment_use_count((void*)gpu, VK_OBJECT_TYPE_PHYSICAL_GPU);
loader_platform_thread_unlock_mutex(&objLock);
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initGlaveSnapshot);
- XGL_RESULT result = nextTable.EnumerateLayers((XGL_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ VK_RESULT result = nextTable.EnumerateLayers((VK_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
return result;
} else {
if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
// This layer compatible with all GPUs
*pOutLayerCount = 1;
strncpy((char *) pOutLayers[0], LAYER_NAME_STR, maxStringSize);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetDeviceQueue(XGL_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, XGL_QUEUE* pQueue)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetDeviceQueue(VK_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, VK_QUEUE* pQueue)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.GetDeviceQueue(device, queueNodeIndex, queueIndex, pQueue);
+ VK_RESULT result = nextTable.GetDeviceQueue(device, queueNodeIndex, queueIndex, pQueue);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueSubmit(XGL_QUEUE queue, uint32_t cmdBufferCount, const XGL_CMD_BUFFER* pCmdBuffers, XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueSubmit(VK_QUEUE queue, uint32_t cmdBufferCount, const VK_CMD_BUFFER* pCmdBuffers, VK_FENCE fence)
{
- set_status((void*)fence, XGL_OBJECT_TYPE_FENCE, OBJSTATUS_FENCE_IS_SUBMITTED);
- XGL_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, fence);
+ set_status((void*)fence, VK_OBJECT_TYPE_FENCE, OBJSTATUS_FENCE_IS_SUBMITTED);
+ VK_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, fence);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueWaitIdle(XGL_QUEUE queue)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueWaitIdle(VK_QUEUE queue)
{
- XGL_RESULT result = nextTable.QueueWaitIdle(queue);
+ VK_RESULT result = nextTable.QueueWaitIdle(queue);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDeviceWaitIdle(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDeviceWaitIdle(VK_DEVICE device)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.DeviceWaitIdle(device);
+ VK_RESULT result = nextTable.DeviceWaitIdle(device);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglAllocMemory(XGL_DEVICE device, const XGL_MEMORY_ALLOC_INFO* pAllocInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkAllocMemory(VK_DEVICE device, const VK_MEMORY_ALLOC_INFO* pAllocInfo, VK_GPU_MEMORY* pMem)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.AllocMemory(device, pAllocInfo, pMem);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.AllocMemory(device, pAllocInfo, pMem);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pMem, XGL_OBJECT_TYPE_GPU_MEMORY);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pMem, VK_OBJECT_TYPE_GPU_MEMORY);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglFreeMemory(XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkFreeMemory(VK_GPU_MEMORY mem)
{
- XGL_RESULT result = nextTable.FreeMemory(mem);
+ VK_RESULT result = nextTable.FreeMemory(mem);
loader_platform_thread_lock_mutex(&objLock);
snapshot_remove_object(&s_delta, (void*)mem);
loader_platform_thread_unlock_mutex(&objLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglSetMemoryPriority(XGL_GPU_MEMORY mem, XGL_MEMORY_PRIORITY priority)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkSetMemoryPriority(VK_GPU_MEMORY mem, VK_MEMORY_PRIORITY priority)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)mem, XGL_OBJECT_TYPE_GPU_MEMORY);
+ ll_increment_use_count((void*)mem, VK_OBJECT_TYPE_GPU_MEMORY);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.SetMemoryPriority(mem, priority);
+ VK_RESULT result = nextTable.SetMemoryPriority(mem, priority);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglMapMemory(XGL_GPU_MEMORY mem, XGL_FLAGS flags, void** ppData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkMapMemory(VK_GPU_MEMORY mem, VK_FLAGS flags, void** ppData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)mem, XGL_OBJECT_TYPE_GPU_MEMORY);
+ ll_increment_use_count((void*)mem, VK_OBJECT_TYPE_GPU_MEMORY);
loader_platform_thread_unlock_mutex(&objLock);
- set_status((void*)mem, XGL_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);
- XGL_RESULT result = nextTable.MapMemory(mem, flags, ppData);
+ set_status((void*)mem, VK_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);
+ VK_RESULT result = nextTable.MapMemory(mem, flags, ppData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglUnmapMemory(XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkUnmapMemory(VK_GPU_MEMORY mem)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)mem, XGL_OBJECT_TYPE_GPU_MEMORY);
+ ll_increment_use_count((void*)mem, VK_OBJECT_TYPE_GPU_MEMORY);
loader_platform_thread_unlock_mutex(&objLock);
- reset_status((void*)mem, XGL_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);
- XGL_RESULT result = nextTable.UnmapMemory(mem);
+ reset_status((void*)mem, VK_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);
+ VK_RESULT result = nextTable.UnmapMemory(mem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglPinSystemMemory(XGL_DEVICE device, const void* pSysMem, size_t memSize, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkPinSystemMemory(VK_DEVICE device, const void* pSysMem, size_t memSize, VK_GPU_MEMORY* pMem)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.PinSystemMemory(device, pSysMem, memSize, pMem);
+ VK_RESULT result = nextTable.PinSystemMemory(device, pSysMem, memSize, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetMultiGpuCompatibility(XGL_PHYSICAL_GPU gpu0, XGL_PHYSICAL_GPU gpu1, XGL_GPU_COMPATIBILITY_INFO* pInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetMultiGpuCompatibility(VK_PHYSICAL_GPU gpu0, VK_PHYSICAL_GPU gpu1, VK_GPU_COMPATIBILITY_INFO* pInfo)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu0;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu0;
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)gpu0, XGL_OBJECT_TYPE_PHYSICAL_GPU);
+ ll_increment_use_count((void*)gpu0, VK_OBJECT_TYPE_PHYSICAL_GPU);
loader_platform_thread_unlock_mutex(&objLock);
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initGlaveSnapshot);
- XGL_RESULT result = nextTable.GetMultiGpuCompatibility((XGL_PHYSICAL_GPU)gpuw->nextObject, gpu1, pInfo);
+ VK_RESULT result = nextTable.GetMultiGpuCompatibility((VK_PHYSICAL_GPU)gpuw->nextObject, gpu1, pInfo);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenSharedMemory(XGL_DEVICE device, const XGL_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenSharedMemory(VK_DEVICE device, const VK_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.OpenSharedMemory(device, pOpenInfo, pMem);
+ VK_RESULT result = nextTable.OpenSharedMemory(device, pOpenInfo, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenSharedSemaphore(XGL_DEVICE device, const XGL_SEMAPHORE_OPEN_INFO* pOpenInfo, XGL_SEMAPHORE* pSemaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenSharedSemaphore(VK_DEVICE device, const VK_SEMAPHORE_OPEN_INFO* pOpenInfo, VK_SEMAPHORE* pSemaphore)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.OpenSharedSemaphore(device, pOpenInfo, pSemaphore);
+ VK_RESULT result = nextTable.OpenSharedSemaphore(device, pOpenInfo, pSemaphore);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenPeerMemory(XGL_DEVICE device, const XGL_PEER_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenPeerMemory(VK_DEVICE device, const VK_PEER_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.OpenPeerMemory(device, pOpenInfo, pMem);
+ VK_RESULT result = nextTable.OpenPeerMemory(device, pOpenInfo, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenPeerImage(XGL_DEVICE device, const XGL_PEER_IMAGE_OPEN_INFO* pOpenInfo, XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenPeerImage(VK_DEVICE device, const VK_PEER_IMAGE_OPEN_INFO* pOpenInfo, VK_IMAGE* pImage, VK_GPU_MEMORY* pMem)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.OpenPeerImage(device, pOpenInfo, pImage, pMem);
+ VK_RESULT result = nextTable.OpenPeerImage(device, pOpenInfo, pImage, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyObject(XGL_OBJECT object)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyObject(VK_OBJECT object)
{
- XGL_RESULT result = nextTable.DestroyObject(object);
+ VK_RESULT result = nextTable.DestroyObject(object);
loader_platform_thread_lock_mutex(&objLock);
snapshot_remove_object(&s_delta, (void*)object);
loader_platform_thread_unlock_mutex(&objLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetObjectInfo(XGL_BASE_OBJECT object, XGL_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetObjectInfo(VK_BASE_OBJECT object, VK_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
loader_platform_thread_lock_mutex(&objLock);
ll_increment_use_count((void*)object, ll_get_obj_type(object));
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.GetObjectInfo(object, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetObjectInfo(object, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBindObjectMemory(XGL_OBJECT object, uint32_t allocationIdx, XGL_GPU_MEMORY mem, XGL_GPU_SIZE offset)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBindObjectMemory(VK_OBJECT object, uint32_t allocationIdx, VK_GPU_MEMORY mem, VK_GPU_SIZE offset)
{
loader_platform_thread_lock_mutex(&objLock);
ll_increment_use_count((void*)object, ll_get_obj_type(object));
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.BindObjectMemory(object, allocationIdx, mem, offset);
+ VK_RESULT result = nextTable.BindObjectMemory(object, allocationIdx, mem, offset);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBindObjectMemoryRange(XGL_OBJECT object, uint32_t allocationIdx, XGL_GPU_SIZE rangeOffset, XGL_GPU_SIZE rangeSize, XGL_GPU_MEMORY mem, XGL_GPU_SIZE memOffset)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBindObjectMemoryRange(VK_OBJECT object, uint32_t allocationIdx, VK_GPU_SIZE rangeOffset, VK_GPU_SIZE rangeSize, VK_GPU_MEMORY mem, VK_GPU_SIZE memOffset)
{
loader_platform_thread_lock_mutex(&objLock);
ll_increment_use_count((void*)object, ll_get_obj_type(object));
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.BindObjectMemoryRange(object, allocationIdx, rangeOffset, rangeSize, mem, memOffset);
+ VK_RESULT result = nextTable.BindObjectMemoryRange(object, allocationIdx, rangeOffset, rangeSize, mem, memOffset);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBindImageMemoryRange(XGL_IMAGE image, uint32_t allocationIdx, const XGL_IMAGE_MEMORY_BIND_INFO* bindInfo, XGL_GPU_MEMORY mem, XGL_GPU_SIZE memOffset)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBindImageMemoryRange(VK_IMAGE image, uint32_t allocationIdx, const VK_IMAGE_MEMORY_BIND_INFO* bindInfo, VK_GPU_MEMORY mem, VK_GPU_SIZE memOffset)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)image, XGL_OBJECT_TYPE_IMAGE);
+ ll_increment_use_count((void*)image, VK_OBJECT_TYPE_IMAGE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.BindImageMemoryRange(image, allocationIdx, bindInfo, mem, memOffset);
+ VK_RESULT result = nextTable.BindImageMemoryRange(image, allocationIdx, bindInfo, mem, memOffset);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateFence(XGL_DEVICE device, const XGL_FENCE_CREATE_INFO* pCreateInfo, XGL_FENCE* pFence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateFence(VK_DEVICE device, const VK_FENCE_CREATE_INFO* pCreateInfo, VK_FENCE* pFence)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateFence(device, pCreateInfo, pFence);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateFence(device, pCreateInfo, pFence);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pFence, XGL_OBJECT_TYPE_FENCE);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pFence, VK_OBJECT_TYPE_FENCE);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetFenceStatus(XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetFenceStatus(VK_FENCE fence)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)fence, XGL_OBJECT_TYPE_FENCE);
+ ll_increment_use_count((void*)fence, VK_OBJECT_TYPE_FENCE);
loader_platform_thread_unlock_mutex(&objLock);
// Warn if submitted_flag is not set
- XGL_RESULT result = nextTable.GetFenceStatus(fence);
+ VK_RESULT result = nextTable.GetFenceStatus(fence);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWaitForFences(XGL_DEVICE device, uint32_t fenceCount, const XGL_FENCE* pFences, bool32_t waitAll, uint64_t timeout)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWaitForFences(VK_DEVICE device, uint32_t fenceCount, const VK_FENCE* pFences, bool32_t waitAll, uint64_t timeout)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.WaitForFences(device, fenceCount, pFences, waitAll, timeout);
+ VK_RESULT result = nextTable.WaitForFences(device, fenceCount, pFences, waitAll, timeout);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateSemaphore(XGL_DEVICE device, const XGL_SEMAPHORE_CREATE_INFO* pCreateInfo, XGL_SEMAPHORE* pSemaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateSemaphore(VK_DEVICE device, const VK_SEMAPHORE_CREATE_INFO* pCreateInfo, VK_SEMAPHORE* pSemaphore)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateSemaphore(device, pCreateInfo, pSemaphore);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateSemaphore(device, pCreateInfo, pSemaphore);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pSemaphore, XGL_OBJECT_TYPE_QUEUE_SEMAPHORE);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pSemaphore, VK_OBJECT_TYPE_QUEUE_SEMAPHORE);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueSignalSemaphore(XGL_QUEUE queue, XGL_SEMAPHORE semaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueSignalSemaphore(VK_QUEUE queue, VK_SEMAPHORE semaphore)
{
- XGL_RESULT result = nextTable.QueueSignalSemaphore(queue, semaphore);
+ VK_RESULT result = nextTable.QueueSignalSemaphore(queue, semaphore);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueWaitSemaphore(XGL_QUEUE queue, XGL_SEMAPHORE semaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueWaitSemaphore(VK_QUEUE queue, VK_SEMAPHORE semaphore)
{
- XGL_RESULT result = nextTable.QueueWaitSemaphore(queue, semaphore);
+ VK_RESULT result = nextTable.QueueWaitSemaphore(queue, semaphore);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateEvent(XGL_DEVICE device, const XGL_EVENT_CREATE_INFO* pCreateInfo, XGL_EVENT* pEvent)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateEvent(VK_DEVICE device, const VK_EVENT_CREATE_INFO* pCreateInfo, VK_EVENT* pEvent)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateEvent(device, pCreateInfo, pEvent);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateEvent(device, pCreateInfo, pEvent);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pEvent, XGL_OBJECT_TYPE_EVENT);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pEvent, VK_OBJECT_TYPE_EVENT);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetEventStatus(XGL_EVENT event)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetEventStatus(VK_EVENT event)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)event, XGL_OBJECT_TYPE_EVENT);
+ ll_increment_use_count((void*)event, VK_OBJECT_TYPE_EVENT);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.GetEventStatus(event);
+ VK_RESULT result = nextTable.GetEventStatus(event);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglSetEvent(XGL_EVENT event)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkSetEvent(VK_EVENT event)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)event, XGL_OBJECT_TYPE_EVENT);
+ ll_increment_use_count((void*)event, VK_OBJECT_TYPE_EVENT);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.SetEvent(event);
+ VK_RESULT result = nextTable.SetEvent(event);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetEvent(XGL_EVENT event)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetEvent(VK_EVENT event)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)event, XGL_OBJECT_TYPE_EVENT);
+ ll_increment_use_count((void*)event, VK_OBJECT_TYPE_EVENT);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.ResetEvent(event);
+ VK_RESULT result = nextTable.ResetEvent(event);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateQueryPool(XGL_DEVICE device, const XGL_QUERY_POOL_CREATE_INFO* pCreateInfo, XGL_QUERY_POOL* pQueryPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateQueryPool(VK_DEVICE device, const VK_QUERY_POOL_CREATE_INFO* pCreateInfo, VK_QUERY_POOL* pQueryPool)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateQueryPool(device, pCreateInfo, pQueryPool);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateQueryPool(device, pCreateInfo, pQueryPool);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pQueryPool, XGL_OBJECT_TYPE_QUERY_POOL);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pQueryPool, VK_OBJECT_TYPE_QUERY_POOL);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetQueryPoolResults(XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetQueryPoolResults(VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount, size_t* pDataSize, void* pData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)queryPool, XGL_OBJECT_TYPE_QUERY_POOL);
+ ll_increment_use_count((void*)queryPool, VK_OBJECT_TYPE_QUERY_POOL);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.GetQueryPoolResults(queryPool, startQuery, queryCount, pDataSize, pData);
+ VK_RESULT result = nextTable.GetQueryPoolResults(queryPool, startQuery, queryCount, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetFormatInfo(XGL_DEVICE device, XGL_FORMAT format, XGL_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetFormatInfo(VK_DEVICE device, VK_FORMAT format, VK_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.GetFormatInfo(device, format, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetFormatInfo(device, format, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateBuffer(XGL_DEVICE device, const XGL_BUFFER_CREATE_INFO* pCreateInfo, XGL_BUFFER* pBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateBuffer(VK_DEVICE device, const VK_BUFFER_CREATE_INFO* pCreateInfo, VK_BUFFER* pBuffer)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateBuffer(device, pCreateInfo, pBuffer);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateBuffer(device, pCreateInfo, pBuffer);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pBuffer, XGL_OBJECT_TYPE_BUFFER);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pBuffer, VK_OBJECT_TYPE_BUFFER);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateBufferView(XGL_DEVICE device, const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo, XGL_BUFFER_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateBufferView(VK_DEVICE device, const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo, VK_BUFFER_VIEW* pView)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, XGL_OBJECT_TYPE_BUFFER_VIEW);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, VK_OBJECT_TYPE_BUFFER_VIEW);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateImage(XGL_DEVICE device, const XGL_IMAGE_CREATE_INFO* pCreateInfo, XGL_IMAGE* pImage)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateImage(VK_DEVICE device, const VK_IMAGE_CREATE_INFO* pCreateInfo, VK_IMAGE* pImage)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateImage(device, pCreateInfo, pImage);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateImage(device, pCreateInfo, pImage);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pImage, XGL_OBJECT_TYPE_IMAGE);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pImage, VK_OBJECT_TYPE_IMAGE);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetImageSubresourceInfo(XGL_IMAGE image, const XGL_IMAGE_SUBRESOURCE* pSubresource, XGL_SUBRESOURCE_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetImageSubresourceInfo(VK_IMAGE image, const VK_IMAGE_SUBRESOURCE* pSubresource, VK_SUBRESOURCE_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)image, XGL_OBJECT_TYPE_IMAGE);
+ ll_increment_use_count((void*)image, VK_OBJECT_TYPE_IMAGE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.GetImageSubresourceInfo(image, pSubresource, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetImageSubresourceInfo(image, pSubresource, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateImageView(XGL_DEVICE device, const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo, XGL_IMAGE_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateImageView(VK_DEVICE device, const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo, VK_IMAGE_VIEW* pView)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, XGL_OBJECT_TYPE_IMAGE_VIEW);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, VK_OBJECT_TYPE_IMAGE_VIEW);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateColorAttachmentView(XGL_DEVICE device, const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo, XGL_COLOR_ATTACHMENT_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateColorAttachmentView(VK_DEVICE device, const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo, VK_COLOR_ATTACHMENT_VIEW* pView)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateColorAttachmentView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateColorAttachmentView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, XGL_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, VK_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDepthStencilView(XGL_DEVICE device, const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, XGL_DEPTH_STENCIL_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDepthStencilView(VK_DEVICE device, const VK_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, VK_DEPTH_STENCIL_VIEW* pView)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateDepthStencilView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDepthStencilView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, XGL_OBJECT_TYPE_DEPTH_STENCIL_VIEW);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, VK_OBJECT_TYPE_DEPTH_STENCIL_VIEW);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateShader(XGL_DEVICE device, const XGL_SHADER_CREATE_INFO* pCreateInfo, XGL_SHADER* pShader)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateShader(VK_DEVICE device, const VK_SHADER_CREATE_INFO* pCreateInfo, VK_SHADER* pShader)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateShader(device, pCreateInfo, pShader);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateShader(device, pCreateInfo, pShader);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pShader, XGL_OBJECT_TYPE_SHADER);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pShader, VK_OBJECT_TYPE_SHADER);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipeline(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipeline(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pPipeline, XGL_OBJECT_TYPE_PIPELINE);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pPipeline, VK_OBJECT_TYPE_PIPELINE);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateComputePipeline(XGL_DEVICE device, const XGL_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateComputePipeline(VK_DEVICE device, const VK_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateComputePipeline(device, pCreateInfo, pPipeline);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateComputePipeline(device, pCreateInfo, pPipeline);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pPipeline, XGL_OBJECT_TYPE_PIPELINE);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pPipeline, VK_OBJECT_TYPE_PIPELINE);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglStorePipeline(XGL_PIPELINE pipeline, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkStorePipeline(VK_PIPELINE pipeline, size_t* pDataSize, void* pData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)pipeline, XGL_OBJECT_TYPE_PIPELINE);
+ ll_increment_use_count((void*)pipeline, VK_OBJECT_TYPE_PIPELINE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.StorePipeline(pipeline, pDataSize, pData);
+ VK_RESULT result = nextTable.StorePipeline(pipeline, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglLoadPipeline(XGL_DEVICE device, size_t dataSize, const void* pData, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkLoadPipeline(VK_DEVICE device, size_t dataSize, const void* pData, VK_PIPELINE* pPipeline)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.LoadPipeline(device, dataSize, pData, pPipeline);
+ VK_RESULT result = nextTable.LoadPipeline(device, dataSize, pData, pPipeline);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateSampler(XGL_DEVICE device, const XGL_SAMPLER_CREATE_INFO* pCreateInfo, XGL_SAMPLER* pSampler)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateSampler(VK_DEVICE device, const VK_SAMPLER_CREATE_INFO* pCreateInfo, VK_SAMPLER* pSampler)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pSampler, XGL_OBJECT_TYPE_SAMPLER);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pSampler, VK_OBJECT_TYPE_SAMPLER);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorSetLayout( XGL_DEVICE device, const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_SET_LAYOUT* pSetLayout)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDescriptorSetLayout( VK_DEVICE device, const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_SET_LAYOUT* pSetLayout)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateDescriptorSetLayout(device, pCreateInfo, pSetLayout);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDescriptorSetLayout(device, pCreateInfo, pSetLayout);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pSetLayout, XGL_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pSetLayout, VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBeginDescriptorPoolUpdate(XGL_DEVICE device, XGL_DESCRIPTOR_UPDATE_MODE updateMode)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBeginDescriptorPoolUpdate(VK_DEVICE device, VK_DESCRIPTOR_UPDATE_MODE updateMode)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.BeginDescriptorPoolUpdate(device, updateMode);
+ VK_RESULT result = nextTable.BeginDescriptorPoolUpdate(device, updateMode);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEndDescriptorPoolUpdate(XGL_DEVICE device, XGL_CMD_BUFFER cmd)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEndDescriptorPoolUpdate(VK_DEVICE device, VK_CMD_BUFFER cmd)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.EndDescriptorPoolUpdate(device, cmd);
+ VK_RESULT result = nextTable.EndDescriptorPoolUpdate(device, cmd);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorPool(XGL_DEVICE device, XGL_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const XGL_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_POOL* pDescriptorPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDescriptorPool(VK_DEVICE device, VK_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const VK_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_POOL* pDescriptorPool)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateDescriptorPool(device, poolUsage, maxSets, pCreateInfo, pDescriptorPool);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDescriptorPool(device, poolUsage, maxSets, pCreateInfo, pDescriptorPool);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pDescriptorPool, XGL_OBJECT_TYPE_DESCRIPTOR_POOL);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pDescriptorPool, VK_OBJECT_TYPE_DESCRIPTOR_POOL);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetDescriptorPool(XGL_DESCRIPTOR_POOL descriptorPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetDescriptorPool(VK_DESCRIPTOR_POOL descriptorPool)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)descriptorPool, XGL_OBJECT_TYPE_DESCRIPTOR_POOL);
+ ll_increment_use_count((void*)descriptorPool, VK_OBJECT_TYPE_DESCRIPTOR_POOL);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.ResetDescriptorPool(descriptorPool);
+ VK_RESULT result = nextTable.ResetDescriptorPool(descriptorPool);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglAllocDescriptorSets(XGL_DESCRIPTOR_POOL descriptorPool, XGL_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayouts, XGL_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkAllocDescriptorSets(VK_DESCRIPTOR_POOL descriptorPool, VK_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const VK_DESCRIPTOR_SET_LAYOUT* pSetLayouts, VK_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)descriptorPool, XGL_OBJECT_TYPE_DESCRIPTOR_POOL);
+ ll_increment_use_count((void*)descriptorPool, VK_OBJECT_TYPE_DESCRIPTOR_POOL);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.AllocDescriptorSets(descriptorPool, setUsage, count, pSetLayouts, pDescriptorSets, pCount);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.AllocDescriptorSets(descriptorPool, setUsage, count, pSetLayouts, pDescriptorSets, pCount);
+ if (result == VK_SUCCESS)
{
for (uint32_t i = 0; i < *pCount; i++) {
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, pDescriptorSets[i], XGL_OBJECT_TYPE_DESCRIPTOR_SET);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, pDescriptorSets[i], VK_OBJECT_TYPE_DESCRIPTOR_SET);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
        }
    }
    return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglClearDescriptorSets(XGL_DESCRIPTOR_POOL descriptorPool, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets)
+VK_LAYER_EXPORT void VKAPI vkClearDescriptorSets(VK_DESCRIPTOR_POOL descriptorPool, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)descriptorPool, XGL_OBJECT_TYPE_DESCRIPTOR_POOL);
+ ll_increment_use_count((void*)descriptorPool, VK_OBJECT_TYPE_DESCRIPTOR_POOL);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.ClearDescriptorSets(descriptorPool, count, pDescriptorSets);
}
-XGL_LAYER_EXPORT void XGLAPI xglUpdateDescriptors(XGL_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray)
+VK_LAYER_EXPORT void VKAPI vkUpdateDescriptors(VK_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)descriptorSet, XGL_OBJECT_TYPE_DESCRIPTOR_SET);
+ ll_increment_use_count((void*)descriptorSet, VK_OBJECT_TYPE_DESCRIPTOR_SET);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.UpdateDescriptors(descriptorSet, updateCount, ppUpdateArray);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicViewportState(XGL_DEVICE device, const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_VP_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicViewportState(VK_DEVICE device, const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_VP_STATE_OBJECT* pState)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, XGL_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, VK_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicRasterState(XGL_DEVICE device, const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_RS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicRasterState(VK_DEVICE device, const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_RS_STATE_OBJECT* pState)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, XGL_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, VK_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicColorBlendState(XGL_DEVICE device, const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_CB_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicColorBlendState(VK_DEVICE device, const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_CB_STATE_OBJECT* pState)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, XGL_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, VK_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicDepthStencilState(XGL_DEVICE device, const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_DS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicDepthStencilState(VK_DEVICE device, const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_DS_STATE_OBJECT* pState)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, XGL_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, VK_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateCommandBuffer(XGL_DEVICE device, const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo, XGL_CMD_BUFFER* pCmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateCommandBuffer(VK_DEVICE device, const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo, VK_CMD_BUFFER* pCmdBuffer)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pCmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pCmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBeginCommandBuffer(XGL_CMD_BUFFER cmdBuffer, const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBeginCommandBuffer(VK_CMD_BUFFER cmdBuffer, const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
+ VK_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEndCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEndCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
- reset_status((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER, (OBJSTATUS_VIEWPORT_BOUND |
+ reset_status((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER, (OBJSTATUS_VIEWPORT_BOUND |
OBJSTATUS_RASTER_BOUND |
OBJSTATUS_COLOR_BLEND_BOUND |
OBJSTATUS_DEPTH_STENCIL_BOUND));
- XGL_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
+ VK_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
+ VK_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindPipeline(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_PIPELINE pipeline)
+VK_LAYER_EXPORT void VKAPI vkCmdBindPipeline(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_PIPELINE pipeline)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdBindPipeline(cmdBuffer, pipelineBindPoint, pipeline);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDynamicStateObject(XGL_CMD_BUFFER cmdBuffer, XGL_STATE_BIND_POINT stateBindPoint, XGL_DYNAMIC_STATE_OBJECT state)
+VK_LAYER_EXPORT void VKAPI vkCmdBindDynamicStateObject(VK_CMD_BUFFER cmdBuffer, VK_STATE_BIND_POINT stateBindPoint, VK_DYNAMIC_STATE_OBJECT state)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
track_object_status((void*)cmdBuffer, stateBindPoint);
nextTable.CmdBindDynamicStateObject(cmdBuffer, stateBindPoint, state);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDescriptorSets(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData)
+VK_LAYER_EXPORT void VKAPI vkCmdBindDescriptorSets(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdBindDescriptorSets(cmdBuffer, pipelineBindPoint, layoutChain, layoutChainSlot, count, pDescriptorSets, pUserData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindVertexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t binding)
+VK_LAYER_EXPORT void VKAPI vkCmdBindVertexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t binding)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdBindVertexBuffer(cmdBuffer, buffer, offset, binding);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindIndexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, XGL_INDEX_TYPE indexType)
+VK_LAYER_EXPORT void VKAPI vkCmdBindIndexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, VK_INDEX_TYPE indexType)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdBindIndexBuffer(cmdBuffer, buffer, offset, indexType);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDraw(XGL_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount)
+VK_LAYER_EXPORT void VKAPI vkCmdDraw(VK_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDraw(cmdBuffer, firstVertex, vertexCount, firstInstance, instanceCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndexed(XGL_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndexed(VK_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDrawIndexed(cmdBuffer, firstIndex, indexCount, vertexOffset, firstInstance, instanceCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDrawIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndexedIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndexedIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDrawIndexedIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDispatch(XGL_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z)
+VK_LAYER_EXPORT void VKAPI vkCmdDispatch(VK_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDispatch(cmdBuffer, x, y, z);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDispatchIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset)
+VK_LAYER_EXPORT void VKAPI vkCmdDispatchIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDispatchIndirect(cmdBuffer, buffer, offset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_BUFFER destBuffer, uint32_t regionCount, const XGL_BUFFER_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_BUFFER destBuffer, uint32_t regionCount, const VK_BUFFER_COPY* pRegions)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdCopyBuffer(cmdBuffer, srcBuffer, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImage(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImage(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_IMAGE_COPY* pRegions)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdCopyImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBufferToImage(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBufferToImage(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdCopyBufferToImage(cmdBuffer, srcBuffer, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImageToBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_BUFFER destBuffer, uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImageToBuffer(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_BUFFER destBuffer, uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdCopyImageToBuffer(cmdBuffer, srcImage, srcImageLayout, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCloneImageData(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout)
+VK_LAYER_EXPORT void VKAPI vkCmdCloneImageData(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdCloneImageData(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdUpdateBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE dataSize, const uint32_t* pData)
+VK_LAYER_EXPORT void VKAPI vkCmdUpdateBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE dataSize, const uint32_t* pData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdUpdateBuffer(cmdBuffer, destBuffer, destOffset, dataSize, pData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdFillBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE fillSize, uint32_t data)
+VK_LAYER_EXPORT void VKAPI vkCmdFillBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE fillSize, uint32_t data)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdFillBuffer(cmdBuffer, destBuffer, destOffset, fillSize, data);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearColorImage(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout, XGL_CLEAR_COLOR color, uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+VK_LAYER_EXPORT void VKAPI vkCmdClearColorImage(VK_CMD_BUFFER cmdBuffer, VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout, VK_CLEAR_COLOR color, uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdClearColorImage(cmdBuffer, image, imageLayout, color, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearDepthStencil(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout, float depth, uint32_t stencil, uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+VK_LAYER_EXPORT void VKAPI vkCmdClearDepthStencil(VK_CMD_BUFFER cmdBuffer, VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout, float depth, uint32_t stencil, uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdClearDepthStencil(cmdBuffer, image, imageLayout, depth, stencil, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResolveImage(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t rectCount, const XGL_IMAGE_RESOLVE* pRects)
+VK_LAYER_EXPORT void VKAPI vkCmdResolveImage(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t rectCount, const VK_IMAGE_RESOLVE* pRects)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdResolveImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, rectCount, pRects);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdSetEvent(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent)
+VK_LAYER_EXPORT void VKAPI vkCmdSetEvent(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdSetEvent(cmdBuffer, event, pipeEvent);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResetEvent(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent)
+VK_LAYER_EXPORT void VKAPI vkCmdResetEvent(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdResetEvent(cmdBuffer, event, pipeEvent);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdWaitEvents(XGL_CMD_BUFFER cmdBuffer, const XGL_EVENT_WAIT_INFO* pWaitInfo)
+VK_LAYER_EXPORT void VKAPI vkCmdWaitEvents(VK_CMD_BUFFER cmdBuffer, const VK_EVENT_WAIT_INFO* pWaitInfo)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdWaitEvents(cmdBuffer, pWaitInfo);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdPipelineBarrier(XGL_CMD_BUFFER cmdBuffer, const XGL_PIPELINE_BARRIER* pBarrier)
+VK_LAYER_EXPORT void VKAPI vkCmdPipelineBarrier(VK_CMD_BUFFER cmdBuffer, const VK_PIPELINE_BARRIER* pBarrier)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdPipelineBarrier(cmdBuffer, pBarrier);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBeginQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot, XGL_FLAGS flags)
+VK_LAYER_EXPORT void VKAPI vkCmdBeginQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot, VK_FLAGS flags)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdBeginQuery(cmdBuffer, queryPool, slot, flags);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdEndQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot)
+VK_LAYER_EXPORT void VKAPI vkCmdEndQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdEndQuery(cmdBuffer, queryPool, slot);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResetQueryPool(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
+VK_LAYER_EXPORT void VKAPI vkCmdResetQueryPool(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdResetQueryPool(cmdBuffer, queryPool, startQuery, queryCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdWriteTimestamp(XGL_CMD_BUFFER cmdBuffer, XGL_TIMESTAMP_TYPE timestampType, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdWriteTimestamp(VK_CMD_BUFFER cmdBuffer, VK_TIMESTAMP_TYPE timestampType, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdWriteTimestamp(cmdBuffer, timestampType, destBuffer, destOffset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdInitAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData)
+VK_LAYER_EXPORT void VKAPI vkCmdInitAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdInitAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, pData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdLoadAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER srcBuffer, XGL_GPU_SIZE srcOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdLoadAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER srcBuffer, VK_GPU_SIZE srcOffset)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdLoadAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, srcBuffer, srcOffset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdSaveAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdSaveAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdSaveAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, destBuffer, destOffset);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateFramebuffer(XGL_DEVICE device, const XGL_FRAMEBUFFER_CREATE_INFO* pCreateInfo, XGL_FRAMEBUFFER* pFramebuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateFramebuffer(VK_DEVICE device, const VK_FRAMEBUFFER_CREATE_INFO* pCreateInfo, VK_FRAMEBUFFER* pFramebuffer)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateFramebuffer(device, pCreateInfo, pFramebuffer);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateFramebuffer(device, pCreateInfo, pFramebuffer);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pFramebuffer, XGL_OBJECT_TYPE_FRAMEBUFFER);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pFramebuffer, VK_OBJECT_TYPE_FRAMEBUFFER);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateRenderPass(XGL_DEVICE device, const XGL_RENDER_PASS_CREATE_INFO* pCreateInfo, XGL_RENDER_PASS* pRenderPass)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateRenderPass(VK_DEVICE device, const VK_RENDER_PASS_CREATE_INFO* pCreateInfo, VK_RENDER_PASS* pRenderPass)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateRenderPass(device, pCreateInfo, pRenderPass);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateRenderPass(device, pCreateInfo, pRenderPass);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pRenderPass, XGL_OBJECT_TYPE_RENDER_PASS);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pRenderPass, VK_OBJECT_TYPE_RENDER_PASS);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBeginRenderPass(XGL_CMD_BUFFER cmdBuffer, const XGL_RENDER_PASS_BEGIN *pRenderPassBegin)
+VK_LAYER_EXPORT void VKAPI vkCmdBeginRenderPass(VK_CMD_BUFFER cmdBuffer, const VK_RENDER_PASS_BEGIN *pRenderPassBegin)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdBeginRenderPass(cmdBuffer, pRenderPassBegin);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdEndRenderPass(XGL_CMD_BUFFER cmdBuffer, XGL_RENDER_PASS renderPass)
+VK_LAYER_EXPORT void VKAPI vkCmdEndRenderPass(VK_CMD_BUFFER cmdBuffer, VK_RENDER_PASS renderPass)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdEndRenderPass(cmdBuffer, renderPass);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetValidationLevel(XGL_DEVICE device, XGL_VALIDATION_LEVEL validationLevel)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetValidationLevel(VK_DEVICE device, VK_VALIDATION_LEVEL validationLevel)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.DbgSetValidationLevel(device, validationLevel);
+ VK_RESULT result = nextTable.DbgSetValidationLevel(device, validationLevel);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
{
// This layer intercepts callbacks
- XGL_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (XGL_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(XGL_LAYER_DBG_FUNCTION_NODE));
+ VK_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (VK_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(VK_LAYER_DBG_FUNCTION_NODE));
if (!pNewDbgFuncNode)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
pNewDbgFuncNode->pfnMsgCallback = pfnMsgCallback;
pNewDbgFuncNode->pUserData = pUserData;
pNewDbgFuncNode->pNext = g_pDbgFunctionHead;
g_pDbgFunctionHead = pNewDbgFuncNode;
// force callbacks if DebugAction hasn't been set already other than initial value
if (g_actionIsDefault) {
- g_debugAction = XGL_DBG_LAYER_ACTION_CALLBACK;
+ g_debugAction = VK_DBG_LAYER_ACTION_CALLBACK;
}
- XGL_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
+ VK_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
{
- XGL_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
- XGL_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;
+ VK_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
+ VK_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;
while (pTrav) {
if (pTrav->pfnMsgCallback == pfnMsgCallback) {
pPrev->pNext = pTrav->pNext;
if (g_pDbgFunctionHead == pTrav)
g_pDbgFunctionHead = pTrav->pNext;
free(pTrav);
break;
}
pPrev = pTrav;
pTrav = pTrav->pNext;
}
if (g_pDbgFunctionHead == NULL)
{
if (g_actionIsDefault)
- g_debugAction = XGL_DBG_LAYER_ACTION_LOG_MSG;
+ g_debugAction = VK_DBG_LAYER_ACTION_LOG_MSG;
else
- g_debugAction &= ~XGL_DBG_LAYER_ACTION_CALLBACK;
+ g_debugAction &= ~VK_DBG_LAYER_ACTION_CALLBACK;
}
- XGL_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
+ VK_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetMessageFilter(XGL_DEVICE device, int32_t msgCode, XGL_DBG_MSG_FILTER filter)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetMessageFilter(VK_DEVICE device, int32_t msgCode, VK_DBG_MSG_FILTER filter)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.DbgSetMessageFilter(device, msgCode, filter);
+ VK_RESULT result = nextTable.DbgSetMessageFilter(device, msgCode, filter);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetObjectTag(XGL_BASE_OBJECT object, size_t tagSize, const void* pTag)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetObjectTag(VK_BASE_OBJECT object, size_t tagSize, const void* pTag)
{
loader_platform_thread_lock_mutex(&objLock);
ll_increment_use_count((void*)object, ll_get_obj_type(object));
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.DbgSetObjectTag(object, tagSize, pTag);
+ VK_RESULT result = nextTable.DbgSetObjectTag(object, tagSize, pTag);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetGlobalOption(XGL_INSTANCE instance, XGL_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetGlobalOption(VK_INSTANCE instance, VK_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData)
{
- XGL_RESULT result = nextTable.DbgSetGlobalOption(instance, dbgOption, dataSize, pData);
+ VK_RESULT result = nextTable.DbgSetGlobalOption(instance, dbgOption, dataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetDeviceOption(XGL_DEVICE device, XGL_DBG_DEVICE_OPTION dbgOption, size_t dataSize, const void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetDeviceOption(VK_DEVICE device, VK_DBG_DEVICE_OPTION dbgOption, size_t dataSize, const void* pData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.DbgSetDeviceOption(device, dbgOption, dataSize, pData);
+ VK_RESULT result = nextTable.DbgSetDeviceOption(device, dbgOption, dataSize, pData);
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDbgMarkerBegin(XGL_CMD_BUFFER cmdBuffer, const char* pMarker)
+VK_LAYER_EXPORT void VKAPI vkCmdDbgMarkerBegin(VK_CMD_BUFFER cmdBuffer, const char* pMarker)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDbgMarkerBegin(cmdBuffer, pMarker);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDbgMarkerEnd(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT void VKAPI vkCmdDbgMarkerEnd(VK_CMD_BUFFER cmdBuffer)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDbgMarkerEnd(cmdBuffer);
}
#if defined(__linux__) || defined(XCB_NVIDIA)
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11AssociateConnection(XGL_PHYSICAL_GPU gpu, const XGL_WSI_X11_CONNECTION_INFO* pConnectionInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11AssociateConnection(VK_PHYSICAL_GPU gpu, const VK_WSI_X11_CONNECTION_INFO* pConnectionInfo)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)gpu, XGL_OBJECT_TYPE_PHYSICAL_GPU);
+ ll_increment_use_count((void*)gpu, VK_OBJECT_TYPE_PHYSICAL_GPU);
loader_platform_thread_unlock_mutex(&objLock);
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initGlaveSnapshot);
- XGL_RESULT result = nextTable.WsiX11AssociateConnection((XGL_PHYSICAL_GPU)gpuw->nextObject, pConnectionInfo);
+ VK_RESULT result = nextTable.WsiX11AssociateConnection((VK_PHYSICAL_GPU)gpuw->nextObject, pConnectionInfo);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11GetMSC(XGL_DEVICE device, xcb_window_t window, xcb_randr_crtc_t crtc, uint64_t* pMsc)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11GetMSC(VK_DEVICE device, xcb_window_t window, xcb_randr_crtc_t crtc, uint64_t* pMsc)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.WsiX11GetMSC(device, window, crtc, pMsc);
+ VK_RESULT result = nextTable.WsiX11GetMSC(device, window, crtc, pMsc);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11CreatePresentableImage(XGL_DEVICE device, const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo, XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11CreatePresentableImage(VK_DEVICE device, const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo, VK_IMAGE* pImage, VK_GPU_MEMORY* pMem)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.WsiX11CreatePresentableImage(device, pCreateInfo, pImage, pMem);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.WsiX11CreatePresentableImage(device, pCreateInfo, pImage, pMem);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pImage, XGL_OBJECT_TYPE_IMAGE);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pImage, VK_OBJECT_TYPE_IMAGE);
pNode->obj.pStruct = NULL;
- GLV_VK_SNAPSHOT_LL_NODE* pMemNode = snapshot_insert_object(&s_delta, *pMem, XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY);
+ GLV_VK_SNAPSHOT_LL_NODE* pMemNode = snapshot_insert_object(&s_delta, *pMem, VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY);
pMemNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11QueuePresent(XGL_QUEUE queue, const XGL_WSI_X11_PRESENT_INFO* pPresentInfo, XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11QueuePresent(VK_QUEUE queue, const VK_WSI_X11_PRESENT_INFO* pPresentInfo, VK_FENCE fence)
{
- XGL_RESULT result = nextTable.WsiX11QueuePresent(queue, pPresentInfo, fence);
+ VK_RESULT result = nextTable.WsiX11QueuePresent(queue, pPresentInfo, fence);
return result;
}
char str[2048];
GLV_VK_SNAPSHOT_LL_NODE* pTrav = s_delta.pGlobalObjs;
sprintf(str, "==== DELTA SNAPSHOT contains %lu objects, %lu devices, and %lu deleted objects", s_delta.globalObjCount, s_delta.deviceCount, s_delta.deltaDeletedObjectCount);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
// print all objects
if (s_delta.globalObjCount > 0)
{
sprintf(str, "======== DELTA SNAPSHOT Created Objects:");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pTrav->obj.pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pTrav->obj.pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
while (pTrav != NULL)
{
- sprintf(str, "\t%s obj %p", string_XGL_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pVkObject);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pTrav->obj.pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
+ sprintf(str, "\t%s obj %p", string_VK_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pVkObject);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pTrav->obj.pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
pTrav = pTrav->pNextGlobal;
}
}
{
GLV_VK_SNAPSHOT_LL_NODE* pDeviceNode = s_delta.pDevices;
sprintf(str, "======== DELTA SNAPSHOT Devices:");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
while (pDeviceNode != NULL)
{
GLV_VK_SNAPSHOT_DEVICE_NODE* pDev = (GLV_VK_SNAPSHOT_DEVICE_NODE*)pDeviceNode->obj.pStruct;
- char * createInfoStr = xgl_print_xgl_device_create_info(pDev->params.pCreateInfo, "\t\t");
- sprintf(str, "\t%s obj %p:\n%s", string_XGL_OBJECT_TYPE(XGL_OBJECT_TYPE_DEVICE), pDeviceNode->obj.pVkObject, createInfoStr);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pDeviceNode->obj.pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
+ char * createInfoStr = vk_print_vk_device_create_info(pDev->params.pCreateInfo, "\t\t");
+ sprintf(str, "\t%s obj %p:\n%s", string_VK_OBJECT_TYPE(VK_OBJECT_TYPE_DEVICE), pDeviceNode->obj.pVkObject, createInfoStr);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pDeviceNode->obj.pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
pDeviceNode = pDeviceNode->pNextObj;
}
}
{
GLV_VK_SNAPSHOT_DELETED_OBJ_NODE* pDelObjNode = s_delta.pDeltaDeletedObjects;
sprintf(str, "======== DELTA SNAPSHOT Deleted Objects:");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
while (pDelObjNode != NULL)
{
- sprintf(str, " %s obj %p", string_XGL_OBJECT_TYPE(pDelObjNode->objType), pDelObjNode->pVkObject);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pDelObjNode->pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
+ sprintf(str, " %s obj %p", string_VK_OBJECT_TYPE(pDelObjNode->objType), pDelObjNode->pVkObject);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pDelObjNode->pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
pDelObjNode = pDelObjNode->pNextObj;
}
}
//=============================================================================
// Old Exported methods
//=============================================================================
-uint64_t glvSnapshotGetObjectCount(XGL_OBJECT_TYPE type)
+uint64_t glvSnapshotGetObjectCount(VK_OBJECT_TYPE type)
{
- uint64_t retVal = (type == XGL_OBJECT_TYPE_ANY) ? s_delta.globalObjCount : s_delta.numObjs[type];
+ uint64_t retVal = (type == VK_OBJECT_TYPE_ANY) ? s_delta.globalObjCount : s_delta.numObjs[type];
return retVal;
}
-XGL_RESULT glvSnapshotGetObjects(XGL_OBJECT_TYPE type, uint64_t objCount, GLV_VK_SNAPSHOT_OBJECT_NODE *pObjNodeArray)
+VK_RESULT glvSnapshotGetObjects(VK_OBJECT_TYPE type, uint64_t objCount, GLV_VK_SNAPSHOT_OBJECT_NODE *pObjNodeArray)
{
    // This bool indicates whether we're pulling all objs or just a single class of objs
- bool32_t bAllObjs = (type == XGL_OBJECT_TYPE_ANY);
+ bool32_t bAllObjs = (type == VK_OBJECT_TYPE_ANY);
// Check the count first thing
uint64_t maxObjCount = (bAllObjs) ? s_delta.globalObjCount : s_delta.numObjs[type];
if (objCount > maxObjCount) {
char str[1024];
- sprintf(str, "OBJ ERROR : Received objTrackGetObjects() request for %lu objs, but there are only %lu objs of type %s", objCount, maxObjCount, string_XGL_OBJECT_TYPE(type));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, 0, 0, GLVSNAPSHOT_OBJCOUNT_MAX_EXCEEDED, LAYER_ABBREV_STR, str);
- return XGL_ERROR_INVALID_VALUE;
+ sprintf(str, "OBJ ERROR : Received objTrackGetObjects() request for %lu objs, but there are only %lu objs of type %s", objCount, maxObjCount, string_VK_OBJECT_TYPE(type));
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, 0, 0, GLVSNAPSHOT_OBJCOUNT_MAX_EXCEEDED, LAYER_ABBREV_STR, str);
+ return VK_ERROR_INVALID_VALUE;
}
GLV_VK_SNAPSHOT_LL_NODE* pTrav = (bAllObjs) ? s_delta.pGlobalObjs : s_delta.pObjectHead[type];
for (uint64_t i = 0; i < objCount; i++) {
if (!pTrav) {
char str[1024];
- sprintf(str, "OBJ INTERNAL ERROR : Ran out of %s objs! Should have %lu, but only copied %lu and not the requested %lu.", string_XGL_OBJECT_TYPE(type), maxObjCount, i, objCount);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, 0, 0, GLVSNAPSHOT_INTERNAL_ERROR, LAYER_ABBREV_STR, str);
- return XGL_ERROR_UNKNOWN;
+ sprintf(str, "OBJ INTERNAL ERROR : Ran out of %s objs! Should have %lu, but only copied %lu and not the requested %lu.", string_VK_OBJECT_TYPE(type), maxObjCount, i, objCount);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, 0, 0, GLVSNAPSHOT_INTERNAL_ERROR, LAYER_ABBREV_STR, str);
+ return VK_ERROR_UNKNOWN;
}
memcpy(&pObjNodeArray[i], pTrav, sizeof(GLV_VK_SNAPSHOT_OBJECT_NODE));
pTrav = (bAllObjs) ? pTrav->pNextGlobal : pTrav->pNextObj;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void glvSnapshotPrintObjects(void)
{
glvSnapshotPrintDelta();
}
-#include "xgl_generic_intercept_proc_helper.h"
-XGL_LAYER_EXPORT void* XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char* funcName)
+#include "vk_generic_intercept_proc_helper.h"
+VK_LAYER_EXPORT void* VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char* funcName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
void* addr;
if (gpu == NULL)
return NULL;
else {
if (gpuw->pGPA == NULL)
return NULL;
- return gpuw->pGPA((XGL_PHYSICAL_GPU)gpuw->nextObject, funcName);
+ return gpuw->pGPA((VK_PHYSICAL_GPU)gpuw->nextObject, funcName);
}
}
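vkGetProcAddr above implements the layer-chaining rule: hand back a local intercept when this layer wraps the function, otherwise forward the query to the next object's pGPA. An illustrative stand-alone sketch of that decision (the token pointers stand in for real function addresses; none of these names are the loader's own):

```c
#include <string.h>
#include <stddef.h>

typedef void* (*GetProcFn)(const char* name);

static char interceptToken;   /* stands in for this layer's intercept */

/* Illustrative downstream resolver that pretends to know every name. */
static char downstreamToken;
static void* downstreamGPA(const char* name) { (void)name; return &downstreamToken; }

/* Return our intercept for names we wrap; otherwise defer down the chain,
 * mirroring the gpuw->pGPA(...) forwarding in the function above. */
static void* layerGetProcAddr(const char* name, GetProcFn next)
{
    if (strcmp(name, "vkQueueSubmit") == 0)
        return &interceptToken;
    return (next != NULL) ? next(name) : NULL;
}
```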
} OBJECT_STATUS;
// Object type enum
-typedef enum _XGL_OBJECT_TYPE
+typedef enum _VK_OBJECT_TYPE
{
- XGL_OBJECT_TYPE_UNKNOWN,
- XGL_OBJECT_TYPE_SAMPLER,
- XGL_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT,
- XGL_OBJECT_TYPE_DESCRIPTOR_SET,
- XGL_OBJECT_TYPE_DESCRIPTOR_POOL,
- XGL_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT,
- XGL_OBJECT_TYPE_IMAGE_VIEW,
- XGL_OBJECT_TYPE_QUEUE_SEMAPHORE,
- XGL_OBJECT_TYPE_SHADER,
- XGL_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT,
- XGL_OBJECT_TYPE_BUFFER,
- XGL_OBJECT_TYPE_PIPELINE,
- XGL_OBJECT_TYPE_DEVICE,
- XGL_OBJECT_TYPE_QUERY_POOL,
- XGL_OBJECT_TYPE_EVENT,
- XGL_OBJECT_TYPE_QUEUE,
- XGL_OBJECT_TYPE_PHYSICAL_GPU,
- XGL_OBJECT_TYPE_RENDER_PASS,
- XGL_OBJECT_TYPE_FRAMEBUFFER,
- XGL_OBJECT_TYPE_IMAGE,
- XGL_OBJECT_TYPE_BUFFER_VIEW,
- XGL_OBJECT_TYPE_DEPTH_STENCIL_VIEW,
- XGL_OBJECT_TYPE_INSTANCE,
- XGL_OBJECT_TYPE_PIPELINE_DELTA,
- XGL_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT,
- XGL_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW,
- XGL_OBJECT_TYPE_GPU_MEMORY,
- XGL_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT,
- XGL_OBJECT_TYPE_FENCE,
- XGL_OBJECT_TYPE_CMD_BUFFER,
- XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY,
+ VK_OBJECT_TYPE_UNKNOWN,
+ VK_OBJECT_TYPE_SAMPLER,
+ VK_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT,
+ VK_OBJECT_TYPE_DESCRIPTOR_SET,
+ VK_OBJECT_TYPE_DESCRIPTOR_POOL,
+ VK_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT,
+ VK_OBJECT_TYPE_IMAGE_VIEW,
+ VK_OBJECT_TYPE_QUEUE_SEMAPHORE,
+ VK_OBJECT_TYPE_SHADER,
+ VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT,
+ VK_OBJECT_TYPE_BUFFER,
+ VK_OBJECT_TYPE_PIPELINE,
+ VK_OBJECT_TYPE_DEVICE,
+ VK_OBJECT_TYPE_QUERY_POOL,
+ VK_OBJECT_TYPE_EVENT,
+ VK_OBJECT_TYPE_QUEUE,
+ VK_OBJECT_TYPE_PHYSICAL_GPU,
+ VK_OBJECT_TYPE_RENDER_PASS,
+ VK_OBJECT_TYPE_FRAMEBUFFER,
+ VK_OBJECT_TYPE_IMAGE,
+ VK_OBJECT_TYPE_BUFFER_VIEW,
+ VK_OBJECT_TYPE_DEPTH_STENCIL_VIEW,
+ VK_OBJECT_TYPE_INSTANCE,
+ VK_OBJECT_TYPE_PIPELINE_DELTA,
+ VK_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT,
+ VK_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW,
+ VK_OBJECT_TYPE_GPU_MEMORY,
+ VK_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT,
+ VK_OBJECT_TYPE_FENCE,
+ VK_OBJECT_TYPE_CMD_BUFFER,
+ VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY,
- XGL_NUM_OBJECT_TYPE,
- XGL_OBJECT_TYPE_ANY, // Allow global object list to be queried/retrieved
-} XGL_OBJECT_TYPE;
+ VK_NUM_OBJECT_TYPE,
+ VK_OBJECT_TYPE_ANY, // Allow global object list to be queried/retrieved
+} VK_OBJECT_TYPE;
-static const char* string_XGL_OBJECT_TYPE(XGL_OBJECT_TYPE type) {
+static const char* string_VK_OBJECT_TYPE(VK_OBJECT_TYPE type) {
switch (type)
{
- case XGL_OBJECT_TYPE_DEVICE:
+ case VK_OBJECT_TYPE_DEVICE:
return "DEVICE";
- case XGL_OBJECT_TYPE_PIPELINE:
+ case VK_OBJECT_TYPE_PIPELINE:
return "PIPELINE";
- case XGL_OBJECT_TYPE_FENCE:
+ case VK_OBJECT_TYPE_FENCE:
return "FENCE";
- case XGL_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT:
+ case VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT:
return "DESCRIPTOR_SET_LAYOUT";
- case XGL_OBJECT_TYPE_GPU_MEMORY:
+ case VK_OBJECT_TYPE_GPU_MEMORY:
return "GPU_MEMORY";
- case XGL_OBJECT_TYPE_QUEUE:
+ case VK_OBJECT_TYPE_QUEUE:
return "QUEUE";
- case XGL_OBJECT_TYPE_IMAGE:
+ case VK_OBJECT_TYPE_IMAGE:
return "IMAGE";
- case XGL_OBJECT_TYPE_CMD_BUFFER:
+ case VK_OBJECT_TYPE_CMD_BUFFER:
return "CMD_BUFFER";
- case XGL_OBJECT_TYPE_QUEUE_SEMAPHORE:
+ case VK_OBJECT_TYPE_QUEUE_SEMAPHORE:
return "QUEUE_SEMAPHORE";
- case XGL_OBJECT_TYPE_FRAMEBUFFER:
+ case VK_OBJECT_TYPE_FRAMEBUFFER:
return "FRAMEBUFFER";
- case XGL_OBJECT_TYPE_SAMPLER:
+ case VK_OBJECT_TYPE_SAMPLER:
return "SAMPLER";
- case XGL_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW:
+ case VK_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW:
return "COLOR_ATTACHMENT_VIEW";
- case XGL_OBJECT_TYPE_BUFFER_VIEW:
+ case VK_OBJECT_TYPE_BUFFER_VIEW:
return "BUFFER_VIEW";
- case XGL_OBJECT_TYPE_DESCRIPTOR_SET:
+ case VK_OBJECT_TYPE_DESCRIPTOR_SET:
return "DESCRIPTOR_SET";
- case XGL_OBJECT_TYPE_PHYSICAL_GPU:
+ case VK_OBJECT_TYPE_PHYSICAL_GPU:
return "PHYSICAL_GPU";
- case XGL_OBJECT_TYPE_IMAGE_VIEW:
+ case VK_OBJECT_TYPE_IMAGE_VIEW:
return "IMAGE_VIEW";
- case XGL_OBJECT_TYPE_BUFFER:
+ case VK_OBJECT_TYPE_BUFFER:
return "BUFFER";
- case XGL_OBJECT_TYPE_PIPELINE_DELTA:
+ case VK_OBJECT_TYPE_PIPELINE_DELTA:
return "PIPELINE_DELTA";
- case XGL_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT:
return "DYNAMIC_RS_STATE_OBJECT";
- case XGL_OBJECT_TYPE_EVENT:
+ case VK_OBJECT_TYPE_EVENT:
return "EVENT";
- case XGL_OBJECT_TYPE_DEPTH_STENCIL_VIEW:
+ case VK_OBJECT_TYPE_DEPTH_STENCIL_VIEW:
return "DEPTH_STENCIL_VIEW";
- case XGL_OBJECT_TYPE_SHADER:
+ case VK_OBJECT_TYPE_SHADER:
return "SHADER";
- case XGL_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT:
return "DYNAMIC_DS_STATE_OBJECT";
- case XGL_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT:
return "DYNAMIC_VP_STATE_OBJECT";
- case XGL_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT:
return "DYNAMIC_CB_STATE_OBJECT";
- case XGL_OBJECT_TYPE_INSTANCE:
+ case VK_OBJECT_TYPE_INSTANCE:
return "INSTANCE";
- case XGL_OBJECT_TYPE_RENDER_PASS:
+ case VK_OBJECT_TYPE_RENDER_PASS:
return "RENDER_PASS";
- case XGL_OBJECT_TYPE_QUERY_POOL:
+ case VK_OBJECT_TYPE_QUERY_POOL:
return "QUERY_POOL";
- case XGL_OBJECT_TYPE_DESCRIPTOR_POOL:
+ case VK_OBJECT_TYPE_DESCRIPTOR_POOL:
return "DESCRIPTOR_POOL";
- case XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY:
+ case VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY:
return "PRESENTABLE_IMAGE_MEMORY";
default:
return "UNKNOWN";
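string_VK_OBJECT_TYPE is a hand-maintained switch that must track the enum above case for case. One defensive alternative, shown here purely as a sketch and not something this tree uses, is an X-macro that derives both the enum and its name strings from a single list, so they cannot drift apart:

```c
#include <string.h>

/* Single source of truth; each rename or addition happens in one place. */
#define SAMPLE_OBJ_TYPES \
    X(SAMPLE_TYPE_DEVICE) \
    X(SAMPLE_TYPE_QUEUE)  \
    X(SAMPLE_TYPE_FENCE)

typedef enum {
#define X(name) name,
    SAMPLE_OBJ_TYPES
#undef X
    SAMPLE_NUM_TYPES
} SAMPLE_TYPE;

static const char* sample_type_string(SAMPLE_TYPE t)
{
    switch (t) {
#define X(name) case name: return #name;
    SAMPLE_OBJ_TYPES
#undef X
    default: return "UNKNOWN";
    }
}
```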
typedef struct _GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS
{
- XGL_PHYSICAL_GPU gpu;
- XGL_DEVICE_CREATE_INFO* pCreateInfo;
- XGL_DEVICE* pDevice;
+ VK_PHYSICAL_GPU gpu;
+ VK_DEVICE_CREATE_INFO* pCreateInfo;
+ VK_DEVICE* pDevice;
} GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS;
-XGL_DEVICE_CREATE_INFO* glv_deepcopy_xgl_device_create_info(const XGL_DEVICE_CREATE_INFO* pSrcCreateInfo);void glv_deepfree_xgl_device_create_info(XGL_DEVICE_CREATE_INFO* pCreateInfo);
-void glv_vk_snapshot_copy_createdevice_params(GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS* pDest, XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice);
+VK_DEVICE_CREATE_INFO* glv_deepcopy_vk_device_create_info(const VK_DEVICE_CREATE_INFO* pSrcCreateInfo);
+void glv_deepfree_vk_device_create_info(VK_DEVICE_CREATE_INFO* pCreateInfo);
+void glv_vk_snapshot_copy_createdevice_params(GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS* pDest, VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice);
void glv_vk_snapshot_destroy_createdevice_params(GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS* pSrc);
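The deepcopy/deepfree pair is declared here because VK_DEVICE_CREATE_INFO carries pointer members that a shallow memcpy would alias rather than duplicate. The general pattern looks like this, with a hypothetical two-field struct standing in for the real create-info:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical struct with one heap-owned member, standing in for a
 * create-info whose pointers must be duplicated, not aliased. */
typedef struct { size_t count; int* values; } Info;

static Info* deepcopy_info(const Info* src)
{
    Info* dst = (Info*)malloc(sizeof(Info));
    dst->count = src->count;
    /* Duplicate the pointed-to storage so the copy owns its own buffer. */
    dst->values = (int*)malloc(src->count * sizeof(int));
    memcpy(dst->values, src->values, src->count * sizeof(int));
    return dst;
}

static void deepfree_info(Info* p)
{
    free(p->values);   /* release owned members before the struct itself */
    free(p);
}
```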
//=============================================================================
// Node that stores information about an object
typedef struct _GLV_VK_SNAPSHOT_OBJECT_NODE {
void* pVkObject;
- XGL_OBJECT_TYPE objType;
+ VK_OBJECT_TYPE objType;
uint64_t numUses;
OBJECT_STATUS status;
    void* pStruct; //< optionally points to a device-specific struct (e.g., GLV_VK_SNAPSHOT_DEVICE_NODE)
} GLV_VK_SNAPSHOT_OBJECT_NODE;
-// Node that stores information about an XGL_DEVICE
+// Node that stores information about a VK_DEVICE
typedef struct _GLV_VK_SNAPSHOT_DEVICE_NODE {
// This object
- XGL_DEVICE device;
+ VK_DEVICE device;
// CreateDevice parameters
GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS params;
typedef struct _GLV_VK_SNAPSHOT_DELETED_OBJ_NODE {
struct _GLV_VK_SNAPSHOT_DELETED_OBJ_NODE* pNextObj;
void* pVkObject;
- XGL_OBJECT_TYPE objType;
+ VK_OBJECT_TYPE objType;
} GLV_VK_SNAPSHOT_DELETED_OBJ_NODE;
//=============================================================================
GLV_VK_SNAPSHOT_LL_NODE* pGlobalObjs;
// TEMPORARY: Keep track of all objects of each type
- uint64_t numObjs[XGL_NUM_OBJECT_TYPE];
- GLV_VK_SNAPSHOT_LL_NODE *pObjectHead[XGL_NUM_OBJECT_TYPE];
+ uint64_t numObjs[VK_NUM_OBJECT_TYPE];
+ GLV_VK_SNAPSHOT_LL_NODE *pObjectHead[VK_NUM_OBJECT_TYPE];
// List of created devices and [potentially] hierarchical tree of the objects on it.
// This is used to represent ownership of the objects
// merge a delta into a snapshot and return the updated snapshot
GLV_VK_SNAPSHOT glvSnapshotMerge(const GLV_VK_SNAPSHOT * const pDelta, const GLV_VK_SNAPSHOT * const pSnapshot);
-uint64_t glvSnapshotGetObjectCount(XGL_OBJECT_TYPE type);
-XGL_RESULT glvSnapshotGetObjects(XGL_OBJECT_TYPE type, uint64_t objCount, GLV_VK_SNAPSHOT_OBJECT_NODE* pObjNodeArray);
+uint64_t glvSnapshotGetObjectCount(VK_OBJECT_TYPE type);
+VK_RESULT glvSnapshotGetObjects(VK_OBJECT_TYPE type, uint64_t objCount, GLV_VK_SNAPSHOT_OBJECT_NODE* pObjNodeArray);
void glvSnapshotPrintObjects(void);
// Func ptr typedefs
-typedef uint64_t (*GLVSNAPSHOT_GET_OBJECT_COUNT)(XGL_OBJECT_TYPE);
-typedef XGL_RESULT (*GLVSNAPSHOT_GET_OBJECTS)(XGL_OBJECT_TYPE, uint64_t, GLV_VK_SNAPSHOT_OBJECT_NODE*);
+typedef uint64_t (*GLVSNAPSHOT_GET_OBJECT_COUNT)(VK_OBJECT_TYPE);
+typedef VK_RESULT (*GLVSNAPSHOT_GET_OBJECTS)(VK_OBJECT_TYPE, uint64_t, GLV_VK_SNAPSHOT_OBJECT_NODE*);
typedef void (*GLVSNAPSHOT_PRINT_OBJECTS)(void);
typedef void (*GLVSNAPSHOT_START_TRACKING)(void);
typedef GLV_VK_SNAPSHOT (*GLVSNAPSHOT_GET_DELTA)(void);
#include <string>
#include <map>
#include <string.h>
-#include <xglLayer.h>
+#include <vkLayer.h>
#include "loader_platform.h"
#include "layers_config.h"
// The following is #included again to catch certain OS-specific functions
static unsigned int convertStringEnumVal(const char *_enum)
{
// only handles single enum values
- if (!strcmp(_enum, "XGL_DBG_LAYER_ACTION_IGNORE"))
- return XGL_DBG_LAYER_ACTION_IGNORE;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_ACTION_CALLBACK"))
- return XGL_DBG_LAYER_ACTION_CALLBACK;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_ACTION_LOG_MSG"))
- return XGL_DBG_LAYER_ACTION_LOG_MSG;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_ACTION_BREAK"))
- return XGL_DBG_LAYER_ACTION_BREAK;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_LEVEL_INFO"))
- return XGL_DBG_LAYER_LEVEL_INFO;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_LEVEL_WARN"))
- return XGL_DBG_LAYER_LEVEL_WARN;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_LEVEL_PERF_WARN"))
- return XGL_DBG_LAYER_LEVEL_PERF_WARN;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_LEVEL_ERROR"))
- return XGL_DBG_LAYER_LEVEL_ERROR;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_LEVEL_NONE"))
- return XGL_DBG_LAYER_LEVEL_NONE;
+ if (!strcmp(_enum, "VK_DBG_LAYER_ACTION_IGNORE"))
+ return VK_DBG_LAYER_ACTION_IGNORE;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_ACTION_CALLBACK"))
+ return VK_DBG_LAYER_ACTION_CALLBACK;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_ACTION_LOG_MSG"))
+ return VK_DBG_LAYER_ACTION_LOG_MSG;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_ACTION_BREAK"))
+ return VK_DBG_LAYER_ACTION_BREAK;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_LEVEL_INFO"))
+ return VK_DBG_LAYER_LEVEL_INFO;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_LEVEL_WARN"))
+ return VK_DBG_LAYER_LEVEL_WARN;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_LEVEL_PERF_WARN"))
+ return VK_DBG_LAYER_LEVEL_PERF_WARN;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_LEVEL_ERROR"))
+ return VK_DBG_LAYER_LEVEL_ERROR;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_LEVEL_NONE"))
+ return VK_DBG_LAYER_LEVEL_NONE;
return 0;
}
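The strcmp ladder in convertStringEnumVal grows by two lines per setting name. A table-driven sketch of the same lookup is shown below; the entries and values are illustrative, not the layer's real mapping:

```c
#include <string.h>
#include <stddef.h>

typedef struct { const char* name; unsigned int value; } EnumEntry;

/* Walk a NULL-terminated table; unknown strings fall through to 0,
 * matching the original function's final `return 0;`. */
static unsigned int lookupEnum(const EnumEntry* table, const char* s)
{
    for (; table->name != NULL; ++table)
        if (strcmp(table->name, s) == 0)
            return table->value;
    return 0;
}
```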
const char *getLayerOption(const char *_option)
std::map<std::string, std::string>::const_iterator it;
if (!m_fileIsParsed)
{
- parseFile("xgl_layer_settings.txt");
+ parseFile("vk_layer_settings.txt");
}
if ((it = m_valueMap.find(_option)) == m_valueMap.end())
{
if (!m_fileIsParsed)
{
- parseFile("xgl_layer_settings.txt");
+ parseFile("vk_layer_settings.txt");
}
m_valueMap[_option] = _val;
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <stdio.h>
#include <stdbool.h>
-static XGL_LAYER_DBG_FUNCTION_NODE *g_pDbgFunctionHead = NULL;
-static XGL_LAYER_DBG_REPORT_LEVEL g_reportingLevel = XGL_DBG_LAYER_LEVEL_INFO;
-static XGL_LAYER_DBG_ACTION g_debugAction = XGL_DBG_LAYER_ACTION_LOG_MSG;
+static VK_LAYER_DBG_FUNCTION_NODE *g_pDbgFunctionHead = NULL;
+static VK_LAYER_DBG_REPORT_LEVEL g_reportingLevel = VK_DBG_LAYER_LEVEL_INFO;
+static VK_LAYER_DBG_ACTION g_debugAction = VK_DBG_LAYER_ACTION_LOG_MSG;
static bool g_actionIsDefault = true;
static FILE *g_logFile = NULL;
// Utility function to handle reporting
// If callbacks are enabled, use them; otherwise use printf
-static void layerCbMsg(XGL_DBG_MSG_TYPE msgType,
- XGL_VALIDATION_LEVEL validationLevel,
- XGL_BASE_OBJECT srcObject,
+static void layerCbMsg(VK_DBG_MSG_TYPE msgType,
+ VK_VALIDATION_LEVEL validationLevel,
+ VK_BASE_OBJECT srcObject,
size_t location,
int32_t msgCode,
const char* pLayerPrefix,
g_logFile = stdout;
}
- if (g_debugAction & (XGL_DBG_LAYER_ACTION_LOG_MSG | XGL_DBG_LAYER_ACTION_CALLBACK)) {
- XGL_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
+ if (g_debugAction & (VK_DBG_LAYER_ACTION_LOG_MSG | VK_DBG_LAYER_ACTION_CALLBACK)) {
+ VK_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
switch (msgType) {
- case XGL_DBG_MSG_ERROR:
- if (g_reportingLevel <= XGL_DBG_LAYER_LEVEL_ERROR) {
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG) {
+ case VK_DBG_MSG_ERROR:
+ if (g_reportingLevel <= VK_DBG_LAYER_LEVEL_ERROR) {
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG) {
fprintf(g_logFile, "{%s}ERROR : %s\n", pLayerPrefix, pMsg);
fflush(g_logFile);
}
- if (g_debugAction & XGL_DBG_LAYER_ACTION_CALLBACK)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_CALLBACK)
while (pTrav) {
pTrav->pfnMsgCallback(msgType, validationLevel, srcObject, location, msgCode, pMsg, pTrav->pUserData);
pTrav = pTrav->pNext;
}
}
break;
- case XGL_DBG_MSG_WARNING:
- if (g_reportingLevel <= XGL_DBG_LAYER_LEVEL_WARN) {
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)
+ case VK_DBG_MSG_WARNING:
+ if (g_reportingLevel <= VK_DBG_LAYER_LEVEL_WARN) {
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)
fprintf(g_logFile, "{%s}WARN : %s\n", pLayerPrefix, pMsg);
- if (g_debugAction & XGL_DBG_LAYER_ACTION_CALLBACK)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_CALLBACK)
while (pTrav) {
pTrav->pfnMsgCallback(msgType, validationLevel, srcObject, location, msgCode, pMsg, pTrav->pUserData);
pTrav = pTrav->pNext;
}
}
break;
- case XGL_DBG_MSG_PERF_WARNING:
- if (g_reportingLevel <= XGL_DBG_LAYER_LEVEL_PERF_WARN) {
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)
+ case VK_DBG_MSG_PERF_WARNING:
+ if (g_reportingLevel <= VK_DBG_LAYER_LEVEL_PERF_WARN) {
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)
fprintf(g_logFile, "{%s}PERF_WARN : %s\n", pLayerPrefix, pMsg);
- if (g_debugAction & XGL_DBG_LAYER_ACTION_CALLBACK)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_CALLBACK)
while (pTrav) {
pTrav->pfnMsgCallback(msgType, validationLevel, srcObject, location, msgCode, pMsg, pTrav->pUserData);
pTrav = pTrav->pNext;
}
            }
            break;
default:
- if (g_reportingLevel <= XGL_DBG_LAYER_LEVEL_INFO) {
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)
+ if (g_reportingLevel <= VK_DBG_LAYER_LEVEL_INFO) {
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)
fprintf(g_logFile, "{%s}INFO : %s\n", pLayerPrefix, pMsg);
- if (g_debugAction & XGL_DBG_LAYER_ACTION_CALLBACK)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_CALLBACK)
while (pTrav) {
pTrav->pfnMsgCallback(msgType, validationLevel, srcObject, location, msgCode, pMsg, pTrav->pUserData);
pTrav = pTrav->pNext;
using namespace std;
#include "loader_platform.h"
-#include "xgl_dispatch_table_helper.h"
-#include "xgl_struct_string_helper_cpp.h"
+#include "vk_dispatch_table_helper.h"
+#include "vk_struct_string_helper_cpp.h"
#include "mem_tracker.h"
#include "layers_config.h"
// The following is #included again to catch certain OS-specific functions
#include "loader_platform.h"
#include "layers_msg.h"
-static XGL_LAYER_DISPATCH_TABLE nextTable;
-static XGL_BASE_LAYER_OBJECT *pCurObj;
+static VK_LAYER_DISPATCH_TABLE nextTable;
+static VK_BASE_LAYER_OBJECT *pCurObj;
static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(g_initOnce);
// TODO : This can be much smarter, using separate locks for separate global data
static int globalLockInitialized = 0;
#define MAX_BINDING 0xFFFFFFFF
-map<XGL_CMD_BUFFER, MT_CB_INFO*> cbMap;
-map<XGL_GPU_MEMORY, MT_MEM_OBJ_INFO*> memObjMap;
-map<XGL_OBJECT, MT_OBJ_INFO*> objectMap;
+map<VK_CMD_BUFFER, MT_CB_INFO*> cbMap;
+map<VK_GPU_MEMORY, MT_MEM_OBJ_INFO*> memObjMap;
+map<VK_OBJECT, MT_OBJ_INFO*> objectMap;
map<uint64_t, MT_FENCE_INFO*> fenceMap; // Map fenceId to fence info
-map<XGL_QUEUE, MT_QUEUE_INFO*> queueMap;
+map<VK_QUEUE, MT_QUEUE_INFO*> queueMap;
// TODO : Add per-device fence completion
static uint64_t g_currentFenceId = 1;
-static XGL_DEVICE globalDevice = NULL;
+static VK_DEVICE globalDevice = NULL;
// Add new queue for this device to map container
-static void addQueueInfo(const XGL_QUEUE queue)
+static void addQueueInfo(const VK_QUEUE queue)
{
MT_QUEUE_INFO* pInfo = new MT_QUEUE_INFO;
pInfo->lastRetiredId = 0;
static void deleteQueueInfoList(void)
{
// Process queue list, cleaning up each entry before deleting
- for (map<XGL_QUEUE, MT_QUEUE_INFO*>::iterator ii=queueMap.begin(); ii!=queueMap.end(); ++ii) {
+ for (map<VK_QUEUE, MT_QUEUE_INFO*>::iterator ii=queueMap.begin(); ii!=queueMap.end(); ++ii) {
(*ii).second->pQueueCmdBuffers.clear();
}
queueMap.clear();
}
// Add new CBInfo for this cb to map container
-static void addCBInfo(const XGL_CMD_BUFFER cb)
+static void addCBInfo(const VK_CMD_BUFFER cb)
{
MT_CB_INFO* pInfo = new MT_CB_INFO;
- memset(pInfo, 0, (sizeof(MT_CB_INFO) - sizeof(list<XGL_GPU_MEMORY>)));
+ memset(pInfo, 0, (sizeof(MT_CB_INFO) - sizeof(list<VK_GPU_MEMORY>)));
pInfo->cmdBuffer = cb;
cbMap[cb] = pInfo;
}
// Return ptr to Info in CB map, or NULL if not found
-static MT_CB_INFO* getCBInfo(const XGL_CMD_BUFFER cb)
+static MT_CB_INFO* getCBInfo(const VK_CMD_BUFFER cb)
{
MT_CB_INFO* pCBInfo = NULL;
if (cbMap.find(cb) != cbMap.end()) {
}
// Return object info for 'object' or return NULL if no info exists
-static MT_OBJ_INFO* getObjectInfo(const XGL_OBJECT object)
+static MT_OBJ_INFO* getObjectInfo(const VK_OBJECT object)
{
MT_OBJ_INFO* pObjInfo = NULL;
return pObjInfo;
}
-static MT_OBJ_INFO* addObjectInfo(XGL_OBJECT object, XGL_STRUCTURE_TYPE sType, const void *pCreateInfo, const int struct_size, const char *name_prefix)
+static MT_OBJ_INFO* addObjectInfo(VK_OBJECT object, VK_STRUCTURE_TYPE sType, const void *pCreateInfo, const int struct_size, const char *name_prefix)
{
MT_OBJ_INFO* pInfo = new MT_OBJ_INFO;
memset(pInfo, 0, sizeof(MT_OBJ_INFO));
}
// Add a fence (creating one if necessary) to our list of fences/fenceIds
-static uint64_t addFenceInfo(XGL_FENCE fence, XGL_QUEUE queue)
+static uint64_t addFenceInfo(VK_FENCE fence, VK_QUEUE queue)
{
// Create fence object
MT_FENCE_INFO* pFenceInfo = new MT_FENCE_INFO;
memset(pFenceInfo, 0, sizeof(MT_FENCE_INFO));
// If no fence, create an internal fence to track the submissions
if (fence == NULL) {
- XGL_FENCE_CREATE_INFO fci;
- fci.sType = XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO;
+ VK_FENCE_CREATE_INFO fci;
+ fci.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
fci.pNext = NULL;
- fci.flags = static_cast<XGL_FENCE_CREATE_FLAGS>(0);
+ fci.flags = static_cast<VK_FENCE_CREATE_FLAGS>(0);
nextTable.CreateFence(globalDevice, &fci, &pFenceInfo->fence);
- addObjectInfo(pFenceInfo->fence, fci.sType, &fci, sizeof(XGL_FENCE_CREATE_INFO), "internalFence");
- pFenceInfo->localFence = XGL_TRUE;
+ addObjectInfo(pFenceInfo->fence, fci.sType, &fci, sizeof(VK_FENCE_CREATE_INFO), "internalFence");
+ pFenceInfo->localFence = VK_TRUE;
} else {
- pFenceInfo->localFence = XGL_FALSE;
+ pFenceInfo->localFence = VK_FALSE;
pFenceInfo->fence = fence;
}
pFenceInfo->queue = queue;
map<uint64_t, MT_FENCE_INFO*>::iterator item;
MT_FENCE_INFO* pDelInfo = fenceMap[fenceId];
if (pDelInfo != NULL) {
- if (pDelInfo->localFence == XGL_TRUE) {
+ if (pDelInfo->localFence == VK_TRUE) {
nextTable.DestroyObject(pDelInfo->fence);
}
delete pDelInfo;
}
// Search through list for this fence, deleting all items before it (with lower IDs) and updating lastRetiredId
-static void updateFenceTracking(XGL_FENCE fence)
+static void updateFenceTracking(VK_FENCE fence)
{
MT_FENCE_INFO *pCurFenceInfo = NULL;
uint64_t fenceId = 0;
- XGL_QUEUE queue = NULL;
+ VK_QUEUE queue = NULL;
for (map<uint64_t, MT_FENCE_INFO*>::iterator ii=fenceMap.begin(); ii!=fenceMap.end(); ++ii) {
if ((*ii).second != NULL) {
// Update fence state in fenceCreateInfo structure
MT_OBJ_INFO* pObjectInfo = getObjectInfo(fence);
if (pObjectInfo != NULL) {
- pObjectInfo->create_info.fence_create_info.flags = XGL_FENCE_CREATE_SIGNALED_BIT;
+ pObjectInfo->create_info.fence_create_info.flags = VK_FENCE_CREATE_SIGNALED_BIT;
}
}
}
// Utility function that determines if a fenceId has been retired yet
static bool32_t fenceRetired(uint64_t fenceId)
{
- bool32_t result = XGL_FALSE;
+ bool32_t result = VK_FALSE;
MT_FENCE_INFO* pFenceInfo = fenceMap[fenceId];
if (pFenceInfo != 0)
{
MT_QUEUE_INFO* pQueueInfo = queueMap[pFenceInfo->queue];
if (fenceId <= pQueueInfo->lastRetiredId)
{
- result = XGL_TRUE;
+ result = VK_TRUE;
}
} else { // If not in list, fence has been retired and deleted
- result = XGL_TRUE;
+ result = VK_TRUE;
}
return result;
}
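The fence bookkeeping above reduces to a monotonic-ID scheme: each queue records the last submitted and last retired IDs, and a fence is retired once its ID falls at or below lastRetiredId. A minimal sketch of that invariant, with illustrative names rather than the layer's own types:

```c
#include <stdint.h>
#include <stdbool.h>

/* Per-queue counters, standing in for MT_QUEUE_INFO's ID fields. */
typedef struct { uint64_t lastSubmittedId; uint64_t lastRetiredId; } QueueIds;

/* Each submission takes the next ID, as g_currentFenceId does globally. */
static uint64_t submitWithFence(QueueIds* q) { return ++q->lastSubmittedId; }

/* Retiring a fence retires everything with a lower ID on that queue. */
static void retireUpTo(QueueIds* q, uint64_t fenceId)
{
    if (fenceId > q->lastRetiredId)
        q->lastRetiredId = fenceId;
}

static bool idRetired(const QueueIds* q, uint64_t fenceId)
{
    return fenceId <= q->lastRetiredId;
}
```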
// Return the fence associated with a fenceId
-static XGL_FENCE getFenceFromId(uint64_t fenceId)
+static VK_FENCE getFenceFromId(uint64_t fenceId)
{
- XGL_FENCE fence = NULL;
+ VK_FENCE fence = NULL;
if (fenceId != 0) {
// Search for an item with this fenceId
if (fenceMap.find(fenceId) != fenceMap.end()) {
}
// Helper routine that updates the fence list for a specific queue to all-retired
-static void retireQueueFences(XGL_QUEUE queue)
+static void retireQueueFences(VK_QUEUE queue)
{
MT_QUEUE_INFO *pQueueInfo = queueMap[queue];
pQueueInfo->lastRetiredId = pQueueInfo->lastSubmittedId;
}
// Helper routine that updates fence list for all queues to all-retired
-static void retireDeviceFences(XGL_DEVICE device)
+static void retireDeviceFences(VK_DEVICE device)
{
// Process each queue for device
// TODO: Add multiple device support
- for (map<XGL_QUEUE, MT_QUEUE_INFO*>::iterator ii=queueMap.begin(); ii!=queueMap.end(); ++ii) {
+ for (map<VK_QUEUE, MT_QUEUE_INFO*>::iterator ii=queueMap.begin(); ii!=queueMap.end(); ++ii) {
retireQueueFences((*ii).first);
}
}
// Returns True if a memory reference is present in a Queue's memory reference list
// Queue is validated by caller
static bool32_t checkMemRef(
- XGL_QUEUE queue,
- XGL_GPU_MEMORY mem)
+ VK_QUEUE queue,
+ VK_GPU_MEMORY mem)
{
- bool32_t result = XGL_FALSE;
- list<XGL_GPU_MEMORY>::iterator it;
+ bool32_t result = VK_FALSE;
+ list<VK_GPU_MEMORY>::iterator it;
MT_QUEUE_INFO *pQueueInfo = queueMap[queue];
for (it = pQueueInfo->pMemRefList.begin(); it != pQueueInfo->pMemRefList.end(); ++it) {
if ((*it) == mem) {
- result = XGL_TRUE;
+ result = VK_TRUE;
break;
}
}
}
static bool32_t validateQueueMemRefs(
- XGL_QUEUE queue,
+ VK_QUEUE queue,
uint32_t cmdBufferCount,
- const XGL_CMD_BUFFER *pCmdBuffers)
+ const VK_CMD_BUFFER *pCmdBuffers)
{
- bool32_t result = XGL_TRUE;
+ bool32_t result = VK_TRUE;
// Verify Queue
MT_QUEUE_INFO *pQueueInfo = queueMap[queue];
if (pQueueInfo == NULL) {
char str[1024];
- sprintf(str, "Unknown Queue %p specified in xglQueueSubmit", queue);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_QUEUE, "MEM", str);
+ sprintf(str, "Unknown Queue %p specified in vkQueueSubmit", queue);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_QUEUE, "MEM", str);
}
else {
// Iterate through all CBs in pCmdBuffers
MT_CB_INFO* pCBInfo = getCBInfo(pCmdBuffers[i]);
if (!pCBInfo) {
char str[1024];
- sprintf(str, "Unable to find info for CB %p in order to check memory references in xglQueueSubmit for queue %p", (void*)pCmdBuffers[i], queue);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pCmdBuffers[i], 0, MEMTRACK_INVALID_CB, "MEM", str);
- result = XGL_FALSE;
+ sprintf(str, "Unable to find info for CB %p in order to check memory references in vkQueueSubmit for queue %p", (void*)pCmdBuffers[i], queue);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pCmdBuffers[i], 0, MEMTRACK_INVALID_CB, "MEM", str);
+ result = VK_FALSE;
} else {
// Validate that all actual references are accounted for in pMemRefs
- for (list<XGL_GPU_MEMORY>::iterator it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
+ for (list<VK_GPU_MEMORY>::iterator it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
// Search for each memref in queues memreflist.
if (checkMemRef(queue, *it)) {
char str[1024];
sprintf(str, "Found Mem Obj %p binding to CB %p for queue %p", (*it), pCmdBuffers[i], queue);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pCmdBuffers[i], 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pCmdBuffers[i], 0, MEMTRACK_NONE, "MEM", str);
}
else {
char str[1024];
sprintf(str, "Queue %p Memory reference list for Command Buffer %p is missing ref to mem obj %p", queue, pCmdBuffers[i], (*it));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pCmdBuffers[i], 0, MEMTRACK_INVALID_MEM_REF, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pCmdBuffers[i], 0, MEMTRACK_INVALID_MEM_REF, "MEM", str);
+ result = VK_FALSE;
}
}
}
}
- if (result == XGL_TRUE) {
+ if (result == VK_TRUE) {
char str[1024];
sprintf(str, "Verified all memory dependencies for Queue %p are included in pMemRefs list", queue);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_NONE, "MEM", str);
// TODO : Could report mem refs in pMemRefs that AREN'T in mem list, that would be primarily informational
// Currently just noting that there is a difference
}
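validateQueueMemRefs boils down to a subset check: every memory object a submitted command buffer binds must also appear in the queue's reference list. A self-contained sketch of that check, with plain arrays standing in for the layer's std::list containers:

```c
#include <stdbool.h>
#include <stddef.h>

static bool containsPtr(void* const* arr, size_t n, void* p)
{
    for (size_t i = 0; i < n; ++i)
        if (arr[i] == p)
            return true;
    return false;
}

/* True only if every bound handle appears in the reference array, the
 * same pass the per-CB loop above performs via checkMemRef(). */
static bool allRefsPresent(void* const* bound, size_t nBound,
                           void* const* refs, size_t nRefs)
{
    for (size_t i = 0; i < nBound; ++i)
        if (!containsPtr(refs, nRefs, bound[i]))
            return false;
    return true;
}
```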
// Return ptr to info in map container containing mem, or NULL if not found
// Calls to this function should be wrapped in mutex
-static MT_MEM_OBJ_INFO* getMemObjInfo(const XGL_GPU_MEMORY mem)
+static MT_MEM_OBJ_INFO* getMemObjInfo(const VK_GPU_MEMORY mem)
{
MT_MEM_OBJ_INFO* pMemObjInfo = NULL;
return pMemObjInfo;
}
-static void addMemObjInfo(const XGL_GPU_MEMORY mem, const XGL_MEMORY_ALLOC_INFO* pAllocInfo)
+static void addMemObjInfo(const VK_GPU_MEMORY mem, const VK_MEMORY_ALLOC_INFO* pAllocInfo)
{
MT_MEM_OBJ_INFO* pInfo = new MT_MEM_OBJ_INFO;
pInfo->refCount = 0;
- memset(&pInfo->allocInfo, 0, sizeof(XGL_MEMORY_ALLOC_INFO));
+ memset(&pInfo->allocInfo, 0, sizeof(VK_MEMORY_ALLOC_INFO));
- if (pAllocInfo) { // MEM alloc created by xglWsiX11CreatePresentableImage() doesn't have alloc info struct
- memcpy(&pInfo->allocInfo, pAllocInfo, sizeof(XGL_MEMORY_ALLOC_INFO));
+ if (pAllocInfo) { // MEM alloc created by vkWsiX11CreatePresentableImage() doesn't have alloc info struct
+ memcpy(&pInfo->allocInfo, pAllocInfo, sizeof(VK_MEMORY_ALLOC_INFO));
// TODO: Update for real hardware, actually process allocation info structures
pInfo->allocInfo.pNext = NULL;
}
// Find CB Info and add mem binding to list container
// Find Mem Obj Info and add CB binding to list container
-static bool32_t updateCBBinding(const XGL_CMD_BUFFER cb, const XGL_GPU_MEMORY mem)
+static bool32_t updateCBBinding(const VK_CMD_BUFFER cb, const VK_GPU_MEMORY mem)
{
- bool32_t result = XGL_TRUE;
+ bool32_t result = VK_TRUE;
// First update CB binding in MemObj mini CB list
MT_MEM_OBJ_INFO* pMemInfo = getMemObjInfo(mem);
if (!pMemInfo) {
char str[1024];
sprintf(str, "Trying to bind mem obj %p to CB %p but no info for that mem obj.\n Was it correctly allocated? Did it already get freed?", mem, cb);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
+ result = VK_FALSE;
} else {
// Search for cmd buffer object in memory object's binding list
- bool32_t found = XGL_FALSE;
- for (list<XGL_CMD_BUFFER>::iterator it = pMemInfo->pCmdBufferBindings.begin(); it != pMemInfo->pCmdBufferBindings.end(); ++it) {
+ bool32_t found = VK_FALSE;
+ for (list<VK_CMD_BUFFER>::iterator it = pMemInfo->pCmdBufferBindings.begin(); it != pMemInfo->pCmdBufferBindings.end(); ++it) {
if ((*it) == cb) {
- found = XGL_TRUE;
+ found = VK_TRUE;
break;
}
}
// If not present, add to list
- if (found == XGL_FALSE) {
+ if (found == VK_FALSE) {
pMemInfo->pCmdBufferBindings.push_front(cb);
pMemInfo->refCount++;
}
if (!pCBInfo) {
char str[1024];
sprintf(str, "Trying to bind mem obj %p to CB %p but no info for that CB. Was the CB incorrectly destroyed?", mem, cb);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
+ result = VK_FALSE;
} else {
// Search for memory object in cmd buffer's binding list
- bool32_t found = XGL_FALSE;
- for (list<XGL_GPU_MEMORY>::iterator it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
+ bool32_t found = VK_FALSE;
+ for (list<VK_GPU_MEMORY>::iterator it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
if ((*it) == mem) {
- found = XGL_TRUE;
+ found = VK_TRUE;
break;
}
}
// If not present, add to list
- if (found == XGL_FALSE) {
+ if (found == VK_FALSE) {
pCBInfo->pMemObjList.push_front(mem);
}
}
// Clear the CB Binding for mem
// Calls to this function should be wrapped in mutex
-static void clearCBBinding(const XGL_CMD_BUFFER cb, const XGL_GPU_MEMORY mem)
+static void clearCBBinding(const VK_CMD_BUFFER cb, const VK_GPU_MEMORY mem)
{
MT_MEM_OBJ_INFO* pInfo = getMemObjInfo(mem);
// TODO : Having this check is not ideal, really if memInfo was deleted,
}
// Free bindings related to CB
-static bool32_t freeCBBindings(const XGL_CMD_BUFFER cb)
+static bool32_t freeCBBindings(const VK_CMD_BUFFER cb)
{
- bool32_t result = XGL_TRUE;
+ bool32_t result = VK_TRUE;
MT_CB_INFO* pCBInfo = getCBInfo(cb);
if (!pCBInfo) {
char str[1024];
sprintf(str, "Unable to find global CB info %p for deletion", cb);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_CB, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_CB, "MEM", str);
+ result = VK_FALSE;
} else {
if (!fenceRetired(pCBInfo->fenceId)) {
deleteFenceInfo(pCBInfo->fenceId);
}
- for (list<XGL_GPU_MEMORY>::iterator it=pCBInfo->pMemObjList.begin(); it!=pCBInfo->pMemObjList.end(); ++it) {
+ for (list<VK_GPU_MEMORY>::iterator it=pCBInfo->pMemObjList.begin(); it!=pCBInfo->pMemObjList.end(); ++it) {
clearCBBinding(cb, (*it));
}
pCBInfo->pMemObjList.clear();
// Delete CBInfo from list along with all of its mini MemObjInfo
// and also clear mem references to CB
// TODO : When should this be called? There's no Destroy of CBs that I see
-static bool32_t deleteCBInfo(const XGL_CMD_BUFFER cb)
+static bool32_t deleteCBInfo(const VK_CMD_BUFFER cb)
{
- bool32_t result = XGL_TRUE;
+ bool32_t result = VK_TRUE;
result = freeCBBindings(cb);
// Delete the CBInfo info
- if (result == XGL_TRUE) {
+ if (result == VK_TRUE) {
if (cbMap.find(cb) != cbMap.end()) {
MT_CB_INFO* pDelInfo = cbMap[cb];
delete pDelInfo;
// Delete the entire CB list
static bool32_t deleteCBInfoList()
{
- bool32_t result = XGL_TRUE;
- for (map<XGL_CMD_BUFFER, MT_CB_INFO*>::iterator ii=cbMap.begin(); ii!=cbMap.end(); ++ii) {
+ bool32_t result = VK_TRUE;
+ for (map<VK_CMD_BUFFER, MT_CB_INFO*>::iterator ii=cbMap.begin(); ii!=cbMap.end(); ++ii) {
freeCBBindings((*ii).first);
delete (*ii).second;
}
{
uint32_t refCount = 0; // Count found references
- for (list<XGL_CMD_BUFFER>::const_iterator it = pMemObjInfo->pCmdBufferBindings.begin(); it != pMemObjInfo->pCmdBufferBindings.end(); ++it) {
+ for (list<VK_CMD_BUFFER>::const_iterator it = pMemObjInfo->pCmdBufferBindings.begin(); it != pMemObjInfo->pCmdBufferBindings.end(); ++it) {
refCount++;
char str[1024];
sprintf(str, "Command Buffer %p has reference to mem obj %p", (*it), pMemObjInfo->mem);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, (*it), 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, (*it), 0, MEMTRACK_NONE, "MEM", str);
}
- for (list<XGL_OBJECT>::const_iterator it = pMemObjInfo->pObjBindings.begin(); it != pMemObjInfo->pObjBindings.end(); ++it) {
+ for (list<VK_OBJECT>::const_iterator it = pMemObjInfo->pObjBindings.begin(); it != pMemObjInfo->pObjBindings.end(); ++it) {
char str[1024];
- sprintf(str, "XGL Object %p has reference to mem obj %p", (*it), pMemObjInfo->mem);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, (*it), 0, MEMTRACK_NONE, "MEM", str);
+ sprintf(str, "VK Object %p has reference to mem obj %p", (*it), pMemObjInfo->mem);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, (*it), 0, MEMTRACK_NONE, "MEM", str);
}
if (refCount != pMemObjInfo->refCount) {
char str[1024];
sprintf(str, "Refcount of %u for Mem Obj %p doesn't match reported refs of %u", pMemObjInfo->refCount, pMemObjInfo->mem, refCount);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pMemObjInfo->mem, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pMemObjInfo->mem, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
}
}
-static void deleteMemObjInfo(XGL_GPU_MEMORY mem)
+static void deleteMemObjInfo(VK_GPU_MEMORY mem)
{
MT_MEM_OBJ_INFO* pDelInfo = memObjMap[mem];
if (memObjMap.find(mem) != memObjMap.end()) {
}
// Check if fence for given CB is completed
-static bool32_t checkCBCompleted(const XGL_CMD_BUFFER cb)
+static bool32_t checkCBCompleted(const VK_CMD_BUFFER cb)
{
- bool32_t result = XGL_TRUE;
+ bool32_t result = VK_TRUE;
MT_CB_INFO* pCBInfo = getCBInfo(cb);
if (!pCBInfo) {
char str[1024];
sprintf(str, "Unable to find global CB info %p to check for completion", cb);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_CB, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_CB, "MEM", str);
+ result = VK_FALSE;
} else {
if (!fenceRetired(pCBInfo->fenceId)) {
char str[1024];
sprintf(str, "FenceId %" PRIx64", fence %p for CB %p has not been checked for completion", pCBInfo->fenceId, getFenceFromId(pCBInfo->fenceId), cb);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_NONE, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_NONE, "MEM", str);
+ result = VK_FALSE;
}
}
return result;
}
-static bool32_t freeMemObjInfo(XGL_GPU_MEMORY mem, bool internal)
+static bool32_t freeMemObjInfo(VK_GPU_MEMORY mem, bool internal)
{
- bool32_t result = XGL_TRUE;
+ bool32_t result = VK_TRUE;
// Parse global list to find info w/ mem
MT_MEM_OBJ_INFO* pInfo = getMemObjInfo(mem);
if (!pInfo) {
char str[1024];
sprintf(str, "Couldn't find mem info object for %p\n Was %p never allocated or previously freed?", (void*)mem, (void*)mem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
+ result = VK_FALSE;
} else {
if (pInfo->allocInfo.allocationSize == 0 && !internal) {
char str[1024];
sprintf(str, "Attempting to free memory associated with a Presentable Image, %p; this memory should not be explicitly freed\n", (void*)mem);
- layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
+ result = VK_FALSE;
} else {
// Clear any CB bindings for completed CBs
// TODO : Is there a better place to do this?
- list<XGL_CMD_BUFFER>::iterator it = pInfo->pCmdBufferBindings.begin();
- list<XGL_CMD_BUFFER>::iterator temp;
+ list<VK_CMD_BUFFER>::iterator it = pInfo->pCmdBufferBindings.begin();
+ list<VK_CMD_BUFFER>::iterator temp;
while (it != pInfo->pCmdBufferBindings.end()) {
- if (XGL_TRUE == checkCBCompleted(*it)) {
+ if (VK_TRUE == checkCBCompleted(*it)) {
temp = it;
++temp;
freeCBBindings(*it);
// If references remain, report the error and can search CB list to find references
char str[1024];
sprintf(str, "Freeing mem obj %p while it still has references", (void*)mem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_FREED_MEM_REF, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_FREED_MEM_REF, "MEM", str);
reportMemReferences(pInfo);
- result = XGL_FALSE;
+ result = VK_FALSE;
}
// Delete mem obj info
deleteMemObjInfo(mem);
// 1. Remove ObjectInfo from MemObjInfo list container of obj bindings & free it
// 2. Decrement refCount for MemObjInfo
// 3. Clear MemObjInfo ptr from ObjectInfo
-static bool32_t clearObjectBinding(XGL_OBJECT object)
+static bool32_t clearObjectBinding(VK_OBJECT object)
{
- bool32_t result = XGL_FALSE;
+ bool32_t result = VK_FALSE;
MT_OBJ_INFO* pObjInfo = getObjectInfo(object);
if (!pObjInfo) {
char str[1024];
sprintf(str, "Attempting to clear mem binding for object %p: devices, queues, command buffers, shaders and memory objects do not have external memory requirements and it is unnecessary to call bind/unbindObjectMemory on them.", object);
- layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INVALID_OBJECT, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INVALID_OBJECT, "MEM", str);
} else {
if (!pObjInfo->pMemObjInfo) {
char str[1024];
sprintf(str, "Attempting to clear mem binding on obj %p but it has no binding.", (void*)object);
- layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_MEM_OBJ_CLEAR_EMPTY_BINDINGS, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_MEM_OBJ_CLEAR_EMPTY_BINDINGS, "MEM", str);
} else {
- for (list<XGL_OBJECT>::iterator it = pObjInfo->pMemObjInfo->pObjBindings.begin(); it != pObjInfo->pMemObjInfo->pObjBindings.end(); ++it) {
+ for (list<VK_OBJECT>::iterator it = pObjInfo->pMemObjInfo->pObjBindings.begin(); it != pObjInfo->pMemObjInfo->pObjBindings.end(); ++it) {
pObjInfo->pMemObjInfo->refCount--;
// Erase the binding before clearing the pointer; the original order dereferenced NULL
it = pObjInfo->pMemObjInfo->pObjBindings.erase(it);
pObjInfo->pMemObjInfo = NULL;
- result = XGL_TRUE;
+ result = VK_TRUE;
break;
}
- if (result == XGL_FALSE) {
+ if (result == VK_FALSE) {
char str[1024];
sprintf(str, "While trying to clear mem binding for object %p, unable to find that object referenced by mem obj %p",
object, pObjInfo->pMemObjInfo->mem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
}
}
}
// IF a previous binding existed, clear it
// Add reference from objectInfo to memoryInfo
// Add reference off of objInfo
-// Return XGL_TRUE if addition is successful, XGL_FALSE otherwise
-static bool32_t updateObjectBinding(XGL_OBJECT object, XGL_GPU_MEMORY mem)
+// Return VK_TRUE if addition is successful, VK_FALSE otherwise
+static bool32_t updateObjectBinding(VK_OBJECT object, VK_GPU_MEMORY mem)
{
- bool32_t result = XGL_FALSE;
+ bool32_t result = VK_FALSE;
// Handle NULL case separately, just clear previous binding & decrement reference
- if (mem == XGL_NULL_HANDLE) {
+ if (mem == VK_NULL_HANDLE) {
clearObjectBinding(object);
- result = XGL_TRUE;
+ result = VK_TRUE;
} else {
char str[1024];
MT_OBJ_INFO* pObjInfo = getObjectInfo(object);
if (!pObjInfo) {
sprintf(str, "Attempting to update Binding of Obj(%p) that's not in the global list", (void*)object);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
- return XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
+ return VK_FALSE;
}
// non-null case so should have real mem obj
MT_MEM_OBJ_INFO* pInfo = getMemObjInfo(mem);
if (!pInfo) {
sprintf(str, "While trying to bind mem for obj %p, couldn't find info for mem obj %p", (void*)object, (void*)mem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
} else {
// Search for object in memory object's binding list
- bool32_t found = XGL_FALSE;
- for (list<XGL_OBJECT>::iterator it = pInfo->pObjBindings.begin(); it != pInfo->pObjBindings.end(); ++it) {
+ bool32_t found = VK_FALSE;
+ for (list<VK_OBJECT>::iterator it = pInfo->pObjBindings.begin(); it != pInfo->pObjBindings.end(); ++it) {
if ((*it) == object) {
- found = XGL_TRUE;
+ found = VK_TRUE;
break;
}
}
// If not present, add to list
- if (found == XGL_FALSE) {
+ if (found == VK_FALSE) {
pInfo->pObjBindings.push_front(object);
pInfo->refCount++;
}
if (pObjInfo->pMemObjInfo) {
clearObjectBinding(object); // Need to clear the previous object binding before setting new binding
sprintf(str, "Updating memory binding for object %p from mem obj %p to %p", object, pObjInfo->pMemObjInfo->mem, mem);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_NONE, "MEM", str);
}
// For image objects, make sure default memory state is correctly set
// TODO : What's the best/correct way to handle this?
- if (XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO == pObjInfo->sType) {
- if (pObjInfo->create_info.image_create_info.usage & (XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT)) {
+ if (VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO == pObjInfo->sType) {
+ if (pObjInfo->create_info.image_create_info.usage & (VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_DEPTH_STENCIL_BIT)) {
// TODO:: More memory state transition stuff.
}
}
pObjInfo->pMemObjInfo = pInfo;
}
}
- return XGL_TRUE;
+ return VK_TRUE;
}
// Print details of global Obj tracking list
MT_OBJ_INFO* pInfo = NULL;
char str[1024];
sprintf(str, "Details of Object list of size %lu elements", objectMap.size());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
- for (map<XGL_OBJECT, MT_OBJ_INFO*>::iterator ii=objectMap.begin(); ii!=objectMap.end(); ++ii) {
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ for (map<VK_OBJECT, MT_OBJ_INFO*>::iterator ii=objectMap.begin(); ii!=objectMap.end(); ++ii) {
pInfo = (*ii).second;
sprintf(str, " ObjInfo %p has object %p, pMemObjInfo %p", pInfo, pInfo->object, pInfo->pMemObjInfo);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pInfo->object, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pInfo->object, 0, MEMTRACK_NONE, "MEM", str);
}
}
// For given Object, get 'mem' obj that it's bound to or NULL if no binding
-static XGL_GPU_MEMORY getMemBindingFromObject(const XGL_OBJECT object)
+static VK_GPU_MEMORY getMemBindingFromObject(const VK_OBJECT object)
{
- XGL_GPU_MEMORY mem = NULL;
+ VK_GPU_MEMORY mem = NULL;
MT_OBJ_INFO* pObjInfo = getObjectInfo(object);
if (pObjInfo) {
if (pObjInfo->pMemObjInfo) {
else {
char str[1024];
sprintf(str, "Trying to get mem binding for object %p but object has no mem binding", (void*)object);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_MISSING_MEM_BINDINGS, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_MISSING_MEM_BINDINGS, "MEM", str);
printObjList();
}
}
else {
char str[1024];
sprintf(str, "Trying to get mem binding for object %p but no such object in global list", (void*)object);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INVALID_OBJECT, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INVALID_OBJECT, "MEM", str);
printObjList();
}
return mem;
// Just printing each msg individually for now; may want to package these into a single large print
char str[1024];
sprintf(str, "MEM INFO : Details of Memory Object list of size %lu elements", memObjMap.size());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
- for (map<XGL_GPU_MEMORY, MT_MEM_OBJ_INFO*>::iterator ii=memObjMap.begin(); ii!=memObjMap.end(); ++ii) {
+ for (map<VK_GPU_MEMORY, MT_MEM_OBJ_INFO*>::iterator ii=memObjMap.begin(); ii!=memObjMap.end(); ++ii) {
pInfo = (*ii).second;
sprintf(str, " ===MemObjInfo at %p===", (void*)pInfo);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
sprintf(str, " Mem object: %p", (void*)pInfo->mem);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
sprintf(str, " Ref Count: %u", pInfo->refCount);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
if (0 != pInfo->allocInfo.allocationSize) {
- string pAllocInfoMsg = xgl_print_xgl_memory_alloc_info(&pInfo->allocInfo, "{MEM}INFO : ");
+ string pAllocInfoMsg = vk_print_vk_memory_alloc_info(&pInfo->allocInfo, "{MEM}INFO : ");
sprintf(str, " Mem Alloc info:\n%s", pAllocInfoMsg.c_str());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
} else {
- sprintf(str, " Mem Alloc info is NULL (alloc done by xglWsiX11CreatePresentableImage())");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ sprintf(str, " Mem Alloc info is NULL (alloc done by vkWsiX11CreatePresentableImage())");
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
}
- sprintf(str, " XGL OBJECT Binding list of size %lu elements:", pInfo->pObjBindings.size());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
- for (list<XGL_OBJECT>::iterator it = pInfo->pObjBindings.begin(); it != pInfo->pObjBindings.end(); ++it) {
- sprintf(str, " XGL OBJECT %p", (*it));
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ sprintf(str, " VK OBJECT Binding list of size %lu elements:", pInfo->pObjBindings.size());
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ for (list<VK_OBJECT>::iterator it = pInfo->pObjBindings.begin(); it != pInfo->pObjBindings.end(); ++it) {
+ sprintf(str, " VK OBJECT %p", (*it));
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
}
- sprintf(str, " XGL Command Buffer (CB) binding list of size %lu elements", pInfo->pCmdBufferBindings.size());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
- for (list<XGL_CMD_BUFFER>::iterator it = pInfo->pCmdBufferBindings.begin(); it != pInfo->pCmdBufferBindings.end(); ++it) {
- sprintf(str, " XGL CB %p", (*it));
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ sprintf(str, " VK Command Buffer (CB) binding list of size %lu elements", pInfo->pCmdBufferBindings.size());
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ for (list<VK_CMD_BUFFER>::iterator it = pInfo->pCmdBufferBindings.begin(); it != pInfo->pCmdBufferBindings.end(); ++it) {
+ sprintf(str, " VK CB %p", (*it));
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
}
}
}
char str[1024] = {0};
MT_CB_INFO* pCBInfo = NULL;
sprintf(str, "Details of CB list of size %lu elements", cbMap.size());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
- for (map<XGL_CMD_BUFFER, MT_CB_INFO*>::iterator ii=cbMap.begin(); ii!=cbMap.end(); ++ii) {
+ for (map<VK_CMD_BUFFER, MT_CB_INFO*>::iterator ii=cbMap.begin(); ii!=cbMap.end(); ++ii) {
pCBInfo = (*ii).second;
sprintf(str, " CB Info (%p) has CB %p, fenceId %" PRIx64", and fence %p",
(void*)pCBInfo, (void*)pCBInfo->cmdBuffer, pCBInfo->fenceId,
(void*)getFenceFromId(pCBInfo->fenceId));
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
- for (list<XGL_GPU_MEMORY>::iterator it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
+ for (list<VK_GPU_MEMORY>::iterator it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
sprintf(str, " Mem obj %p", (*it));
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
}
}
}
getLayerOptionEnum("MemTrackerReportLevel", (uint32_t *) &g_reportingLevel);
g_actionIsDefault = getLayerOptionEnum("MemTrackerDebugAction", (uint32_t *) &g_debugAction);
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)
{
strOpt = getLayerOption("MemTrackerLogFilename");
if (strOpt)
// initialize Layer dispatch table
// TODO handle multiple GPUs
- xglGetProcAddrType fpNextGPA;
+ vkGetProcAddrType fpNextGPA;
fpNextGPA = pCurObj->pGPA;
assert(fpNextGPA);
- layer_initialize_dispatch_table(&nextTable, fpNextGPA, (XGL_PHYSICAL_GPU) pCurObj->nextObject);
+ layer_initialize_dispatch_table(&nextTable, fpNextGPA, (VK_PHYSICAL_GPU) pCurObj->nextObject);
- xglGetProcAddrType fpGetProcAddr = (xglGetProcAddrType)fpNextGPA((XGL_PHYSICAL_GPU) pCurObj->nextObject, (char *) "xglGetProcAddr");
+ vkGetProcAddrType fpGetProcAddr = (vkGetProcAddrType)fpNextGPA((VK_PHYSICAL_GPU) pCurObj->nextObject, (char *) "vkGetProcAddr");
nextTable.GetProcAddr = fpGetProcAddr;
if (!globalLockInitialized)
{
// TODO/TBD: Need to delete this mutex sometime. How??? One
- // suggestion is to call this during xglCreateInstance(), and then we
- // can clean it up during xglDestroyInstance(). However, that requires
+ // suggestion is to call this during vkCreateInstance(), and then we
+ // can clean it up during vkDestroyInstance(). However, that requires
// that the layer have per-instance locks. We need to come back and
// address this soon.
loader_platform_thread_create_mutex(&globalLock);
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&g_initOnce, initMemTracker);
- XGL_RESULT result = nextTable.CreateDevice((XGL_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
+ VK_RESULT result = nextTable.CreateDevice((VK_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
// Save off device in case we need it to create Fences
globalDevice = *pDevice;
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyDevice(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyDevice(VK_DEVICE device)
{
char str[1024];
- sprintf(str, "Printing List details prior to xglDestroyDevice()");
+ sprintf(str, "Printing List details prior to vkDestroyDevice()");
loader_platform_thread_lock_mutex(&globalLock);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, device, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, device, 0, MEMTRACK_NONE, "MEM", str);
printMemList();
printCBList();
printObjList();
- if (XGL_FALSE == deleteCBInfoList()) {
- sprintf(str, "Issue deleting global CB list in xglDestroyDevice()");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, device, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
+ if (VK_FALSE == deleteCBInfoList()) {
+ sprintf(str, "Issue deleting global CB list in vkDestroyDevice()");
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, device, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
}
// Report any memory leaks
MT_MEM_OBJ_INFO* pInfo = NULL;
- for (map<XGL_GPU_MEMORY, MT_MEM_OBJ_INFO*>::iterator ii=memObjMap.begin(); ii!=memObjMap.end(); ++ii) {
+ for (map<VK_GPU_MEMORY, MT_MEM_OBJ_INFO*>::iterator ii=memObjMap.begin(); ii!=memObjMap.end(); ++ii) {
pInfo = (*ii).second;
if (pInfo->allocInfo.allocationSize != 0) {
- sprintf(str, "Mem Object %p has not been freed. You should clean up this memory by calling xglFreeMemory(%p) prior to xglDestroyDevice().",
+ sprintf(str, "Mem Object %p has not been freed. You should clean up this memory by calling vkFreeMemory(%p) prior to vkDestroyDevice().",
pInfo->mem, pInfo->mem);
- layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, pInfo->mem, 0, MEMTRACK_MEMORY_LEAK, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, pInfo->mem, 0, MEMTRACK_MEMORY_LEAK, "MEM", str);
}
}
deleteQueueInfoList();
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.DestroyDevice(device);
+ VK_RESULT result = nextTable.DestroyDevice(device);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char* pExtName)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char* pExtName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_RESULT result;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_RESULT result;
/* This entrypoint is NOT going to init its own dispatch table since loader calls here early */
if (!strcmp(pExtName, "MemTracker"))
{
- result = XGL_SUCCESS;
+ result = VK_SUCCESS;
} else if (nextTable.GetExtensionSupport != NULL)
{
- result = nextTable.GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);
+ result = nextTable.GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);
} else
{
- result = XGL_ERROR_INVALID_EXTENSION;
+ result = VK_ERROR_INVALID_EXTENSION;
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount,
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount,
size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
{
if (gpu != NULL)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&g_initOnce, initMemTracker);
- XGL_RESULT result = nextTable.EnumerateLayers((XGL_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount,
+ VK_RESULT result = nextTable.EnumerateLayers((VK_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount,
maxStringSize, pOutLayerCount, pOutLayers, pReserved);
return result;
} else
{
if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
// This layer compatible with all GPUs
*pOutLayerCount = 1;
strncpy((char *) pOutLayers[0], "MemTracker", maxStringSize);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetDeviceQueue(XGL_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, XGL_QUEUE* pQueue)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetDeviceQueue(VK_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, VK_QUEUE* pQueue)
{
- XGL_RESULT result = nextTable.GetDeviceQueue(device, queueNodeIndex, queueIndex, pQueue);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.GetDeviceQueue(device, queueNodeIndex, queueIndex, pQueue);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
addQueueInfo(*pQueue);
loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueAddMemReference(XGL_QUEUE queue, XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueAddMemReference(VK_QUEUE queue, VK_GPU_MEMORY mem)
{
- XGL_RESULT result = nextTable.QueueAddMemReference(queue, mem);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.QueueAddMemReference(queue, mem);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
MT_QUEUE_INFO *pQueueInfo = queueMap[queue];
if (pQueueInfo == NULL) {
char str[1024];
sprintf(str, "Unknown Queue %p", queue);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_QUEUE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_QUEUE, "MEM", str);
}
else {
- if (checkMemRef(queue, mem) == XGL_TRUE) {
+ if (checkMemRef(queue, mem) == VK_TRUE) {
// Already in list, just warn
char str[1024];
sprintf(str, "Request to add a memory reference (%p) to Queue %p -- ref is already present in the queue's reference list", mem, queue);
- layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_REF, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_REF, "MEM", str);
}
else {
// Add to queue's memory reference list
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueRemoveMemReference(XGL_QUEUE queue, XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueRemoveMemReference(VK_QUEUE queue, VK_GPU_MEMORY mem)
{
// TODO : Decrement ref count for this memory reference on this queue. Remove if ref count is zero.
- XGL_RESULT result = nextTable.QueueRemoveMemReference(queue, mem);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.QueueRemoveMemReference(queue, mem);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
MT_QUEUE_INFO *pQueueInfo = queueMap[queue];
if (pQueueInfo == NULL) {
char str[1024];
sprintf(str, "Unknown Queue %p", queue);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_QUEUE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_QUEUE, "MEM", str);
}
else {
- for (list<XGL_GPU_MEMORY>::iterator it = pQueueInfo->pMemRefList.begin(); it != pQueueInfo->pMemRefList.end(); ++it) {
+ for (list<VK_GPU_MEMORY>::iterator it = pQueueInfo->pMemRefList.begin(); it != pQueueInfo->pMemRefList.end(); ++it) {
if ((*it) == mem) {
it = pQueueInfo->pMemRefList.erase(it);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueSubmit(
- XGL_QUEUE queue,
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueSubmit(
+ VK_QUEUE queue,
uint32_t cmdBufferCount,
- const XGL_CMD_BUFFER *pCmdBuffers,
- XGL_FENCE fence)
+ const VK_CMD_BUFFER *pCmdBuffers,
+ VK_FENCE fence)
{
loader_platform_thread_lock_mutex(&globalLock);
// TODO : Need to track fence and clear mem references when fence clears
pCBInfo->fenceId = fenceId;
}
- if (XGL_FALSE == validateQueueMemRefs(queue, cmdBufferCount, pCmdBuffers)) {
+ if (VK_FALSE == validateQueueMemRefs(queue, cmdBufferCount, pCmdBuffers)) {
char str[1024];
sprintf(str, "Unable to verify memory references for Queue %p", queue);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_MEM_REF, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_MEM_REF, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, getFenceFromId(fenceId));
+ VK_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, getFenceFromId(fenceId));
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglAllocMemory(XGL_DEVICE device, const XGL_MEMORY_ALLOC_INFO* pAllocInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkAllocMemory(VK_DEVICE device, const VK_MEMORY_ALLOC_INFO* pAllocInfo, VK_GPU_MEMORY* pMem)
{
- XGL_RESULT result = nextTable.AllocMemory(device, pAllocInfo, pMem);
+ VK_RESULT result = nextTable.AllocMemory(device, pAllocInfo, pMem);
// TODO : Track allocations and overall size here
loader_platform_thread_lock_mutex(&globalLock);
addMemObjInfo(*pMem, pAllocInfo);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglFreeMemory(XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkFreeMemory(VK_GPU_MEMORY mem)
{
- /* From spec : A memory object is freed by calling xglFreeMemory() when it is no longer needed. Before
+ /* From spec : A memory object is freed by calling vkFreeMemory() when it is no longer needed. Before
* freeing a memory object, an application must ensure the memory object is unbound from
* all API objects referencing it and that it is not referenced by any queued command buffers
*/
loader_platform_thread_lock_mutex(&globalLock);
- if (XGL_FALSE == freeMemObjInfo(mem, false)) {
+ if (VK_FALSE == freeMemObjInfo(mem, false)) {
char str[1024];
sprintf(str, "Issue while freeing mem obj %p", (void*)mem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_FREE_MEM_ERROR, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_FREE_MEM_ERROR, "MEM", str);
}
printMemList();
printObjList();
printCBList();
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.FreeMemory(mem);
+ VK_RESULT result = nextTable.FreeMemory(mem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglSetMemoryPriority(XGL_GPU_MEMORY mem, XGL_MEMORY_PRIORITY priority)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkSetMemoryPriority(VK_GPU_MEMORY mem, VK_MEMORY_PRIORITY priority)
{
// TODO : Update tracking for this alloc
    // Make sure memory is not pinned; pinned memory can't have its priority set
- XGL_RESULT result = nextTable.SetMemoryPriority(mem, priority);
+ VK_RESULT result = nextTable.SetMemoryPriority(mem, priority);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglMapMemory(XGL_GPU_MEMORY mem, XGL_FLAGS flags, void** ppData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkMapMemory(VK_GPU_MEMORY mem, VK_FLAGS flags, void** ppData)
{
// TODO : Track when memory is mapped
loader_platform_thread_lock_mutex(&globalLock);
MT_MEM_OBJ_INFO *pMemObj = getMemObjInfo(mem);
- if ((pMemObj->allocInfo.memProps & XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT) == 0) {
+ if ((pMemObj->allocInfo.memProps & VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT) == 0) {
char str[1024];
- sprintf(str, "Mapping Memory (%p) without XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT set", (void*)mem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_STATE, "MEM", str);
+ sprintf(str, "Mapping Memory (%p) without VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT set", (void*)mem);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_STATE, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.MapMemory(mem, flags, ppData);
+ VK_RESULT result = nextTable.MapMemory(mem, flags, ppData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglUnmapMemory(XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkUnmapMemory(VK_GPU_MEMORY mem)
{
    // TODO : Track as memory gets unmapped; do we want to check what changed after the map?
// Make sure that memory was ever mapped to begin with
- XGL_RESULT result = nextTable.UnmapMemory(mem);
+ VK_RESULT result = nextTable.UnmapMemory(mem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglPinSystemMemory(XGL_DEVICE device, const void* pSysMem, size_t memSize, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkPinSystemMemory(VK_DEVICE device, const void* pSysMem, size_t memSize, VK_GPU_MEMORY* pMem)
{
// TODO : Track this
// Verify that memory is actually pinnable
- XGL_RESULT result = nextTable.PinSystemMemory(device, pSysMem, memSize, pMem);
+ VK_RESULT result = nextTable.PinSystemMemory(device, pSysMem, memSize, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenSharedMemory(XGL_DEVICE device, const XGL_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenSharedMemory(VK_DEVICE device, const VK_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem)
{
// TODO : Track this
- XGL_RESULT result = nextTable.OpenSharedMemory(device, pOpenInfo, pMem);
+ VK_RESULT result = nextTable.OpenSharedMemory(device, pOpenInfo, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenPeerMemory(XGL_DEVICE device, const XGL_PEER_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenPeerMemory(VK_DEVICE device, const VK_PEER_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem)
{
// TODO : Track this
- XGL_RESULT result = nextTable.OpenPeerMemory(device, pOpenInfo, pMem);
+ VK_RESULT result = nextTable.OpenPeerMemory(device, pOpenInfo, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenPeerImage(XGL_DEVICE device, const XGL_PEER_IMAGE_OPEN_INFO* pOpenInfo, XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenPeerImage(VK_DEVICE device, const VK_PEER_IMAGE_OPEN_INFO* pOpenInfo, VK_IMAGE* pImage, VK_GPU_MEMORY* pMem)
{
// TODO : Track this
- XGL_RESULT result = nextTable.OpenPeerImage(device, pOpenInfo, pImage, pMem);
+ VK_RESULT result = nextTable.OpenPeerImage(device, pOpenInfo, pImage, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyObject(XGL_OBJECT object)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyObject(VK_OBJECT object)
{
loader_platform_thread_lock_mutex(&globalLock);
// First check if this is a CmdBuffer
- if (NULL != getCBInfo((XGL_CMD_BUFFER)object)) {
- deleteCBInfo((XGL_CMD_BUFFER)object);
+ if (NULL != getCBInfo((VK_CMD_BUFFER)object)) {
+ deleteCBInfo((VK_CMD_BUFFER)object);
}
if (objectMap.find(object) != objectMap.end()) {
if (pDelInfo->pMemObjInfo) {
// Wsi allocated Memory is tied to image object so clear the binding and free that memory automatically
if (0 == pDelInfo->pMemObjInfo->allocInfo.allocationSize) { // Wsi allocated memory has NULL allocInfo w/ 0 size
- XGL_GPU_MEMORY memToFree = pDelInfo->pMemObjInfo->mem;
+ VK_GPU_MEMORY memToFree = pDelInfo->pMemObjInfo->mem;
clearObjectBinding(object);
freeMemObjInfo(memToFree, true);
}
else {
char str[1024];
- sprintf(str, "Destroying obj %p that is still bound to memory object %p\nYou should first clear binding by calling xglBindObjectMemory(%p, 0, XGL_NULL_HANDLE, 0)", object, (void*)pDelInfo->pMemObjInfo->mem, object);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_DESTROY_OBJECT_ERROR, "MEM", str);
+ sprintf(str, "Destroying obj %p that is still bound to memory object %p\nYou should first clear binding by calling vkBindObjectMemory(%p, 0, VK_NULL_HANDLE, 0)", object, (void*)pDelInfo->pMemObjInfo->mem, object);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_DESTROY_OBJECT_ERROR, "MEM", str);
// From the spec : If an object has previous memory binding, it is required to unbind memory from an API object before it is destroyed.
clearObjectBinding(object);
}
}
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.DestroyObject(object);
+ VK_RESULT result = nextTable.DestroyObject(object);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetObjectInfo(XGL_BASE_OBJECT object, XGL_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetObjectInfo(VK_BASE_OBJECT object, VK_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
// TODO : What to track here?
// Could potentially save returned mem requirements and validate values passed into BindObjectMemory for this object
// From spec : The only objects that are guaranteed to have no external memory requirements are devices, queues, command buffers, shaders and memory objects.
- XGL_RESULT result = nextTable.GetObjectInfo(object, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetObjectInfo(object, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBindObjectMemory(XGL_OBJECT object, uint32_t allocationIdx, XGL_GPU_MEMORY mem, XGL_GPU_SIZE offset)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBindObjectMemory(VK_OBJECT object, uint32_t allocationIdx, VK_GPU_MEMORY mem, VK_GPU_SIZE offset)
{
- XGL_RESULT result = nextTable.BindObjectMemory(object, allocationIdx, mem, offset);
+ VK_RESULT result = nextTable.BindObjectMemory(object, allocationIdx, mem, offset);
loader_platform_thread_lock_mutex(&globalLock);
// Track objects tied to memory
- if (XGL_FALSE == updateObjectBinding(object, mem)) {
+ if (VK_FALSE == updateObjectBinding(object, mem)) {
char str[1024];
sprintf(str, "Unable to set object %p binding to mem obj %p", (void*)object, (void*)mem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
printObjList();
printMemList();
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateFence(XGL_DEVICE device, const XGL_FENCE_CREATE_INFO* pCreateInfo, XGL_FENCE* pFence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateFence(VK_DEVICE device, const VK_FENCE_CREATE_INFO* pCreateInfo, VK_FENCE* pFence)
{
- XGL_RESULT result = nextTable.CreateFence(device, pCreateInfo, pFence);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateFence(device, pCreateInfo, pFence);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pFence, pCreateInfo->sType, pCreateInfo, sizeof(XGL_FENCE_CREATE_INFO), "fence");
+ addObjectInfo(*pFence, pCreateInfo->sType, pCreateInfo, sizeof(VK_FENCE_CREATE_INFO), "fence");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetFences(XGL_DEVICE device, uint32_t fenceCount, XGL_FENCE* pFences)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetFences(VK_DEVICE device, uint32_t fenceCount, VK_FENCE* pFences)
{
- XGL_RESULT result = nextTable.ResetFences(device, fenceCount, pFences);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.ResetFences(device, fenceCount, pFences);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
// Reset fence state in fenceCreateInfo structure
for (uint32_t i = 0; i < fenceCount; i++) {
MT_OBJ_INFO* pObjectInfo = getObjectInfo(pFences[i]);
if (pObjectInfo != NULL) {
pObjectInfo->create_info.fence_create_info.flags =
- static_cast<XGL_FENCE_CREATE_FLAGS>(pObjectInfo->create_info.fence_create_info.flags & ~XGL_FENCE_CREATE_SIGNALED_BIT);
+ static_cast<VK_FENCE_CREATE_FLAGS>(pObjectInfo->create_info.fence_create_info.flags & ~VK_FENCE_CREATE_SIGNALED_BIT);
}
}
loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetFenceStatus(XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetFenceStatus(VK_FENCE fence)
{
- XGL_RESULT result = nextTable.GetFenceStatus(fence);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.GetFenceStatus(fence);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
updateFenceTracking(fence);
loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWaitForFences(XGL_DEVICE device, uint32_t fenceCount, const XGL_FENCE* pFences, bool32_t waitAll, uint64_t timeout)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWaitForFences(VK_DEVICE device, uint32_t fenceCount, const VK_FENCE* pFences, bool32_t waitAll, uint64_t timeout)
{
// Verify fence status of submitted fences
for(uint32_t i = 0; i < fenceCount; i++) {
MT_OBJ_INFO* pObjectInfo = getObjectInfo(pFences[i]);
if (pObjectInfo != NULL) {
- if (pObjectInfo->create_info.fence_create_info.flags == XGL_FENCE_CREATE_SIGNALED_BIT) {
+ if (pObjectInfo->create_info.fence_create_info.flags == VK_FENCE_CREATE_SIGNALED_BIT) {
char str[1024];
- sprintf(str, "xglWaitForFences specified signaled-state Fence %p. Fences must be reset before being submitted", pFences[i]);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pFences[i], 0, MEMTRACK_INVALID_FENCE_STATE, "MEM", str);
+ sprintf(str, "vkWaitForFences specified signaled-state Fence %p. Fences must be reset before being submitted", pFences[i]);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pFences[i], 0, MEMTRACK_INVALID_FENCE_STATE, "MEM", str);
}
}
}
- XGL_RESULT result = nextTable.WaitForFences(device, fenceCount, pFences, waitAll, timeout);
+ VK_RESULT result = nextTable.WaitForFences(device, fenceCount, pFences, waitAll, timeout);
loader_platform_thread_lock_mutex(&globalLock);
- if (XGL_SUCCESS == result) {
+ if (VK_SUCCESS == result) {
if (waitAll || fenceCount == 1) { // Clear all the fences
for(uint32_t i = 0; i < fenceCount; i++) {
updateFenceTracking(pFences[i]);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueWaitIdle(XGL_QUEUE queue)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueWaitIdle(VK_QUEUE queue)
{
- XGL_RESULT result = nextTable.QueueWaitIdle(queue);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.QueueWaitIdle(queue);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
retireQueueFences(queue);
loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDeviceWaitIdle(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDeviceWaitIdle(VK_DEVICE device)
{
- XGL_RESULT result = nextTable.DeviceWaitIdle(device);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.DeviceWaitIdle(device);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
retireDeviceFences(device);
loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateEvent(XGL_DEVICE device, const XGL_EVENT_CREATE_INFO* pCreateInfo, XGL_EVENT* pEvent)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateEvent(VK_DEVICE device, const VK_EVENT_CREATE_INFO* pCreateInfo, VK_EVENT* pEvent)
{
- XGL_RESULT result = nextTable.CreateEvent(device, pCreateInfo, pEvent);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateEvent(device, pCreateInfo, pEvent);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pEvent, pCreateInfo->sType, pCreateInfo, sizeof(XGL_EVENT_CREATE_INFO), "event");
+ addObjectInfo(*pEvent, pCreateInfo->sType, pCreateInfo, sizeof(VK_EVENT_CREATE_INFO), "event");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateQueryPool(XGL_DEVICE device, const XGL_QUERY_POOL_CREATE_INFO* pCreateInfo, XGL_QUERY_POOL* pQueryPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateQueryPool(VK_DEVICE device, const VK_QUERY_POOL_CREATE_INFO* pCreateInfo, VK_QUERY_POOL* pQueryPool)
{
- XGL_RESULT result = nextTable.CreateQueryPool(device, pCreateInfo, pQueryPool);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateQueryPool(device, pCreateInfo, pQueryPool);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pQueryPool, pCreateInfo->sType, pCreateInfo, sizeof(XGL_QUERY_POOL_CREATE_INFO), "query_pool");
+ addObjectInfo(*pQueryPool, pCreateInfo->sType, pCreateInfo, sizeof(VK_QUERY_POOL_CREATE_INFO), "query_pool");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateBuffer(XGL_DEVICE device, const XGL_BUFFER_CREATE_INFO* pCreateInfo, XGL_BUFFER* pBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateBuffer(VK_DEVICE device, const VK_BUFFER_CREATE_INFO* pCreateInfo, VK_BUFFER* pBuffer)
{
- XGL_RESULT result = nextTable.CreateBuffer(device, pCreateInfo, pBuffer);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateBuffer(device, pCreateInfo, pBuffer);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pBuffer, pCreateInfo->sType, pCreateInfo, sizeof(XGL_BUFFER_CREATE_INFO), "buffer");
+ addObjectInfo(*pBuffer, pCreateInfo->sType, pCreateInfo, sizeof(VK_BUFFER_CREATE_INFO), "buffer");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateBufferView(XGL_DEVICE device, const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo, XGL_BUFFER_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateBufferView(VK_DEVICE device, const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo, VK_BUFFER_VIEW* pView)
{
- XGL_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(XGL_BUFFER_VIEW_CREATE_INFO), "buffer_view");
+ addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(VK_BUFFER_VIEW_CREATE_INFO), "buffer_view");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateImage(XGL_DEVICE device, const XGL_IMAGE_CREATE_INFO* pCreateInfo, XGL_IMAGE* pImage)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateImage(VK_DEVICE device, const VK_IMAGE_CREATE_INFO* pCreateInfo, VK_IMAGE* pImage)
{
- XGL_RESULT result = nextTable.CreateImage(device, pCreateInfo, pImage);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateImage(device, pCreateInfo, pImage);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pImage, pCreateInfo->sType, pCreateInfo, sizeof(XGL_IMAGE_CREATE_INFO), "image");
+ addObjectInfo(*pImage, pCreateInfo->sType, pCreateInfo, sizeof(VK_IMAGE_CREATE_INFO), "image");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateImageView(XGL_DEVICE device, const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo, XGL_IMAGE_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateImageView(VK_DEVICE device, const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo, VK_IMAGE_VIEW* pView)
{
- XGL_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(XGL_IMAGE_VIEW_CREATE_INFO), "image_view");
+ addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(VK_IMAGE_VIEW_CREATE_INFO), "image_view");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateColorAttachmentView(XGL_DEVICE device, const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo,
- XGL_COLOR_ATTACHMENT_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateColorAttachmentView(VK_DEVICE device, const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo,
+ VK_COLOR_ATTACHMENT_VIEW* pView)
{
- XGL_RESULT result = nextTable.CreateColorAttachmentView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateColorAttachmentView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO), "color_attachment_view");
+ addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO), "color_attachment_view");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDepthStencilView(XGL_DEVICE device, const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, XGL_DEPTH_STENCIL_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDepthStencilView(VK_DEVICE device, const VK_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, VK_DEPTH_STENCIL_VIEW* pView)
{
- XGL_RESULT result = nextTable.CreateDepthStencilView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateDepthStencilView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(XGL_DEPTH_STENCIL_VIEW_CREATE_INFO), "ds_view");
+ addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(VK_DEPTH_STENCIL_VIEW_CREATE_INFO), "ds_view");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateShader(XGL_DEVICE device, const XGL_SHADER_CREATE_INFO* pCreateInfo, XGL_SHADER* pShader)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateShader(VK_DEVICE device, const VK_SHADER_CREATE_INFO* pCreateInfo, VK_SHADER* pShader)
{
- XGL_RESULT result = nextTable.CreateShader(device, pCreateInfo, pShader);
+ VK_RESULT result = nextTable.CreateShader(device, pCreateInfo, pShader);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipeline(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipeline(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
- XGL_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pPipeline, pCreateInfo->sType, pCreateInfo, sizeof(XGL_GRAPHICS_PIPELINE_CREATE_INFO), "graphics_pipeline");
+ addObjectInfo(*pPipeline, pCreateInfo->sType, pCreateInfo, sizeof(VK_GRAPHICS_PIPELINE_CREATE_INFO), "graphics_pipeline");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipelineDerivative(
- XGL_DEVICE device,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE basePipeline,
- XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipelineDerivative(
+ VK_DEVICE device,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE basePipeline,
+ VK_PIPELINE* pPipeline)
{
- XGL_RESULT result = nextTable.CreateGraphicsPipelineDerivative(device, pCreateInfo, basePipeline, pPipeline);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateGraphicsPipelineDerivative(device, pCreateInfo, basePipeline, pPipeline);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pPipeline, pCreateInfo->sType, pCreateInfo, sizeof(XGL_GRAPHICS_PIPELINE_CREATE_INFO), "graphics_pipeline");
+ addObjectInfo(*pPipeline, pCreateInfo->sType, pCreateInfo, sizeof(VK_GRAPHICS_PIPELINE_CREATE_INFO), "graphics_pipeline");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateComputePipeline(XGL_DEVICE device, const XGL_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateComputePipeline(VK_DEVICE device, const VK_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
- XGL_RESULT result = nextTable.CreateComputePipeline(device, pCreateInfo, pPipeline);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateComputePipeline(device, pCreateInfo, pPipeline);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pPipeline, pCreateInfo->sType, pCreateInfo, sizeof(XGL_COMPUTE_PIPELINE_CREATE_INFO), "compute_pipeline");
+ addObjectInfo(*pPipeline, pCreateInfo->sType, pCreateInfo, sizeof(VK_COMPUTE_PIPELINE_CREATE_INFO), "compute_pipeline");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateSampler(XGL_DEVICE device, const XGL_SAMPLER_CREATE_INFO* pCreateInfo, XGL_SAMPLER* pSampler)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateSampler(VK_DEVICE device, const VK_SAMPLER_CREATE_INFO* pCreateInfo, VK_SAMPLER* pSampler)
{
- XGL_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pSampler, pCreateInfo->sType, pCreateInfo, sizeof(XGL_SAMPLER_CREATE_INFO), "sampler");
+ addObjectInfo(*pSampler, pCreateInfo->sType, pCreateInfo, sizeof(VK_SAMPLER_CREATE_INFO), "sampler");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicViewportState(XGL_DEVICE device, const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_VP_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicViewportState(VK_DEVICE device, const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_VP_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(XGL_DYNAMIC_VP_STATE_CREATE_INFO), "viewport_state");
+ addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(VK_DYNAMIC_VP_STATE_CREATE_INFO), "viewport_state");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicRasterState(XGL_DEVICE device, const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_RS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicRasterState(VK_DEVICE device, const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_RS_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(XGL_DYNAMIC_RS_STATE_CREATE_INFO), "raster_state");
+ addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(VK_DYNAMIC_RS_STATE_CREATE_INFO), "raster_state");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicColorBlendState(XGL_DEVICE device, const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_CB_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicColorBlendState(VK_DEVICE device, const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_CB_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(XGL_DYNAMIC_CB_STATE_CREATE_INFO), "cb_state");
+ addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(VK_DYNAMIC_CB_STATE_CREATE_INFO), "cb_state");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicDepthStencilState(XGL_DEVICE device, const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_DS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicDepthStencilState(VK_DEVICE device, const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_DS_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(XGL_DYNAMIC_DS_STATE_CREATE_INFO), "ds_state");
+ addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(VK_DYNAMIC_DS_STATE_CREATE_INFO), "ds_state");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateCommandBuffer(XGL_DEVICE device, const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo, XGL_CMD_BUFFER* pCmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateCommandBuffer(VK_DEVICE device, const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo, VK_CMD_BUFFER* pCmdBuffer)
{
- XGL_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
+ VK_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
// At time of cmd buffer creation, create global cmd buffer info for the returned cmd buffer
loader_platform_thread_lock_mutex(&globalLock);
if (*pCmdBuffer)
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBeginCommandBuffer(XGL_CMD_BUFFER cmdBuffer, const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBeginCommandBuffer(VK_CMD_BUFFER cmdBuffer, const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
{
// This implicitly resets the Cmd Buffer so make sure any fence is done and then clear memory references
MT_CB_INFO* pCBInfo = getCBInfo(cmdBuffer);
if (pCBInfo && (!fenceRetired(pCBInfo->fenceId))) {
bool32_t cbDone = checkCBCompleted(cmdBuffer);
- if (XGL_FALSE == cbDone) {
+ if (VK_FALSE == cbDone) {
char str[1024];
- sprintf(str, "Calling xglBeginCommandBuffer() on active CB %p before it has completed. You must check CB flag before this call.", cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM", str);
+ sprintf(str, "Calling vkBeginCommandBuffer() on active CB %p before it has completed. You must check CB flag before this call.", cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM", str);
}
}
- XGL_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
+ VK_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
loader_platform_thread_lock_mutex(&globalLock);
freeCBBindings(cmdBuffer);
loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEndCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEndCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
// TODO : Anything to do here?
- XGL_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
+ VK_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
// Verify that CB is complete (not in-flight)
MT_CB_INFO* pCBInfo = getCBInfo(cmdBuffer);
if (pCBInfo && (!fenceRetired(pCBInfo->fenceId))) {
bool32_t cbDone = checkCBCompleted(cmdBuffer);
- if (XGL_FALSE == cbDone) {
+ if (VK_FALSE == cbDone) {
char str[1024];
- sprintf(str, "Resetting CB %p before it has completed. You must check CB flag before calling xglResetCommandBuffer().", cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM", str);
+ sprintf(str, "Resetting CB %p before it has completed. You must check CB flag before calling vkResetCommandBuffer().", cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM", str);
}
}
    // Clear memory references at this point.
loader_platform_thread_lock_mutex(&globalLock);
freeCBBindings(cmdBuffer);
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
+ VK_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
return result;
}
-// TODO : For any xglCmdBind* calls that include an object which has mem bound to it,
+// TODO : For any vkCmdBind* calls that include an object which has mem bound to it,
// need to account for that mem now having binding to given cmdBuffer
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindPipeline(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_PIPELINE pipeline)
+VK_LAYER_EXPORT void VKAPI vkCmdBindPipeline(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_PIPELINE pipeline)
{
#if 0
// TODO : If memory bound to pipeline, then need to tie that mem to cmdBuffer
} else {
char str[1024];
        sprintf(str, "Attempt to bind Pipeline %p to non-existent command buffer %p!", (void*)pipeline, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_INVALID_CB, (char *) "DS", (char *) str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_INVALID_CB, (char *) "DS", (char *) str);
}
}
else {
char str[1024];
sprintf(str, "Attempt to bind Pipeline %p that doesn't exist!", (void*)pipeline);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pipeline, 0, MEMTRACK_INVALID_OBJECT, (char *) "DS", (char *) str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pipeline, 0, MEMTRACK_INVALID_OBJECT, (char *) "DS", (char *) str);
}
#endif
nextTable.CmdBindPipeline(cmdBuffer, pipelineBindPoint, pipeline);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDynamicStateObject(XGL_CMD_BUFFER cmdBuffer, XGL_STATE_BIND_POINT stateBindPoint, XGL_DYNAMIC_STATE_OBJECT state)
+VK_LAYER_EXPORT void VKAPI vkCmdBindDynamicStateObject(VK_CMD_BUFFER cmdBuffer, VK_STATE_BIND_POINT stateBindPoint, VK_DYNAMIC_STATE_OBJECT state)
{
MT_OBJ_INFO *pObjInfo;
loader_platform_thread_lock_mutex(&globalLock);
if (!pCmdBuf) {
char str[1024];
sprintf(str, "Unable to find command buffer object %p, was it ever created?", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_INVALID_CB, "DD", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_INVALID_CB, "DD", str);
}
pObjInfo = getObjectInfo(state);
if (!pObjInfo) {
char str[1024];
sprintf(str, "Unable to find dynamic state object %p, was it ever created?", (void*)state);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, state, 0, MEMTRACK_INVALID_OBJECT, "DD", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, state, 0, MEMTRACK_INVALID_OBJECT, "DD", str);
}
pCmdBuf->pDynamicState[stateBindPoint] = pObjInfo;
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdBindDynamicStateObject(cmdBuffer, stateBindPoint, state);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDescriptorSets(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain,
+VK_LAYER_EXPORT void VKAPI vkCmdBindDescriptorSets(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain,
uint32_t layoutChainSlot,
uint32_t count,
- const XGL_DESCRIPTOR_SET* pDescriptorSets,
+ const VK_DESCRIPTOR_SET* pDescriptorSets,
const uint32_t* pUserData)
{
// TODO : Somewhere need to verify that all textures referenced by shaders in DS are in some type of *SHADER_READ* state
nextTable.CmdBindDescriptorSets(cmdBuffer, pipelineBindPoint, layoutChain, layoutChainSlot, count, pDescriptorSets, pUserData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindVertexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t binding)
+VK_LAYER_EXPORT void VKAPI vkCmdBindVertexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t binding)
{
nextTable.CmdBindVertexBuffer(cmdBuffer, buffer, offset, binding);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindIndexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, XGL_INDEX_TYPE indexType)
+VK_LAYER_EXPORT void VKAPI vkCmdBindIndexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, VK_INDEX_TYPE indexType)
{
nextTable.CmdBindIndexBuffer(cmdBuffer, buffer, offset, indexType);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(buffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(buffer);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdDrawIndirect() call unable to update binding of buffer %p to cmdBuffer %p", buffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdDrawIndirect() call unable to update binding of buffer %p to cmdBuffer %p", buffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdDrawIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndexedIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndexedIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(buffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(buffer);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdDrawIndexedIndirect() call unable to update binding of buffer %p to cmdBuffer %p", buffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdDrawIndexedIndirect() call unable to update binding of buffer %p to cmdBuffer %p", buffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdDrawIndexedIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDispatchIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset)
+VK_LAYER_EXPORT void VKAPI vkCmdDispatchIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(buffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(buffer);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdDispatchIndirect() call unable to update binding of buffer %p to cmdBuffer %p", buffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdDispatchIndirect() call unable to update binding of buffer %p to cmdBuffer %p", buffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdDispatchIndirect(cmdBuffer, buffer, offset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_BUFFER destBuffer,
- uint32_t regionCount, const XGL_BUFFER_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_BUFFER destBuffer,
+ uint32_t regionCount, const VK_BUFFER_COPY* pRegions)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(srcBuffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(srcBuffer);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCopyBuffer() call unable to update binding of srcBuffer %p to cmdBuffer %p", srcBuffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdCopyBuffer() call unable to update binding of srcBuffer %p to cmdBuffer %p", srcBuffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
mem = getMemBindingFromObject(destBuffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCopyBuffer() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdCopyBuffer() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdCopyBuffer(cmdBuffer, srcBuffer, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImage(XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t regionCount, const XGL_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t regionCount, const VK_IMAGE_COPY* pRegions)
{
// TODO : Each image will have mem mapping so track them
nextTable.CmdCopyImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBlitImage(XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t regionCount, const XGL_IMAGE_BLIT* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdBlitImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t regionCount, const VK_IMAGE_BLIT* pRegions)
{
// TODO : Each image will have mem mapping so track them
nextTable.CmdBlitImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBufferToImage(XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER srcBuffer,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBufferToImage(VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER srcBuffer,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
// TODO : Track this
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(destImage);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(destImage);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCopyMemoryToImage() call unable to update binding of destImage buffer %p to cmdBuffer %p", destImage, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+        sprintf(str, "In vkCmdCopyBufferToImage() call unable to update binding of destImage buffer %p to cmdBuffer %p", destImage, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
mem = getMemBindingFromObject(srcBuffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCopyMemoryToImage() call unable to update binding of srcBuffer %p to cmdBuffer %p", srcBuffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+        sprintf(str, "In vkCmdCopyBufferToImage() call unable to update binding of srcBuffer %p to cmdBuffer %p", srcBuffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdCopyBufferToImage(cmdBuffer, srcBuffer, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImageToBuffer(XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_BUFFER destBuffer,
- uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImageToBuffer(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_BUFFER destBuffer,
+ uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
// TODO : Track this
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(srcImage);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(srcImage);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCopyImageToMemory() call unable to update binding of srcImage buffer %p to cmdBuffer %p", srcImage, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+        sprintf(str, "In vkCmdCopyImageToBuffer() call unable to update binding of srcImage buffer %p to cmdBuffer %p", srcImage, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
mem = getMemBindingFromObject(destBuffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCopyImageToMemory() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+        sprintf(str, "In vkCmdCopyImageToBuffer() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdCopyImageToBuffer(cmdBuffer, srcImage, srcImageLayout, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCloneImageData(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout)
+VK_LAYER_EXPORT void VKAPI vkCmdCloneImageData(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout)
{
// TODO : Each image will have mem mapping so track them
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(srcImage);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(srcImage);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCloneImageData() call unable to update binding of srcImage buffer %p to cmdBuffer %p", srcImage, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdCloneImageData() call unable to update binding of srcImage buffer %p to cmdBuffer %p", srcImage, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
mem = getMemBindingFromObject(destImage);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCloneImageData() call unable to update binding of destImage buffer %p to cmdBuffer %p", destImage, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdCloneImageData() call unable to update binding of destImage buffer %p to cmdBuffer %p", destImage, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdCloneImageData(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdUpdateBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE dataSize, const uint32_t* pData)
+VK_LAYER_EXPORT void VKAPI vkCmdUpdateBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE dataSize, const uint32_t* pData)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(destBuffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(destBuffer);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdUpdateMemory() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+        sprintf(str, "In vkCmdUpdateBuffer() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdUpdateBuffer(cmdBuffer, destBuffer, destOffset, dataSize, pData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdFillBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE fillSize, uint32_t data)
+VK_LAYER_EXPORT void VKAPI vkCmdFillBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE fillSize, uint32_t data)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(destBuffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(destBuffer);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdFillMemory() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+        sprintf(str, "In vkCmdFillBuffer() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdFillBuffer(cmdBuffer, destBuffer, destOffset, fillSize, data);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearColorImage(XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout,
- XGL_CLEAR_COLOR color,
- uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+VK_LAYER_EXPORT void VKAPI vkCmdClearColorImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout,
+ VK_CLEAR_COLOR color,
+ uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
- // TODO : Verify memory is in XGL_IMAGE_STATE_CLEAR state
+ // TODO : Verify memory is in VK_IMAGE_STATE_CLEAR state
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(image);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(image);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdClearColorImage() call unable to update binding of image buffer %p to cmdBuffer %p", image, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdClearColorImage() call unable to update binding of image buffer %p to cmdBuffer %p", image, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdClearColorImage(cmdBuffer, image, imageLayout, color, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearDepthStencil(XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout,
+VK_LAYER_EXPORT void VKAPI vkCmdClearDepthStencil(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout,
float depth, uint32_t stencil,
- uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+ uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
- // TODO : Verify memory is in XGL_IMAGE_STATE_CLEAR state
+ // TODO : Verify memory is in VK_IMAGE_STATE_CLEAR state
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(image);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(image);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdClearDepthStencil() call unable to update binding of image buffer %p to cmdBuffer %p", image, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdClearDepthStencil() call unable to update binding of image buffer %p to cmdBuffer %p", image, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdClearDepthStencil(cmdBuffer, image, imageLayout, depth, stencil, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResolveImage(XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t rectCount, const XGL_IMAGE_RESOLVE* pRects)
+VK_LAYER_EXPORT void VKAPI vkCmdResolveImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t rectCount, const VK_IMAGE_RESOLVE* pRects)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(srcImage);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(srcImage);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdResolveImage() call unable to update binding of srcImage buffer %p to cmdBuffer %p", srcImage, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdResolveImage() call unable to update binding of srcImage buffer %p to cmdBuffer %p", srcImage, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
mem = getMemBindingFromObject(destImage);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdResolveImage() call unable to update binding of destImage buffer %p to cmdBuffer %p", destImage, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdResolveImage() call unable to update binding of destImage buffer %p to cmdBuffer %p", destImage, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdResolveImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, rectCount, pRects);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBeginQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot, XGL_FLAGS flags)
+VK_LAYER_EXPORT void VKAPI vkCmdBeginQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot, VK_FLAGS flags)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(queryPool);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(queryPool);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdBeginQuery() call unable to update binding of queryPool buffer %p to cmdBuffer %p", queryPool, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdBeginQuery() call unable to update binding of queryPool buffer %p to cmdBuffer %p", queryPool, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdBeginQuery(cmdBuffer, queryPool, slot, flags);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdEndQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot)
+VK_LAYER_EXPORT void VKAPI vkCmdEndQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(queryPool);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(queryPool);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdEndQuery() call unable to update binding of queryPool buffer %p to cmdBuffer %p", queryPool, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdEndQuery() call unable to update binding of queryPool buffer %p to cmdBuffer %p", queryPool, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdEndQuery(cmdBuffer, queryPool, slot);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResetQueryPool(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
+VK_LAYER_EXPORT void VKAPI vkCmdResetQueryPool(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(queryPool);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(queryPool);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdResetQueryPool() call unable to update binding of queryPool buffer %p to cmdBuffer %p", queryPool, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdResetQueryPool() call unable to update binding of queryPool buffer %p to cmdBuffer %p", queryPool, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdResetQueryPool(cmdBuffer, queryPool, startQuery, queryCount);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
{
// This layer intercepts callbacks
- XGL_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (XGL_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(XGL_LAYER_DBG_FUNCTION_NODE));
+ VK_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (VK_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(VK_LAYER_DBG_FUNCTION_NODE));
if (!pNewDbgFuncNode)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
pNewDbgFuncNode->pfnMsgCallback = pfnMsgCallback;
pNewDbgFuncNode->pUserData = pUserData;
pNewDbgFuncNode->pNext = g_pDbgFunctionHead;
g_pDbgFunctionHead = pNewDbgFuncNode;
    // Force callbacks if the debug action hasn't been changed from its initial value
if (g_actionIsDefault) {
- g_debugAction = XGL_DBG_LAYER_ACTION_CALLBACK;
+ g_debugAction = VK_DBG_LAYER_ACTION_CALLBACK;
}
- XGL_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
+ VK_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
{
- XGL_LAYER_DBG_FUNCTION_NODE *pInfo = g_pDbgFunctionHead;
- XGL_LAYER_DBG_FUNCTION_NODE *pPrev = pInfo;
+ VK_LAYER_DBG_FUNCTION_NODE *pInfo = g_pDbgFunctionHead;
+ VK_LAYER_DBG_FUNCTION_NODE *pPrev = pInfo;
while (pInfo) {
if (pInfo->pfnMsgCallback == pfnMsgCallback) {
pPrev->pNext = pInfo->pNext;
if (g_pDbgFunctionHead == NULL)
{
if (g_actionIsDefault) {
- g_debugAction = XGL_DBG_LAYER_ACTION_LOG_MSG;
+ g_debugAction = VK_DBG_LAYER_ACTION_LOG_MSG;
} else {
- g_debugAction = (XGL_LAYER_DBG_ACTION)(g_debugAction & ~((uint32_t)XGL_DBG_LAYER_ACTION_CALLBACK));
+ g_debugAction = (VK_LAYER_DBG_ACTION)(g_debugAction & ~((uint32_t)VK_DBG_LAYER_ACTION_CALLBACK));
}
}
- XGL_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
+ VK_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
return result;
}
#if !defined(WIN32)
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11CreatePresentableImage(XGL_DEVICE device, const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo,
- XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11CreatePresentableImage(VK_DEVICE device, const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo,
+ VK_IMAGE* pImage, VK_GPU_MEMORY* pMem)
{
- XGL_RESULT result = nextTable.WsiX11CreatePresentableImage(device, pCreateInfo, pImage, pMem);
+ VK_RESULT result = nextTable.WsiX11CreatePresentableImage(device, pCreateInfo, pImage, pMem);
loader_platform_thread_lock_mutex(&globalLock);
- if (XGL_SUCCESS == result) {
+ if (VK_SUCCESS == result) {
        // Add the image object, insert the new Mem Object, and bind it to the created image
- addObjectInfo(*pImage, _XGL_STRUCTURE_TYPE_MAX_ENUM, pCreateInfo, sizeof(XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO), "wsi_x11_image");
+ addObjectInfo(*pImage, _VK_STRUCTURE_TYPE_MAX_ENUM, pCreateInfo, sizeof(VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO), "wsi_x11_image");
addMemObjInfo(*pMem, NULL);
- if (XGL_FALSE == updateObjectBinding(*pImage, *pMem)) {
+ if (VK_FALSE == updateObjectBinding(*pImage, *pMem)) {
char str[1024];
- sprintf(str, "In xglWsiX11CreatePresentableImage(), unable to set image %p binding to mem obj %p", (void*)*pImage, (void*)*pMem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, *pImage, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkWsiX11CreatePresentableImage(), unable to set image %p binding to mem obj %p", (void*)*pImage, (void*)*pMem);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, *pImage, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
}
printObjList();
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11QueuePresent(XGL_QUEUE queue, const XGL_WSI_X11_PRESENT_INFO* pPresentInfo, XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11QueuePresent(VK_QUEUE queue, const VK_WSI_X11_PRESENT_INFO* pPresentInfo, VK_FENCE fence)
{
loader_platform_thread_lock_mutex(&globalLock);
addFenceInfo(fence, queue);
char str[1024];
- sprintf(str, "In xglWsiX11QueuePresent(), checking queue %p for fence %p", queue, fence);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_NONE, "MEM", str);
+        sprintf(str, "In vkWsiX11QueuePresent(), checking queue %p for fence %p", (void*)queue, (void*)fence);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_NONE, "MEM", str);
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.WsiX11QueuePresent(queue, pPresentInfo, fence);
+ VK_RESULT result = nextTable.WsiX11QueuePresent(queue, pPresentInfo, fence);
return result;
}
#endif // WIN32
-XGL_LAYER_EXPORT void* XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char* funcName)
+VK_LAYER_EXPORT void* VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char* funcName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
if (gpu == NULL)
return NULL;
pCurObj = gpuw;
loader_platform_thread_once(&g_initOnce, initMemTracker);
- if (!strcmp(funcName, "xglGetProcAddr"))
- return (void *) xglGetProcAddr;
- if (!strcmp(funcName, "xglCreateDevice"))
- return (void*) xglCreateDevice;
- if (!strcmp(funcName, "xglDestroyDevice"))
- return (void*) xglDestroyDevice;
- if (!strcmp(funcName, "xglGetExtensionSupport"))
- return (void*) xglGetExtensionSupport;
- if (!strcmp(funcName, "xglEnumerateLayers"))
- return (void*) xglEnumerateLayers;
- if (!strcmp(funcName, "xglQueueSubmit"))
- return (void*) xglQueueSubmit;
- if (!strcmp(funcName, "xglAllocMemory"))
- return (void*) xglAllocMemory;
- if (!strcmp(funcName, "xglFreeMemory"))
- return (void*) xglFreeMemory;
- if (!strcmp(funcName, "xglSetMemoryPriority"))
- return (void*) xglSetMemoryPriority;
- if (!strcmp(funcName, "xglMapMemory"))
- return (void*) xglMapMemory;
- if (!strcmp(funcName, "xglUnmapMemory"))
- return (void*) xglUnmapMemory;
- if (!strcmp(funcName, "xglPinSystemMemory"))
- return (void*) xglPinSystemMemory;
- if (!strcmp(funcName, "xglOpenSharedMemory"))
- return (void*) xglOpenSharedMemory;
- if (!strcmp(funcName, "xglOpenPeerMemory"))
- return (void*) xglOpenPeerMemory;
- if (!strcmp(funcName, "xglOpenPeerImage"))
- return (void*) xglOpenPeerImage;
- if (!strcmp(funcName, "xglDestroyObject"))
- return (void*) xglDestroyObject;
- if (!strcmp(funcName, "xglGetObjectInfo"))
- return (void*) xglGetObjectInfo;
- if (!strcmp(funcName, "xglBindObjectMemory"))
- return (void*) xglBindObjectMemory;
- if (!strcmp(funcName, "xglCreateFence"))
- return (void*) xglCreateFence;
- if (!strcmp(funcName, "xglGetFenceStatus"))
- return (void*) xglGetFenceStatus;
- if (!strcmp(funcName, "xglResetFences"))
- return (void*) xglResetFences;
- if (!strcmp(funcName, "xglWaitForFences"))
- return (void*) xglWaitForFences;
- if (!strcmp(funcName, "xglQueueWaitIdle"))
- return (void*) xglQueueWaitIdle;
- if (!strcmp(funcName, "xglDeviceWaitIdle"))
- return (void*) xglDeviceWaitIdle;
- if (!strcmp(funcName, "xglCreateEvent"))
- return (void*) xglCreateEvent;
- if (!strcmp(funcName, "xglCreateQueryPool"))
- return (void*) xglCreateQueryPool;
- if (!strcmp(funcName, "xglCreateBuffer"))
- return (void*) xglCreateBuffer;
- if (!strcmp(funcName, "xglCreateBufferView"))
- return (void*) xglCreateBufferView;
- if (!strcmp(funcName, "xglCreateImage"))
- return (void*) xglCreateImage;
- if (!strcmp(funcName, "xglCreateImageView"))
- return (void*) xglCreateImageView;
- if (!strcmp(funcName, "xglCreateColorAttachmentView"))
- return (void*) xglCreateColorAttachmentView;
- if (!strcmp(funcName, "xglCreateDepthStencilView"))
- return (void*) xglCreateDepthStencilView;
- if (!strcmp(funcName, "xglCreateShader"))
- return (void*) xglCreateShader;
- if (!strcmp(funcName, "xglCreateGraphicsPipeline"))
- return (void*) xglCreateGraphicsPipeline;
- if (!strcmp(funcName, "xglCreateGraphicsPipelineDerivative"))
- return (void*) xglCreateGraphicsPipelineDerivative;
- if (!strcmp(funcName, "xglCreateComputePipeline"))
- return (void*) xglCreateComputePipeline;
- if (!strcmp(funcName, "xglCreateSampler"))
- return (void*) xglCreateSampler;
- if (!strcmp(funcName, "xglCreateDynamicViewportState"))
- return (void*) xglCreateDynamicViewportState;
- if (!strcmp(funcName, "xglCreateDynamicRasterState"))
- return (void*) xglCreateDynamicRasterState;
- if (!strcmp(funcName, "xglCreateDynamicColorBlendState"))
- return (void*) xglCreateDynamicColorBlendState;
- if (!strcmp(funcName, "xglCreateDynamicDepthStencilState"))
- return (void*) xglCreateDynamicDepthStencilState;
- if (!strcmp(funcName, "xglCreateCommandBuffer"))
- return (void*) xglCreateCommandBuffer;
- if (!strcmp(funcName, "xglBeginCommandBuffer"))
- return (void*) xglBeginCommandBuffer;
- if (!strcmp(funcName, "xglEndCommandBuffer"))
- return (void*) xglEndCommandBuffer;
- if (!strcmp(funcName, "xglResetCommandBuffer"))
- return (void*) xglResetCommandBuffer;
- if (!strcmp(funcName, "xglCmdBindPipeline"))
- return (void*) xglCmdBindPipeline;
- if (!strcmp(funcName, "xglCmdBindDynamicStateObject"))
- return (void*) xglCmdBindDynamicStateObject;
- if (!strcmp(funcName, "xglCmdBindDescriptorSets"))
- return (void*) xglCmdBindDescriptorSets;
- if (!strcmp(funcName, "xglCmdBindVertexBuffer"))
- return (void*) xglCmdBindVertexBuffer;
- if (!strcmp(funcName, "xglCmdBindIndexBuffer"))
- return (void*) xglCmdBindIndexBuffer;
- if (!strcmp(funcName, "xglCmdDrawIndirect"))
- return (void*) xglCmdDrawIndirect;
- if (!strcmp(funcName, "xglCmdDrawIndexedIndirect"))
- return (void*) xglCmdDrawIndexedIndirect;
- if (!strcmp(funcName, "xglCmdDispatchIndirect"))
- return (void*) xglCmdDispatchIndirect;
- if (!strcmp(funcName, "xglCmdCopyBuffer"))
- return (void*) xglCmdCopyBuffer;
- if (!strcmp(funcName, "xglCmdCopyImage"))
- return (void*) xglCmdCopyImage;
- if (!strcmp(funcName, "xglCmdCopyBufferToImage"))
- return (void*) xglCmdCopyBufferToImage;
- if (!strcmp(funcName, "xglCmdCopyImageToBuffer"))
- return (void*) xglCmdCopyImageToBuffer;
- if (!strcmp(funcName, "xglCmdCloneImageData"))
- return (void*) xglCmdCloneImageData;
- if (!strcmp(funcName, "xglCmdUpdateBuffer"))
- return (void*) xglCmdUpdateBuffer;
- if (!strcmp(funcName, "xglCmdFillBuffer"))
- return (void*) xglCmdFillBuffer;
- if (!strcmp(funcName, "xglCmdClearColorImage"))
- return (void*) xglCmdClearColorImage;
- if (!strcmp(funcName, "xglCmdClearDepthStencil"))
- return (void*) xglCmdClearDepthStencil;
- if (!strcmp(funcName, "xglCmdResolveImage"))
- return (void*) xglCmdResolveImage;
- if (!strcmp(funcName, "xglCmdBeginQuery"))
- return (void*) xglCmdBeginQuery;
- if (!strcmp(funcName, "xglCmdEndQuery"))
- return (void*) xglCmdEndQuery;
- if (!strcmp(funcName, "xglCmdResetQueryPool"))
- return (void*) xglCmdResetQueryPool;
- if (!strcmp(funcName, "xglDbgRegisterMsgCallback"))
- return (void*) xglDbgRegisterMsgCallback;
- if (!strcmp(funcName, "xglDbgUnregisterMsgCallback"))
- return (void*) xglDbgUnregisterMsgCallback;
- if (!strcmp(funcName, "xglGetDeviceQueue"))
- return (void*) xglGetDeviceQueue;
- if (!strcmp(funcName, "xglQueueAddMemReference"))
- return (void*) xglQueueAddMemReference;
- if (!strcmp(funcName, "xglQueueRemoveMemReference"))
- return (void*) xglQueueRemoveMemReference;
+ if (!strcmp(funcName, "vkGetProcAddr"))
+ return (void *) vkGetProcAddr;
+ if (!strcmp(funcName, "vkCreateDevice"))
+ return (void*) vkCreateDevice;
+ if (!strcmp(funcName, "vkDestroyDevice"))
+ return (void*) vkDestroyDevice;
+ if (!strcmp(funcName, "vkGetExtensionSupport"))
+ return (void*) vkGetExtensionSupport;
+ if (!strcmp(funcName, "vkEnumerateLayers"))
+ return (void*) vkEnumerateLayers;
+ if (!strcmp(funcName, "vkQueueSubmit"))
+ return (void*) vkQueueSubmit;
+ if (!strcmp(funcName, "vkAllocMemory"))
+ return (void*) vkAllocMemory;
+ if (!strcmp(funcName, "vkFreeMemory"))
+ return (void*) vkFreeMemory;
+ if (!strcmp(funcName, "vkSetMemoryPriority"))
+ return (void*) vkSetMemoryPriority;
+ if (!strcmp(funcName, "vkMapMemory"))
+ return (void*) vkMapMemory;
+ if (!strcmp(funcName, "vkUnmapMemory"))
+ return (void*) vkUnmapMemory;
+ if (!strcmp(funcName, "vkPinSystemMemory"))
+ return (void*) vkPinSystemMemory;
+ if (!strcmp(funcName, "vkOpenSharedMemory"))
+ return (void*) vkOpenSharedMemory;
+ if (!strcmp(funcName, "vkOpenPeerMemory"))
+ return (void*) vkOpenPeerMemory;
+ if (!strcmp(funcName, "vkOpenPeerImage"))
+ return (void*) vkOpenPeerImage;
+ if (!strcmp(funcName, "vkDestroyObject"))
+ return (void*) vkDestroyObject;
+ if (!strcmp(funcName, "vkGetObjectInfo"))
+ return (void*) vkGetObjectInfo;
+ if (!strcmp(funcName, "vkBindObjectMemory"))
+ return (void*) vkBindObjectMemory;
+ if (!strcmp(funcName, "vkCreateFence"))
+ return (void*) vkCreateFence;
+ if (!strcmp(funcName, "vkGetFenceStatus"))
+ return (void*) vkGetFenceStatus;
+ if (!strcmp(funcName, "vkResetFences"))
+ return (void*) vkResetFences;
+ if (!strcmp(funcName, "vkWaitForFences"))
+ return (void*) vkWaitForFences;
+ if (!strcmp(funcName, "vkQueueWaitIdle"))
+ return (void*) vkQueueWaitIdle;
+ if (!strcmp(funcName, "vkDeviceWaitIdle"))
+ return (void*) vkDeviceWaitIdle;
+ if (!strcmp(funcName, "vkCreateEvent"))
+ return (void*) vkCreateEvent;
+ if (!strcmp(funcName, "vkCreateQueryPool"))
+ return (void*) vkCreateQueryPool;
+ if (!strcmp(funcName, "vkCreateBuffer"))
+ return (void*) vkCreateBuffer;
+ if (!strcmp(funcName, "vkCreateBufferView"))
+ return (void*) vkCreateBufferView;
+ if (!strcmp(funcName, "vkCreateImage"))
+ return (void*) vkCreateImage;
+ if (!strcmp(funcName, "vkCreateImageView"))
+ return (void*) vkCreateImageView;
+ if (!strcmp(funcName, "vkCreateColorAttachmentView"))
+ return (void*) vkCreateColorAttachmentView;
+ if (!strcmp(funcName, "vkCreateDepthStencilView"))
+ return (void*) vkCreateDepthStencilView;
+ if (!strcmp(funcName, "vkCreateShader"))
+ return (void*) vkCreateShader;
+ if (!strcmp(funcName, "vkCreateGraphicsPipeline"))
+ return (void*) vkCreateGraphicsPipeline;
+ if (!strcmp(funcName, "vkCreateGraphicsPipelineDerivative"))
+ return (void*) vkCreateGraphicsPipelineDerivative;
+ if (!strcmp(funcName, "vkCreateComputePipeline"))
+ return (void*) vkCreateComputePipeline;
+ if (!strcmp(funcName, "vkCreateSampler"))
+ return (void*) vkCreateSampler;
+ if (!strcmp(funcName, "vkCreateDynamicViewportState"))
+ return (void*) vkCreateDynamicViewportState;
+ if (!strcmp(funcName, "vkCreateDynamicRasterState"))
+ return (void*) vkCreateDynamicRasterState;
+ if (!strcmp(funcName, "vkCreateDynamicColorBlendState"))
+ return (void*) vkCreateDynamicColorBlendState;
+ if (!strcmp(funcName, "vkCreateDynamicDepthStencilState"))
+ return (void*) vkCreateDynamicDepthStencilState;
+ if (!strcmp(funcName, "vkCreateCommandBuffer"))
+ return (void*) vkCreateCommandBuffer;
+ if (!strcmp(funcName, "vkBeginCommandBuffer"))
+ return (void*) vkBeginCommandBuffer;
+ if (!strcmp(funcName, "vkEndCommandBuffer"))
+ return (void*) vkEndCommandBuffer;
+ if (!strcmp(funcName, "vkResetCommandBuffer"))
+ return (void*) vkResetCommandBuffer;
+ if (!strcmp(funcName, "vkCmdBindPipeline"))
+ return (void*) vkCmdBindPipeline;
+ if (!strcmp(funcName, "vkCmdBindDynamicStateObject"))
+ return (void*) vkCmdBindDynamicStateObject;
+ if (!strcmp(funcName, "vkCmdBindDescriptorSets"))
+ return (void*) vkCmdBindDescriptorSets;
+ if (!strcmp(funcName, "vkCmdBindVertexBuffer"))
+ return (void*) vkCmdBindVertexBuffer;
+ if (!strcmp(funcName, "vkCmdBindIndexBuffer"))
+ return (void*) vkCmdBindIndexBuffer;
+ if (!strcmp(funcName, "vkCmdDrawIndirect"))
+ return (void*) vkCmdDrawIndirect;
+ if (!strcmp(funcName, "vkCmdDrawIndexedIndirect"))
+ return (void*) vkCmdDrawIndexedIndirect;
+ if (!strcmp(funcName, "vkCmdDispatchIndirect"))
+ return (void*) vkCmdDispatchIndirect;
+ if (!strcmp(funcName, "vkCmdCopyBuffer"))
+ return (void*) vkCmdCopyBuffer;
+ if (!strcmp(funcName, "vkCmdCopyImage"))
+ return (void*) vkCmdCopyImage;
+ if (!strcmp(funcName, "vkCmdCopyBufferToImage"))
+ return (void*) vkCmdCopyBufferToImage;
+ if (!strcmp(funcName, "vkCmdCopyImageToBuffer"))
+ return (void*) vkCmdCopyImageToBuffer;
+ if (!strcmp(funcName, "vkCmdCloneImageData"))
+ return (void*) vkCmdCloneImageData;
+ if (!strcmp(funcName, "vkCmdUpdateBuffer"))
+ return (void*) vkCmdUpdateBuffer;
+ if (!strcmp(funcName, "vkCmdFillBuffer"))
+ return (void*) vkCmdFillBuffer;
+ if (!strcmp(funcName, "vkCmdClearColorImage"))
+ return (void*) vkCmdClearColorImage;
+ if (!strcmp(funcName, "vkCmdClearDepthStencil"))
+ return (void*) vkCmdClearDepthStencil;
+ if (!strcmp(funcName, "vkCmdResolveImage"))
+ return (void*) vkCmdResolveImage;
+ if (!strcmp(funcName, "vkCmdBeginQuery"))
+ return (void*) vkCmdBeginQuery;
+ if (!strcmp(funcName, "vkCmdEndQuery"))
+ return (void*) vkCmdEndQuery;
+ if (!strcmp(funcName, "vkCmdResetQueryPool"))
+ return (void*) vkCmdResetQueryPool;
+ if (!strcmp(funcName, "vkDbgRegisterMsgCallback"))
+ return (void*) vkDbgRegisterMsgCallback;
+ if (!strcmp(funcName, "vkDbgUnregisterMsgCallback"))
+ return (void*) vkDbgUnregisterMsgCallback;
+ if (!strcmp(funcName, "vkGetDeviceQueue"))
+ return (void*) vkGetDeviceQueue;
+ if (!strcmp(funcName, "vkQueueAddMemReference"))
+ return (void*) vkQueueAddMemReference;
+ if (!strcmp(funcName, "vkQueueRemoveMemReference"))
+ return (void*) vkQueueRemoveMemReference;
#if !defined(WIN32)
- if (!strcmp(funcName, "xglWsiX11CreatePresentableImage"))
- return (void*) xglWsiX11CreatePresentableImage;
- if (!strcmp(funcName, "xglWsiX11QueuePresent"))
- return (void*) xglWsiX11QueuePresent;
+ if (!strcmp(funcName, "vkWsiX11CreatePresentableImage"))
+ return (void*) vkWsiX11CreatePresentableImage;
+ if (!strcmp(funcName, "vkWsiX11QueuePresent"))
+ return (void*) vkWsiX11QueuePresent;
#endif
else {
if (gpuw->pGPA == NULL)
return NULL;
- return gpuw->pGPA((XGL_PHYSICAL_GPU)gpuw->nextObject, funcName);
+ return gpuw->pGPA((VK_PHYSICAL_GPU)gpuw->nextObject, funcName);
}
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2015 LunarG, Inc.
*
* DEALINGS IN THE SOFTWARE.
*/
#pragma once
-#include "xglLayer.h"
+#include "vkLayer.h"
#ifdef __cplusplus
extern "C" {
MEMTRACK_FREED_MEM_REF = 6, // MEM Obj freed while it still has obj and/or CB refs
MEMTRACK_MEM_OBJ_CLEAR_EMPTY_BINDINGS = 7, // Clearing bindings on mem obj that doesn't have any bindings
MEMTRACK_MISSING_MEM_BINDINGS = 8, // Trying to retrieve mem bindings, but none found (may be internal error)
- MEMTRACK_INVALID_OBJECT = 9, // Attempting to reference generic XGL Object that is invalid
- MEMTRACK_FREE_MEM_ERROR = 10, // Error while calling xglFreeMemory
+ MEMTRACK_INVALID_OBJECT = 9, // Attempting to reference generic VK Object that is invalid
+ MEMTRACK_FREE_MEM_ERROR = 10, // Error while calling vkFreeMemory
MEMTRACK_DESTROY_OBJECT_ERROR = 11, // Destroying an object that has a memory reference
MEMTRACK_MEMORY_BINDING_ERROR = 12, // Error during one of many calls that bind memory to object or CB
MEMTRACK_OUT_OF_MEMORY_ERROR = 13, // malloc failed
- MEMTRACK_MEMORY_LEAK = 14, // Failure to call xglFreeMemory on Mem Obj prior to DestroyDevice
+ MEMTRACK_MEMORY_LEAK = 14, // Failure to call vkFreeMemory on Mem Obj prior to DestroyDevice
MEMTRACK_INVALID_STATE = 15, // Memory not in the correct state
- MEMTRACK_RESET_CB_WHILE_IN_FLIGHT = 16, // xglResetCommandBuffer() called on a CB that hasn't completed
+ MEMTRACK_RESET_CB_WHILE_IN_FLIGHT = 16, // vkResetCommandBuffer() called on a CB that hasn't completed
MEMTRACK_INVALID_QUEUE = 17, // Invalid queue requested or selected
MEMTRACK_INVALID_FENCE_STATE = 18, // Invalid Fence State signaled or used
} MEM_TRACK_ERROR;
* memObjMap -- map of Memory Objects to MT_MEM_OBJ_INFO structures
 * Each MT_MEM_OBJ_INFO has two STL list containers with:
* -- all CBs referencing this mem obj
- * -- all XGL Objects that are bound to this memory
+ * -- all VK Objects that are bound to this memory
* objectMap -- map of objects to MT_OBJ_INFO structures
*
* Algorithm overview
// Data struct for tracking memory object
struct MT_MEM_OBJ_INFO {
uint32_t refCount; // Count of references (obj bindings or CB use)
- XGL_GPU_MEMORY mem;
- XGL_MEMORY_ALLOC_INFO allocInfo;
- list<XGL_OBJECT> pObjBindings; // list container of objects bound to this memory
- list<XGL_CMD_BUFFER> pCmdBufferBindings; // list container of cmd buffers that reference this mem object
+ VK_GPU_MEMORY mem;
+ VK_MEMORY_ALLOC_INFO allocInfo;
+ list<VK_OBJECT> pObjBindings; // list container of objects bound to this memory
+ list<VK_CMD_BUFFER> pCmdBufferBindings; // list container of cmd buffers that reference this mem object
};
struct MT_OBJ_INFO {
MT_MEM_OBJ_INFO* pMemObjInfo;
- XGL_OBJECT object;
- XGL_STRUCTURE_TYPE sType;
+ VK_OBJECT object;
+ VK_STRUCTURE_TYPE sType;
uint32_t ref_count;
// Capture all object types that may have memory bound. From prog guide:
// The only objects that are guaranteed to have no external memory
// requirements are devices, queues, command buffers, shaders and memory objects.
union {
- XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO color_attachment_view_create_info;
- XGL_DEPTH_STENCIL_VIEW_CREATE_INFO ds_view_create_info;
- XGL_IMAGE_VIEW_CREATE_INFO image_view_create_info;
- XGL_IMAGE_CREATE_INFO image_create_info;
- XGL_GRAPHICS_PIPELINE_CREATE_INFO graphics_pipeline_create_info;
- XGL_COMPUTE_PIPELINE_CREATE_INFO compute_pipeline_create_info;
- XGL_SAMPLER_CREATE_INFO sampler_create_info;
- XGL_FENCE_CREATE_INFO fence_create_info;
+ VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO color_attachment_view_create_info;
+ VK_DEPTH_STENCIL_VIEW_CREATE_INFO ds_view_create_info;
+ VK_IMAGE_VIEW_CREATE_INFO image_view_create_info;
+ VK_IMAGE_CREATE_INFO image_create_info;
+ VK_GRAPHICS_PIPELINE_CREATE_INFO graphics_pipeline_create_info;
+ VK_COMPUTE_PIPELINE_CREATE_INFO compute_pipeline_create_info;
+ VK_SAMPLER_CREATE_INFO sampler_create_info;
+ VK_FENCE_CREATE_INFO fence_create_info;
#ifndef _WIN32
- XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO wsi_x11_presentable_image_create_info;
+ VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO wsi_x11_presentable_image_create_info;
#endif // _WIN32
} create_info;
char object_name[64];
// Track all command buffers
struct MT_CB_INFO {
- XGL_CMD_BUFFER_CREATE_INFO createInfo;
- MT_OBJ_INFO* pDynamicState[XGL_NUM_STATE_BIND_POINT];
- XGL_PIPELINE pipelines[XGL_NUM_PIPELINE_BIND_POINT];
+ VK_CMD_BUFFER_CREATE_INFO createInfo;
+ MT_OBJ_INFO* pDynamicState[VK_NUM_STATE_BIND_POINT];
+ VK_PIPELINE pipelines[VK_NUM_PIPELINE_BIND_POINT];
uint32_t colorAttachmentCount;
- XGL_DEPTH_STENCIL_BIND_INFO dsBindInfo;
- XGL_CMD_BUFFER cmdBuffer;
+ VK_DEPTH_STENCIL_BIND_INFO dsBindInfo;
+ VK_CMD_BUFFER cmdBuffer;
uint64_t fenceId;
    // Order-dependent: STL containers must be at the end of the struct
- list<XGL_GPU_MEMORY> pMemObjList; // List container of Mem objs referenced by this CB
+ list<VK_GPU_MEMORY> pMemObjList; // List container of Mem objs referenced by this CB
};
// Associate fenceId with a fence object
struct MT_FENCE_INFO {
- XGL_FENCE fence; // Handle to fence object
- XGL_QUEUE queue; // Queue that this fence is submitted against
+ VK_FENCE fence; // Handle to fence object
+ VK_QUEUE queue; // Queue that this fence is submitted against
bool32_t localFence; // Is fence created by layer?
};
struct MT_QUEUE_INFO {
uint64_t lastRetiredId;
uint64_t lastSubmittedId;
- list<XGL_CMD_BUFFER> pQueueCmdBuffers;
- list<XGL_GPU_MEMORY> pMemRefList;
+ list<VK_CMD_BUFFER> pQueueCmdBuffers;
+ list<VK_GPU_MEMORY> pMemRefList;
};
#ifdef __cplusplus
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <assert.h>
#include <unordered_map>
#include "loader_platform.h"
-#include "xgl_dispatch_table_helper.h"
-#include "xglLayer.h"
+#include "vk_dispatch_table_helper.h"
+#include "vkLayer.h"
// The following is #included again to catch certain OS-specific functions
// being used:
#include "loader_platform.h"
-static void initLayerTable(const XGL_BASE_LAYER_OBJECT *gpuw, XGL_LAYER_DISPATCH_TABLE *pTable, const unsigned int layerNum);
+static void initLayerTable(const VK_BASE_LAYER_OBJECT *gpuw, VK_LAYER_DISPATCH_TABLE *pTable, const unsigned int layerNum);
/******************************** Layer multi1 functions **************************/
-static std::unordered_map<void *, XGL_LAYER_DISPATCH_TABLE *> tableMap1;
+static std::unordered_map<void *, VK_LAYER_DISPATCH_TABLE *> tableMap1;
static bool layer1_first_activated = false;
-static XGL_LAYER_DISPATCH_TABLE * getLayer1Table(const XGL_BASE_LAYER_OBJECT *gpuw)
+static VK_LAYER_DISPATCH_TABLE * getLayer1Table(const VK_BASE_LAYER_OBJECT *gpuw)
{
- XGL_LAYER_DISPATCH_TABLE *pTable;
+ VK_LAYER_DISPATCH_TABLE *pTable;
assert(gpuw);
- std::unordered_map<void *, XGL_LAYER_DISPATCH_TABLE *>::const_iterator it = tableMap1.find((void *) gpuw);
+ std::unordered_map<void *, VK_LAYER_DISPATCH_TABLE *>::const_iterator it = tableMap1.find((void *) gpuw);
if (it == tableMap1.end())
{
- pTable = new XGL_LAYER_DISPATCH_TABLE;
+ pTable = new VK_LAYER_DISPATCH_TABLE;
tableMap1[(void *) gpuw] = pTable;
initLayerTable(gpuw, pTable, 1);
return pTable;
#endif
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi1CreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo,
- XGL_DEVICE* pDevice)
+VK_LAYER_EXPORT VK_RESULT VKAPI multi1CreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo,
+ VK_DEVICE* pDevice)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_LAYER_DISPATCH_TABLE* pTable = getLayer1Table(gpuw);
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_LAYER_DISPATCH_TABLE* pTable = getLayer1Table(gpuw);
- printf("At start of multi1 layer xglCreateDevice()\n");
- XGL_RESULT result = pTable->CreateDevice((XGL_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
+ printf("At start of multi1 layer vkCreateDevice()\n");
+ VK_RESULT result = pTable->CreateDevice((VK_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
// create a mapping for the device object into the dispatch table
tableMap1.emplace(*pDevice, pTable);
- printf("Completed multi1 layer xglCreateDevice()\n");
+ printf("Completed multi1 layer vkCreateDevice()\n");
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi1CreateGraphicsPipeline(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI multi1CreateGraphicsPipeline(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE* pPipeline)
{
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap1[device];
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap1[device];
- printf("At start of multi1 layer xglCreateGraphicsPipeline()\n");
- XGL_RESULT result = pTable->CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
+ printf("At start of multi1 layer vkCreateGraphicsPipeline()\n");
+ VK_RESULT result = pTable->CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
// create a mapping for the pipeline object into the dispatch table
tableMap1.emplace(*pPipeline, pTable);
- printf("Completed multi1 layer xglCreateGraphicsPipeline()\n");
+ printf("Completed multi1 layer vkCreateGraphicsPipeline()\n");
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi1StorePipeline(XGL_PIPELINE pipeline, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI multi1StorePipeline(VK_PIPELINE pipeline, size_t* pDataSize, void* pData)
{
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap1[pipeline];
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap1[pipeline];
- printf("At start of multi1 layer xglStorePipeline()\n");
- XGL_RESULT result = pTable->StorePipeline(pipeline, pDataSize, pData);
- printf("Completed multi1 layer xglStorePipeline()\n");
+ printf("At start of multi1 layer vkStorePipeline()\n");
+ VK_RESULT result = pTable->StorePipeline(pipeline, pDataSize, pData);
+ printf("Completed multi1 layer vkStorePipeline()\n");
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi1EnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize,
+VK_LAYER_EXPORT VK_RESULT VKAPI multi1EnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize,
size_t* pOutLayerCount, char* const* pOutLayers,
void* pReserved)
{
if (gpu == NULL)
- return xglEnumerateLayers(gpu, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ return vkEnumerateLayers(gpu, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_LAYER_DISPATCH_TABLE* pTable = getLayer1Table(gpuw);
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_LAYER_DISPATCH_TABLE* pTable = getLayer1Table(gpuw);
- printf("At start of multi1 layer xglEnumerateLayers()\n");
- XGL_RESULT result = pTable->EnumerateLayers((XGL_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
- printf("Completed multi1 layer xglEnumerateLayers()\n");
+ printf("At start of multi1 layer vkEnumerateLayers()\n");
+ VK_RESULT result = pTable->EnumerateLayers((VK_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ printf("Completed multi1 layer vkEnumerateLayers()\n");
return result;
}
-XGL_LAYER_EXPORT void * XGLAPI multi1GetProcAddr(XGL_PHYSICAL_GPU gpu, const char* pName)
+VK_LAYER_EXPORT void * VKAPI multi1GetProcAddr(VK_PHYSICAL_GPU gpu, const char* pName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
if (gpu == NULL)
return NULL;
getLayer1Table(gpuw);
- if (!strncmp("xglCreateDevice", pName, sizeof ("xglCreateDevice")))
+ if (!strncmp("vkCreateDevice", pName, sizeof ("vkCreateDevice")))
return (void *) multi1CreateDevice;
- else if (!strncmp("xglEnumerateLayers", pName, sizeof ("xglEnumerateLayers")))
+ else if (!strncmp("vkEnumerateLayers", pName, sizeof ("vkEnumerateLayers")))
return (void *) multi1EnumerateLayers;
- else if (!strncmp("xglCreateGraphicsPipeline", pName, sizeof ("xglCreateGraphicsPipeline")))
+ else if (!strncmp("vkCreateGraphicsPipeline", pName, sizeof ("vkCreateGraphicsPipeline")))
return (void *) multi1CreateGraphicsPipeline;
- else if (!strncmp("xglStorePipeline", pName, sizeof ("xglStorePipeline")))
+ else if (!strncmp("vkStorePipeline", pName, sizeof ("vkStorePipeline")))
return (void *) multi1StorePipeline;
- else if (!strncmp("xglGetExtensionSupport", pName, sizeof ("xglGetExtensionSupport")))
- return (void *) xglGetExtensionSupport;
+ else if (!strncmp("vkGetExtensionSupport", pName, sizeof ("vkGetExtensionSupport")))
+ return (void *) vkGetExtensionSupport;
else {
if (gpuw->pGPA == NULL)
return NULL;
- return gpuw->pGPA((XGL_PHYSICAL_GPU) gpuw->nextObject, pName);
+ return gpuw->pGPA((VK_PHYSICAL_GPU) gpuw->nextObject, pName);
}
}
/******************************** Layer multi2 functions **************************/
-static std::unordered_map<void *, XGL_LAYER_DISPATCH_TABLE *> tableMap2;
+static std::unordered_map<void *, VK_LAYER_DISPATCH_TABLE *> tableMap2;
static bool layer2_first_activated = false;
-static XGL_LAYER_DISPATCH_TABLE * getLayer2Table(const XGL_BASE_LAYER_OBJECT *gpuw)
+static VK_LAYER_DISPATCH_TABLE * getLayer2Table(const VK_BASE_LAYER_OBJECT *gpuw)
{
- XGL_LAYER_DISPATCH_TABLE *pTable;
+ VK_LAYER_DISPATCH_TABLE *pTable;
assert(gpuw);
- std::unordered_map<void *, XGL_LAYER_DISPATCH_TABLE *>::const_iterator it = tableMap2.find((void *) gpuw);
+ std::unordered_map<void *, VK_LAYER_DISPATCH_TABLE *>::const_iterator it = tableMap2.find((void *) gpuw);
if (it == tableMap2.end())
{
- pTable = new XGL_LAYER_DISPATCH_TABLE;
+ pTable = new VK_LAYER_DISPATCH_TABLE;
tableMap2[(void *) gpuw] = pTable;
initLayerTable(gpuw, pTable, 2);
return pTable;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi2CreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo,
- XGL_DEVICE* pDevice)
+VK_LAYER_EXPORT VK_RESULT VKAPI multi2CreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo,
+ VK_DEVICE* pDevice)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_LAYER_DISPATCH_TABLE* pTable = getLayer2Table(gpuw);
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_LAYER_DISPATCH_TABLE* pTable = getLayer2Table(gpuw);
- printf("At start of multi2 xglCreateDevice()\n");
- XGL_RESULT result = pTable->CreateDevice((XGL_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
+ printf("At start of multi2 vkCreateDevice()\n");
+ VK_RESULT result = pTable->CreateDevice((VK_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
// create a mapping for the device object into the dispatch table for layer2
tableMap2.emplace(*pDevice, pTable);
- printf("Completed multi2 layer xglCreateDevice()\n");
+ printf("Completed multi2 layer vkCreateDevice()\n");
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi2CreateCommandBuffer(XGL_DEVICE device, const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo,
- XGL_CMD_BUFFER* pCmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI multi2CreateCommandBuffer(VK_DEVICE device, const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo,
+ VK_CMD_BUFFER* pCmdBuffer)
{
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap2[device];
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap2[device];
- printf("At start of multi2 layer xglCreateCommandBuffer()\n");
- XGL_RESULT result = pTable->CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
+ printf("At start of multi2 layer vkCreateCommandBuffer()\n");
+ VK_RESULT result = pTable->CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
    // create a mapping for the CmdBuffer object into the dispatch table for layer 2
tableMap2.emplace(*pCmdBuffer, pTable);
- printf("Completed multi2 layer xglCreateCommandBuffer()\n");
+ printf("Completed multi2 layer vkCreateCommandBuffer()\n");
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi2BeginCommandBuffer(XGL_CMD_BUFFER cmdBuffer, const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI multi2BeginCommandBuffer(VK_CMD_BUFFER cmdBuffer, const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
{
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap2[cmdBuffer];
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap2[cmdBuffer];
- printf("At start of multi2 layer xglBeginCommandBuffer()\n");
- XGL_RESULT result = pTable->BeginCommandBuffer(cmdBuffer, pBeginInfo);
- printf("Completed multi2 layer xglBeginCommandBuffer()\n");
+ printf("At start of multi2 layer vkBeginCommandBuffer()\n");
+ VK_RESULT result = pTable->BeginCommandBuffer(cmdBuffer, pBeginInfo);
+ printf("Completed multi2 layer vkBeginCommandBuffer()\n");
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi2EnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize,
+VK_LAYER_EXPORT VK_RESULT VKAPI multi2EnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize,
size_t* pOutLayerCount, char* const* pOutLayers,
void* pReserved)
{
if (gpu == NULL)
- return xglEnumerateLayers(gpu, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ return vkEnumerateLayers(gpu, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_LAYER_DISPATCH_TABLE* pTable = getLayer2Table(gpuw);
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_LAYER_DISPATCH_TABLE* pTable = getLayer2Table(gpuw);
- printf("At start of multi2 layer xglEnumerateLayers()\n");
- XGL_RESULT result = pTable->EnumerateLayers((XGL_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
- printf("Completed multi2 layer xglEnumerateLayers()\n");
+ printf("At start of multi2 layer vkEnumerateLayers()\n");
+ VK_RESULT result = pTable->EnumerateLayers((VK_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ printf("Completed multi2 layer vkEnumerateLayers()\n");
return result;
}
-XGL_LAYER_EXPORT void * XGLAPI multi2GetProcAddr(XGL_PHYSICAL_GPU gpu, const char* pName)
+VK_LAYER_EXPORT void * VKAPI multi2GetProcAddr(VK_PHYSICAL_GPU gpu, const char* pName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
if (gpu == NULL)
return NULL;
getLayer2Table(gpuw);
- if (!strncmp("xglCreateDevice", pName, sizeof ("xglCreateDevice")))
+ if (!strncmp("vkCreateDevice", pName, sizeof ("vkCreateDevice")))
return (void *) multi2CreateDevice;
- else if (!strncmp("xglEnumerateLayers", pName, sizeof ("xglEnumerateLayers")))
+ else if (!strncmp("vkEnumerateLayers", pName, sizeof ("vkEnumerateLayers")))
return (void *) multi2EnumerateLayers;
- else if (!strncmp("xglCreateCommandBuffer", pName, sizeof ("xglCreateCommandBuffer")))
+ else if (!strncmp("vkCreateCommandBuffer", pName, sizeof ("vkCreateCommandBuffer")))
return (void *) multi2CreateCommandBuffer;
- else if (!strncmp("xglBeginCommandBuffer", pName, sizeof ("xglBeginCommandBuffer")))
+ else if (!strncmp("vkBeginCommandBuffer", pName, sizeof ("vkBeginCommandBuffer")))
return (void *) multi2BeginCommandBuffer;
- else if (!strncmp("xglGetExtensionSupport", pName, sizeof ("xglGetExtensionSupport")))
- return (void *) xglGetExtensionSupport;
+ else if (!strncmp("vkGetExtensionSupport", pName, sizeof ("vkGetExtensionSupport")))
+ return (void *) vkGetExtensionSupport;
else {
if (gpuw->pGPA == NULL)
return NULL;
- return gpuw->pGPA((XGL_PHYSICAL_GPU) gpuw->nextObject, pName);
+ return gpuw->pGPA((VK_PHYSICAL_GPU) gpuw->nextObject, pName);
}
}
/********************************* Common functions ********************************/
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize,
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize,
size_t* pOutLayerCount, char* const* pOutLayers,
void* pReserved)
{
if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL || pOutLayers[1] == NULL || pReserved == NULL)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
if (maxLayerCount < 2)
- return XGL_ERROR_INITIALIZATION_FAILED;
+ return VK_ERROR_INITIALIZATION_FAILED;
*pOutLayerCount = 2;
strncpy((char *) pOutLayers[0], "multi1", maxStringSize);
strncpy((char *) pOutLayers[1], "multi2", maxStringSize);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char* pExtName)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char* pExtName)
{
- XGL_RESULT result;
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_RESULT result;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
    /* This entrypoint is NOT going to init its own dispatch table since the loader calls here early */
if (!strncmp(pExtName, "multi1", strlen("multi1")))
{
- result = XGL_SUCCESS;
+ result = VK_SUCCESS;
} else if (!strncmp(pExtName, "multi2", strlen("multi2")))
{
- result = XGL_SUCCESS;
+ result = VK_SUCCESS;
} else if (!tableMap1.empty() && (tableMap1.find(gpuw) != tableMap1.end()))
{
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap1[gpuw];
- result = pTable->GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap1[gpuw];
+ result = pTable->GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);
} else if (!tableMap2.empty() && (tableMap2.find(gpuw) != tableMap2.end()))
{
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap2[gpuw];
- result = pTable->GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap2[gpuw];
+ result = pTable->GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);
} else
{
- result = XGL_ERROR_INVALID_EXTENSION;
+ result = VK_ERROR_INVALID_EXTENSION;
}
return result;
}
-XGL_LAYER_EXPORT void * XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char* pName)
+VK_LAYER_EXPORT void * VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char* pName)
{
    // to find each layer's GPA routine, the loader searches for "<layerName>GetProcAddr"
if (!strncmp("multi1GetProcAddr", pName, sizeof("multi1GetProcAddr")))
return (void *) multi1GetProcAddr;
else if (!strncmp("multi2GetProcAddr", pName, sizeof("multi2GetProcAddr")))
return (void *) multi2GetProcAddr;
- else if (!strncmp("xglGetProcAddr", pName, sizeof("xglGetProcAddr")))
- return (void *) xglGetProcAddr;
+ else if (!strncmp("vkGetProcAddr", pName, sizeof("vkGetProcAddr")))
+ return (void *) vkGetProcAddr;
    // use the first-activated layer's GPA, since dispatch-table activation happens in order
    else if (layer1_first_activated)
        return (void *) multi1GetProcAddr;
    else
        return (void *) multi2GetProcAddr;
}
} //extern "C"
#endif
-static void initLayerTable(const XGL_BASE_LAYER_OBJECT *gpuw, XGL_LAYER_DISPATCH_TABLE *pTable, const unsigned int layerNum)
+static void initLayerTable(const VK_BASE_LAYER_OBJECT *gpuw, VK_LAYER_DISPATCH_TABLE *pTable, const unsigned int layerNum)
{
if (layerNum == 2 && layer1_first_activated == false)
layer2_first_activated = true;
if (layerNum == 1 && layer2_first_activated == false)
layer1_first_activated = true;
- layer_initialize_dispatch_table(pTable, gpuw->pGPA, (XGL_PHYSICAL_GPU) gpuw->nextObject);
+ layer_initialize_dispatch_table(pTable, gpuw->pGPA, (VK_PHYSICAL_GPU) gpuw->nextObject);
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
* DEALINGS IN THE SOFTWARE.
*/
-#include "xglLayer.h"
+#include "vkLayer.h"
// Object Tracker ERROR codes
typedef enum _OBJECT_TRACK_ERROR
{
} OBJECT_STATUS;
// TODO : Make this code-generated
// Object type enum
-typedef enum _XGL_OBJECT_TYPE
+typedef enum _VK_OBJECT_TYPE
{
- XGL_OBJECT_TYPE_SAMPLER,
- XGL_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT,
- XGL_OBJECT_TYPE_DESCRIPTOR_SET,
- XGL_OBJECT_TYPE_DESCRIPTOR_POOL,
- XGL_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT,
- XGL_OBJECT_TYPE_IMAGE_VIEW,
- XGL_OBJECT_TYPE_SEMAPHORE,
- XGL_OBJECT_TYPE_SHADER,
- XGL_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT,
- XGL_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_CHAIN,
- XGL_OBJECT_TYPE_BUFFER,
- XGL_OBJECT_TYPE_PIPELINE,
- XGL_OBJECT_TYPE_DEVICE,
- XGL_OBJECT_TYPE_QUERY_POOL,
- XGL_OBJECT_TYPE_EVENT,
- XGL_OBJECT_TYPE_QUEUE,
- XGL_OBJECT_TYPE_PHYSICAL_GPU,
- XGL_OBJECT_TYPE_RENDER_PASS,
- XGL_OBJECT_TYPE_FRAMEBUFFER,
- XGL_OBJECT_TYPE_IMAGE,
- XGL_OBJECT_TYPE_BUFFER_VIEW,
- XGL_OBJECT_TYPE_DEPTH_STENCIL_VIEW,
- XGL_OBJECT_TYPE_INSTANCE,
- XGL_OBJECT_TYPE_PIPELINE_DELTA,
- XGL_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT,
- XGL_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW,
- XGL_OBJECT_TYPE_GPU_MEMORY,
- XGL_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT,
- XGL_OBJECT_TYPE_FENCE,
- XGL_OBJECT_TYPE_CMD_BUFFER,
- XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY,
+ VK_OBJECT_TYPE_SAMPLER,
+ VK_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT,
+ VK_OBJECT_TYPE_DESCRIPTOR_SET,
+ VK_OBJECT_TYPE_DESCRIPTOR_POOL,
+ VK_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT,
+ VK_OBJECT_TYPE_IMAGE_VIEW,
+ VK_OBJECT_TYPE_SEMAPHORE,
+ VK_OBJECT_TYPE_SHADER,
+ VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT,
+ VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_CHAIN,
+ VK_OBJECT_TYPE_BUFFER,
+ VK_OBJECT_TYPE_PIPELINE,
+ VK_OBJECT_TYPE_DEVICE,
+ VK_OBJECT_TYPE_QUERY_POOL,
+ VK_OBJECT_TYPE_EVENT,
+ VK_OBJECT_TYPE_QUEUE,
+ VK_OBJECT_TYPE_PHYSICAL_GPU,
+ VK_OBJECT_TYPE_RENDER_PASS,
+ VK_OBJECT_TYPE_FRAMEBUFFER,
+ VK_OBJECT_TYPE_IMAGE,
+ VK_OBJECT_TYPE_BUFFER_VIEW,
+ VK_OBJECT_TYPE_DEPTH_STENCIL_VIEW,
+ VK_OBJECT_TYPE_INSTANCE,
+ VK_OBJECT_TYPE_PIPELINE_DELTA,
+ VK_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT,
+ VK_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW,
+ VK_OBJECT_TYPE_GPU_MEMORY,
+ VK_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT,
+ VK_OBJECT_TYPE_FENCE,
+ VK_OBJECT_TYPE_CMD_BUFFER,
+ VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY,
- XGL_OBJECT_TYPE_UNKNOWN,
- XGL_NUM_OBJECT_TYPE,
- XGL_OBJECT_TYPE_ANY, // Allow global object list to be queried/retrieved
-} XGL_OBJECT_TYPE;
+ VK_OBJECT_TYPE_UNKNOWN,
+ VK_NUM_OBJECT_TYPE,
+ VK_OBJECT_TYPE_ANY, // Allow global object list to be queried/retrieved
+} VK_OBJECT_TYPE;
-static const char* string_XGL_OBJECT_TYPE(XGL_OBJECT_TYPE type) {
+static const char* string_VK_OBJECT_TYPE(VK_OBJECT_TYPE type) {
switch (type)
{
- case XGL_OBJECT_TYPE_DEVICE:
+ case VK_OBJECT_TYPE_DEVICE:
return "DEVICE";
- case XGL_OBJECT_TYPE_PIPELINE:
+ case VK_OBJECT_TYPE_PIPELINE:
return "PIPELINE";
- case XGL_OBJECT_TYPE_FENCE:
+ case VK_OBJECT_TYPE_FENCE:
return "FENCE";
- case XGL_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT:
+ case VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT:
return "DESCRIPTOR_SET_LAYOUT";
- case XGL_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_CHAIN:
+ case VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_CHAIN:
return "DESCRIPTOR_SET_LAYOUT_CHAIN";
- case XGL_OBJECT_TYPE_GPU_MEMORY:
+ case VK_OBJECT_TYPE_GPU_MEMORY:
return "GPU_MEMORY";
- case XGL_OBJECT_TYPE_QUEUE:
+ case VK_OBJECT_TYPE_QUEUE:
return "QUEUE";
- case XGL_OBJECT_TYPE_IMAGE:
+ case VK_OBJECT_TYPE_IMAGE:
return "IMAGE";
- case XGL_OBJECT_TYPE_CMD_BUFFER:
+ case VK_OBJECT_TYPE_CMD_BUFFER:
return "CMD_BUFFER";
- case XGL_OBJECT_TYPE_SEMAPHORE:
+ case VK_OBJECT_TYPE_SEMAPHORE:
return "SEMAPHORE";
- case XGL_OBJECT_TYPE_FRAMEBUFFER:
+ case VK_OBJECT_TYPE_FRAMEBUFFER:
return "FRAMEBUFFER";
- case XGL_OBJECT_TYPE_SAMPLER:
+ case VK_OBJECT_TYPE_SAMPLER:
return "SAMPLER";
- case XGL_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW:
+ case VK_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW:
return "COLOR_ATTACHMENT_VIEW";
- case XGL_OBJECT_TYPE_BUFFER_VIEW:
+ case VK_OBJECT_TYPE_BUFFER_VIEW:
return "BUFFER_VIEW";
- case XGL_OBJECT_TYPE_DESCRIPTOR_SET:
+ case VK_OBJECT_TYPE_DESCRIPTOR_SET:
return "DESCRIPTOR_SET";
- case XGL_OBJECT_TYPE_PHYSICAL_GPU:
+ case VK_OBJECT_TYPE_PHYSICAL_GPU:
return "PHYSICAL_GPU";
- case XGL_OBJECT_TYPE_IMAGE_VIEW:
+ case VK_OBJECT_TYPE_IMAGE_VIEW:
return "IMAGE_VIEW";
- case XGL_OBJECT_TYPE_BUFFER:
+ case VK_OBJECT_TYPE_BUFFER:
return "BUFFER";
- case XGL_OBJECT_TYPE_PIPELINE_DELTA:
+ case VK_OBJECT_TYPE_PIPELINE_DELTA:
return "PIPELINE_DELTA";
- case XGL_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT:
return "DYNAMIC_RS_STATE_OBJECT";
- case XGL_OBJECT_TYPE_EVENT:
+ case VK_OBJECT_TYPE_EVENT:
return "EVENT";
- case XGL_OBJECT_TYPE_DEPTH_STENCIL_VIEW:
+ case VK_OBJECT_TYPE_DEPTH_STENCIL_VIEW:
return "DEPTH_STENCIL_VIEW";
- case XGL_OBJECT_TYPE_SHADER:
+ case VK_OBJECT_TYPE_SHADER:
return "SHADER";
- case XGL_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT:
return "DYNAMIC_DS_STATE_OBJECT";
- case XGL_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT:
return "DYNAMIC_VP_STATE_OBJECT";
- case XGL_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT:
return "DYNAMIC_CB_STATE_OBJECT";
- case XGL_OBJECT_TYPE_INSTANCE:
+ case VK_OBJECT_TYPE_INSTANCE:
return "INSTANCE";
- case XGL_OBJECT_TYPE_RENDER_PASS:
+ case VK_OBJECT_TYPE_RENDER_PASS:
return "RENDER_PASS";
- case XGL_OBJECT_TYPE_QUERY_POOL:
+ case VK_OBJECT_TYPE_QUERY_POOL:
return "QUERY_POOL";
- case XGL_OBJECT_TYPE_DESCRIPTOR_POOL:
+ case VK_OBJECT_TYPE_DESCRIPTOR_POOL:
return "DESCRIPTOR_POOL";
- case XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY:
+ case VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY:
return "PRESENTABLE_IMAGE_MEMORY";
default:
            return "UNKNOWN";
    }
}
typedef struct _OBJTRACK_NODE {
void *pObj;
- XGL_OBJECT_TYPE objType;
+ VK_OBJECT_TYPE objType;
uint64_t numUses;
OBJECT_STATUS status;
} OBJTRACK_NODE;
// prototype for extension functions
-uint64_t objTrackGetObjectCount(XGL_OBJECT_TYPE type);
-XGL_RESULT objTrackGetObjects(XGL_OBJECT_TYPE type, uint64_t objCount, OBJTRACK_NODE* pObjNodeArray);
+uint64_t objTrackGetObjectCount(VK_OBJECT_TYPE type);
+VK_RESULT objTrackGetObjects(VK_OBJECT_TYPE type, uint64_t objCount, OBJTRACK_NODE* pObjNodeArray);
// Func ptr typedefs
-typedef uint64_t (*OBJ_TRACK_GET_OBJECT_COUNT)(XGL_OBJECT_TYPE);
-typedef XGL_RESULT (*OBJ_TRACK_GET_OBJECTS)(XGL_OBJECT_TYPE, uint64_t, OBJTRACK_NODE*);
+typedef uint64_t (*OBJ_TRACK_GET_OBJECT_COUNT)(VK_OBJECT_TYPE);
+typedef VK_RESULT (*OBJ_TRACK_GET_OBJECTS)(VK_OBJECT_TYPE, uint64_t, OBJTRACK_NODE*);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <sstream>
#include "loader_platform.h"
-#include "xglLayer.h"
+#include "vkLayer.h"
#include "layers_config.h"
-#include "xgl_enum_validate_helper.h"
-#include "xgl_struct_validate_helper.h"
+#include "vk_enum_validate_helper.h"
+#include "vk_struct_validate_helper.h"
//The following is #included again to catch certain OS-specific functions being used:
#include "loader_platform.h"
#include "layers_msg.h"
-static XGL_LAYER_DISPATCH_TABLE nextTable;
-static XGL_BASE_LAYER_OBJECT *pCurObj;
+static VK_LAYER_DISPATCH_TABLE nextTable;
+static VK_BASE_LAYER_OBJECT *pCurObj;
static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);
-#include "xgl_dispatch_table_helper.h"
+#include "vk_dispatch_table_helper.h"
static void initParamChecker(void)
{
getLayerOptionEnum("ParamCheckerReportLevel", (uint32_t *) &g_reportingLevel);
g_actionIsDefault = getLayerOptionEnum("ParamCheckerDebugAction", (uint32_t *) &g_debugAction);
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)
{
strOpt = getLayerOption("ParamCheckerLogFilename");
if (strOpt)
g_logFile = stdout;
}
- xglGetProcAddrType fpNextGPA;
+ vkGetProcAddrType fpNextGPA;
fpNextGPA = pCurObj->pGPA;
assert(fpNextGPA);
- layer_initialize_dispatch_table(&nextTable, fpNextGPA, (XGL_PHYSICAL_GPU) pCurObj->nextObject);
+ layer_initialize_dispatch_table(&nextTable, fpNextGPA, (VK_PHYSICAL_GPU) pCurObj->nextObject);
}
-void PreCreateInstance(const XGL_APPLICATION_INFO* pAppInfo, const XGL_ALLOC_CALLBACKS* pAllocCb)
+void PreCreateInstance(const VK_APPLICATION_INFO* pAppInfo, const VK_ALLOC_CALLBACKS* pAllocCb)
{
if(pAppInfo == nullptr)
{
- char const str[] = "xglCreateInstance parameter, XGL_APPLICATION_INFO* pAppInfo, is "\
+ char const str[] = "vkCreateInstance parameter, VK_APPLICATION_INFO* pAppInfo, is "\
"nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(pAppInfo->sType != XGL_STRUCTURE_TYPE_APPLICATION_INFO)
+ if(pAppInfo->sType != VK_STRUCTURE_TYPE_APPLICATION_INFO)
{
- char const str[] = "xglCreateInstance parameter, XGL_STRUCTURE_TYPE_APPLICATION_INFO "\
- "pAppInfo->sType, is not XGL_STRUCTURE_TYPE_APPLICATION_INFO (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateInstance parameter, VK_STRUCTURE_TYPE_APPLICATION_INFO "\
+ "pAppInfo->sType, is not VK_STRUCTURE_TYPE_APPLICATION_INFO (precondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
    // TODO: What else can be validated in pAppInfo?
- // TODO: XGL_API_VERSION validation.
+ // TODO: VK_API_VERSION validation.
// It's okay if pAllocCb is a nullptr.
if(pAllocCb != nullptr)
{
- if(!xgl_validate_xgl_alloc_callbacks(pAllocCb))
+ if(!vk_validate_vk_alloc_callbacks(pAllocCb))
{
- char const str[] = "xglCreateInstance parameter, XGL_ALLOC_CALLBACKS* pAllocCb, "\
+ char const str[] = "vkCreateInstance parameter, VK_ALLOC_CALLBACKS* pAllocCb, "\
"contains an invalid value (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
}
-void PostCreateInstance(XGL_RESULT result, XGL_INSTANCE* pInstance)
+void PostCreateInstance(VK_RESULT result, VK_INSTANCE* pInstance)
{
- if(result != XGL_SUCCESS)
+ if(result != VK_SUCCESS)
{
- // TODO: Spit out XGL_RESULT value.
- char const str[] = "xglCreateInstance failed (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ // TODO: Spit out VK_RESULT value.
+ char const str[] = "vkCreateInstance failed (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pInstance == nullptr)
{
- char const str[] = "xglCreateInstance parameter, XGL_INSTANCE* pInstance, is nullptr (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateInstance parameter, VK_INSTANCE* pInstance, is nullptr "\
+ "(postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateInstance(const XGL_INSTANCE_CREATE_INFO* pCreateInfo, XGL_INSTANCE* pInstance)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateInstance(const VK_INSTANCE_CREATE_INFO* pCreateInfo, VK_INSTANCE* pInstance)
{
PreCreateInstance(pCreateInfo->pAppInfo, pCreateInfo->pAllocCb);
- XGL_RESULT result = nextTable.CreateInstance(pCreateInfo, pInstance);
+ VK_RESULT result = nextTable.CreateInstance(pCreateInfo, pInstance);
PostCreateInstance(result, pInstance);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyInstance(XGL_INSTANCE instance)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyInstance(VK_INSTANCE instance)
{
- XGL_RESULT result = nextTable.DestroyInstance(instance);
+ VK_RESULT result = nextTable.DestroyInstance(instance);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateGpus(XGL_INSTANCE instance, uint32_t maxGpus, uint32_t* pGpuCount, XGL_PHYSICAL_GPU* pGpus)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateGpus(VK_INSTANCE instance, uint32_t maxGpus, uint32_t* pGpuCount, VK_PHYSICAL_GPU* pGpus)
{
- XGL_RESULT result = nextTable.EnumerateGpus(instance, maxGpus, pGpuCount, pGpus);
+ VK_RESULT result = nextTable.EnumerateGpus(instance, maxGpus, pGpuCount, pGpus);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetGpuInfo(XGL_PHYSICAL_GPU gpu, XGL_PHYSICAL_GPU_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetGpuInfo(VK_PHYSICAL_GPU gpu, VK_PHYSICAL_GPU_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initParamChecker);
char str[1024];
- if (!validate_XGL_PHYSICAL_GPU_INFO_TYPE(infoType)) {
+ if (!validate_VK_PHYSICAL_GPU_INFO_TYPE(infoType)) {
sprintf(str, "Parameter infoType to function GetGpuInfo has invalid value of %i.", (int)infoType);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.GetGpuInfo((XGL_PHYSICAL_GPU)gpuw->nextObject, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetGpuInfo((VK_PHYSICAL_GPU)gpuw->nextObject, infoType, pDataSize, pData);
return result;
}
-void PreCreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo)
+void PreCreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo)
{
if(gpu == nullptr)
{
- char const str[] = "xglCreateDevice parameter, XGL_PHYSICAL_GPU gpu, is nullptr "\
+ char const str[] = "vkCreateDevice parameter, VK_PHYSICAL_GPU gpu, is nullptr "\
"(precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pCreateInfo == nullptr)
{
- char const str[] = "xglCreateDevice parameter, XGL_DEVICE_CREATE_INFO* pCreateInfo, is "\
+ char const str[] = "vkCreateDevice parameter, VK_DEVICE_CREATE_INFO* pCreateInfo, is "\
"nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(pCreateInfo->sType != XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO)
+ if(pCreateInfo->sType != VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO)
{
- char const str[] = "xglCreateDevice parameter, XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO "\
- "pCreateInfo->sType, is not XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateDevice parameter, VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO "\
+ "pCreateInfo->sType, is not VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO (precondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pCreateInfo->queueRecordCount == 0)
{
- char const str[] = "xglCreateDevice parameter, uint32_t pCreateInfo->queueRecordCount, is "\
+ char const str[] = "vkCreateDevice parameter, uint32_t pCreateInfo->queueRecordCount, is "\
"zero (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pCreateInfo->pRequestedQueues == nullptr)
{
- char const str[] = "xglCreateDevice parameter, XGL_DEVICE_QUEUE_CREATE_INFO* pCreateInfo->pRequestedQueues, is "\
+ char const str[] = "vkCreateDevice parameter, VK_DEVICE_QUEUE_CREATE_INFO* pCreateInfo->pRequestedQueues, is "\
"nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
for(uint32_t i = 0; i < pCreateInfo->queueRecordCount; ++i)
{
- if(!xgl_validate_xgl_device_queue_create_info(&(pCreateInfo->pRequestedQueues[i])))
+ if(!vk_validate_vk_device_queue_create_info(&(pCreateInfo->pRequestedQueues[i])))
{
std::stringstream ss;
- ss << "xglCreateDevice parameter, XGL_DEVICE_QUEUE_CREATE_INFO pCreateInfo->pRequestedQueues[" << i <<
+ ss << "vkCreateDevice parameter, VK_DEVICE_QUEUE_CREATE_INFO pCreateInfo->pRequestedQueues[" << i <<
"], is invalid (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
}
- if(!validate_XGL_VALIDATION_LEVEL(pCreateInfo->maxValidationLevel))
+ if(!validate_VK_VALIDATION_LEVEL(pCreateInfo->maxValidationLevel))
{
- char const str[] = "xglCreateDevice parameter, XGL_VALIDATION_LEVEL pCreateInfo->maxValidationLevel, is "\
+ char const str[] = "vkCreateDevice parameter, VK_VALIDATION_LEVEL pCreateInfo->maxValidationLevel, is "\
"unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-void PostCreateDevice(XGL_RESULT result, XGL_DEVICE* pDevice)
+void PostCreateDevice(VK_RESULT result, VK_DEVICE* pDevice)
{
- if(result != XGL_SUCCESS)
+ if(result != VK_SUCCESS)
{
- // TODO: Spit out XGL_RESULT value.
- char const str[] = "xglCreateDevice failed (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ // TODO: Spit out VK_RESULT value.
+ char const str[] = "vkCreateDevice failed (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pDevice == nullptr)
{
- char const str[] = "xglCreateDevice parameter, XGL_DEVICE* pDevice, is nullptr (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateDevice parameter, VK_DEVICE* pDevice, is nullptr (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initParamChecker);
PreCreateDevice(gpu, pCreateInfo);
- XGL_RESULT result = nextTable.CreateDevice((XGL_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
+ VK_RESULT result = nextTable.CreateDevice((VK_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
PostCreateDevice(result, pDevice);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyDevice(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyDevice(VK_DEVICE device)
{
- XGL_RESULT result = nextTable.DestroyDevice(device);
+ VK_RESULT result = nextTable.DestroyDevice(device);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char* pExtName)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char* pExtName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initParamChecker);
- XGL_RESULT result = nextTable.GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);
+ VK_RESULT result = nextTable.GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
{
char str[1024];
if (gpu != NULL) {
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
sprintf(str, "At start of layered EnumerateLayers\n");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, nullptr, 0, 0, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, nullptr, 0, 0, "PARAMCHECK", str);
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initParamChecker);
- XGL_RESULT result = nextTable.EnumerateLayers((XGL_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ VK_RESULT result = nextTable.EnumerateLayers((VK_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
sprintf(str, "Completed layered EnumerateLayers\n");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, nullptr, 0, 0, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, nullptr, 0, 0, "PARAMCHECK", str);
fflush(stdout);
return result;
} else {
if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
// This layer compatible with all GPUs
*pOutLayerCount = 1;
strncpy(pOutLayers[0], "ParamChecker", maxStringSize);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetDeviceQueue(XGL_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, XGL_QUEUE* pQueue)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetDeviceQueue(VK_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, VK_QUEUE* pQueue)
{
- XGL_RESULT result = nextTable.GetDeviceQueue(device, queueNodeIndex, queueIndex, pQueue);
+ VK_RESULT result = nextTable.GetDeviceQueue(device, queueNodeIndex, queueIndex, pQueue);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueSubmit(XGL_QUEUE queue, uint32_t cmdBufferCount, const XGL_CMD_BUFFER* pCmdBuffers, XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueSubmit(VK_QUEUE queue, uint32_t cmdBufferCount, const VK_CMD_BUFFER* pCmdBuffers, VK_FENCE fence)
{
- XGL_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, fence);
+ VK_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, fence);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueAddMemReference(XGL_QUEUE queue, XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueAddMemReference(VK_QUEUE queue, VK_GPU_MEMORY mem)
{
- XGL_RESULT result = nextTable.QueueAddMemReference(queue, mem);
+ VK_RESULT result = nextTable.QueueAddMemReference(queue, mem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueRemoveMemReference(XGL_QUEUE queue, XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueRemoveMemReference(VK_QUEUE queue, VK_GPU_MEMORY mem)
{
- XGL_RESULT result = nextTable.QueueRemoveMemReference(queue, mem);
+ VK_RESULT result = nextTable.QueueRemoveMemReference(queue, mem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueWaitIdle(XGL_QUEUE queue)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueWaitIdle(VK_QUEUE queue)
{
- XGL_RESULT result = nextTable.QueueWaitIdle(queue);
+ VK_RESULT result = nextTable.QueueWaitIdle(queue);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDeviceWaitIdle(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDeviceWaitIdle(VK_DEVICE device)
{
- XGL_RESULT result = nextTable.DeviceWaitIdle(device);
+ VK_RESULT result = nextTable.DeviceWaitIdle(device);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglAllocMemory(XGL_DEVICE device, const XGL_MEMORY_ALLOC_INFO* pAllocInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkAllocMemory(VK_DEVICE device, const VK_MEMORY_ALLOC_INFO* pAllocInfo, VK_GPU_MEMORY* pMem)
{
char str[1024];
if (!pAllocInfo) {
sprintf(str, "Struct ptr parameter pAllocInfo to function AllocMemory is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_memory_alloc_info(pAllocInfo)) {
+ else if (!vk_validate_vk_memory_alloc_info(pAllocInfo)) {
sprintf(str, "Parameter pAllocInfo to function AllocMemory contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.AllocMemory(device, pAllocInfo, pMem);
+ VK_RESULT result = nextTable.AllocMemory(device, pAllocInfo, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglFreeMemory(XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkFreeMemory(VK_GPU_MEMORY mem)
{
- XGL_RESULT result = nextTable.FreeMemory(mem);
+ VK_RESULT result = nextTable.FreeMemory(mem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglSetMemoryPriority(XGL_GPU_MEMORY mem, XGL_MEMORY_PRIORITY priority)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkSetMemoryPriority(VK_GPU_MEMORY mem, VK_MEMORY_PRIORITY priority)
{
char str[1024];
- if (!validate_XGL_MEMORY_PRIORITY(priority)) {
+ if (!validate_VK_MEMORY_PRIORITY(priority)) {
sprintf(str, "Parameter priority to function SetMemoryPriority has invalid value of %i.", (int)priority);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.SetMemoryPriority(mem, priority);
+ VK_RESULT result = nextTable.SetMemoryPriority(mem, priority);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglMapMemory(XGL_GPU_MEMORY mem, XGL_FLAGS flags, void** ppData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkMapMemory(VK_GPU_MEMORY mem, VK_FLAGS flags, void** ppData)
{
- XGL_RESULT result = nextTable.MapMemory(mem, flags, ppData);
+ VK_RESULT result = nextTable.MapMemory(mem, flags, ppData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglUnmapMemory(XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkUnmapMemory(VK_GPU_MEMORY mem)
{
- XGL_RESULT result = nextTable.UnmapMemory(mem);
+ VK_RESULT result = nextTable.UnmapMemory(mem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglPinSystemMemory(XGL_DEVICE device, const void* pSysMem, size_t memSize, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkPinSystemMemory(VK_DEVICE device, const void* pSysMem, size_t memSize, VK_GPU_MEMORY* pMem)
{
- XGL_RESULT result = nextTable.PinSystemMemory(device, pSysMem, memSize, pMem);
+ VK_RESULT result = nextTable.PinSystemMemory(device, pSysMem, memSize, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetMultiGpuCompatibility(XGL_PHYSICAL_GPU gpu0, XGL_PHYSICAL_GPU gpu1, XGL_GPU_COMPATIBILITY_INFO* pInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetMultiGpuCompatibility(VK_PHYSICAL_GPU gpu0, VK_PHYSICAL_GPU gpu1, VK_GPU_COMPATIBILITY_INFO* pInfo)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu0;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu0;
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initParamChecker);
- XGL_RESULT result = nextTable.GetMultiGpuCompatibility((XGL_PHYSICAL_GPU)gpuw->nextObject, gpu1, pInfo);
+ VK_RESULT result = nextTable.GetMultiGpuCompatibility((VK_PHYSICAL_GPU)gpuw->nextObject, gpu1, pInfo);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenSharedMemory(XGL_DEVICE device, const XGL_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenSharedMemory(VK_DEVICE device, const VK_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem)
{
char str[1024];
if (!pOpenInfo) {
sprintf(str, "Struct ptr parameter pOpenInfo to function OpenSharedMemory is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_memory_open_info(pOpenInfo)) {
+ else if (!vk_validate_vk_memory_open_info(pOpenInfo)) {
sprintf(str, "Parameter pOpenInfo to function OpenSharedMemory contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.OpenSharedMemory(device, pOpenInfo, pMem);
+ VK_RESULT result = nextTable.OpenSharedMemory(device, pOpenInfo, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenSharedSemaphore(XGL_DEVICE device, const XGL_SEMAPHORE_OPEN_INFO* pOpenInfo, XGL_SEMAPHORE* pSemaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenSharedSemaphore(VK_DEVICE device, const VK_SEMAPHORE_OPEN_INFO* pOpenInfo, VK_SEMAPHORE* pSemaphore)
{
char str[1024];
if (!pOpenInfo) {
sprintf(str, "Struct ptr parameter pOpenInfo to function OpenSharedSemaphore is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_semaphore_open_info(pOpenInfo)) {
+ else if (!vk_validate_vk_semaphore_open_info(pOpenInfo)) {
sprintf(str, "Parameter pOpenInfo to function OpenSharedSemaphore contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.OpenSharedSemaphore(device, pOpenInfo, pSemaphore);
+ VK_RESULT result = nextTable.OpenSharedSemaphore(device, pOpenInfo, pSemaphore);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenPeerMemory(XGL_DEVICE device, const XGL_PEER_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenPeerMemory(VK_DEVICE device, const VK_PEER_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem)
{
char str[1024];
if (!pOpenInfo) {
sprintf(str, "Struct ptr parameter pOpenInfo to function OpenPeerMemory is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_peer_memory_open_info(pOpenInfo)) {
+ else if (!vk_validate_vk_peer_memory_open_info(pOpenInfo)) {
sprintf(str, "Parameter pOpenInfo to function OpenPeerMemory contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.OpenPeerMemory(device, pOpenInfo, pMem);
+ VK_RESULT result = nextTable.OpenPeerMemory(device, pOpenInfo, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenPeerImage(XGL_DEVICE device, const XGL_PEER_IMAGE_OPEN_INFO* pOpenInfo, XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenPeerImage(VK_DEVICE device, const VK_PEER_IMAGE_OPEN_INFO* pOpenInfo, VK_IMAGE* pImage, VK_GPU_MEMORY* pMem)
{
char str[1024];
if (!pOpenInfo) {
sprintf(str, "Struct ptr parameter pOpenInfo to function OpenPeerImage is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_peer_image_open_info(pOpenInfo)) {
+ else if (!vk_validate_vk_peer_image_open_info(pOpenInfo)) {
sprintf(str, "Parameter pOpenInfo to function OpenPeerImage contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.OpenPeerImage(device, pOpenInfo, pImage, pMem);
+ VK_RESULT result = nextTable.OpenPeerImage(device, pOpenInfo, pImage, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyObject(XGL_OBJECT object)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyObject(VK_OBJECT object)
{
- XGL_RESULT result = nextTable.DestroyObject(object);
+ VK_RESULT result = nextTable.DestroyObject(object);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetObjectInfo(XGL_BASE_OBJECT object, XGL_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetObjectInfo(VK_BASE_OBJECT object, VK_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
char str[1024];
- if (!validate_XGL_OBJECT_INFO_TYPE(infoType)) {
+ if (!validate_VK_OBJECT_INFO_TYPE(infoType)) {
sprintf(str, "Parameter infoType to function GetObjectInfo has invalid value of %i.", (int)infoType);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.GetObjectInfo(object, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetObjectInfo(object, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBindObjectMemory(XGL_OBJECT object, uint32_t allocationIdx, XGL_GPU_MEMORY mem, XGL_GPU_SIZE offset)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBindObjectMemory(VK_OBJECT object, uint32_t allocationIdx, VK_GPU_MEMORY mem, VK_GPU_SIZE offset)
{
- XGL_RESULT result = nextTable.BindObjectMemory(object, allocationIdx, mem, offset);
+ VK_RESULT result = nextTable.BindObjectMemory(object, allocationIdx, mem, offset);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBindObjectMemoryRange(XGL_OBJECT object, uint32_t allocationIdx, XGL_GPU_SIZE rangeOffset, XGL_GPU_SIZE rangeSize, XGL_GPU_MEMORY mem, XGL_GPU_SIZE memOffset)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBindObjectMemoryRange(VK_OBJECT object, uint32_t allocationIdx, VK_GPU_SIZE rangeOffset, VK_GPU_SIZE rangeSize, VK_GPU_MEMORY mem, VK_GPU_SIZE memOffset)
{
- XGL_RESULT result = nextTable.BindObjectMemoryRange(object, allocationIdx, rangeOffset, rangeSize, mem, memOffset);
+ VK_RESULT result = nextTable.BindObjectMemoryRange(object, allocationIdx, rangeOffset, rangeSize, mem, memOffset);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBindImageMemoryRange(XGL_IMAGE image, uint32_t allocationIdx, const XGL_IMAGE_MEMORY_BIND_INFO* bindInfo, XGL_GPU_MEMORY mem, XGL_GPU_SIZE memOffset)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBindImageMemoryRange(VK_IMAGE image, uint32_t allocationIdx, const VK_IMAGE_MEMORY_BIND_INFO* bindInfo, VK_GPU_MEMORY mem, VK_GPU_SIZE memOffset)
{
char str[1024];
if (!bindInfo) {
sprintf(str, "Struct ptr parameter bindInfo to function BindImageMemoryRange is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_image_memory_bind_info(bindInfo)) {
+ else if (!vk_validate_vk_image_memory_bind_info(bindInfo)) {
sprintf(str, "Parameter bindInfo to function BindImageMemoryRange contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.BindImageMemoryRange(image, allocationIdx, bindInfo, mem, memOffset);
+ VK_RESULT result = nextTable.BindImageMemoryRange(image, allocationIdx, bindInfo, mem, memOffset);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateFence(XGL_DEVICE device, const XGL_FENCE_CREATE_INFO* pCreateInfo, XGL_FENCE* pFence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateFence(VK_DEVICE device, const VK_FENCE_CREATE_INFO* pCreateInfo, VK_FENCE* pFence)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateFence is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_fence_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_fence_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateFence contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateFence(device, pCreateInfo, pFence);
+ VK_RESULT result = nextTable.CreateFence(device, pCreateInfo, pFence);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetFenceStatus(XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetFenceStatus(VK_FENCE fence)
{
- XGL_RESULT result = nextTable.GetFenceStatus(fence);
+ VK_RESULT result = nextTable.GetFenceStatus(fence);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWaitForFences(XGL_DEVICE device, uint32_t fenceCount, const XGL_FENCE* pFences, bool32_t waitAll, uint64_t timeout)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWaitForFences(VK_DEVICE device, uint32_t fenceCount, const VK_FENCE* pFences, bool32_t waitAll, uint64_t timeout)
{
- XGL_RESULT result = nextTable.WaitForFences(device, fenceCount, pFences, waitAll, timeout);
+ VK_RESULT result = nextTable.WaitForFences(device, fenceCount, pFences, waitAll, timeout);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetFences(XGL_DEVICE device, uint32_t fenceCount, XGL_FENCE* pFences)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetFences(VK_DEVICE device, uint32_t fenceCount, VK_FENCE* pFences)
{
- XGL_RESULT result = nextTable.ResetFences(device, fenceCount, pFences);
+ VK_RESULT result = nextTable.ResetFences(device, fenceCount, pFences);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateSemaphore(XGL_DEVICE device, const XGL_SEMAPHORE_CREATE_INFO* pCreateInfo, XGL_SEMAPHORE* pSemaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateSemaphore(VK_DEVICE device, const VK_SEMAPHORE_CREATE_INFO* pCreateInfo, VK_SEMAPHORE* pSemaphore)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateSemaphore is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_semaphore_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_semaphore_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateSemaphore contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateSemaphore(device, pCreateInfo, pSemaphore);
+ VK_RESULT result = nextTable.CreateSemaphore(device, pCreateInfo, pSemaphore);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueSignalSemaphore(XGL_QUEUE queue, XGL_SEMAPHORE semaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueSignalSemaphore(VK_QUEUE queue, VK_SEMAPHORE semaphore)
{
- XGL_RESULT result = nextTable.QueueSignalSemaphore(queue, semaphore);
+ VK_RESULT result = nextTable.QueueSignalSemaphore(queue, semaphore);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueWaitSemaphore(XGL_QUEUE queue, XGL_SEMAPHORE semaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueWaitSemaphore(VK_QUEUE queue, VK_SEMAPHORE semaphore)
{
- XGL_RESULT result = nextTable.QueueWaitSemaphore(queue, semaphore);
+ VK_RESULT result = nextTable.QueueWaitSemaphore(queue, semaphore);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateEvent(XGL_DEVICE device, const XGL_EVENT_CREATE_INFO* pCreateInfo, XGL_EVENT* pEvent)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateEvent(VK_DEVICE device, const VK_EVENT_CREATE_INFO* pCreateInfo, VK_EVENT* pEvent)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateEvent is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_event_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_event_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateEvent contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateEvent(device, pCreateInfo, pEvent);
+ VK_RESULT result = nextTable.CreateEvent(device, pCreateInfo, pEvent);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetEventStatus(XGL_EVENT event)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetEventStatus(VK_EVENT event)
{
- XGL_RESULT result = nextTable.GetEventStatus(event);
+ VK_RESULT result = nextTable.GetEventStatus(event);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglSetEvent(XGL_EVENT event)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkSetEvent(VK_EVENT event)
{
- XGL_RESULT result = nextTable.SetEvent(event);
+ VK_RESULT result = nextTable.SetEvent(event);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetEvent(XGL_EVENT event)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetEvent(VK_EVENT event)
{
- XGL_RESULT result = nextTable.ResetEvent(event);
+ VK_RESULT result = nextTable.ResetEvent(event);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateQueryPool(XGL_DEVICE device, const XGL_QUERY_POOL_CREATE_INFO* pCreateInfo, XGL_QUERY_POOL* pQueryPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateQueryPool(VK_DEVICE device, const VK_QUERY_POOL_CREATE_INFO* pCreateInfo, VK_QUERY_POOL* pQueryPool)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateQueryPool is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_query_pool_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_query_pool_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateQueryPool contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateQueryPool(device, pCreateInfo, pQueryPool);
+ VK_RESULT result = nextTable.CreateQueryPool(device, pCreateInfo, pQueryPool);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetQueryPoolResults(XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetQueryPoolResults(VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount, size_t* pDataSize, void* pData)
{
- XGL_RESULT result = nextTable.GetQueryPoolResults(queryPool, startQuery, queryCount, pDataSize, pData);
+ VK_RESULT result = nextTable.GetQueryPoolResults(queryPool, startQuery, queryCount, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetFormatInfo(XGL_DEVICE device, XGL_FORMAT format, XGL_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetFormatInfo(VK_DEVICE device, VK_FORMAT format, VK_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
char str[1024];
- if (!validate_XGL_FORMAT(format)) {
+ if (!validate_VK_FORMAT(format)) {
sprintf(str, "Parameter format to function GetFormatInfo has invalid value of %i.", (int)format);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- if (!validate_XGL_FORMAT_INFO_TYPE(infoType)) {
+ if (!validate_VK_FORMAT_INFO_TYPE(infoType)) {
sprintf(str, "Parameter infoType to function GetFormatInfo has invalid value of %i.", (int)infoType);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.GetFormatInfo(device, format, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetFormatInfo(device, format, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateBuffer(XGL_DEVICE device, const XGL_BUFFER_CREATE_INFO* pCreateInfo, XGL_BUFFER* pBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateBuffer(VK_DEVICE device, const VK_BUFFER_CREATE_INFO* pCreateInfo, VK_BUFFER* pBuffer)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateBuffer is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_buffer_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_buffer_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateBuffer contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateBuffer(device, pCreateInfo, pBuffer);
+ VK_RESULT result = nextTable.CreateBuffer(device, pCreateInfo, pBuffer);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateBufferView(XGL_DEVICE device, const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo, XGL_BUFFER_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateBufferView(VK_DEVICE device, const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo, VK_BUFFER_VIEW* pView)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateBufferView is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_buffer_view_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_buffer_view_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateBufferView contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
+ VK_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
return result;
}
-void PreCreateImage(XGL_DEVICE device, const XGL_IMAGE_CREATE_INFO* pCreateInfo)
+void PreCreateImage(VK_DEVICE device, const VK_IMAGE_CREATE_INFO* pCreateInfo)
{
if(pCreateInfo == nullptr)
{
- char const str[] = "xglCreateImage parameter, XGL_IMAGE_CREATE_INFO* pCreateInfo, is "\
+ char const str[] = "vkCreateImage parameter, VK_IMAGE_CREATE_INFO* pCreateInfo, is "\
"nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(pCreateInfo->sType != XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO)
+ if(pCreateInfo->sType != VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO)
{
- char const str[] = "xglCreateImage parameter, XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO "\
- "pCreateInfo->sType, is not XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateImage parameter, VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO "\
+ "pCreateInfo->sType, is not VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO (precondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if (!validate_XGL_IMAGE_TYPE(pCreateInfo->imageType))
+ if (!validate_VK_IMAGE_TYPE(pCreateInfo->imageType))
{
- char const str[] = "xglCreateImage parameter, XGL_IMAGE_TYPE pCreateInfo->imageType, is "\
+ char const str[] = "vkCreateImage parameter, VK_IMAGE_TYPE pCreateInfo->imageType, is "\
"unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if (!validate_XGL_FORMAT(pCreateInfo->format))
+ if (!validate_VK_FORMAT(pCreateInfo->format))
{
- char const str[] = "xglCreateImage parameter, XGL_FORMAT pCreateInfo->format, is "\
+ char const str[] = "vkCreateImage parameter, VK_FORMAT pCreateInfo->format, is "\
"unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- XGL_FORMAT_PROPERTIES properties;
+ VK_FORMAT_PROPERTIES properties;
size_t size = sizeof(properties);
- XGL_RESULT result = nextTable.GetFormatInfo(device, pCreateInfo->format,
- XGL_INFO_TYPE_FORMAT_PROPERTIES, &size, &properties);
- if(result != XGL_SUCCESS)
+ VK_RESULT result = nextTable.GetFormatInfo(device, pCreateInfo->format,
+ VK_INFO_TYPE_FORMAT_PROPERTIES, &size, &properties);
+ if(result != VK_SUCCESS)
{
- char const str[] = "xglCreateImage parameter, XGL_FORMAT pCreateInfo->format, cannot be "\
+ char const str[] = "vkCreateImage parameter, VK_FORMAT pCreateInfo->format, cannot be "\
"validated (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if((properties.linearTilingFeatures) == 0 && (properties.optimalTilingFeatures == 0))
{
- char const str[] = "xglCreateImage parameter, XGL_FORMAT pCreateInfo->format, contains "\
+ char const str[] = "vkCreateImage parameter, VK_FORMAT pCreateInfo->format, contains "\
"unsupported format (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
// TODO: Can we check device-specific limits?
- if (!xgl_validate_xgl_extent3d(&pCreateInfo->extent))
+ if (!vk_validate_vk_extent3d(&pCreateInfo->extent))
{
- char const str[] = "xglCreateImage parameter, XGL_EXTENT3D pCreateInfo->extent, is invalid "\
+ char const str[] = "vkCreateImage parameter, VK_EXTENT3D pCreateInfo->extent, is invalid "\
"(precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if (!validate_XGL_IMAGE_TILING(pCreateInfo->tiling))
+ if (!validate_VK_IMAGE_TILING(pCreateInfo->tiling))
{
- char const str[] = "xglCreateImage parameter, XGL_IMAGE_TILING pCreateInfo->tiling, is "\
+ char const str[] = "vkCreateImage parameter, VK_IMAGE_TILING pCreateInfo->tiling, is "\
        "unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-void PostCreateImage(XGL_RESULT result, XGL_IMAGE* pImage)
+void PostCreateImage(VK_RESULT result, VK_IMAGE* pImage)
{
- if(result != XGL_SUCCESS)
+ if(result != VK_SUCCESS)
{
- // TODO: Spit out XGL_RESULT value.
- char const str[] = "xglCreateImage failed (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ // TODO: Spit out VK_RESULT value.
+ char const str[] = "vkCreateImage failed (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pImage == nullptr)
{
- char const str[] = "xglCreateImage parameter, XGL_IMAGE* pImage, is nullptr (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateImage parameter, VK_IMAGE* pImage, is nullptr (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateImage(XGL_DEVICE device, const XGL_IMAGE_CREATE_INFO* pCreateInfo, XGL_IMAGE* pImage)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateImage(VK_DEVICE device, const VK_IMAGE_CREATE_INFO* pCreateInfo, VK_IMAGE* pImage)
{
PreCreateImage(device, pCreateInfo);
- XGL_RESULT result = nextTable.CreateImage(device, pCreateInfo, pImage);
+ VK_RESULT result = nextTable.CreateImage(device, pCreateInfo, pImage);
PostCreateImage(result, pImage);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetImageSubresourceInfo(XGL_IMAGE image, const XGL_IMAGE_SUBRESOURCE* pSubresource, XGL_SUBRESOURCE_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetImageSubresourceInfo(VK_IMAGE image, const VK_IMAGE_SUBRESOURCE* pSubresource, VK_SUBRESOURCE_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
char str[1024];
if (!pSubresource) {
sprintf(str, "Struct ptr parameter pSubresource to function GetImageSubresourceInfo is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_image_subresource(pSubresource)) {
+ else if (!vk_validate_vk_image_subresource(pSubresource)) {
sprintf(str, "Parameter pSubresource to function GetImageSubresourceInfo contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- if (!validate_XGL_SUBRESOURCE_INFO_TYPE(infoType)) {
+ if (!validate_VK_SUBRESOURCE_INFO_TYPE(infoType)) {
sprintf(str, "Parameter infoType to function GetImageSubresourceInfo has invalid value of %i.", (int)infoType);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.GetImageSubresourceInfo(image, pSubresource, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetImageSubresourceInfo(image, pSubresource, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateImageView(XGL_DEVICE device, const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo, XGL_IMAGE_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateImageView(VK_DEVICE device, const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo, VK_IMAGE_VIEW* pView)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateImageView is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_image_view_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_image_view_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateImageView contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
+ VK_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateColorAttachmentView(XGL_DEVICE device, const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo, XGL_COLOR_ATTACHMENT_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateColorAttachmentView(VK_DEVICE device, const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo, VK_COLOR_ATTACHMENT_VIEW* pView)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateColorAttachmentView is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_color_attachment_view_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_color_attachment_view_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateColorAttachmentView contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateColorAttachmentView(device, pCreateInfo, pView);
+ VK_RESULT result = nextTable.CreateColorAttachmentView(device, pCreateInfo, pView);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDepthStencilView(XGL_DEVICE device, const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, XGL_DEPTH_STENCIL_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDepthStencilView(VK_DEVICE device, const VK_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, VK_DEPTH_STENCIL_VIEW* pView)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateDepthStencilView is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_depth_stencil_view_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_depth_stencil_view_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateDepthStencilView contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateDepthStencilView(device, pCreateInfo, pView);
+ VK_RESULT result = nextTable.CreateDepthStencilView(device, pCreateInfo, pView);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateShader(XGL_DEVICE device, const XGL_SHADER_CREATE_INFO* pCreateInfo, XGL_SHADER* pShader)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateShader(VK_DEVICE device, const VK_SHADER_CREATE_INFO* pCreateInfo, VK_SHADER* pShader)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateShader is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_shader_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_shader_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateShader contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateShader(device, pCreateInfo, pShader);
+ VK_RESULT result = nextTable.CreateShader(device, pCreateInfo, pShader);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipeline(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipeline(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateGraphicsPipeline is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_graphics_pipeline_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_graphics_pipeline_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateGraphicsPipeline contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
+ VK_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipelineDerivative(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE basePipeline, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipelineDerivative(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE basePipeline, VK_PIPELINE* pPipeline)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateGraphicsPipelineDerivative is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_graphics_pipeline_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_graphics_pipeline_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateGraphicsPipelineDerivative contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateGraphicsPipelineDerivative(device, pCreateInfo, basePipeline, pPipeline);
+ VK_RESULT result = nextTable.CreateGraphicsPipelineDerivative(device, pCreateInfo, basePipeline, pPipeline);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateComputePipeline(XGL_DEVICE device, const XGL_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateComputePipeline(VK_DEVICE device, const VK_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateComputePipeline is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_compute_pipeline_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_compute_pipeline_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateComputePipeline contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateComputePipeline(device, pCreateInfo, pPipeline);
+ VK_RESULT result = nextTable.CreateComputePipeline(device, pCreateInfo, pPipeline);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglStorePipeline(XGL_PIPELINE pipeline, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkStorePipeline(VK_PIPELINE pipeline, size_t* pDataSize, void* pData)
{
- XGL_RESULT result = nextTable.StorePipeline(pipeline, pDataSize, pData);
+ VK_RESULT result = nextTable.StorePipeline(pipeline, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglLoadPipeline(XGL_DEVICE device, size_t dataSize, const void* pData, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkLoadPipeline(VK_DEVICE device, size_t dataSize, const void* pData, VK_PIPELINE* pPipeline)
{
- XGL_RESULT result = nextTable.LoadPipeline(device, dataSize, pData, pPipeline);
+ VK_RESULT result = nextTable.LoadPipeline(device, dataSize, pData, pPipeline);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglLoadPipelineDerivative(XGL_DEVICE device, size_t dataSize, const void* pData, XGL_PIPELINE basePipeline, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkLoadPipelineDerivative(VK_DEVICE device, size_t dataSize, const void* pData, VK_PIPELINE basePipeline, VK_PIPELINE* pPipeline)
{
- XGL_RESULT result = nextTable.LoadPipelineDerivative(device, dataSize, pData, basePipeline, pPipeline);
+ VK_RESULT result = nextTable.LoadPipelineDerivative(device, dataSize, pData, basePipeline, pPipeline);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateSampler(XGL_DEVICE device, const XGL_SAMPLER_CREATE_INFO* pCreateInfo, XGL_SAMPLER* pSampler)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateSampler(VK_DEVICE device, const VK_SAMPLER_CREATE_INFO* pCreateInfo, VK_SAMPLER* pSampler)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateSampler is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_sampler_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_sampler_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateSampler contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
+ VK_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorSetLayout(XGL_DEVICE device, const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_SET_LAYOUT* pSetLayout)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDescriptorSetLayout(VK_DEVICE device, const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_SET_LAYOUT* pSetLayout)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateDescriptorSetLayout is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_descriptor_set_layout_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_descriptor_set_layout_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateDescriptorSetLayout contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateDescriptorSetLayout(device, pCreateInfo, pSetLayout);
+ VK_RESULT result = nextTable.CreateDescriptorSetLayout(device, pCreateInfo, pSetLayout);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorSetLayoutChain(XGL_DEVICE device, uint32_t setLayoutArrayCount, const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray, XGL_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDescriptorSetLayoutChain(VK_DEVICE device, uint32_t setLayoutArrayCount, const VK_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray, VK_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain)
{
- XGL_RESULT result = nextTable.CreateDescriptorSetLayoutChain(device, setLayoutArrayCount, pSetLayoutArray, pLayoutChain);
+ VK_RESULT result = nextTable.CreateDescriptorSetLayoutChain(device, setLayoutArrayCount, pSetLayoutArray, pLayoutChain);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBeginDescriptorPoolUpdate(XGL_DEVICE device, XGL_DESCRIPTOR_UPDATE_MODE updateMode)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBeginDescriptorPoolUpdate(VK_DEVICE device, VK_DESCRIPTOR_UPDATE_MODE updateMode)
{
char str[1024];
- if (!validate_XGL_DESCRIPTOR_UPDATE_MODE(updateMode)) {
+ if (!validate_VK_DESCRIPTOR_UPDATE_MODE(updateMode)) {
sprintf(str, "Parameter updateMode to function BeginDescriptorPoolUpdate has invalid value of %i.", (int)updateMode);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.BeginDescriptorPoolUpdate(device, updateMode);
+ VK_RESULT result = nextTable.BeginDescriptorPoolUpdate(device, updateMode);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEndDescriptorPoolUpdate(XGL_DEVICE device, XGL_CMD_BUFFER cmd)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEndDescriptorPoolUpdate(VK_DEVICE device, VK_CMD_BUFFER cmd)
{
- XGL_RESULT result = nextTable.EndDescriptorPoolUpdate(device, cmd);
+ VK_RESULT result = nextTable.EndDescriptorPoolUpdate(device, cmd);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorPool(XGL_DEVICE device, XGL_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const XGL_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_POOL* pDescriptorPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDescriptorPool(VK_DEVICE device, VK_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const VK_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_POOL* pDescriptorPool)
{
char str[1024];
- if (!validate_XGL_DESCRIPTOR_POOL_USAGE(poolUsage)) {
+ if (!validate_VK_DESCRIPTOR_POOL_USAGE(poolUsage)) {
sprintf(str, "Parameter poolUsage to function CreateDescriptorPool has invalid value of %i.", (int)poolUsage);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateDescriptorPool is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_descriptor_pool_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_descriptor_pool_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateDescriptorPool contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateDescriptorPool(device, poolUsage, maxSets, pCreateInfo, pDescriptorPool);
+ VK_RESULT result = nextTable.CreateDescriptorPool(device, poolUsage, maxSets, pCreateInfo, pDescriptorPool);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetDescriptorPool(XGL_DESCRIPTOR_POOL descriptorPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetDescriptorPool(VK_DESCRIPTOR_POOL descriptorPool)
{
- XGL_RESULT result = nextTable.ResetDescriptorPool(descriptorPool);
+ VK_RESULT result = nextTable.ResetDescriptorPool(descriptorPool);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglAllocDescriptorSets(XGL_DESCRIPTOR_POOL descriptorPool, XGL_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayouts, XGL_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkAllocDescriptorSets(VK_DESCRIPTOR_POOL descriptorPool, VK_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const VK_DESCRIPTOR_SET_LAYOUT* pSetLayouts, VK_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount)
{
char str[1024];
- if (!validate_XGL_DESCRIPTOR_SET_USAGE(setUsage)) {
+ if (!validate_VK_DESCRIPTOR_SET_USAGE(setUsage)) {
sprintf(str, "Parameter setUsage to function AllocDescriptorSets has invalid value of %i.", (int)setUsage);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.AllocDescriptorSets(descriptorPool, setUsage, count, pSetLayouts, pDescriptorSets, pCount);
+ VK_RESULT result = nextTable.AllocDescriptorSets(descriptorPool, setUsage, count, pSetLayouts, pDescriptorSets, pCount);
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglClearDescriptorSets(XGL_DESCRIPTOR_POOL descriptorPool, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets)
+VK_LAYER_EXPORT void VKAPI vkClearDescriptorSets(VK_DESCRIPTOR_POOL descriptorPool, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets)
{
nextTable.ClearDescriptorSets(descriptorPool, count, pDescriptorSets);
}
-XGL_LAYER_EXPORT void XGLAPI xglUpdateDescriptors(XGL_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray)
+VK_LAYER_EXPORT void VKAPI vkUpdateDescriptors(VK_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray)
{
nextTable.UpdateDescriptors(descriptorSet, updateCount, ppUpdateArray);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicViewportState(XGL_DEVICE device, const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_VP_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicViewportState(VK_DEVICE device, const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_VP_STATE_OBJECT* pState)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateDynamicViewportState is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_dynamic_vp_state_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_dynamic_vp_state_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateDynamicViewportState contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
+ VK_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicRasterState(XGL_DEVICE device, const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_RS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicRasterState(VK_DEVICE device, const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_RS_STATE_OBJECT* pState)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateDynamicRasterState is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_dynamic_rs_state_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_dynamic_rs_state_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateDynamicRasterState contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
+ VK_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicColorBlendState(XGL_DEVICE device, const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_CB_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicColorBlendState(VK_DEVICE device, const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_CB_STATE_OBJECT* pState)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateDynamicColorBlendState is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_dynamic_cb_state_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_dynamic_cb_state_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateDynamicColorBlendState contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
+ VK_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicDepthStencilState(XGL_DEVICE device, const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_DS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicDepthStencilState(VK_DEVICE device, const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_DS_STATE_OBJECT* pState)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateDynamicDepthStencilState is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_dynamic_ds_state_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_dynamic_ds_state_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateDynamicDepthStencilState contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
+ VK_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
return result;
}
-void PreCreateCommandBuffer(XGL_DEVICE device, const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo)
+void PreCreateCommandBuffer(VK_DEVICE device, const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo)
{
if(device == nullptr)
{
- char const str[] = "xglCreateCommandBuffer parameter, XGL_DEVICE device, is "\
+ char const str[] = "vkCreateCommandBuffer parameter, VK_DEVICE device, is "\
"nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pCreateInfo == nullptr)
{
- char const str[] = "xglCreateCommandBuffer parameter, XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo, is "\
+ char const str[] = "vkCreateCommandBuffer parameter, VK_CMD_BUFFER_CREATE_INFO* pCreateInfo, is "\
"nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(pCreateInfo->sType != XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO)
+ if(pCreateInfo->sType != VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO)
{
- char const str[] = "xglCreateCommandBuffer parameter, XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO "\
- "pCreateInfo->sType, is not XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateCommandBuffer parameter, VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO "\
+ "pCreateInfo->sType, is not VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO (precondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-void PostCreateCommandBuffer(XGL_RESULT result, XGL_CMD_BUFFER* pCmdBuffer)
+void PostCreateCommandBuffer(VK_RESULT result, VK_CMD_BUFFER* pCmdBuffer)
{
- if(result != XGL_SUCCESS)
+ if(result != VK_SUCCESS)
{
- // TODO: Spit out XGL_RESULT value.
- char const str[] = "xglCreateCommandBuffer failed (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ // TODO: Spit out VK_RESULT value.
+ char const str[] = "vkCreateCommandBuffer failed (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pCmdBuffer == nullptr)
{
- char const str[] = "xglCreateCommandBuffer parameter, XGL_CMD_BUFFER* pCmdBuffer, is nullptr (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateCommandBuffer parameter, VK_CMD_BUFFER* pCmdBuffer, is nullptr (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateCommandBuffer(XGL_DEVICE device,
- const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo, XGL_CMD_BUFFER* pCmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateCommandBuffer(VK_DEVICE device,
+ const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo, VK_CMD_BUFFER* pCmdBuffer)
{
PreCreateCommandBuffer(device, pCreateInfo);
- XGL_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
+ VK_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
PostCreateCommandBuffer(result, pCmdBuffer);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBeginCommandBuffer(XGL_CMD_BUFFER cmdBuffer, const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBeginCommandBuffer(VK_CMD_BUFFER cmdBuffer, const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
{
char str[1024];
if (!pBeginInfo) {
sprintf(str, "Struct ptr parameter pBeginInfo to function BeginCommandBuffer is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_cmd_buffer_begin_info(pBeginInfo)) {
+ else if (!vk_validate_vk_cmd_buffer_begin_info(pBeginInfo)) {
sprintf(str, "Parameter pBeginInfo to function BeginCommandBuffer contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
+ VK_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEndCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEndCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
- XGL_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
+ VK_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
- XGL_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
+ VK_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindPipeline(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_PIPELINE pipeline)
+VK_LAYER_EXPORT void VKAPI vkCmdBindPipeline(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_PIPELINE pipeline)
{
char str[1024];
- if (!validate_XGL_PIPELINE_BIND_POINT(pipelineBindPoint)) {
+ if (!validate_VK_PIPELINE_BIND_POINT(pipelineBindPoint)) {
sprintf(str, "Parameter pipelineBindPoint to function CmdBindPipeline has invalid value of %i.", (int)pipelineBindPoint);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdBindPipeline(cmdBuffer, pipelineBindPoint, pipeline);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDynamicStateObject(XGL_CMD_BUFFER cmdBuffer, XGL_STATE_BIND_POINT stateBindPoint, XGL_DYNAMIC_STATE_OBJECT state)
+VK_LAYER_EXPORT void VKAPI vkCmdBindDynamicStateObject(VK_CMD_BUFFER cmdBuffer, VK_STATE_BIND_POINT stateBindPoint, VK_DYNAMIC_STATE_OBJECT state)
{
char str[1024];
- if (!validate_XGL_STATE_BIND_POINT(stateBindPoint)) {
+ if (!validate_VK_STATE_BIND_POINT(stateBindPoint)) {
sprintf(str, "Parameter stateBindPoint to function CmdBindDynamicStateObject has invalid value of %i.", (int)stateBindPoint);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdBindDynamicStateObject(cmdBuffer, stateBindPoint, state);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDescriptorSets(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData)
+VK_LAYER_EXPORT void VKAPI vkCmdBindDescriptorSets(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData)
{
char str[1024];
- if (!validate_XGL_PIPELINE_BIND_POINT(pipelineBindPoint)) {
+ if (!validate_VK_PIPELINE_BIND_POINT(pipelineBindPoint)) {
sprintf(str, "Parameter pipelineBindPoint to function CmdBindDescriptorSets has invalid value of %i.", (int)pipelineBindPoint);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdBindDescriptorSets(cmdBuffer, pipelineBindPoint, layoutChain, layoutChainSlot, count, pDescriptorSets, pUserData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindVertexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t binding)
+VK_LAYER_EXPORT void VKAPI vkCmdBindVertexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t binding)
{
nextTable.CmdBindVertexBuffer(cmdBuffer, buffer, offset, binding);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindIndexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, XGL_INDEX_TYPE indexType)
+VK_LAYER_EXPORT void VKAPI vkCmdBindIndexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, VK_INDEX_TYPE indexType)
{
char str[1024];
- if (!validate_XGL_INDEX_TYPE(indexType)) {
+ if (!validate_VK_INDEX_TYPE(indexType)) {
sprintf(str, "Parameter indexType to function CmdBindIndexBuffer has invalid value of %i.", (int)indexType);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdBindIndexBuffer(cmdBuffer, buffer, offset, indexType);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDraw(XGL_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount)
+VK_LAYER_EXPORT void VKAPI vkCmdDraw(VK_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount)
{
nextTable.CmdDraw(cmdBuffer, firstVertex, vertexCount, firstInstance, instanceCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndexed(XGL_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndexed(VK_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount)
{
nextTable.CmdDrawIndexed(cmdBuffer, firstIndex, indexCount, vertexOffset, firstInstance, instanceCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
nextTable.CmdDrawIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndexedIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndexedIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
nextTable.CmdDrawIndexedIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDispatch(XGL_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z)
+VK_LAYER_EXPORT void VKAPI vkCmdDispatch(VK_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z)
{
nextTable.CmdDispatch(cmdBuffer, x, y, z);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDispatchIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset)
+VK_LAYER_EXPORT void VKAPI vkCmdDispatchIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset)
{
nextTable.CmdDispatchIndirect(cmdBuffer, buffer, offset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_BUFFER destBuffer, uint32_t regionCount, const XGL_BUFFER_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_BUFFER destBuffer, uint32_t regionCount, const VK_BUFFER_COPY* pRegions)
{
char str[1024];
uint32_t i;
for (i = 0; i < regionCount; i++) {
- if (!xgl_validate_xgl_buffer_copy(&pRegions[i])) {
+ if (!vk_validate_vk_buffer_copy(&pRegions[i])) {
sprintf(str, "Parameter pRegions[%i] to function CmdCopyBuffer contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdCopyBuffer(cmdBuffer, srcBuffer, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImage(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImage(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_IMAGE_COPY* pRegions)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(srcImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(srcImageLayout)) {
sprintf(str, "Parameter srcImageLayout to function CmdCopyImage has invalid value of %i.", (int)srcImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- if (!validate_XGL_IMAGE_LAYOUT(destImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(destImageLayout)) {
sprintf(str, "Parameter destImageLayout to function CmdCopyImage has invalid value of %i.", (int)destImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
uint32_t i;
for (i = 0; i < regionCount; i++) {
- if (!xgl_validate_xgl_image_copy(&pRegions[i])) {
+ if (!vk_validate_vk_image_copy(&pRegions[i])) {
sprintf(str, "Parameter pRegions[%i] to function CmdCopyImage contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdCopyImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBlitImage(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_IMAGE_BLIT* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdBlitImage(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_IMAGE_BLIT* pRegions)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(srcImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(srcImageLayout)) {
sprintf(str, "Parameter srcImageLayout to function CmdBlitImage has invalid value of %i.", (int)srcImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- if (!validate_XGL_IMAGE_LAYOUT(destImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(destImageLayout)) {
sprintf(str, "Parameter destImageLayout to function CmdBlitImage has invalid value of %i.", (int)destImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
uint32_t i;
for (i = 0; i < regionCount; i++) {
- if (!xgl_validate_xgl_image_blit(&pRegions[i])) {
+ if (!vk_validate_vk_image_blit(&pRegions[i])) {
sprintf(str, "Parameter pRegions[%i] to function CmdBlitImage contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdBlitImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBufferToImage(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBufferToImage(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(destImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(destImageLayout)) {
sprintf(str, "Parameter destImageLayout to function CmdCopyBufferToImage has invalid value of %i.", (int)destImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
uint32_t i;
for (i = 0; i < regionCount; i++) {
- if (!xgl_validate_xgl_buffer_image_copy(&pRegions[i])) {
+ if (!vk_validate_vk_buffer_image_copy(&pRegions[i])) {
sprintf(str, "Parameter pRegions[%i] to function CmdCopyBufferToImage contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdCopyBufferToImage(cmdBuffer, srcBuffer, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImageToBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_BUFFER destBuffer, uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImageToBuffer(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_BUFFER destBuffer, uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(srcImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(srcImageLayout)) {
sprintf(str, "Parameter srcImageLayout to function CmdCopyImageToBuffer has invalid value of %i.", (int)srcImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
uint32_t i;
for (i = 0; i < regionCount; i++) {
- if (!xgl_validate_xgl_buffer_image_copy(&pRegions[i])) {
+ if (!vk_validate_vk_buffer_image_copy(&pRegions[i])) {
sprintf(str, "Parameter pRegions[%i] to function CmdCopyImageToBuffer contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdCopyImageToBuffer(cmdBuffer, srcImage, srcImageLayout, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCloneImageData(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout)
+VK_LAYER_EXPORT void VKAPI vkCmdCloneImageData(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(srcImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(srcImageLayout)) {
sprintf(str, "Parameter srcImageLayout to function CmdCloneImageData has invalid value of %i.", (int)srcImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- if (!validate_XGL_IMAGE_LAYOUT(destImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(destImageLayout)) {
sprintf(str, "Parameter destImageLayout to function CmdCloneImageData has invalid value of %i.", (int)destImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdCloneImageData(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdUpdateBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE dataSize, const uint32_t* pData)
+VK_LAYER_EXPORT void VKAPI vkCmdUpdateBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE dataSize, const uint32_t* pData)
{
nextTable.CmdUpdateBuffer(cmdBuffer, destBuffer, destOffset, dataSize, pData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdFillBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE fillSize, uint32_t data)
+VK_LAYER_EXPORT void VKAPI vkCmdFillBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE fillSize, uint32_t data)
{
nextTable.CmdFillBuffer(cmdBuffer, destBuffer, destOffset, fillSize, data);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearColorImage(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout, XGL_CLEAR_COLOR color, uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+VK_LAYER_EXPORT void VKAPI vkCmdClearColorImage(VK_CMD_BUFFER cmdBuffer, VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout, VK_CLEAR_COLOR color, uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(imageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(imageLayout)) {
sprintf(str, "Parameter imageLayout to function CmdClearColorImage has invalid value of %i.", (int)imageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
uint32_t i;
for (i = 0; i < rangeCount; i++) {
- if (!xgl_validate_xgl_image_subresource_range(&pRanges[i])) {
+ if (!vk_validate_vk_image_subresource_range(&pRanges[i])) {
sprintf(str, "Parameter pRanges[%i] to function CmdClearColorImage contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdClearColorImage(cmdBuffer, image, imageLayout, color, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearDepthStencil(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout, float depth, uint32_t stencil, uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+VK_LAYER_EXPORT void VKAPI vkCmdClearDepthStencil(VK_CMD_BUFFER cmdBuffer, VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout, float depth, uint32_t stencil, uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(imageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(imageLayout)) {
sprintf(str, "Parameter imageLayout to function CmdClearDepthStencil has invalid value of %i.", (int)imageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
uint32_t i;
for (i = 0; i < rangeCount; i++) {
- if (!xgl_validate_xgl_image_subresource_range(&pRanges[i])) {
+ if (!vk_validate_vk_image_subresource_range(&pRanges[i])) {
sprintf(str, "Parameter pRanges[%i] to function CmdClearDepthStencil contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdClearDepthStencil(cmdBuffer, image, imageLayout, depth, stencil, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResolveImage(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t rectCount, const XGL_IMAGE_RESOLVE* pRects)
+VK_LAYER_EXPORT void VKAPI vkCmdResolveImage(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t rectCount, const VK_IMAGE_RESOLVE* pRects)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(srcImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(srcImageLayout)) {
sprintf(str, "Parameter srcImageLayout to function CmdResolveImage has invalid value of %i.", (int)srcImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- if (!validate_XGL_IMAGE_LAYOUT(destImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(destImageLayout)) {
sprintf(str, "Parameter destImageLayout to function CmdResolveImage has invalid value of %i.", (int)destImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
uint32_t i;
for (i = 0; i < rectCount; i++) {
- if (!xgl_validate_xgl_image_resolve(&pRects[i])) {
+ if (!vk_validate_vk_image_resolve(&pRects[i])) {
sprintf(str, "Parameter pRects[%i] to function CmdResolveImage contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdResolveImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, rectCount, pRects);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdSetEvent(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent)
+VK_LAYER_EXPORT void VKAPI vkCmdSetEvent(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent)
{
char str[1024];
- if (!validate_XGL_PIPE_EVENT(pipeEvent)) {
+ if (!validate_VK_PIPE_EVENT(pipeEvent)) {
sprintf(str, "Parameter pipeEvent to function CmdSetEvent has invalid value of %i.", (int)pipeEvent);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdSetEvent(cmdBuffer, event, pipeEvent);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResetEvent(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent)
+VK_LAYER_EXPORT void VKAPI vkCmdResetEvent(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent)
{
char str[1024];
- if (!validate_XGL_PIPE_EVENT(pipeEvent)) {
+ if (!validate_VK_PIPE_EVENT(pipeEvent)) {
sprintf(str, "Parameter pipeEvent to function CmdResetEvent has invalid value of %i.", (int)pipeEvent);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdResetEvent(cmdBuffer, event, pipeEvent);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdWaitEvents(XGL_CMD_BUFFER cmdBuffer, const XGL_EVENT_WAIT_INFO* pWaitInfo)
+VK_LAYER_EXPORT void VKAPI vkCmdWaitEvents(VK_CMD_BUFFER cmdBuffer, const VK_EVENT_WAIT_INFO* pWaitInfo)
{
char str[1024];
if (!pWaitInfo) {
sprintf(str, "Struct ptr parameter pWaitInfo to function CmdWaitEvents is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_event_wait_info(pWaitInfo)) {
+ else if (!vk_validate_vk_event_wait_info(pWaitInfo)) {
sprintf(str, "Parameter pWaitInfo to function CmdWaitEvents contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdWaitEvents(cmdBuffer, pWaitInfo);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdPipelineBarrier(XGL_CMD_BUFFER cmdBuffer, const XGL_PIPELINE_BARRIER* pBarrier)
+VK_LAYER_EXPORT void VKAPI vkCmdPipelineBarrier(VK_CMD_BUFFER cmdBuffer, const VK_PIPELINE_BARRIER* pBarrier)
{
char str[1024];
if (!pBarrier) {
sprintf(str, "Struct ptr parameter pBarrier to function CmdPipelineBarrier is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_pipeline_barrier(pBarrier)) {
+ else if (!vk_validate_vk_pipeline_barrier(pBarrier)) {
sprintf(str, "Parameter pBarrier to function CmdPipelineBarrier contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdPipelineBarrier(cmdBuffer, pBarrier);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBeginQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot, XGL_FLAGS flags)
+VK_LAYER_EXPORT void VKAPI vkCmdBeginQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot, VK_FLAGS flags)
{
nextTable.CmdBeginQuery(cmdBuffer, queryPool, slot, flags);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdEndQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot)
+VK_LAYER_EXPORT void VKAPI vkCmdEndQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot)
{
nextTable.CmdEndQuery(cmdBuffer, queryPool, slot);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResetQueryPool(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
+VK_LAYER_EXPORT void VKAPI vkCmdResetQueryPool(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
{
nextTable.CmdResetQueryPool(cmdBuffer, queryPool, startQuery, queryCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdWriteTimestamp(XGL_CMD_BUFFER cmdBuffer, XGL_TIMESTAMP_TYPE timestampType, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdWriteTimestamp(VK_CMD_BUFFER cmdBuffer, VK_TIMESTAMP_TYPE timestampType, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset)
{
char str[1024];
- if (!validate_XGL_TIMESTAMP_TYPE(timestampType)) {
+ if (!validate_VK_TIMESTAMP_TYPE(timestampType)) {
sprintf(str, "Parameter timestampType to function CmdWriteTimestamp has invalid value of %i.", (int)timestampType);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdWriteTimestamp(cmdBuffer, timestampType, destBuffer, destOffset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdInitAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData)
+VK_LAYER_EXPORT void VKAPI vkCmdInitAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData)
{
char str[1024];
- if (!validate_XGL_PIPELINE_BIND_POINT(pipelineBindPoint)) {
+ if (!validate_VK_PIPELINE_BIND_POINT(pipelineBindPoint)) {
sprintf(str, "Parameter pipelineBindPoint to function CmdInitAtomicCounters has invalid value of %i.", (int)pipelineBindPoint);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdInitAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, pData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdLoadAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER srcBuffer, XGL_GPU_SIZE srcOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdLoadAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER srcBuffer, VK_GPU_SIZE srcOffset)
{
char str[1024];
- if (!validate_XGL_PIPELINE_BIND_POINT(pipelineBindPoint)) {
+ if (!validate_VK_PIPELINE_BIND_POINT(pipelineBindPoint)) {
sprintf(str, "Parameter pipelineBindPoint to function CmdLoadAtomicCounters has invalid value of %i.", (int)pipelineBindPoint);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdLoadAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, srcBuffer, srcOffset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdSaveAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdSaveAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset)
{
char str[1024];
- if (!validate_XGL_PIPELINE_BIND_POINT(pipelineBindPoint)) {
+ if (!validate_VK_PIPELINE_BIND_POINT(pipelineBindPoint)) {
sprintf(str, "Parameter pipelineBindPoint to function CmdSaveAtomicCounters has invalid value of %i.", (int)pipelineBindPoint);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdSaveAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, destBuffer, destOffset);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateFramebuffer(XGL_DEVICE device, const XGL_FRAMEBUFFER_CREATE_INFO* pCreateInfo, XGL_FRAMEBUFFER* pFramebuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateFramebuffer(VK_DEVICE device, const VK_FRAMEBUFFER_CREATE_INFO* pCreateInfo, VK_FRAMEBUFFER* pFramebuffer)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateFramebuffer is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_framebuffer_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_framebuffer_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateFramebuffer contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateFramebuffer(device, pCreateInfo, pFramebuffer);
+ VK_RESULT result = nextTable.CreateFramebuffer(device, pCreateInfo, pFramebuffer);
return result;
}
-void PreCreateRenderPass(XGL_DEVICE device, const XGL_RENDER_PASS_CREATE_INFO* pCreateInfo)
+void PreCreateRenderPass(VK_DEVICE device, const VK_RENDER_PASS_CREATE_INFO* pCreateInfo)
{
if(pCreateInfo == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_RENDER_PASS_CREATE_INFO* pCreateInfo, is "\
+ char const str[] = "vkCreateRenderPass parameter, VK_RENDER_PASS_CREATE_INFO* pCreateInfo, is "\
"nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(pCreateInfo->sType != XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO)
+ if(pCreateInfo->sType != VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO "\
- "pCreateInfo->sType, is not XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateRenderPass parameter, VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO "\
+ "pCreateInfo->sType, is not VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO (precondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(!xgl_validate_xgl_rect(&pCreateInfo->renderArea))
+ if(!vk_validate_vk_rect(&pCreateInfo->renderArea))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_RECT pCreateInfo->renderArea, is invalid "\
+ char const str[] = "vkCreateRenderPass parameter, VK_RECT pCreateInfo->renderArea, is invalid "\
"(precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(!xgl_validate_xgl_extent2d(&pCreateInfo->extent))
+ if(!vk_validate_vk_extent2d(&pCreateInfo->extent))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_EXTENT2D pCreateInfo->extent, is invalid "\
+ char const str[] = "vkCreateRenderPass parameter, VK_EXTENT2D pCreateInfo->extent, is invalid "\
"(precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pCreateInfo->pColorFormats == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_FORMAT* pCreateInfo->pColorFormats, "\
+ char const str[] = "vkCreateRenderPass parameter, VK_FORMAT* pCreateInfo->pColorFormats, "\
"is nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
for(uint32_t i = 0; i < pCreateInfo->colorAttachmentCount; ++i)
{
- if(!validate_XGL_FORMAT(pCreateInfo->pColorFormats[i]))
+ if(!validate_VK_FORMAT(pCreateInfo->pColorFormats[i]))
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_FORMAT pCreateInfo->pColorFormats[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_FORMAT pCreateInfo->pColorFormats[" << i <<
"], is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
- XGL_FORMAT_PROPERTIES properties;
+ VK_FORMAT_PROPERTIES properties;
size_t size = sizeof(properties);
- XGL_RESULT result = nextTable.GetFormatInfo(device, pCreateInfo->pColorFormats[i],
- XGL_INFO_TYPE_FORMAT_PROPERTIES, &size, &properties);
- if(result != XGL_SUCCESS)
+ VK_RESULT result = nextTable.GetFormatInfo(device, pCreateInfo->pColorFormats[i],
+ VK_INFO_TYPE_FORMAT_PROPERTIES, &size, &properties);
+ if(result != VK_SUCCESS)
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_FORMAT pCreateInfo->pColorFormats[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_FORMAT pCreateInfo->pColorFormats[" << i <<
"], cannot be validated (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
if((properties.linearTilingFeatures) == 0 && (properties.optimalTilingFeatures == 0))
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_FORMAT pCreateInfo->pColorFormats[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_FORMAT pCreateInfo->pColorFormats[" << i <<
"], contains unsupported format (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
if(pCreateInfo->pColorLayouts == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_IMAGE_LAYOUT* pCreateInfo->pColorLayouts, "\
+ char const str[] = "vkCreateRenderPass parameter, VK_IMAGE_LAYOUT* pCreateInfo->pColorLayouts, "\
"is nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
for(uint32_t i = 0; i < pCreateInfo->colorAttachmentCount; ++i)
{
- if(!validate_XGL_IMAGE_LAYOUT(pCreateInfo->pColorLayouts[i]))
+ if(!validate_VK_IMAGE_LAYOUT(pCreateInfo->pColorLayouts[i]))
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_IMAGE_LAYOUT pCreateInfo->pColorLayouts[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_IMAGE_LAYOUT pCreateInfo->pColorLayouts[" << i <<
"], is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
}
if(pCreateInfo->pColorLoadOps == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_ATTACHMENT_LOAD_OP* pCreateInfo->pColorLoadOps, "\
+ char const str[] = "vkCreateRenderPass parameter, VK_ATTACHMENT_LOAD_OP* pCreateInfo->pColorLoadOps, "\
"is nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
for(uint32_t i = 0; i < pCreateInfo->colorAttachmentCount; ++i)
{
- if(!validate_XGL_ATTACHMENT_LOAD_OP(pCreateInfo->pColorLoadOps[i]))
+ if(!validate_VK_ATTACHMENT_LOAD_OP(pCreateInfo->pColorLoadOps[i]))
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_ATTACHMENT_LOAD_OP pCreateInfo->pColorLoadOps[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_ATTACHMENT_LOAD_OP pCreateInfo->pColorLoadOps[" << i <<
"], is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
}
if(pCreateInfo->pColorStoreOps == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_ATTACHMENT_STORE_OP* pCreateInfo->pColorStoreOps, "\
+ char const str[] = "vkCreateRenderPass parameter, VK_ATTACHMENT_STORE_OP* pCreateInfo->pColorStoreOps, "\
"is nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
for(uint32_t i = 0; i < pCreateInfo->colorAttachmentCount; ++i)
{
- if(!validate_XGL_ATTACHMENT_STORE_OP(pCreateInfo->pColorStoreOps[i]))
+ if(!validate_VK_ATTACHMENT_STORE_OP(pCreateInfo->pColorStoreOps[i]))
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_ATTACHMENT_STORE_OP pCreateInfo->pColorStoreOps[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_ATTACHMENT_STORE_OP pCreateInfo->pColorStoreOps[" << i <<
"], is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
}
if(pCreateInfo->pColorLoadClearValues == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_CLEAR_COLOR* pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_CLEAR_COLOR* pCreateInfo->"\
"pColorLoadClearValues, is nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pCreateInfo->pColorStoreOps == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_ATTACHMENT_STORE_OP* pCreateInfo->pColorStoreOps, "\
+ char const str[] = "vkCreateRenderPass parameter, VK_ATTACHMENT_STORE_OP* pCreateInfo->pColorStoreOps, "\
"is nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
for(uint32_t i = 0; i < pCreateInfo->colorAttachmentCount; ++i)
{
- if(!validate_XGL_ATTACHMENT_STORE_OP(pCreateInfo->pColorStoreOps[i]))
+ if(!validate_VK_ATTACHMENT_STORE_OP(pCreateInfo->pColorStoreOps[i]))
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_ATTACHMENT_STORE_OP pCreateInfo->pColorStoreOps[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_ATTACHMENT_STORE_OP pCreateInfo->pColorStoreOps[" << i <<
"], is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
}
if(pCreateInfo->pColorLoadClearValues == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_CLEAR_COLOR* pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_CLEAR_COLOR* pCreateInfo->"\
"pColorLoadClearValues, is nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
for(uint32_t i = 0; i < pCreateInfo->colorAttachmentCount; ++i)
{
- if(!xgl_validate_xgl_clear_color(&(pCreateInfo->pColorLoadClearValues[i])))
+ if(!vk_validate_vk_clear_color(&(pCreateInfo->pColorLoadClearValues[i])))
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_CLEAR_COLOR pCreateInfo->pColorLoadClearValues[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_CLEAR_COLOR pCreateInfo->pColorLoadClearValues[" << i <<
"], is invalid (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
}
- if(!validate_XGL_FORMAT(pCreateInfo->depthStencilFormat))
+ if(!validate_VK_FORMAT(pCreateInfo->depthStencilFormat))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_FORMAT pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_FORMAT pCreateInfo->"\
"depthStencilFormat, is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- XGL_FORMAT_PROPERTIES properties;
+ VK_FORMAT_PROPERTIES properties;
size_t size = sizeof(properties);
- XGL_RESULT result = nextTable.GetFormatInfo(device, pCreateInfo->depthStencilFormat,
- XGL_INFO_TYPE_FORMAT_PROPERTIES, &size, &properties);
- if(result != XGL_SUCCESS)
+ VK_RESULT result = nextTable.GetFormatInfo(device, pCreateInfo->depthStencilFormat,
+ VK_INFO_TYPE_FORMAT_PROPERTIES, &size, &properties);
+ if(result != VK_SUCCESS)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_FORMAT pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_FORMAT pCreateInfo->"\
"depthStencilFormat, cannot be validated (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if((properties.linearTilingFeatures) == 0 && (properties.optimalTilingFeatures == 0))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_FORMAT pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_FORMAT pCreateInfo->"\
"depthStencilFormat, contains unsupported format (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(!validate_XGL_IMAGE_LAYOUT(pCreateInfo->depthStencilLayout))
+ if(!validate_VK_IMAGE_LAYOUT(pCreateInfo->depthStencilLayout))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_IMAGE_LAYOUT pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_IMAGE_LAYOUT pCreateInfo->"\
"depthStencilLayout, is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(!validate_XGL_ATTACHMENT_LOAD_OP(pCreateInfo->depthLoadOp))
+ if(!validate_VK_ATTACHMENT_LOAD_OP(pCreateInfo->depthLoadOp))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_ATTACHMENT_LOAD_OP pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_ATTACHMENT_LOAD_OP pCreateInfo->"\
"depthLoadOp, is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(!validate_XGL_ATTACHMENT_STORE_OP(pCreateInfo->depthStoreOp))
+ if(!validate_VK_ATTACHMENT_STORE_OP(pCreateInfo->depthStoreOp))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_ATTACHMENT_STORE_OP pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_ATTACHMENT_STORE_OP pCreateInfo->"\
"depthStoreOp, is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(!validate_XGL_ATTACHMENT_LOAD_OP(pCreateInfo->stencilLoadOp))
+ if(!validate_VK_ATTACHMENT_LOAD_OP(pCreateInfo->stencilLoadOp))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_ATTACHMENT_LOAD_OP pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_ATTACHMENT_LOAD_OP pCreateInfo->"\
"stencilLoadOp, is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(!validate_XGL_ATTACHMENT_STORE_OP(pCreateInfo->stencilStoreOp))
+ if(!validate_VK_ATTACHMENT_STORE_OP(pCreateInfo->stencilStoreOp))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_ATTACHMENT_STORE_OP pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_ATTACHMENT_STORE_OP pCreateInfo->"\
"stencilStoreOp, is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-void PostCreateRenderPass(XGL_RESULT result, XGL_RENDER_PASS* pRenderPass)
+void PostCreateRenderPass(VK_RESULT result, VK_RENDER_PASS* pRenderPass)
{
- if(result != XGL_SUCCESS)
+ if(result != VK_SUCCESS)
{
- // TODO: Spit out XGL_RESULT value.
- char const str[] = "xglCreateRenderPass failed (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ // TODO: Spit out VK_RESULT value.
+ char const str[] = "vkCreateRenderPass failed (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pRenderPass == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_RENDER_PASS* pRenderPass, is nullptr (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateRenderPass parameter, VK_RENDER_PASS* pRenderPass, is nullptr (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateRenderPass(XGL_DEVICE device, const XGL_RENDER_PASS_CREATE_INFO* pCreateInfo, XGL_RENDER_PASS* pRenderPass)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateRenderPass(VK_DEVICE device, const VK_RENDER_PASS_CREATE_INFO* pCreateInfo, VK_RENDER_PASS* pRenderPass)
{
PreCreateRenderPass(device, pCreateInfo);
- XGL_RESULT result = nextTable.CreateRenderPass(device, pCreateInfo, pRenderPass);
+ VK_RESULT result = nextTable.CreateRenderPass(device, pCreateInfo, pRenderPass);
PostCreateRenderPass(result, pRenderPass);
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBeginRenderPass(XGL_CMD_BUFFER cmdBuffer, const XGL_RENDER_PASS_BEGIN* pRenderPassBegin)
+VK_LAYER_EXPORT void VKAPI vkCmdBeginRenderPass(VK_CMD_BUFFER cmdBuffer, const VK_RENDER_PASS_BEGIN* pRenderPassBegin)
{
char str[1024];
if (!pRenderPassBegin) {
sprintf(str, "Struct ptr parameter pRenderPassBegin to function CmdBeginRenderPass is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_render_pass_begin(pRenderPassBegin)) {
+ else if (!vk_validate_vk_render_pass_begin(pRenderPassBegin)) {
sprintf(str, "Parameter pRenderPassBegin to function CmdBeginRenderPass contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdBeginRenderPass(cmdBuffer, pRenderPassBegin);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdEndRenderPass(XGL_CMD_BUFFER cmdBuffer, XGL_RENDER_PASS renderPass)
+VK_LAYER_EXPORT void VKAPI vkCmdEndRenderPass(VK_CMD_BUFFER cmdBuffer, VK_RENDER_PASS renderPass)
{
nextTable.CmdEndRenderPass(cmdBuffer, renderPass);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetValidationLevel(XGL_DEVICE device, XGL_VALIDATION_LEVEL validationLevel)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetValidationLevel(VK_DEVICE device, VK_VALIDATION_LEVEL validationLevel)
{
char str[1024];
- if (!validate_XGL_VALIDATION_LEVEL(validationLevel)) {
+ if (!validate_VK_VALIDATION_LEVEL(validationLevel)) {
sprintf(str, "Parameter validationLevel to function DbgSetValidationLevel has invalid value of %i.", (int)validationLevel);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.DbgSetValidationLevel(device, validationLevel);
+ VK_RESULT result = nextTable.DbgSetValidationLevel(device, validationLevel);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
{
// This layer intercepts callbacks
- XGL_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (XGL_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(XGL_LAYER_DBG_FUNCTION_NODE));
+ VK_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (VK_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(VK_LAYER_DBG_FUNCTION_NODE));
if (!pNewDbgFuncNode)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
pNewDbgFuncNode->pfnMsgCallback = pfnMsgCallback;
pNewDbgFuncNode->pUserData = pUserData;
pNewDbgFuncNode->pNext = g_pDbgFunctionHead;
g_pDbgFunctionHead = pNewDbgFuncNode;
// force callbacks if DebugAction hasn't been set already other than initial value
if (g_actionIsDefault) {
- g_debugAction = XGL_DBG_LAYER_ACTION_CALLBACK;
+ g_debugAction = VK_DBG_LAYER_ACTION_CALLBACK;
}
- XGL_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
+ VK_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
{
- XGL_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
- XGL_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;
+ VK_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
+ VK_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;
while (pTrav) {
if (pTrav->pfnMsgCallback == pfnMsgCallback) {
pPrev->pNext = pTrav->pNext;
if (g_pDbgFunctionHead == NULL)
{
if (g_actionIsDefault)
- g_debugAction = XGL_DBG_LAYER_ACTION_LOG_MSG;
+ g_debugAction = VK_DBG_LAYER_ACTION_LOG_MSG;
else
- g_debugAction = (XGL_LAYER_DBG_ACTION)(g_debugAction & ~((uint32_t)XGL_DBG_LAYER_ACTION_CALLBACK));
+ g_debugAction = (VK_LAYER_DBG_ACTION)(g_debugAction & ~((uint32_t)VK_DBG_LAYER_ACTION_CALLBACK));
}
- XGL_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
+ VK_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetMessageFilter(XGL_DEVICE device, int32_t msgCode, XGL_DBG_MSG_FILTER filter)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetMessageFilter(VK_DEVICE device, int32_t msgCode, VK_DBG_MSG_FILTER filter)
{
- XGL_RESULT result = nextTable.DbgSetMessageFilter(device, msgCode, filter);
+ VK_RESULT result = nextTable.DbgSetMessageFilter(device, msgCode, filter);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetObjectTag(XGL_BASE_OBJECT object, size_t tagSize, const void* pTag)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetObjectTag(VK_BASE_OBJECT object, size_t tagSize, const void* pTag)
{
- XGL_RESULT result = nextTable.DbgSetObjectTag(object, tagSize, pTag);
+ VK_RESULT result = nextTable.DbgSetObjectTag(object, tagSize, pTag);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetGlobalOption(XGL_INSTANCE instance, XGL_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetGlobalOption(VK_INSTANCE instance, VK_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData)
{
- XGL_RESULT result = nextTable.DbgSetGlobalOption(instance, dbgOption, dataSize, pData);
+ VK_RESULT result = nextTable.DbgSetGlobalOption(instance, dbgOption, dataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetDeviceOption(XGL_DEVICE device, XGL_DBG_DEVICE_OPTION dbgOption, size_t dataSize, const void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetDeviceOption(VK_DEVICE device, VK_DBG_DEVICE_OPTION dbgOption, size_t dataSize, const void* pData)
{
- XGL_RESULT result = nextTable.DbgSetDeviceOption(device, dbgOption, dataSize, pData);
+ VK_RESULT result = nextTable.DbgSetDeviceOption(device, dbgOption, dataSize, pData);
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDbgMarkerBegin(XGL_CMD_BUFFER cmdBuffer, const char* pMarker)
+VK_LAYER_EXPORT void VKAPI vkCmdDbgMarkerBegin(VK_CMD_BUFFER cmdBuffer, const char* pMarker)
{
nextTable.CmdDbgMarkerBegin(cmdBuffer, pMarker);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDbgMarkerEnd(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT void VKAPI vkCmdDbgMarkerEnd(VK_CMD_BUFFER cmdBuffer)
{
nextTable.CmdDbgMarkerEnd(cmdBuffer);
#if defined(__linux__) || defined(XCB_NVIDIA)
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11AssociateConnection(XGL_PHYSICAL_GPU gpu, const XGL_WSI_X11_CONNECTION_INFO* pConnectionInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11AssociateConnection(VK_PHYSICAL_GPU gpu, const VK_WSI_X11_CONNECTION_INFO* pConnectionInfo)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initParamChecker);
- XGL_RESULT result = nextTable.WsiX11AssociateConnection((XGL_PHYSICAL_GPU)gpuw->nextObject, pConnectionInfo);
+ VK_RESULT result = nextTable.WsiX11AssociateConnection((VK_PHYSICAL_GPU)gpuw->nextObject, pConnectionInfo);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11GetMSC(XGL_DEVICE device, xcb_window_t window, xcb_randr_crtc_t crtc, uint64_t* pMsc)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11GetMSC(VK_DEVICE device, xcb_window_t window, xcb_randr_crtc_t crtc, uint64_t* pMsc)
{
- XGL_RESULT result = nextTable.WsiX11GetMSC(device, window, crtc, pMsc);
+ VK_RESULT result = nextTable.WsiX11GetMSC(device, window, crtc, pMsc);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11CreatePresentableImage(XGL_DEVICE device, const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo, XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11CreatePresentableImage(VK_DEVICE device, const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo, VK_IMAGE* pImage, VK_GPU_MEMORY* pMem)
{
- XGL_RESULT result = nextTable.WsiX11CreatePresentableImage(device, pCreateInfo, pImage, pMem);
+ VK_RESULT result = nextTable.WsiX11CreatePresentableImage(device, pCreateInfo, pImage, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11QueuePresent(XGL_QUEUE queue, const XGL_WSI_X11_PRESENT_INFO* pPresentInfo, XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11QueuePresent(VK_QUEUE queue, const VK_WSI_X11_PRESENT_INFO* pPresentInfo, VK_FENCE fence)
{
- XGL_RESULT result = nextTable.WsiX11QueuePresent(queue, pPresentInfo, fence);
+ VK_RESULT result = nextTable.WsiX11QueuePresent(queue, pPresentInfo, fence);
return result;
}
#endif
-#include "xgl_generic_intercept_proc_helper.h"
-XGL_LAYER_EXPORT void* XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char* funcName)
+#include "vk_generic_intercept_proc_helper.h"
+VK_LAYER_EXPORT void* VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char* funcName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
void* addr;
if (gpu == NULL)
return NULL;
else {
if (gpuw->pGPA == NULL)
return NULL;
- return gpuw->pGPA((XGL_PHYSICAL_GPU)gpuw->nextObject, funcName);
+ return gpuw->pGPA((VK_PHYSICAL_GPU)gpuw->nextObject, funcName);
}
}
${CMAKE_CURRENT_BINARY_DIR}
)
-set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS")
-set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DXGL_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS")
+set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS")
+set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DVK_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS")
add_library(xcb_nvidia STATIC xcb_nvidia.cpp)
target_link_libraries(xcb_nvidia)
#include <string>
-#include <xgl.h>
+#include <vulkan.h>
// COPIED FROM "loader.c" (not pointed to, because we're about to delete this
// code). Ian Elliott <ian@lunarg.com>.
xcb_connection_t * xcb_connect(const char *displayname, int *screenp)
{
- std::string xglNvidia = (getenv("XGL_DRIVERS_PATH") == NULL) ? "" : getenv("XGL_DRIVERS_PATH");
+ std::string xglNvidia = (getenv("VK_DRIVERS_PATH") == NULL) ? "" : getenv("VK_DRIVERS_PATH");
xglNvidia += "\\XGL_nvidia.dll";
HMODULE module = LoadLibrary(xglNvidia.c_str());
if (!module) {
- std::string xglNulldrv = (getenv("XGL_DRIVERS_PATH") == NULL) ? "" : getenv("XGL_DRIVERS_PATH");
+ std::string xglNulldrv = (getenv("VK_DRIVERS_PATH") == NULL) ? "" : getenv("VK_DRIVERS_PATH");
xglNulldrv += "\\xgl_nulldrv.dll";
module = LoadLibrary(xglNulldrv.c_str());
}
registry_str = loader_get_registry_string(HKEY_LOCAL_MACHINE,
"Software\\XGL",
- "XGL_DRIVERS_PATH");
+ "VK_DRIVERS_PATH");
registry_len = strlen(registry_str);
rtn_len = registry_len + 16;
rtn_str = (char *) malloc(rtn_len);
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -DDEBUG")
if (WIN32)
- set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS -DXCB_NVIDIA")
+ set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS -DXCB_NVIDIA")
add_library(XGL SHARED loader.c loader.h dirent_on_windows.c dispatch.c table_ops.h XGL.def)
set_target_properties(XGL PROPERTIES LINK_FLAGS "/DEF:${PROJECT_SOURCE_DIR}/loader/XGL.def")
target_link_libraries(XGL)
endif()
if (NOT WIN32)
- set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES -Wpointer-arith")
+ set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES -Wpointer-arith")
add_library(XGL SHARED loader.c dispatch.c table_ops.h)
set_target_properties(XGL PROPERTIES SOVERSION 0)
# Loader Description
## Overview
-The Loader implements the main XGL library (e.g. "XGL.dll" on Windows and
-"libXGL.so" on Linux). It handles layer management and driver management. The
+The Loader implements the main VK library (e.g. "VK.dll" on Windows and
+"libVK.so" on Linux). It handles layer management and driver management. The
loader fully supports multi-gpu operation. As part of this, it dispatches API
calls to the correct driver, and to the correct layers, based on the GPU object
selected by the application.
loader supports layers that operate on multiple GPUs.
## Environment Variables
-**LIBXGL\_DRIVERS\_PATH** directory for loader to search for ICD driver libraries to open
+**LIBVK\_DRIVERS\_PATH** directory for loader to search for ICD driver libraries to open
-**LIBXGL\_LAYERS\_PATH** directory for loader to search for layer libraries that may get activated and used at xglCreateDevice() time.
+**LIBVK\_LAYERS\_PATH** directory for loader to search for layer libraries that may get activated and used at vkCreateDevice() time.
-**LIBXGL\_LAYER\_NAMES** colon-separated list of layer names to be activated (e.g., LIBXGL\_LAYER\_NAMES=MemTracker:DrawState).
+**LIBVK\_LAYER\_NAMES** colon-separated list of layer names to be activated (e.g., LIBVK\_LAYER\_NAMES=MemTracker:DrawState).
-Note: Both of the LIBXGL\_*\_PATH variables may contain more than one directory. Each directory must be separated by one of the following characters, depending on your OS:
+Note: Both of the LIBVK\_*\_PATH variables may contain more than one directory. Each directory must be separated by one of the following characters, depending on your OS:
- ";" on Windows
- ":" on Linux
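For instance, the multi-directory form can be sketched as follows (the directory names here are hypothetical, chosen only to illustrate the ":" separator used on Linux):
```
# Hypothetical directories; on Linux, entries are separated by ":".
export LIBVK_DRIVERS_PATH=$PWD/icd/intel:$PWD/icd/nulldrv
export LIBVK_LAYERS_PATH=$PWD/layers:$HOME/extra-vk-layers
```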
## Interface to driver (ICD)
-- xglEnumerateGpus exported
-- xglCreateInstance exported
-- xglDestroyInstance exported
-- xglGetProcAddr exported and returns valid function pointers for all the XGL API entrypoints
-- all objects created by ICD can be cast to (XGL\_LAYER\_DISPATCH\_TABLE \*\*)
+- vkEnumerateGpus exported
+- vkCreateInstance exported
+- vkDestroyInstance exported
+- vkGetProcAddr exported and returns valid function pointers for all the VK API entrypoints
+- all objects created by ICD can be cast to (VK\_LAYER\_DISPATCH\_TABLE \*\*)
where the loader will replace the first entry with a pointer to the dispatch table which is
owned by the loader. This implies three things for ICD drivers:
1. The ICD must return a pointer for the opaque object handle
2. This pointer points to a regular C structure with the first entry being a pointer.
- Note: for any C++ ICD's that implement XGL objects directly as C++ classes.
+ Note: for any C++ ICDs that implement VK objects directly as C++ classes.
The C++ compiler may put a vtable at offset zero, if your class is virtual.
In this case use a regular C structure (see below).
3. The reservedForLoader.loaderMagic member must be initialized with ICD\_LOADER\_MAGIC, as follows:
```
- #include "xglIcd.h"
+ #include "vkIcd.h"
struct {
- XGL_LOADER_DATA reservedForLoader; // Reserve space for pointer to loader's dispatch table
+ VK_LOADER_DATA reservedForLoader; // Reserve space for pointer to loader's dispatch table
myObjectClass myObj; // Your driver's C++ class
- } xglObj;
+ } vkObj;
- xglObj alloc_icd_obj()
+ vkObj alloc_icd_obj()
{
- xglObj *newObj = alloc_obj();
+ vkObj *newObj = alloc_obj();
...
// Initialize pointer to loader's dispatch table with ICD_LOADER_MAGIC
set_loader_magic_value(newObj);
Additional Notes:
- The ICD may or may not implement a dispatch table.
-- ICD entrypoints can be named anything including the offcial xgl name such as xglCreateDevice(). However, beware of interposing by dynamic OS library loaders if the offical names are used. On Linux, if offical names are used, the ICD library must be linked with -Bsymbolic.
+- ICD entrypoints can be named anything including the official vk name such as vkCreateDevice(). However, beware of interposing by dynamic OS library loaders if the official names are used. On Linux, if official names are used, the ICD library must be linked with -Bsymbolic.
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include "loader_platform.h"
#include "table_ops.h"
#include "loader.h"
-#include "xglIcd.h"
+#include "vkIcd.h"
// The following is #included again to catch certain OS-specific functions
// being used:
#include "loader_platform.h"
struct loader_icd {
const struct loader_scanned_icds *scanned_icds;
- XGL_LAYER_DISPATCH_TABLE *loader_dispatch;
- uint32_t layer_count[XGL_MAX_PHYSICAL_GPUS];
- struct loader_layers layer_libs[XGL_MAX_PHYSICAL_GPUS][MAX_LAYER_LIBRARIES];
- XGL_BASE_LAYER_OBJECT *wrappedGpus[XGL_MAX_PHYSICAL_GPUS];
+ VK_LAYER_DISPATCH_TABLE *loader_dispatch;
+ uint32_t layer_count[VK_MAX_PHYSICAL_GPUS];
+ struct loader_layers layer_libs[VK_MAX_PHYSICAL_GPUS][MAX_LAYER_LIBRARIES];
+ VK_BASE_LAYER_OBJECT *wrappedGpus[VK_MAX_PHYSICAL_GPUS];
uint32_t gpu_count;
- XGL_BASE_LAYER_OBJECT *gpus;
+ VK_BASE_LAYER_OBJECT *gpus;
struct loader_icd *next;
};
struct loader_scanned_icds {
loader_platform_dl_handle handle;
- xglGetProcAddrType GetProcAddr;
- xglCreateInstanceType CreateInstance;
- xglDestroyInstanceType DestroyInstance;
- xglEnumerateGpusType EnumerateGpus;
- xglGetExtensionSupportType GetExtensionSupport;
- XGL_INSTANCE instance;
+ vkGetProcAddrType GetProcAddr;
+ vkCreateInstanceType CreateInstance;
+ vkDestroyInstanceType DestroyInstance;
+ vkEnumerateGpusType EnumerateGpus;
+ vkGetExtensionSupportType GetExtensionSupport;
+ VK_INSTANCE instance;
struct loader_scanned_icds *next;
};
size_t rtn_len;
registry_str = loader_get_registry_string(HKEY_LOCAL_MACHINE,
- "Software\\XGL",
+ "Software\\VK",
registry_value);
registry_len = (registry_str) ? strlen(registry_str) : 0;
#endif // WIN32
-static void loader_log(XGL_DBG_MSG_TYPE msg_type, int32_t msg_code,
+static void loader_log(VK_DBG_MSG_TYPE msg_type, int32_t msg_code,
const char *format, ...)
{
char msg[256];
// Used to call: dlopen(filename, RTLD_LAZY);
handle = loader_platform_open_library(filename);
if (!handle) {
- loader_log(XGL_DBG_MSG_WARNING, 0, loader_platform_open_library_error(filename));
+ loader_log(VK_DBG_MSG_WARNING, 0, loader_platform_open_library_error(filename));
return;
}
#define LOOKUP(func_ptr, func) do { \
- func_ptr = (xgl ##func## Type) loader_platform_get_proc_address(handle, "xgl" #func); \
+ func_ptr = (vk ##func## Type) loader_platform_get_proc_address(handle, "vk" #func); \
if (!func_ptr) { \
- loader_log(XGL_DBG_MSG_WARNING, 0, loader_platform_get_proc_address_error("xgl" #func)); \
+ loader_log(VK_DBG_MSG_WARNING, 0, loader_platform_get_proc_address_error("vk" #func)); \
return; \
} \
} while (0)
new_node = (struct loader_scanned_icds *) malloc(sizeof(struct loader_scanned_icds));
if (!new_node) {
- loader_log(XGL_DBG_MSG_WARNING, 0, "Out of memory can't add icd");
+ loader_log(VK_DBG_MSG_WARNING, 0, "Out of memory can't add icd");
return;
}
/**
- * Try to \c loader_icd_scan XGL driver(s).
+ * Try to \c loader_icd_scan VK driver(s).
*
* This function scans the default system path or path
- * specified by the \c LIBXGL_DRIVERS_PATH environment variable in
- * order to find loadable XGL ICDs with the name of libXGL_*.
+ * specified by the \c LIBVK_DRIVERS_PATH environment variable in
+ * order to find loadable VK ICDs with the name of libVK_*.
*
* \returns
* void; but side effect is to set loader_icd_scanned to true
must_free_libPaths = true;
} else {
must_free_libPaths = false;
- libPaths = DEFAULT_XGL_DRIVERS_PATH;
+ libPaths = DEFAULT_VK_DRIVERS_PATH;
}
#else // WIN32
if (geteuid() == getuid()) {
libPaths = getenv(DRIVER_PATH_ENV);
}
if (libPaths == NULL) {
- libPaths = DEFAULT_XGL_DRIVERS_PATH;
+ libPaths = DEFAULT_VK_DRIVERS_PATH;
}
#endif // WIN32
if (sysdir) {
dent = readdir(sysdir);
while (dent) {
- /* Look for ICDs starting with XGL_DRIVER_LIBRARY_PREFIX and
- * ending with XGL_LIBRARY_SUFFIX
+ /* Look for ICDs starting with VK_DRIVER_LIBRARY_PREFIX and
+ * ending with VK_LIBRARY_SUFFIX
*/
if (!strncmp(dent->d_name,
- XGL_DRIVER_LIBRARY_PREFIX,
- XGL_DRIVER_LIBRARY_PREFIX_LEN)) {
+ VK_DRIVER_LIBRARY_PREFIX,
+ VK_DRIVER_LIBRARY_PREFIX_LEN)) {
uint32_t nlen = (uint32_t) strlen(dent->d_name);
- const char *suf = dent->d_name + nlen - XGL_LIBRARY_SUFFIX_LEN;
- if ((nlen > XGL_LIBRARY_SUFFIX_LEN) &&
+ const char *suf = dent->d_name + nlen - VK_LIBRARY_SUFFIX_LEN;
+ if ((nlen > VK_LIBRARY_SUFFIX_LEN) &&
!strncmp(suf,
- XGL_LIBRARY_SUFFIX,
- XGL_LIBRARY_SUFFIX_LEN)) {
+ VK_LIBRARY_SUFFIX,
+ VK_LIBRARY_SUFFIX_LEN)) {
snprintf(icd_library, 1024, "%s" DIRECTORY_SYMBOL "%s", p,dent->d_name);
loader_scanned_icd_add(icd_library);
}
must_free_libPaths = true;
} else {
must_free_libPaths = false;
- libPaths = DEFAULT_XGL_LAYERS_PATH;
+ libPaths = DEFAULT_VK_LAYERS_PATH;
}
#else // WIN32
if (geteuid() == getuid()) {
libPaths = getenv(LAYERS_PATH_ENV);
}
if (libPaths == NULL) {
- libPaths = DEFAULT_XGL_LAYERS_PATH;
+ libPaths = DEFAULT_VK_LAYERS_PATH;
}
#endif // WIN32
if (curdir) {
dent = readdir(curdir);
while (dent) {
- /* Look for layers starting with XGL_LAYER_LIBRARY_PREFIX and
- * ending with XGL_LIBRARY_SUFFIX
+ /* Look for layers starting with VK_LAYER_LIBRARY_PREFIX and
+ * ending with VK_LIBRARY_SUFFIX
*/
if (!strncmp(dent->d_name,
- XGL_LAYER_LIBRARY_PREFIX,
- XGL_LAYER_LIBRARY_PREFIX_LEN)) {
+ VK_LAYER_LIBRARY_PREFIX,
+ VK_LAYER_LIBRARY_PREFIX_LEN)) {
uint32_t nlen = (uint32_t) strlen(dent->d_name);
- const char *suf = dent->d_name + nlen - XGL_LIBRARY_SUFFIX_LEN;
- if ((nlen > XGL_LIBRARY_SUFFIX_LEN) &&
+ const char *suf = dent->d_name + nlen - VK_LIBRARY_SUFFIX_LEN;
+ if ((nlen > VK_LIBRARY_SUFFIX_LEN) &&
!strncmp(suf,
- XGL_LIBRARY_SUFFIX,
- XGL_LIBRARY_SUFFIX_LEN)) {
+ VK_LIBRARY_SUFFIX,
+ VK_LIBRARY_SUFFIX_LEN)) {
loader_platform_dl_handle handle;
snprintf(temp_str, sizeof(temp_str), "%s" DIRECTORY_SYMBOL "%s",p,dent->d_name);
// Used to call: dlopen(temp_str, RTLD_LAZY)
continue;
}
if (loader.scanned_layer_count == MAX_LAYER_LIBRARIES) {
- loader_log(XGL_DBG_MSG_ERROR, 0, "%s ignored: max layer libraries exceed", temp_str);
+ loader_log(VK_DBG_MSG_ERROR, 0, "%s ignored: max layer libraries exceeded", temp_str);
break;
}
if ((loader.scanned_layer_names[loader.scanned_layer_count] = malloc(strlen(temp_str) + 1)) == NULL) {
- loader_log(XGL_DBG_MSG_ERROR, 0, "%s ignored: out of memory", temp_str);
+ loader_log(VK_DBG_MSG_ERROR, 0, "%s ignored: out of memory", temp_str);
break;
}
strcpy(loader.scanned_layer_names[loader.scanned_layer_count], temp_str);
loader.layer_scanned = true;
}
-static void loader_init_dispatch_table(XGL_LAYER_DISPATCH_TABLE *tab, xglGetProcAddrType fpGPA, XGL_PHYSICAL_GPU gpu)
+static void loader_init_dispatch_table(VK_LAYER_DISPATCH_TABLE *tab, vkGetProcAddrType fpGPA, VK_PHYSICAL_GPU gpu)
{
loader_initialize_dispatch_table(tab, fpGPA, gpu);
if (tab->EnumerateLayers == NULL)
- tab->EnumerateLayers = xglEnumerateLayers;
+ tab->EnumerateLayers = vkEnumerateLayers;
}
-static struct loader_icd * loader_get_icd(const XGL_BASE_LAYER_OBJECT *gpu, uint32_t *gpu_index)
+static struct loader_icd * loader_get_icd(const VK_BASE_LAYER_OBJECT *gpu, uint32_t *gpu_index)
{
for (struct loader_instance *inst = loader.instances; inst; inst = inst->next) {
for (struct loader_icd *icd = inst->icds; icd; icd = icd->next) {
obj->name[sizeof(obj->name) - 1] = '\0';
// Used to call: dlopen(pLayerNames[i].lib_name, RTLD_LAZY | RTLD_DEEPBIND)
if ((obj->lib_handle = loader_platform_open_library(pLayerNames[i].lib_name)) == NULL) {
- loader_log(XGL_DBG_MSG_ERROR, 0, loader_platform_open_library_error(pLayerNames[i].lib_name));
+ loader_log(VK_DBG_MSG_ERROR, 0, loader_platform_open_library_error(pLayerNames[i].lib_name));
continue;
} else {
- loader_log(XGL_DBG_MSG_UNKNOWN, 0, "Inserting layer %s from library %s", pLayerNames[i].layer_name, pLayerNames[i].lib_name);
+ loader_log(VK_DBG_MSG_UNKNOWN, 0, "Inserting layer %s from library %s", pLayerNames[i].layer_name, pLayerNames[i].lib_name);
}
free(pLayerNames[i].layer_name);
icd->layer_count[gpu_index]++;
}
}
-static XGL_RESULT find_layer_extension(struct loader_icd *icd, uint32_t gpu_index, const char *pExtName, const char **lib_name)
+static VK_RESULT find_layer_extension(struct loader_icd *icd, uint32_t gpu_index, const char *pExtName, const char **lib_name)
{
- XGL_RESULT err;
+ VK_RESULT err;
char *search_name;
loader_platform_dl_handle handle;
- xglGetExtensionSupportType fpGetExtensionSupport;
+ vkGetExtensionSupportType fpGetExtensionSupport;
/*
 * The loader provides the abstraction that makes layers and extensions work via
 * the currently defined extension mechanism. That is, when an app queries for an extension
- * via xglGetExtensionSupport, the loader will call both the driver as well as any layers
+ * via vkGetExtensionSupport, the loader will call both the driver as well as any layers
* to see who implements that extension. Then, if the app enables the extension during
- * xglCreateDevice the loader will find and load any layers that implement that extension.
+ * vkCreateDevice the loader will find and load any layers that implement that extension.
*/
// TODO: What if extension is in multiple places?
// TODO: Who should we ask first? Driver or layers? Do driver for now.
- err = icd->scanned_icds[gpu_index].GetExtensionSupport((XGL_PHYSICAL_GPU) (icd->gpus[gpu_index].nextObject), pExtName);
- if (err == XGL_SUCCESS) {
+ err = icd->scanned_icds[gpu_index].GetExtensionSupport((VK_PHYSICAL_GPU) (icd->gpus[gpu_index].nextObject), pExtName);
+ if (err == VK_SUCCESS) {
if (lib_name) {
*lib_name = NULL;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
for (unsigned int j = 0; j < loader.scanned_layer_count; j++) {
if ((handle = loader_platform_open_library(search_name)) == NULL)
continue;
- fpGetExtensionSupport = loader_platform_get_proc_address(handle, "xglGetExtensionSupport");
+ fpGetExtensionSupport = loader_platform_get_proc_address(handle, "vkGetExtensionSupport");
if (fpGetExtensionSupport != NULL) {
// Found layer's GetExtensionSupport call
- err = fpGetExtensionSupport((XGL_PHYSICAL_GPU) (icd->gpus + gpu_index), pExtName);
+ err = fpGetExtensionSupport((VK_PHYSICAL_GPU) (icd->gpus + gpu_index), pExtName);
loader_platform_close_library(handle);
- if (err == XGL_SUCCESS) {
+ if (err == VK_SUCCESS) {
if (lib_name) {
*lib_name = loader.scanned_layer_names[j];
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
} else {
loader_platform_close_library(handle);
// No GetExtensionSupport or GetExtensionSupport returned invalid extension
// for the layer, so test the layer name as if it is an extension name
- // use default layer name based on library name XGL_LAYER_LIBRARY_PREFIX<name>.XGL_LIBRARY_SUFFIX
+ // use default layer name based on library name VK_LAYER_LIBRARY_PREFIX<name>.VK_LIBRARY_SUFFIX
char *pEnd;
size_t siz;
search_name = basename(search_name);
- search_name += strlen(XGL_LAYER_LIBRARY_PREFIX);
+ search_name += strlen(VK_LAYER_LIBRARY_PREFIX);
pEnd = strrchr(search_name, '.');
siz = (int) (pEnd - search_name);
if (siz != strlen(pExtName))
if (lib_name) {
*lib_name = loader.scanned_layer_names[j];
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
}
- return XGL_ERROR_INVALID_EXTENSION;
+ return VK_ERROR_INVALID_EXTENSION;
}
static uint32_t loader_get_layer_env(struct loader_icd *icd, uint32_t gpu_index, struct layer_name_pair *pLayerNames)
next++;
}
name = basename(p);
- if (find_layer_extension(icd, gpu_index, name, &lib_name) != XGL_SUCCESS) {
+ if (find_layer_extension(icd, gpu_index, name, &lib_name) != VK_SUCCESS) {
p = next;
continue;
}
return count;
}
-static uint32_t loader_get_layer_libs(struct loader_icd *icd, uint32_t gpu_index, const XGL_DEVICE_CREATE_INFO* pCreateInfo, struct layer_name_pair **ppLayerNames)
+static uint32_t loader_get_layer_libs(struct loader_icd *icd, uint32_t gpu_index, const VK_DEVICE_CREATE_INFO* pCreateInfo, struct layer_name_pair **ppLayerNames)
{
static struct layer_name_pair layerNames[MAX_LAYER_LIBRARIES];
const char *lib_name = NULL;
for (uint32_t i = 0; i < pCreateInfo->extensionCount; i++) {
const char *pExtName = pCreateInfo->ppEnabledExtensionNames[i];
- if (find_layer_extension(icd, gpu_index, pExtName, &lib_name) == XGL_SUCCESS) {
+ if (find_layer_extension(icd, gpu_index, pExtName, &lib_name) == VK_SUCCESS) {
uint32_t len;
/*
}
}
-extern uint32_t loader_activate_layers(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo)
+extern uint32_t loader_activate_layers(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo)
{
uint32_t gpu_index;
uint32_t count;
struct layer_name_pair *pLayerNames;
- struct loader_icd *icd = loader_get_icd((const XGL_BASE_LAYER_OBJECT *) gpu, &gpu_index);
+ struct loader_icd *icd = loader_get_icd((const VK_BASE_LAYER_OBJECT *) gpu, &gpu_index);
if (!icd)
return 0;
- assert(gpu_index < XGL_MAX_PHYSICAL_GPUS);
+ assert(gpu_index < VK_MAX_PHYSICAL_GPUS);
/* activate any layer libraries */
if (!loader_layers_activated(icd, gpu_index)) {
- XGL_BASE_LAYER_OBJECT *gpuObj = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_BASE_LAYER_OBJECT *nextGpuObj, *baseObj = gpuObj->baseObject;
- xglGetProcAddrType nextGPA = xglGetProcAddr;
+ VK_BASE_LAYER_OBJECT *gpuObj = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT *nextGpuObj, *baseObj = gpuObj->baseObject;
+ vkGetProcAddrType nextGPA = vkGetProcAddr;
count = loader_get_layer_libs(icd, gpu_index, pCreateInfo, &pLayerNames);
if (!count)
return 0;
loader_init_layer_libs(icd, gpu_index, pLayerNames, count);
- icd->wrappedGpus[gpu_index] = malloc(sizeof(XGL_BASE_LAYER_OBJECT) * icd->layer_count[gpu_index]);
+ icd->wrappedGpus[gpu_index] = malloc(sizeof(VK_BASE_LAYER_OBJECT) * icd->layer_count[gpu_index]);
if (! icd->wrappedGpus[gpu_index])
- loader_log(XGL_DBG_MSG_ERROR, 0, "Failed to malloc Gpu objects for layer");
+ loader_log(VK_DBG_MSG_ERROR, 0, "Failed to malloc Gpu objects for layer");
for (int32_t i = icd->layer_count[gpu_index] - 1; i >= 0; i--) {
nextGpuObj = (icd->wrappedGpus[gpu_index] + i);
nextGpuObj->pGPA = nextGPA;
char funcStr[256];
snprintf(funcStr, 256, "%sGetProcAddr",icd->layer_libs[gpu_index][i].name);
- if ((nextGPA = (xglGetProcAddrType) loader_platform_get_proc_address(icd->layer_libs[gpu_index][i].lib_handle, funcStr)) == NULL)
- nextGPA = (xglGetProcAddrType) loader_platform_get_proc_address(icd->layer_libs[gpu_index][i].lib_handle, "xglGetProcAddr");
+ if ((nextGPA = (vkGetProcAddrType) loader_platform_get_proc_address(icd->layer_libs[gpu_index][i].lib_handle, funcStr)) == NULL)
+ nextGPA = (vkGetProcAddrType) loader_platform_get_proc_address(icd->layer_libs[gpu_index][i].lib_handle, "vkGetProcAddr");
if (!nextGPA) {
- loader_log(XGL_DBG_MSG_ERROR, 0, "Failed to find xglGetProcAddr in layer %s", icd->layer_libs[gpu_index][i].name);
+ loader_log(VK_DBG_MSG_ERROR, 0, "Failed to find vkGetProcAddr in layer %s", icd->layer_libs[gpu_index][i].name);
continue;
}
if (i == 0) {
loader_init_dispatch_table(icd->loader_dispatch + gpu_index, nextGPA, gpuObj);
//Insert the new wrapped objects into the list with loader object at head
- ((XGL_BASE_LAYER_OBJECT *) gpu)->nextObject = gpuObj;
- ((XGL_BASE_LAYER_OBJECT *) gpu)->pGPA = nextGPA;
+ ((VK_BASE_LAYER_OBJECT *) gpu)->nextObject = gpuObj;
+ ((VK_BASE_LAYER_OBJECT *) gpu)->pGPA = nextGPA;
gpuObj = icd->wrappedGpus[gpu_index] + icd->layer_count[gpu_index] - 1;
gpuObj->nextObject = baseObj;
gpuObj->pGPA = icd->scanned_icds->GetProcAddr;
count = loader_get_layer_libs(icd, gpu_index, pCreateInfo, &pLayerNames);
for (uint32_t i = 0; i < count; i++) {
if (strcmp(icd->layer_libs[gpu_index][i].name, pLayerNames[i].layer_name)) {
- loader_log(XGL_DBG_MSG_ERROR, 0, "Layers activated != Layers requested");
+ loader_log(VK_DBG_MSG_ERROR, 0, "Layers activated != Layers requested");
break;
}
}
if (count != icd->layer_count[gpu_index]) {
- loader_log(XGL_DBG_MSG_ERROR, 0, "Number of Layers activated != number requested");
+ loader_log(VK_DBG_MSG_ERROR, 0, "Number of Layers activated != number requested");
}
}
return icd->layer_count[gpu_index];
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglCreateInstance(
- const XGL_INSTANCE_CREATE_INFO* pCreateInfo,
- XGL_INSTANCE* pInstance)
+LOADER_EXPORT VK_RESULT VKAPI vkCreateInstance(
+ const VK_INSTANCE_CREATE_INFO* pCreateInfo,
+ VK_INSTANCE* pInstance)
{
static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(once_icd);
static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(once_layer);
struct loader_instance *ptr_instance = NULL;
struct loader_scanned_icds *scanned_icds;
struct loader_icd *icd;
- XGL_RESULT res = XGL_ERROR_INITIALIZATION_FAILED;
+ VK_RESULT res = VK_ERROR_INITIALIZATION_FAILED;
/* Scan/discover all ICD libraries in a single-threaded manner */
loader_platform_thread_once(&once_icd, loader_icd_scan);
ptr_instance = (struct loader_instance*) malloc(sizeof(struct loader_instance));
if (ptr_instance == NULL) {
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
}
memset(ptr_instance, 0, sizeof(struct loader_instance));
if (icd) {
res = scanned_icds->CreateInstance(pCreateInfo,
&(scanned_icds->instance));
- if (res != XGL_SUCCESS)
+ if (res != VK_SUCCESS)
{
ptr_instance->icds = ptr_instance->icds->next;
loader_icd_destroy(icd);
scanned_icds->instance = NULL;
- loader_log(XGL_DBG_MSG_WARNING, 0,
+ loader_log(VK_DBG_MSG_WARNING, 0,
"ICD ignored: failed to CreateInstance on device");
}
}
}
if (ptr_instance->icds == NULL) {
- return XGL_ERROR_INCOMPATIBLE_DRIVER;
+ return VK_ERROR_INCOMPATIBLE_DRIVER;
}
- *pInstance = (XGL_INSTANCE) ptr_instance;
- return XGL_SUCCESS;
+ *pInstance = (VK_INSTANCE) ptr_instance;
+ return VK_SUCCESS;
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglDestroyInstance(
- XGL_INSTANCE instance)
+LOADER_EXPORT VK_RESULT VKAPI vkDestroyInstance(
+ VK_INSTANCE instance)
{
struct loader_instance *ptr_instance = (struct loader_instance *) instance;
struct loader_scanned_icds *scanned_icds;
- XGL_RESULT res;
+ VK_RESULT res;
// Remove this instance from the list of instances:
struct loader_instance *prev = NULL;
}
if (next == NULL) {
// This must be an invalid instance handle or empty list
- return XGL_ERROR_INVALID_HANDLE;
+ return VK_ERROR_INVALID_HANDLE;
}
// cleanup any prior layer initializations
while (scanned_icds) {
if (scanned_icds->instance)
res = scanned_icds->DestroyInstance(scanned_icds->instance);
- if (res != XGL_SUCCESS)
- loader_log(XGL_DBG_MSG_WARNING, 0,
+ if (res != VK_SUCCESS)
+ loader_log(VK_DBG_MSG_WARNING, 0,
"ICD ignored: failed to DestroyInstance on device");
scanned_icds->instance = NULL;
scanned_icds = scanned_icds->next;
free(ptr_instance);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglEnumerateGpus(
+LOADER_EXPORT VK_RESULT VKAPI vkEnumerateGpus(
- XGL_INSTANCE instance,
+ VK_INSTANCE instance,
uint32_t maxGpus,
uint32_t* pGpuCount,
- XGL_PHYSICAL_GPU* pGpus)
+ VK_PHYSICAL_GPU* pGpus)
{
struct loader_instance *ptr_instance = (struct loader_instance *) instance;
struct loader_icd *icd;
uint32_t count = 0;
- XGL_RESULT res;
+ VK_RESULT res;
- //in spirit of XGL don't error check on the instance parameter
+ //in spirit of VK don't error check on the instance parameter
icd = ptr_instance->icds;
while (icd) {
- XGL_PHYSICAL_GPU gpus[XGL_MAX_PHYSICAL_GPUS];
- XGL_BASE_LAYER_OBJECT * wrapped_gpus;
- xglGetProcAddrType get_proc_addr = icd->scanned_icds->GetProcAddr;
+ VK_PHYSICAL_GPU gpus[VK_MAX_PHYSICAL_GPUS];
+ VK_BASE_LAYER_OBJECT * wrapped_gpus;
+ vkGetProcAddrType get_proc_addr = icd->scanned_icds->GetProcAddr;
uint32_t n, max = maxGpus - count;
- if (max > XGL_MAX_PHYSICAL_GPUS) {
- max = XGL_MAX_PHYSICAL_GPUS;
+ if (max > VK_MAX_PHYSICAL_GPUS) {
+ max = VK_MAX_PHYSICAL_GPUS;
}
res = icd->scanned_icds->EnumerateGpus(icd->scanned_icds->instance,
max, &n,
gpus);
- if (res == XGL_SUCCESS && n) {
- wrapped_gpus = (XGL_BASE_LAYER_OBJECT*) malloc(n *
- sizeof(XGL_BASE_LAYER_OBJECT));
+ if (res == VK_SUCCESS && n) {
+ wrapped_gpus = (VK_BASE_LAYER_OBJECT*) malloc(n *
+ sizeof(VK_BASE_LAYER_OBJECT));
icd->gpus = wrapped_gpus;
icd->gpu_count = n;
- icd->loader_dispatch = (XGL_LAYER_DISPATCH_TABLE *) malloc(n *
- sizeof(XGL_LAYER_DISPATCH_TABLE));
+ icd->loader_dispatch = (VK_LAYER_DISPATCH_TABLE *) malloc(n *
+ sizeof(VK_LAYER_DISPATCH_TABLE));
for (unsigned int i = 0; i < n; i++) {
(wrapped_gpus + i)->baseObject = gpus[i];
(wrapped_gpus + i)->pGPA = get_proc_addr;
/* Verify ICD compatibility */
if (!valid_loader_magic_value(gpus[i])) {
- loader_log(XGL_DBG_MSG_WARNING, 0,
+ loader_log(VK_DBG_MSG_WARNING, 0,
"Loader: Incompatible ICD, first dword must be initialized to ICD_LOADER_MAGIC. See loader/README.md for details.\n");
assert(0);
}
- const XGL_LAYER_DISPATCH_TABLE **disp;
- disp = (const XGL_LAYER_DISPATCH_TABLE **) gpus[i];
+ const VK_LAYER_DISPATCH_TABLE **disp;
+ disp = (const VK_LAYER_DISPATCH_TABLE **) gpus[i];
*disp = icd->loader_dispatch + i;
}
*pGpuCount = count;
- return (count > 0) ? XGL_SUCCESS : res;
+ return (count > 0) ? VK_SUCCESS : res;
}
-LOADER_EXPORT void * XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char * pName)
+LOADER_EXPORT void * VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char * pName)
{
if (gpu == NULL) {
return NULL;
}
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_LAYER_DISPATCH_TABLE * disp_table = * (XGL_LAYER_DISPATCH_TABLE **) gpuw->baseObject;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_LAYER_DISPATCH_TABLE * disp_table = * (VK_LAYER_DISPATCH_TABLE **) gpuw->baseObject;
void *addr;
if (disp_table == NULL)
}
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char *pExtName)
+LOADER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char *pExtName)
{
uint32_t gpu_index;
- struct loader_icd *icd = loader_get_icd((const XGL_BASE_LAYER_OBJECT *) gpu, &gpu_index);
+ struct loader_icd *icd = loader_get_icd((const VK_BASE_LAYER_OBJECT *) gpu, &gpu_index);
if (!icd)
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
return find_layer_extension(icd, gpu_index, pExtName, NULL);
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
+LOADER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
{
uint32_t gpu_index;
size_t count = 0;
char *lib_name;
- struct loader_icd *icd = loader_get_icd((const XGL_BASE_LAYER_OBJECT *) gpu, &gpu_index);
+ struct loader_icd *icd = loader_get_icd((const VK_BASE_LAYER_OBJECT *) gpu, &gpu_index);
loader_platform_dl_handle handle;
- xglEnumerateLayersType fpEnumerateLayers;
+ vkEnumerateLayersType fpEnumerateLayers;
char layer_buf[16][256];
char * layers[16];
if (pOutLayerCount == NULL || pOutLayers == NULL)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
if (!icd)
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
for (int i = 0; i < 16; i++)
layers[i] = &layer_buf[i][0];
// Used to call: dlopen(*lib_name, RTLD_LAZY)
if ((handle = loader_platform_open_library(lib_name)) == NULL)
continue;
- if ((fpEnumerateLayers = loader_platform_get_proc_address(handle, "xglEnumerateLayers")) == NULL) {
- //use default layer name based on library name XGL_LAYER_LIBRARY_PREFIX<name>.XGL_LIBRARY_SUFFIX
+ if ((fpEnumerateLayers = loader_platform_get_proc_address(handle, "vkEnumerateLayers")) == NULL) {
+ //use default layer name based on library name VK_LAYER_LIBRARY_PREFIX<name>.VK_LIBRARY_SUFFIX
char *pEnd, *cpyStr;
size_t siz;
loader_platform_close_library(handle);
lib_name = basename(lib_name);
pEnd = strrchr(lib_name, '.');
- siz = (int) (pEnd - lib_name - strlen(XGL_LAYER_LIBRARY_PREFIX) + 1);
+ siz = (int) (pEnd - lib_name - strlen(VK_LAYER_LIBRARY_PREFIX) + 1);
if (pEnd == NULL || siz <= 0)
continue;
cpyStr = malloc(siz);
free(cpyStr);
continue;
}
- strncpy(cpyStr, lib_name + strlen(XGL_LAYER_LIBRARY_PREFIX), siz);
+ strncpy(cpyStr, lib_name + strlen(VK_LAYER_LIBRARY_PREFIX), siz);
cpyStr[siz - 1] = '\0';
if (siz > maxStringSize)
siz = (int) maxStringSize;
} else {
size_t cnt;
uint32_t n;
- XGL_RESULT res;
+ VK_RESULT res;
n = (uint32_t) ((maxStringSize < 256) ? maxStringSize : 256);
res = fpEnumerateLayers(NULL, 16, n, &cnt, layers, (char *) icd->gpus + gpu_index);
loader_platform_close_library(handle);
- if (res != XGL_SUCCESS)
+ if (res != VK_SUCCESS)
continue;
if (cnt + count > maxLayerCount)
cnt = maxLayerCount - count;
*pOutLayerCount = count;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
+LOADER_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
{
const struct loader_icd *icd;
struct loader_instance *inst;
- XGL_RESULT res;
+ VK_RESULT res;
uint32_t gpu_idx;
- if (instance == XGL_NULL_HANDLE)
- return XGL_ERROR_INVALID_HANDLE;
+ if (instance == VK_NULL_HANDLE)
+ return VK_ERROR_INVALID_HANDLE;
assert(loader.icds_scanned);
break;
}
- if (inst == XGL_NULL_HANDLE)
- return XGL_ERROR_INVALID_HANDLE;
+ if (inst == VK_NULL_HANDLE)
+ return VK_ERROR_INVALID_HANDLE;
for (icd = inst->icds; icd; icd = icd->next) {
for (uint32_t i = 0; i < icd->gpu_count; i++) {
res = (icd->loader_dispatch + i)->DbgRegisterMsgCallback(icd->scanned_icds->instance,
pfnMsgCallback, pUserData);
- if (res != XGL_SUCCESS) {
+ if (res != VK_SUCCESS) {
gpu_idx = i;
break;
}
}
- if (res != XGL_SUCCESS)
+ if (res != VK_SUCCESS)
break;
}
return res;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
+LOADER_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
{
- XGL_RESULT res = XGL_SUCCESS;
+ VK_RESULT res = VK_SUCCESS;
struct loader_instance *inst;
- if (instance == XGL_NULL_HANDLE)
- return XGL_ERROR_INVALID_HANDLE;
+ if (instance == VK_NULL_HANDLE)
+ return VK_ERROR_INVALID_HANDLE;
assert(loader.icds_scanned);
break;
}
- if (inst == XGL_NULL_HANDLE)
- return XGL_ERROR_INVALID_HANDLE;
+ if (inst == VK_NULL_HANDLE)
+ return VK_ERROR_INVALID_HANDLE;
for (const struct loader_icd * icd = inst->icds; icd; icd = icd->next) {
for (uint32_t i = 0; i < icd->gpu_count; i++) {
- XGL_RESULT r;
+ VK_RESULT r;
r = (icd->loader_dispatch + i)->DbgUnregisterMsgCallback(icd->scanned_icds->instance, pfnMsgCallback);
- if (r != XGL_SUCCESS) {
+ if (r != VK_SUCCESS) {
res = r;
}
}
return res;
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglDbgSetGlobalOption(XGL_INSTANCE instance, XGL_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData)
+LOADER_EXPORT VK_RESULT VKAPI vkDbgSetGlobalOption(VK_INSTANCE instance, VK_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData)
{
- XGL_RESULT res = XGL_SUCCESS;
+ VK_RESULT res = VK_SUCCESS;
struct loader_instance *inst;
- if (instance == XGL_NULL_HANDLE)
- return XGL_ERROR_INVALID_HANDLE;
+ if (instance == VK_NULL_HANDLE)
+ return VK_ERROR_INVALID_HANDLE;
assert(loader.icds_scanned);
break;
}
- if (inst == XGL_NULL_HANDLE)
- return XGL_ERROR_INVALID_HANDLE;
+ if (inst == VK_NULL_HANDLE)
+ return VK_ERROR_INVALID_HANDLE;
for (const struct loader_icd * icd = inst->icds; icd; icd = icd->next) {
for (uint32_t i = 0; i < icd->gpu_count; i++) {
- XGL_RESULT r;
+ VK_RESULT r;
r = (icd->loader_dispatch + i)->DbgSetGlobalOption(icd->scanned_icds->instance, dbgOption,
dataSize, pData);
/* unfortunately we cannot roll back */
- if (r != XGL_SUCCESS) {
+ if (r != VK_SUCCESS) {
res = r;
}
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#ifndef LOADER_H
#define LOADER_H
-#include <xgl.h>
-#include <xglDbg.h>
+#include <vulkan.h>
+#include <vkDbg.h>
#if defined(WIN32)
// FIXME: NEED WINDOWS EQUIVALENT
#else // WIN32
-#include <xglWsiX11Ext.h>
+#include <vkWsiX11Ext.h>
#endif // WIN32
-#include <xglLayer.h>
-#include <xglIcd.h>
+#include <vkLayer.h>
+#include <vkIcd.h>
#include <assert.h>
#if defined(__GNUC__) && __GNUC__ >= 4
loader_set_data(obj, data);
}
-static inline void *loader_unwrap_gpu(XGL_PHYSICAL_GPU *gpu)
+static inline void *loader_unwrap_gpu(VK_PHYSICAL_GPU *gpu)
{
- const XGL_BASE_LAYER_OBJECT *wrap = (const XGL_BASE_LAYER_OBJECT *) *gpu;
+ const VK_BASE_LAYER_OBJECT *wrap = (const VK_BASE_LAYER_OBJECT *) *gpu;
- *gpu = (XGL_PHYSICAL_GPU) wrap->nextObject;
+ *gpu = (VK_PHYSICAL_GPU) wrap->nextObject;
return loader_get_data(wrap->baseObject);
}
-extern uint32_t loader_activate_layers(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo);
+extern uint32_t loader_activate_layers(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo);
#define MAX_LAYER_LIBRARIES 64
#endif /* LOADER_H */
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2015 LunarG, Inc.
* Copyright 2014 Valve Software
#include <pthread.h>
#include <assert.h>
-// XGL Library Filenames, Paths, etc.:
+// VK Library Filenames, Paths, etc.:
#define PATH_SEPERATOR ':'
#define DIRECTORY_SYMBOL "/"
-#define DRIVER_PATH_ENV "LIBXGL_DRIVERS_PATH"
-#define LAYERS_PATH_ENV "LIBXGL_LAYERS_PATH"
-#define LAYER_NAMES_ENV "LIBXGL_LAYER_NAMES"
-#ifndef DEFAULT_XGL_DRIVERS_PATH
+#define DRIVER_PATH_ENV "LIBVK_DRIVERS_PATH"
+#define LAYERS_PATH_ENV "LIBVK_LAYERS_PATH"
+#define LAYER_NAMES_ENV "LIBVK_LAYER_NAMES"
+#ifndef DEFAULT_VK_DRIVERS_PATH
// TODO: Is this a good default location?
// Need to search for both 32bit and 64bit ICDs
-#define DEFAULT_XGL_DRIVERS_PATH "/usr/lib/i386-linux-gnu/xgl:/usr/lib/x86_64-linux-gnu/xgl"
-#define XGL_DRIVER_LIBRARY_PREFIX "libXGL_"
-#define XGL_DRIVER_LIBRARY_PREFIX_LEN 7
-#define XGL_LAYER_LIBRARY_PREFIX "libXGLLayer"
-#define XGL_LAYER_LIBRARY_PREFIX_LEN 11
-#define XGL_LIBRARY_SUFFIX ".so"
-#define XGL_LIBRARY_SUFFIX_LEN 3
-#endif // DEFAULT_XGL_DRIVERS_PATH
-#ifndef DEFAULT_XGL_LAYERS_PATH
+#define DEFAULT_VK_DRIVERS_PATH "/usr/lib/i386-linux-gnu/vk:/usr/lib/x86_64-linux-gnu/vk"
+#define VK_DRIVER_LIBRARY_PREFIX "libVK_"
+#define VK_DRIVER_LIBRARY_PREFIX_LEN 6
+#define VK_LAYER_LIBRARY_PREFIX "libVKLayer"
+#define VK_LAYER_LIBRARY_PREFIX_LEN 10
+#define VK_LIBRARY_SUFFIX ".so"
+#define VK_LIBRARY_SUFFIX_LEN 3
+#endif // DEFAULT_VK_DRIVERS_PATH
+#ifndef DEFAULT_VK_LAYERS_PATH
// TODO: Are these good default locations?
-#define DEFAULT_XGL_LAYERS_PATH ".:/usr/lib/i386-linux-gnu/xgl:/usr/lib/x86_64-linux-gnu/xgl"
+#define DEFAULT_VK_LAYERS_PATH ".:/usr/lib/i386-linux-gnu/vk:/usr/lib/x86_64-linux-gnu/vk"
#endif
// C99:
using namespace std;
#endif // __cplusplus
-// XGL Library Filenames, Paths, etc.:
+// VK Library Filenames, Paths, etc.:
#define PATH_SEPERATOR ';'
#define DIRECTORY_SYMBOL "\\"
-#define DRIVER_PATH_REGISTRY_VALUE "XGL_DRIVERS_PATH"
-#define LAYERS_PATH_REGISTRY_VALUE "XGL_LAYERS_PATH"
-#define LAYER_NAMES_REGISTRY_VALUE "XGL_LAYER_NAMES"
-#define DRIVER_PATH_ENV "XGL_DRIVERS_PATH"
-#define LAYERS_PATH_ENV "XGL_LAYERS_PATH"
-#define LAYER_NAMES_ENV "XGL_LAYER_NAMES"
-#ifndef DEFAULT_XGL_DRIVERS_PATH
+#define DRIVER_PATH_REGISTRY_VALUE "VK_DRIVERS_PATH"
+#define LAYERS_PATH_REGISTRY_VALUE "VK_LAYERS_PATH"
+#define LAYER_NAMES_REGISTRY_VALUE "VK_LAYER_NAMES"
+#define DRIVER_PATH_ENV "VK_DRIVERS_PATH"
+#define LAYERS_PATH_ENV "VK_LAYERS_PATH"
+#define LAYER_NAMES_ENV "VK_LAYER_NAMES"
+#ifndef DEFAULT_VK_DRIVERS_PATH
// TODO: Is this a good default location?
// Need to search for both 32bit and 64bit ICDs
-#define DEFAULT_XGL_DRIVERS_PATH "C:\\Windows\\System32"
+#define DEFAULT_VK_DRIVERS_PATH "C:\\Windows\\System32"
// TODO/TBD: Is this an appropriate prefix for Windows?
-#define XGL_DRIVER_LIBRARY_PREFIX "XGL_"
-#define XGL_DRIVER_LIBRARY_PREFIX_LEN 4
+#define VK_DRIVER_LIBRARY_PREFIX "VK_"
+#define VK_DRIVER_LIBRARY_PREFIX_LEN 3
// TODO/TBD: Is this an appropriate suffix for Windows?
-#define XGL_LAYER_LIBRARY_PREFIX "XGLLayer"
-#define XGL_LAYER_LIBRARY_PREFIX_LEN 8
-#define XGL_LIBRARY_SUFFIX ".dll"
-#define XGL_LIBRARY_SUFFIX_LEN 4
-#endif // DEFAULT_XGL_DRIVERS_PATH
-#ifndef DEFAULT_XGL_LAYERS_PATH
+#define VK_LAYER_LIBRARY_PREFIX "VKLayer"
+#define VK_LAYER_LIBRARY_PREFIX_LEN 7
+#define VK_LIBRARY_SUFFIX ".dll"
+#define VK_LIBRARY_SUFFIX_LEN 4
+#endif // DEFAULT_VK_DRIVERS_PATH
+#ifndef DEFAULT_VK_LAYERS_PATH
// TODO: Is this a good default location?
-#define DEFAULT_XGL_LAYERS_PATH "C:\\Windows\\System32"
-#endif // DEFAULT_XGL_LAYERS_PATH
+#define DEFAULT_VK_LAYERS_PATH "C:\\Windows\\System32"
+#endif // DEFAULT_VK_LAYERS_PATH
// C99:
// Microsoft didn't implement C99 in Visual Studio; but started adding it with
message(FATAL_ERROR "Missing ImageMagick library: sudo apt-get install libmagickwand-dev")
endif()
-set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DXGL_PROTOTYPES -Wno-sign-compare")
+set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DVK_PROTOTYPES -Wno-sign-compare")
SET(COMMON_CPP
xglrenderframework.cpp
COMPILE_DEFINITIONS "GTEST_LINKED_AS_SHARED_LIBRARY=1")
target_link_libraries(xglbase XGL gtest gtest_main ${TEST_LIBRARIES})
-add_executable(xgl_image_tests image_tests.cpp ${COMMON_CPP})
-set_target_properties(xgl_image_tests
+add_executable(vk_image_tests image_tests.cpp ${COMMON_CPP})
+set_target_properties(vk_image_tests
PROPERTIES
COMPILE_DEFINITIONS "GTEST_LINKED_AS_SHARED_LIBRARY=1")
-target_link_libraries(xgl_image_tests XGL gtest gtest_main ${TEST_LIBRARIES})
+target_link_libraries(vk_image_tests XGL gtest gtest_main ${TEST_LIBRARIES})
-add_executable(xgl_render_tests render_tests.cpp ${COMMON_CPP})
-set_target_properties(xgl_render_tests
+add_executable(vk_render_tests render_tests.cpp ${COMMON_CPP})
+set_target_properties(vk_render_tests
PROPERTIES
COMPILE_DEFINITIONS "GTEST_LINKED_AS_SHARED_LIBRARY=1")
-target_link_libraries(xgl_render_tests XGL gtest gtest_main ${TEST_LIBRARIES})
+target_link_libraries(vk_render_tests XGL gtest gtest_main ${TEST_LIBRARIES})
-add_executable(xgl_blit_tests blit_tests.cpp ${COMMON_CPP})
-set_target_properties(xgl_blit_tests
+add_executable(vk_blit_tests blit_tests.cpp ${COMMON_CPP})
+set_target_properties(vk_blit_tests
PROPERTIES
COMPILE_DEFINITIONS "GTEST_LINKED_AS_SHARED_LIBRARY=1")
-target_link_libraries(xgl_blit_tests XGL gtest gtest_main ${TEST_LIBRARIES})
+target_link_libraries(vk_blit_tests XGL gtest gtest_main ${TEST_LIBRARIES})
-add_executable(xgl_layer_validation_tests layer_validation_tests.cpp ${COMMON_CPP})
-set_target_properties(xgl_layer_validation_tests
+add_executable(vk_layer_validation_tests layer_validation_tests.cpp ${COMMON_CPP})
+set_target_properties(vk_layer_validation_tests
PROPERTIES
COMPILE_DEFINITIONS "GTEST_LINKED_AS_SHARED_LIBRARY=1")
-target_link_libraries(xgl_layer_validation_tests XGL gtest gtest_main ${TEST_LIBRARIES})
+target_link_libraries(vk_layer_validation_tests XGL gtest gtest_main ${TEST_LIBRARIES})
add_subdirectory(gtest-1.7.0)
-To run the raster tests and just look for XGL errors simply run them with no arguments or use the --gtest_filter=XglRasterTest.<testname> to run just one test. Running with --gtest_list_tests will name all of the available tests.
+To run the raster tests and just check for VK errors, run them with no arguments, or use --gtest_filter=XglRasterTest.<testname> to run a single test. Running with --gtest_list_tests lists all of the available tests.
The raster tests can be run and the pixel results compared to known good (golden) images generated from previous runs. To generate golden images, make sure that the tests are rendering as expected, and then run with --save-images. This will generate .ppm files for all of the tests. Create a directory called "golden" and "mv *.ppm golden". When new tests are added to render_tests additional golden images will need to be generated for the new tests.
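The golden-image workflow above can be sketched as a short shell session. This is illustrative only: the binary name and `dbuild` layout are assumed from the build instructions earlier in this document, and the `XglRasterTest.triangle` test name is a hypothetical example.

```shell
# Point the icd loader at the driver libraries (path assumed from the build docs).
export LIBVK_DRIVERS_PATH=$PWD/icd/intel

# Run a single raster test by name (uncomment once the tests are built;
# binary and test names here are assumptions, not verified):
# ./dbuild/tests/vk_render_tests --gtest_filter=XglRasterTest.triangle

# Generate golden images and stash them for later pixel comparison:
# ./dbuild/tests/vk_render_tests --save-images
# mkdir -p golden && mv ./*.ppm golden
```

When new tests are added, rerun with `--save-images` and move the new `.ppm` files into `golden` so future runs have references to compare against.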
-// XGL tests
+// VK tests
//
// Copyright (C) 2014 LunarG, Inc.
//
// Blit (copy, clear, and resolve) tests
#include "test_common.h"
-#include "xgltestbinding.h"
+#include "vktestbinding.h"
#include "test_environment.h"
#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
-namespace xgl_testing {
+namespace vk_testing {
-size_t get_format_size(XGL_FORMAT format);
+size_t get_format_size(VK_FORMAT format);
class ImageChecker {
public:
- explicit ImageChecker(const XGL_IMAGE_CREATE_INFO &info, const std::vector<XGL_BUFFER_IMAGE_COPY> ®ions)
+ explicit ImageChecker(const VK_IMAGE_CREATE_INFO &info, const std::vector<VK_BUFFER_IMAGE_COPY> ®ions)
: info_(info), regions_(regions), pattern_(HASH) {}
- explicit ImageChecker(const XGL_IMAGE_CREATE_INFO &info, const std::vector<XGL_IMAGE_SUBRESOURCE_RANGE> &ranges);
- explicit ImageChecker(const XGL_IMAGE_CREATE_INFO &info);
+ explicit ImageChecker(const VK_IMAGE_CREATE_INFO &info, const std::vector<VK_IMAGE_SUBRESOURCE_RANGE> &ranges);
+ explicit ImageChecker(const VK_IMAGE_CREATE_INFO &info);
void set_solid_pattern(const std::vector<uint8_t> &solid);
- XGL_GPU_SIZE buffer_size() const;
+ VK_GPU_SIZE buffer_size() const;
bool fill(Buffer &buf) const { return walk(FILL, buf); }
bool fill(Image &img) const { return walk(FILL, img); }
bool check(Buffer &buf) const { return walk(CHECK, buf); }
bool check(Image &img) const { return walk(CHECK, img); }
- const std::vector<XGL_BUFFER_IMAGE_COPY> ®ions() const { return regions_; }
+ const std::vector<VK_BUFFER_IMAGE_COPY> ®ions() const { return regions_; }
static void hash_salt_generate() { hash_salt_++; }
};
size_t buffer_cpp() const;
- XGL_SUBRESOURCE_LAYOUT buffer_layout(const XGL_BUFFER_IMAGE_COPY ®ion) const;
+ VK_SUBRESOURCE_LAYOUT buffer_layout(const VK_BUFFER_IMAGE_COPY ®ion) const;
bool walk(Action action, Buffer &buf) const;
bool walk(Action action, Image &img) const;
- bool walk_region(Action action, const XGL_BUFFER_IMAGE_COPY ®ion, const XGL_SUBRESOURCE_LAYOUT &layout, void *data) const;
+ bool walk_region(Action action, const VK_BUFFER_IMAGE_COPY ®ion, const VK_SUBRESOURCE_LAYOUT &layout, void *data) const;
- std::vector<uint8_t> pattern_hash(const XGL_IMAGE_SUBRESOURCE &subres, const XGL_OFFSET3D &offset) const;
+ std::vector<uint8_t> pattern_hash(const VK_IMAGE_SUBRESOURCE &subres, const VK_OFFSET3D &offset) const;
static uint32_t hash_salt_;
- XGL_IMAGE_CREATE_INFO info_;
- std::vector<XGL_BUFFER_IMAGE_COPY> regions_;
+ VK_IMAGE_CREATE_INFO info_;
+ std::vector<VK_BUFFER_IMAGE_COPY> regions_;
Pattern pattern_;
std::vector<uint8_t> pattern_solid_;
uint32_t ImageChecker::hash_salt_;
-ImageChecker::ImageChecker(const XGL_IMAGE_CREATE_INFO &info)
+ImageChecker::ImageChecker(const VK_IMAGE_CREATE_INFO &info)
: info_(info), regions_(), pattern_(HASH)
{
// create a region for every mip level in array slice 0
- XGL_GPU_SIZE offset = 0;
+ VK_GPU_SIZE offset = 0;
for (uint32_t lv = 0; lv < info_.mipLevels; lv++) {
- XGL_BUFFER_IMAGE_COPY region = {};
+ VK_BUFFER_IMAGE_COPY region = {};
region.bufferOffset = offset;
region.imageSubresource.mipLevel = lv;
region.imageSubresource.arraySlice = 0;
region.imageExtent = Image::extent(info_.extent, lv);
- if (info_.usage & XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT) {
- if (info_.format != XGL_FMT_S8_UINT) {
- region.imageSubresource.aspect = XGL_IMAGE_ASPECT_DEPTH;
+ if (info_.usage & VK_IMAGE_USAGE_DEPTH_STENCIL_BIT) {
+ if (info_.format != VK_FMT_S8_UINT) {
+ region.imageSubresource.aspect = VK_IMAGE_ASPECT_DEPTH;
regions_.push_back(region);
}
- if (info_.format == XGL_FMT_D16_UNORM_S8_UINT ||
- info_.format == XGL_FMT_D32_SFLOAT_S8_UINT ||
- info_.format == XGL_FMT_S8_UINT) {
- region.imageSubresource.aspect = XGL_IMAGE_ASPECT_STENCIL;
+ if (info_.format == VK_FMT_D16_UNORM_S8_UINT ||
+ info_.format == VK_FMT_D32_SFLOAT_S8_UINT ||
+ info_.format == VK_FMT_S8_UINT) {
+ region.imageSubresource.aspect = VK_IMAGE_ASPECT_STENCIL;
regions_.push_back(region);
}
} else {
- region.imageSubresource.aspect = XGL_IMAGE_ASPECT_COLOR;
+ region.imageSubresource.aspect = VK_IMAGE_ASPECT_COLOR;
regions_.push_back(region);
}
// arraySize should be limited in our tests. If this proves to be an
// issue, we can store only the regions for array slice 0 and be smart.
if (info_.arraySize > 1) {
- const XGL_GPU_SIZE slice_pitch = offset;
+ const VK_GPU_SIZE slice_pitch = offset;
const uint32_t slice_region_count = regions_.size();
regions_.reserve(slice_region_count * info_.arraySize);
for (uint32_t slice = 1; slice < info_.arraySize; slice++) {
for (uint32_t i = 0; i < slice_region_count; i++) {
- XGL_BUFFER_IMAGE_COPY region = regions_[i];
+ VK_BUFFER_IMAGE_COPY region = regions_[i];
region.bufferOffset += slice_pitch * slice;
region.imageSubresource.arraySlice = slice;
}
}
-ImageChecker::ImageChecker(const XGL_IMAGE_CREATE_INFO &info, const std::vector<XGL_IMAGE_SUBRESOURCE_RANGE> &ranges)
+ImageChecker::ImageChecker(const VK_IMAGE_CREATE_INFO &info, const std::vector<VK_IMAGE_SUBRESOURCE_RANGE> &ranges)
: info_(info), regions_(), pattern_(HASH)
{
- XGL_GPU_SIZE offset = 0;
- for (std::vector<XGL_IMAGE_SUBRESOURCE_RANGE>::const_iterator it = ranges.begin();
+ VK_GPU_SIZE offset = 0;
+ for (std::vector<VK_IMAGE_SUBRESOURCE_RANGE>::const_iterator it = ranges.begin();
it != ranges.end(); it++) {
for (uint32_t lv = 0; lv < it->mipLevels; lv++) {
for (uint32_t slice = 0; slice < it->arraySize; slice++) {
- XGL_BUFFER_IMAGE_COPY region = {};
+ VK_BUFFER_IMAGE_COPY region = {};
region.bufferOffset = offset;
region.imageSubresource = Image::subresource(*it, lv, slice);
region.imageExtent = Image::extent(info_.extent, lv);
return get_format_size(info_.format);
}
-XGL_SUBRESOURCE_LAYOUT ImageChecker::buffer_layout(const XGL_BUFFER_IMAGE_COPY &region) const
+VK_SUBRESOURCE_LAYOUT ImageChecker::buffer_layout(const VK_BUFFER_IMAGE_COPY &region) const
{
- XGL_SUBRESOURCE_LAYOUT layout = {};
+ VK_SUBRESOURCE_LAYOUT layout = {};
layout.offset = region.bufferOffset;
layout.rowPitch = buffer_cpp() * region.imageExtent.width;
layout.depthPitch = layout.rowPitch * region.imageExtent.height;
return layout;
}
-XGL_GPU_SIZE ImageChecker::buffer_size() const
+VK_GPU_SIZE ImageChecker::buffer_size() const
{
- XGL_GPU_SIZE size = 0;
+ VK_GPU_SIZE size = 0;
- for (std::vector<XGL_BUFFER_IMAGE_COPY>::const_iterator it = regions_.begin();
+ for (std::vector<VK_BUFFER_IMAGE_COPY>::const_iterator it = regions_.begin();
it != regions_.end(); it++) {
- const XGL_SUBRESOURCE_LAYOUT layout = buffer_layout(*it);
+ const VK_SUBRESOURCE_LAYOUT layout = buffer_layout(*it);
if (size < layout.offset + layout.size)
size = layout.offset + layout.size;
}
return size;
}
-bool ImageChecker::walk_region(Action action, const XGL_BUFFER_IMAGE_COPY &region,
- const XGL_SUBRESOURCE_LAYOUT &layout, void *data) const
+bool ImageChecker::walk_region(Action action, const VK_BUFFER_IMAGE_COPY &region,
+ const VK_SUBRESOURCE_LAYOUT &layout, void *data) const
{
for (int32_t z = 0; z < region.imageExtent.depth; z++) {
for (int32_t y = 0; y < region.imageExtent.height; y++) {
dst += layout.offset + layout.depthPitch * z +
layout.rowPitch * y + buffer_cpp() * x;
- XGL_OFFSET3D offset = region.imageOffset;
+ VK_OFFSET3D offset = region.imageOffset;
offset.x += x;
offset.y += y;
offset.z += z;
if (!data)
return false;
- std::vector<XGL_BUFFER_IMAGE_COPY>::const_iterator it;
+ std::vector<VK_BUFFER_IMAGE_COPY>::const_iterator it;
for (it = regions_.begin(); it != regions_.end(); it++) {
if (!walk_region(action, *it, buffer_layout(*it), data))
break;
if (!data)
return false;
- std::vector<XGL_BUFFER_IMAGE_COPY>::const_iterator it;
+ std::vector<VK_BUFFER_IMAGE_COPY>::const_iterator it;
for (it = regions_.begin(); it != regions_.end(); it++) {
if (!walk_region(action, *it, img.subresource_layout(it->imageSubresource), data))
break;
return (it == regions_.end());
}
-std::vector<uint8_t> ImageChecker::pattern_hash(const XGL_IMAGE_SUBRESOURCE &subres, const XGL_OFFSET3D &offset) const
+std::vector<uint8_t> ImageChecker::pattern_hash(const VK_IMAGE_SUBRESOURCE &subres, const VK_OFFSET3D &offset) const
{
#define HASH_BYTE(val, b) static_cast<uint8_t>((static_cast<uint32_t>(val) >> (b * 8)) & 0xff)
#define HASH_BYTES(val) HASH_BYTE(val, 0), HASH_BYTE(val, 1), HASH_BYTE(val, 2), HASH_BYTE(val, 3)
return val;
}
-size_t get_format_size(XGL_FORMAT format)
+size_t get_format_size(VK_FORMAT format)
{
static const struct format_info {
size_t size;
uint32_t channel_count;
- } format_table[XGL_NUM_FMT] = {
- [XGL_FMT_UNDEFINED] = { 0, 0 },
- [XGL_FMT_R4G4_UNORM] = { 1, 2 },
- [XGL_FMT_R4G4_USCALED] = { 1, 2 },
- [XGL_FMT_R4G4B4A4_UNORM] = { 2, 4 },
- [XGL_FMT_R4G4B4A4_USCALED] = { 2, 4 },
- [XGL_FMT_R5G6B5_UNORM] = { 2, 3 },
- [XGL_FMT_R5G6B5_USCALED] = { 2, 3 },
- [XGL_FMT_R5G5B5A1_UNORM] = { 2, 4 },
- [XGL_FMT_R5G5B5A1_USCALED] = { 2, 4 },
- [XGL_FMT_R8_UNORM] = { 1, 1 },
- [XGL_FMT_R8_SNORM] = { 1, 1 },
- [XGL_FMT_R8_USCALED] = { 1, 1 },
- [XGL_FMT_R8_SSCALED] = { 1, 1 },
- [XGL_FMT_R8_UINT] = { 1, 1 },
- [XGL_FMT_R8_SINT] = { 1, 1 },
- [XGL_FMT_R8_SRGB] = { 1, 1 },
- [XGL_FMT_R8G8_UNORM] = { 2, 2 },
- [XGL_FMT_R8G8_SNORM] = { 2, 2 },
- [XGL_FMT_R8G8_USCALED] = { 2, 2 },
- [XGL_FMT_R8G8_SSCALED] = { 2, 2 },
- [XGL_FMT_R8G8_UINT] = { 2, 2 },
- [XGL_FMT_R8G8_SINT] = { 2, 2 },
- [XGL_FMT_R8G8_SRGB] = { 2, 2 },
- [XGL_FMT_R8G8B8_UNORM] = { 3, 3 },
- [XGL_FMT_R8G8B8_SNORM] = { 3, 3 },
- [XGL_FMT_R8G8B8_USCALED] = { 3, 3 },
- [XGL_FMT_R8G8B8_SSCALED] = { 3, 3 },
- [XGL_FMT_R8G8B8_UINT] = { 3, 3 },
- [XGL_FMT_R8G8B8_SINT] = { 3, 3 },
- [XGL_FMT_R8G8B8_SRGB] = { 3, 3 },
- [XGL_FMT_R8G8B8A8_UNORM] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_SNORM] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_USCALED] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_SSCALED] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_UINT] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_SINT] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_SRGB] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_UNORM] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_SNORM] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_USCALED] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_SSCALED] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_UINT] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_SINT] = { 4, 4 },
- [XGL_FMT_R16_UNORM] = { 2, 1 },
- [XGL_FMT_R16_SNORM] = { 2, 1 },
- [XGL_FMT_R16_USCALED] = { 2, 1 },
- [XGL_FMT_R16_SSCALED] = { 2, 1 },
- [XGL_FMT_R16_UINT] = { 2, 1 },
- [XGL_FMT_R16_SINT] = { 2, 1 },
- [XGL_FMT_R16_SFLOAT] = { 2, 1 },
- [XGL_FMT_R16G16_UNORM] = { 4, 2 },
- [XGL_FMT_R16G16_SNORM] = { 4, 2 },
- [XGL_FMT_R16G16_USCALED] = { 4, 2 },
- [XGL_FMT_R16G16_SSCALED] = { 4, 2 },
- [XGL_FMT_R16G16_UINT] = { 4, 2 },
- [XGL_FMT_R16G16_SINT] = { 4, 2 },
- [XGL_FMT_R16G16_SFLOAT] = { 4, 2 },
- [XGL_FMT_R16G16B16_UNORM] = { 6, 3 },
- [XGL_FMT_R16G16B16_SNORM] = { 6, 3 },
- [XGL_FMT_R16G16B16_USCALED] = { 6, 3 },
- [XGL_FMT_R16G16B16_SSCALED] = { 6, 3 },
- [XGL_FMT_R16G16B16_UINT] = { 6, 3 },
- [XGL_FMT_R16G16B16_SINT] = { 6, 3 },
- [XGL_FMT_R16G16B16_SFLOAT] = { 6, 3 },
- [XGL_FMT_R16G16B16A16_UNORM] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_SNORM] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_USCALED] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_SSCALED] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_UINT] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_SINT] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_SFLOAT] = { 8, 4 },
- [XGL_FMT_R32_UINT] = { 4, 1 },
- [XGL_FMT_R32_SINT] = { 4, 1 },
- [XGL_FMT_R32_SFLOAT] = { 4, 1 },
- [XGL_FMT_R32G32_UINT] = { 8, 2 },
- [XGL_FMT_R32G32_SINT] = { 8, 2 },
- [XGL_FMT_R32G32_SFLOAT] = { 8, 2 },
- [XGL_FMT_R32G32B32_UINT] = { 12, 3 },
- [XGL_FMT_R32G32B32_SINT] = { 12, 3 },
- [XGL_FMT_R32G32B32_SFLOAT] = { 12, 3 },
- [XGL_FMT_R32G32B32A32_UINT] = { 16, 4 },
- [XGL_FMT_R32G32B32A32_SINT] = { 16, 4 },
- [XGL_FMT_R32G32B32A32_SFLOAT] = { 16, 4 },
- [XGL_FMT_R64_SFLOAT] = { 8, 1 },
- [XGL_FMT_R64G64_SFLOAT] = { 16, 2 },
- [XGL_FMT_R64G64B64_SFLOAT] = { 24, 3 },
- [XGL_FMT_R64G64B64A64_SFLOAT] = { 32, 4 },
- [XGL_FMT_R11G11B10_UFLOAT] = { 4, 3 },
- [XGL_FMT_R9G9B9E5_UFLOAT] = { 4, 3 },
- [XGL_FMT_D16_UNORM] = { 2, 1 },
- [XGL_FMT_D24_UNORM] = { 3, 1 },
- [XGL_FMT_D32_SFLOAT] = { 4, 1 },
- [XGL_FMT_S8_UINT] = { 1, 1 },
- [XGL_FMT_D16_UNORM_S8_UINT] = { 3, 2 },
- [XGL_FMT_D24_UNORM_S8_UINT] = { 4, 2 },
- [XGL_FMT_D32_SFLOAT_S8_UINT] = { 4, 2 },
- [XGL_FMT_BC1_RGB_UNORM] = { 8, 4 },
- [XGL_FMT_BC1_RGB_SRGB] = { 8, 4 },
- [XGL_FMT_BC1_RGBA_UNORM] = { 8, 4 },
- [XGL_FMT_BC1_RGBA_SRGB] = { 8, 4 },
- [XGL_FMT_BC2_UNORM] = { 16, 4 },
- [XGL_FMT_BC2_SRGB] = { 16, 4 },
- [XGL_FMT_BC3_UNORM] = { 16, 4 },
- [XGL_FMT_BC3_SRGB] = { 16, 4 },
- [XGL_FMT_BC4_UNORM] = { 8, 4 },
- [XGL_FMT_BC4_SNORM] = { 8, 4 },
- [XGL_FMT_BC5_UNORM] = { 16, 4 },
- [XGL_FMT_BC5_SNORM] = { 16, 4 },
- [XGL_FMT_BC6H_UFLOAT] = { 16, 4 },
- [XGL_FMT_BC6H_SFLOAT] = { 16, 4 },
- [XGL_FMT_BC7_UNORM] = { 16, 4 },
- [XGL_FMT_BC7_SRGB] = { 16, 4 },
+ } format_table[VK_NUM_FMT] = {
+ [VK_FMT_UNDEFINED] = { 0, 0 },
+ [VK_FMT_R4G4_UNORM] = { 1, 2 },
+ [VK_FMT_R4G4_USCALED] = { 1, 2 },
+ [VK_FMT_R4G4B4A4_UNORM] = { 2, 4 },
+ [VK_FMT_R4G4B4A4_USCALED] = { 2, 4 },
+ [VK_FMT_R5G6B5_UNORM] = { 2, 3 },
+ [VK_FMT_R5G6B5_USCALED] = { 2, 3 },
+ [VK_FMT_R5G5B5A1_UNORM] = { 2, 4 },
+ [VK_FMT_R5G5B5A1_USCALED] = { 2, 4 },
+ [VK_FMT_R8_UNORM] = { 1, 1 },
+ [VK_FMT_R8_SNORM] = { 1, 1 },
+ [VK_FMT_R8_USCALED] = { 1, 1 },
+ [VK_FMT_R8_SSCALED] = { 1, 1 },
+ [VK_FMT_R8_UINT] = { 1, 1 },
+ [VK_FMT_R8_SINT] = { 1, 1 },
+ [VK_FMT_R8_SRGB] = { 1, 1 },
+ [VK_FMT_R8G8_UNORM] = { 2, 2 },
+ [VK_FMT_R8G8_SNORM] = { 2, 2 },
+ [VK_FMT_R8G8_USCALED] = { 2, 2 },
+ [VK_FMT_R8G8_SSCALED] = { 2, 2 },
+ [VK_FMT_R8G8_UINT] = { 2, 2 },
+ [VK_FMT_R8G8_SINT] = { 2, 2 },
+ [VK_FMT_R8G8_SRGB] = { 2, 2 },
+ [VK_FMT_R8G8B8_UNORM] = { 3, 3 },
+ [VK_FMT_R8G8B8_SNORM] = { 3, 3 },
+ [VK_FMT_R8G8B8_USCALED] = { 3, 3 },
+ [VK_FMT_R8G8B8_SSCALED] = { 3, 3 },
+ [VK_FMT_R8G8B8_UINT] = { 3, 3 },
+ [VK_FMT_R8G8B8_SINT] = { 3, 3 },
+ [VK_FMT_R8G8B8_SRGB] = { 3, 3 },
+ [VK_FMT_R8G8B8A8_UNORM] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_SNORM] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_USCALED] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_SSCALED] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_UINT] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_SINT] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_SRGB] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_UNORM] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_SNORM] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_USCALED] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_SSCALED] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_UINT] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_SINT] = { 4, 4 },
+ [VK_FMT_R16_UNORM] = { 2, 1 },
+ [VK_FMT_R16_SNORM] = { 2, 1 },
+ [VK_FMT_R16_USCALED] = { 2, 1 },
+ [VK_FMT_R16_SSCALED] = { 2, 1 },
+ [VK_FMT_R16_UINT] = { 2, 1 },
+ [VK_FMT_R16_SINT] = { 2, 1 },
+ [VK_FMT_R16_SFLOAT] = { 2, 1 },
+ [VK_FMT_R16G16_UNORM] = { 4, 2 },
+ [VK_FMT_R16G16_SNORM] = { 4, 2 },
+ [VK_FMT_R16G16_USCALED] = { 4, 2 },
+ [VK_FMT_R16G16_SSCALED] = { 4, 2 },
+ [VK_FMT_R16G16_UINT] = { 4, 2 },
+ [VK_FMT_R16G16_SINT] = { 4, 2 },
+ [VK_FMT_R16G16_SFLOAT] = { 4, 2 },
+ [VK_FMT_R16G16B16_UNORM] = { 6, 3 },
+ [VK_FMT_R16G16B16_SNORM] = { 6, 3 },
+ [VK_FMT_R16G16B16_USCALED] = { 6, 3 },
+ [VK_FMT_R16G16B16_SSCALED] = { 6, 3 },
+ [VK_FMT_R16G16B16_UINT] = { 6, 3 },
+ [VK_FMT_R16G16B16_SINT] = { 6, 3 },
+ [VK_FMT_R16G16B16_SFLOAT] = { 6, 3 },
+ [VK_FMT_R16G16B16A16_UNORM] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_SNORM] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_USCALED] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_SSCALED] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_UINT] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_SINT] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_SFLOAT] = { 8, 4 },
+ [VK_FMT_R32_UINT] = { 4, 1 },
+ [VK_FMT_R32_SINT] = { 4, 1 },
+ [VK_FMT_R32_SFLOAT] = { 4, 1 },
+ [VK_FMT_R32G32_UINT] = { 8, 2 },
+ [VK_FMT_R32G32_SINT] = { 8, 2 },
+ [VK_FMT_R32G32_SFLOAT] = { 8, 2 },
+ [VK_FMT_R32G32B32_UINT] = { 12, 3 },
+ [VK_FMT_R32G32B32_SINT] = { 12, 3 },
+ [VK_FMT_R32G32B32_SFLOAT] = { 12, 3 },
+ [VK_FMT_R32G32B32A32_UINT] = { 16, 4 },
+ [VK_FMT_R32G32B32A32_SINT] = { 16, 4 },
+ [VK_FMT_R32G32B32A32_SFLOAT] = { 16, 4 },
+ [VK_FMT_R64_SFLOAT] = { 8, 1 },
+ [VK_FMT_R64G64_SFLOAT] = { 16, 2 },
+ [VK_FMT_R64G64B64_SFLOAT] = { 24, 3 },
+ [VK_FMT_R64G64B64A64_SFLOAT] = { 32, 4 },
+ [VK_FMT_R11G11B10_UFLOAT] = { 4, 3 },
+ [VK_FMT_R9G9B9E5_UFLOAT] = { 4, 3 },
+ [VK_FMT_D16_UNORM] = { 2, 1 },
+ [VK_FMT_D24_UNORM] = { 3, 1 },
+ [VK_FMT_D32_SFLOAT] = { 4, 1 },
+ [VK_FMT_S8_UINT] = { 1, 1 },
+ [VK_FMT_D16_UNORM_S8_UINT] = { 3, 2 },
+ [VK_FMT_D24_UNORM_S8_UINT] = { 4, 2 },
+ [VK_FMT_D32_SFLOAT_S8_UINT] = { 4, 2 },
+ [VK_FMT_BC1_RGB_UNORM] = { 8, 4 },
+ [VK_FMT_BC1_RGB_SRGB] = { 8, 4 },
+ [VK_FMT_BC1_RGBA_UNORM] = { 8, 4 },
+ [VK_FMT_BC1_RGBA_SRGB] = { 8, 4 },
+ [VK_FMT_BC2_UNORM] = { 16, 4 },
+ [VK_FMT_BC2_SRGB] = { 16, 4 },
+ [VK_FMT_BC3_UNORM] = { 16, 4 },
+ [VK_FMT_BC3_SRGB] = { 16, 4 },
+ [VK_FMT_BC4_UNORM] = { 8, 4 },
+ [VK_FMT_BC4_SNORM] = { 8, 4 },
+ [VK_FMT_BC5_UNORM] = { 16, 4 },
+ [VK_FMT_BC5_SNORM] = { 16, 4 },
+ [VK_FMT_BC6H_UFLOAT] = { 16, 4 },
+ [VK_FMT_BC6H_SFLOAT] = { 16, 4 },
+ [VK_FMT_BC7_UNORM] = { 16, 4 },
+ [VK_FMT_BC7_SRGB] = { 16, 4 },
// TODO: Initialize remaining compressed formats.
- [XGL_FMT_ETC2_R8G8B8_UNORM] = { 0, 0 },
- [XGL_FMT_ETC2_R8G8B8_SRGB] = { 0, 0 },
- [XGL_FMT_ETC2_R8G8B8A1_UNORM] = { 0, 0 },
- [XGL_FMT_ETC2_R8G8B8A1_SRGB] = { 0, 0 },
- [XGL_FMT_ETC2_R8G8B8A8_UNORM] = { 0, 0 },
- [XGL_FMT_ETC2_R8G8B8A8_SRGB] = { 0, 0 },
- [XGL_FMT_EAC_R11_UNORM] = { 0, 0 },
- [XGL_FMT_EAC_R11_SNORM] = { 0, 0 },
- [XGL_FMT_EAC_R11G11_UNORM] = { 0, 0 },
- [XGL_FMT_EAC_R11G11_SNORM] = { 0, 0 },
- [XGL_FMT_ASTC_4x4_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_4x4_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_5x4_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_5x4_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_5x5_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_5x5_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_6x5_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_6x5_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_6x6_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_6x6_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_8x5_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_8x5_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_8x6_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_8x6_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_8x8_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_8x8_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_10x5_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_10x5_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_10x6_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_10x6_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_10x8_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_10x8_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_10x10_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_10x10_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_12x10_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_12x10_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_12x12_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_12x12_SRGB] = { 0, 0 },
- [XGL_FMT_B4G4R4A4_UNORM] = { 2, 4 },
- [XGL_FMT_B5G5R5A1_UNORM] = { 2, 4 },
- [XGL_FMT_B5G6R5_UNORM] = { 2, 3 },
- [XGL_FMT_B5G6R5_USCALED] = { 2, 3 },
- [XGL_FMT_B8G8R8_UNORM] = { 3, 3 },
- [XGL_FMT_B8G8R8_SNORM] = { 3, 3 },
- [XGL_FMT_B8G8R8_USCALED] = { 3, 3 },
- [XGL_FMT_B8G8R8_SSCALED] = { 3, 3 },
- [XGL_FMT_B8G8R8_UINT] = { 3, 3 },
- [XGL_FMT_B8G8R8_SINT] = { 3, 3 },
- [XGL_FMT_B8G8R8_SRGB] = { 3, 3 },
- [XGL_FMT_B8G8R8A8_UNORM] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_SNORM] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_USCALED] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_SSCALED] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_UINT] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_SINT] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_SRGB] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_UNORM] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_SNORM] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_USCALED] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_SSCALED] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_UINT] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_SINT] = { 4, 4 },
+ [VK_FMT_ETC2_R8G8B8_UNORM] = { 0, 0 },
+ [VK_FMT_ETC2_R8G8B8_SRGB] = { 0, 0 },
+ [VK_FMT_ETC2_R8G8B8A1_UNORM] = { 0, 0 },
+ [VK_FMT_ETC2_R8G8B8A1_SRGB] = { 0, 0 },
+ [VK_FMT_ETC2_R8G8B8A8_UNORM] = { 0, 0 },
+ [VK_FMT_ETC2_R8G8B8A8_SRGB] = { 0, 0 },
+ [VK_FMT_EAC_R11_UNORM] = { 0, 0 },
+ [VK_FMT_EAC_R11_SNORM] = { 0, 0 },
+ [VK_FMT_EAC_R11G11_UNORM] = { 0, 0 },
+ [VK_FMT_EAC_R11G11_SNORM] = { 0, 0 },
+ [VK_FMT_ASTC_4x4_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_4x4_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_5x4_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_5x4_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_5x5_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_5x5_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_6x5_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_6x5_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_6x6_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_6x6_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_8x5_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_8x5_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_8x6_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_8x6_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_8x8_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_8x8_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_10x5_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_10x5_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_10x6_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_10x6_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_10x8_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_10x8_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_10x10_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_10x10_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_12x10_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_12x10_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_12x12_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_12x12_SRGB] = { 0, 0 },
+ [VK_FMT_B4G4R4A4_UNORM] = { 2, 4 },
+ [VK_FMT_B5G5R5A1_UNORM] = { 2, 4 },
+ [VK_FMT_B5G6R5_UNORM] = { 2, 3 },
+ [VK_FMT_B5G6R5_USCALED] = { 2, 3 },
+ [VK_FMT_B8G8R8_UNORM] = { 3, 3 },
+ [VK_FMT_B8G8R8_SNORM] = { 3, 3 },
+ [VK_FMT_B8G8R8_USCALED] = { 3, 3 },
+ [VK_FMT_B8G8R8_SSCALED] = { 3, 3 },
+ [VK_FMT_B8G8R8_UINT] = { 3, 3 },
+ [VK_FMT_B8G8R8_SINT] = { 3, 3 },
+ [VK_FMT_B8G8R8_SRGB] = { 3, 3 },
+ [VK_FMT_B8G8R8A8_UNORM] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_SNORM] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_USCALED] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_SSCALED] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_UINT] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_SINT] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_SRGB] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_UNORM] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_SNORM] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_USCALED] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_SSCALED] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_UINT] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_SINT] = { 4, 4 },
};
return format_table[format].size;
}
-XGL_EXTENT3D get_mip_level_extent(const XGL_EXTENT3D &extent, uint32_t mip_level)
+VK_EXTENT3D get_mip_level_extent(const VK_EXTENT3D &extent, uint32_t mip_level)
{
- const XGL_EXTENT3D ext = {
+ const VK_EXTENT3D ext = {
(extent.width >> mip_level) ? extent.width >> mip_level : 1,
(extent.height >> mip_level) ? extent.height >> mip_level : 1,
(extent.depth >> mip_level) ? extent.depth >> mip_level : 1,
return ext;
}
-}; // namespace xgl_testing
+}; // namespace vk_testing
namespace {
#define DO(action) ASSERT_EQ(true, action);
-static xgl_testing::Environment *environment;
+static vk_testing::Environment *environment;
class XglCmdBlitTest : public ::testing::Test {
protected:
XglCmdBlitTest() :
dev_(environment->default_device()),
queue_(*dev_.graphics_queues()[0]),
- cmd_(dev_, xgl_testing::CmdBuffer::create_info(dev_.graphics_queue_node_index_))
+ cmd_(dev_, vk_testing::CmdBuffer::create_info(dev_.graphics_queue_node_index_))
{
// make sure every test uses a different pattern
- xgl_testing::ImageChecker::hash_salt_generate();
+ vk_testing::ImageChecker::hash_salt_generate();
}
bool submit_and_done()
return true;
}
- void add_memory_ref(const xgl_testing::Object &obj)
+ void add_memory_ref(const vk_testing::Object &obj)
{
- const std::vector<XGL_GPU_MEMORY> mems = obj.memories();
- for (std::vector<XGL_GPU_MEMORY>::const_iterator it = mems.begin(); it != mems.end(); it++) {
- std::vector<XGL_GPU_MEMORY>::iterator ref;
+ const std::vector<VK_GPU_MEMORY> mems = obj.memories();
+ for (std::vector<VK_GPU_MEMORY>::const_iterator it = mems.begin(); it != mems.end(); it++) {
+ std::vector<VK_GPU_MEMORY>::iterator ref;
for (ref = mem_refs_.begin(); ref != mem_refs_.end(); ref++) {
if (*ref == *it)
break;
}
}
- xgl_testing::Device &dev_;
- xgl_testing::Queue &queue_;
- xgl_testing::CmdBuffer cmd_;
+ vk_testing::Device &dev_;
+ vk_testing::Queue &queue_;
+ vk_testing::CmdBuffer cmd_;
- std::vector<XGL_GPU_MEMORY> mem_refs_;
+ std::vector<VK_GPU_MEMORY> mem_refs_;
};
typedef XglCmdBlitTest XglCmdFillBufferTest;
TEST_F(XglCmdFillBufferTest, Basic)
{
- xgl_testing::Buffer buf;
+ vk_testing::Buffer buf;
buf.init(dev_, 20);
add_memory_ref(buf);
cmd_.begin();
- xglCmdFillBuffer(cmd_.obj(), buf.obj(), 0, 4, 0x11111111);
- xglCmdFillBuffer(cmd_.obj(), buf.obj(), 4, 16, 0x22222222);
+ vkCmdFillBuffer(cmd_.obj(), buf.obj(), 0, 4, 0x11111111);
+ vkCmdFillBuffer(cmd_.obj(), buf.obj(), 4, 16, 0x22222222);
cmd_.end();
submit_and_done();
TEST_F(XglCmdFillBufferTest, Large)
{
- const XGL_GPU_SIZE size = 32 * 1024 * 1024;
- xgl_testing::Buffer buf;
+ const VK_GPU_SIZE size = 32 * 1024 * 1024;
+ vk_testing::Buffer buf;
buf.init(dev_, size);
add_memory_ref(buf);
cmd_.begin();
- xglCmdFillBuffer(cmd_.obj(), buf.obj(), 0, size / 2, 0x11111111);
- xglCmdFillBuffer(cmd_.obj(), buf.obj(), size / 2, size / 2, 0x22222222);
+ vkCmdFillBuffer(cmd_.obj(), buf.obj(), 0, size / 2, 0x11111111);
+ vkCmdFillBuffer(cmd_.obj(), buf.obj(), size / 2, size / 2, 0x22222222);
cmd_.end();
submit_and_done();
const uint32_t *data = static_cast<const uint32_t *>(buf.map());
- XGL_GPU_SIZE offset;
+ VK_GPU_SIZE offset;
for (offset = 0; offset < size / 2; offset += 4)
EXPECT_EQ(0x11111111, data[offset / 4]) << "Offset is: " << offset;
for (; offset < size; offset += 4)
TEST_F(XglCmdFillBufferTest, Overlap)
{
- xgl_testing::Buffer buf;
+ vk_testing::Buffer buf;
buf.init(dev_, 64);
add_memory_ref(buf);
cmd_.begin();
- xglCmdFillBuffer(cmd_.obj(), buf.obj(), 0, 48, 0x11111111);
- xglCmdFillBuffer(cmd_.obj(), buf.obj(), 32, 32, 0x22222222);
+ vkCmdFillBuffer(cmd_.obj(), buf.obj(), 0, 48, 0x11111111);
+ vkCmdFillBuffer(cmd_.obj(), buf.obj(), 32, 32, 0x22222222);
cmd_.end();
submit_and_done();
const uint32_t *data = static_cast<const uint32_t *>(buf.map());
- XGL_GPU_SIZE offset;
+ VK_GPU_SIZE offset;
for (offset = 0; offset < 32; offset += 4)
EXPECT_EQ(0x11111111, data[offset / 4]) << "Offset is: " << offset;
for (; offset < 64; offset += 4)
TEST_F(XglCmdFillBufferTest, MultiAlignments)
{
- xgl_testing::Buffer bufs[9];
- XGL_GPU_SIZE size = 4;
+ vk_testing::Buffer bufs[9];
+ VK_GPU_SIZE size = 4;
cmd_.begin();
for (int i = 0; i < ARRAY_SIZE(bufs); i++) {
bufs[i].init(dev_, size);
add_memory_ref(bufs[i]);
- xglCmdFillBuffer(cmd_.obj(), bufs[i].obj(), 0, size, 0x11111111);
+ vkCmdFillBuffer(cmd_.obj(), bufs[i].obj(), 0, size, 0x11111111);
size <<= 1;
}
cmd_.end();
size = 4;
for (int i = 0; i < ARRAY_SIZE(bufs); i++) {
const uint32_t *data = static_cast<const uint32_t *>(bufs[i].map());
- XGL_GPU_SIZE offset;
+ VK_GPU_SIZE offset;
for (offset = 0; offset < size; offset += 4)
EXPECT_EQ(0x11111111, data[offset / 4]) << "Buffser is: " << i << "\n" <<
"Offset is: " << offset;
TEST_F(XglCmdCopyBufferTest, Basic)
{
- xgl_testing::Buffer src, dst;
+ vk_testing::Buffer src, dst;
src.init(dev_, 4);
uint32_t *data = static_cast<uint32_t *>(src.map());
add_memory_ref(dst);
cmd_.begin();
- XGL_BUFFER_COPY region = {};
+ VK_BUFFER_COPY region = {};
region.copySize = 4;
-    xglCmdCopyBuffer(cmd_.obj(), src.obj(), dst.obj(), 1, &region);
+    vkCmdCopyBuffer(cmd_.obj(), src.obj(), dst.obj(), 1, &region);
cmd_.end();
submit_and_done();
TEST_F(XglCmdCopyBufferTest, Large)
{
- const XGL_GPU_SIZE size = 32 * 1024 * 1024;
- xgl_testing::Buffer src, dst;
+ const VK_GPU_SIZE size = 32 * 1024 * 1024;
+ vk_testing::Buffer src, dst;
src.init(dev_, size);
uint32_t *data = static_cast<uint32_t *>(src.map());
- XGL_GPU_SIZE offset;
+ VK_GPU_SIZE offset;
for (offset = 0; offset < size; offset += 4)
data[offset / 4] = offset;
src.unmap();
add_memory_ref(dst);
cmd_.begin();
- XGL_BUFFER_COPY region = {};
+ VK_BUFFER_COPY region = {};
region.copySize = size;
-    xglCmdCopyBuffer(cmd_.obj(), src.obj(), dst.obj(), 1, &region);
+    vkCmdCopyBuffer(cmd_.obj(), src.obj(), dst.obj(), 1, &region);
cmd_.end();
submit_and_done();
TEST_F(XglCmdCopyBufferTest, MultiAlignments)
{
- const XGL_BUFFER_COPY regions[] = {
+ const VK_BUFFER_COPY regions[] = {
/* well aligned */
{ 0, 0, 256 },
{ 0, 256, 128 },
{ 45, 570, 15 },
{ 50, 590, 1 },
};
- xgl_testing::Buffer src, dst;
+ vk_testing::Buffer src, dst;
src.init(dev_, 256);
uint8_t *data = static_cast<uint8_t *>(src.map());
add_memory_ref(dst);
cmd_.begin();
- xglCmdCopyBuffer(cmd_.obj(), src.obj(), dst.obj(), ARRAY_SIZE(regions), regions);
+ vkCmdCopyBuffer(cmd_.obj(), src.obj(), dst.obj(), ARRAY_SIZE(regions), regions);
cmd_.end();
submit_and_done();
data = static_cast<uint8_t *>(dst.map());
for (int i = 0; i < ARRAY_SIZE(regions); i++) {
- const XGL_BUFFER_COPY &r = regions[i];
+ const VK_BUFFER_COPY &r = regions[i];
for (int j = 0; j < r.copySize; j++) {
EXPECT_EQ(r.srcOffset + j, data[r.destOffset + j]) <<
TEST_F(XglCmdCopyBufferTest, RAWHazard)
{
- xgl_testing::Buffer bufs[3];
- XGL_EVENT_CREATE_INFO event_info;
- XGL_EVENT event;
- XGL_MEMORY_REQUIREMENTS mem_req;
+ vk_testing::Buffer bufs[3];
+ VK_EVENT_CREATE_INFO event_info;
+ VK_EVENT event;
+ VK_MEMORY_REQUIREMENTS mem_req;
size_t data_size = sizeof(mem_req);
- XGL_RESULT err;
+ VK_RESULT err;
- // typedef struct _XGL_EVENT_CREATE_INFO
+ // typedef struct _VK_EVENT_CREATE_INFO
// {
- // XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_EVENT_CREATE_INFO
+ // VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_EVENT_CREATE_INFO
// const void* pNext; // Pointer to next structure
- // XGL_FLAGS flags; // Reserved
- // } XGL_EVENT_CREATE_INFO;
+ // VK_FLAGS flags; // Reserved
+ // } VK_EVENT_CREATE_INFO;
memset(&event_info, 0, sizeof(event_info));
- event_info.sType = XGL_STRUCTURE_TYPE_EVENT_CREATE_INFO;
+ event_info.sType = VK_STRUCTURE_TYPE_EVENT_CREATE_INFO;
- err = xglCreateEvent(dev_.obj(), &event_info, &event);
- ASSERT_XGL_SUCCESS(err);
+ err = vkCreateEvent(dev_.obj(), &event_info, &event);
+ ASSERT_VK_SUCCESS(err);
- err = xglGetObjectInfo(event, XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(event, VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&data_size, &mem_req);
- ASSERT_XGL_SUCCESS(err);
+ ASSERT_VK_SUCCESS(err);
- // XGL_RESULT XGLAPI xglAllocMemory(
- // XGL_DEVICE device,
- // const XGL_MEMORY_ALLOC_INFO* pAllocInfo,
- // XGL_GPU_MEMORY* pMem);
- XGL_MEMORY_ALLOC_INFO mem_info;
- XGL_GPU_MEMORY event_mem;
+ // VK_RESULT VKAPI vkAllocMemory(
+ // VK_DEVICE device,
+ // const VK_MEMORY_ALLOC_INFO* pAllocInfo,
+ // VK_GPU_MEMORY* pMem);
+ VK_MEMORY_ALLOC_INFO mem_info;
+ VK_GPU_MEMORY event_mem;
- ASSERT_NE(0, mem_req.size) << "xglGetObjectInfo (Event): Failed - expect events to require memory";
+ ASSERT_NE(0, mem_req.size) << "vkGetObjectInfo (Event): Failed - expect events to require memory";
memset(&mem_info, 0, sizeof(mem_info));
- mem_info.sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
+ mem_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
mem_info.allocationSize = mem_req.size;
mem_info.memType = mem_req.memType;
- mem_info.memPriority = XGL_MEMORY_PRIORITY_NORMAL;
- mem_info.memProps = XGL_MEMORY_PROPERTY_SHAREABLE_BIT;
- err = xglAllocMemory(dev_.obj(), &mem_info, &event_mem);
- ASSERT_XGL_SUCCESS(err);
+ mem_info.memPriority = VK_MEMORY_PRIORITY_NORMAL;
+ mem_info.memProps = VK_MEMORY_PROPERTY_SHAREABLE_BIT;
+ err = vkAllocMemory(dev_.obj(), &mem_info, &event_mem);
+ ASSERT_VK_SUCCESS(err);
- err = xglBindObjectMemory(event, 0, event_mem, 0);
- ASSERT_XGL_SUCCESS(err);
+ err = vkBindObjectMemory(event, 0, event_mem, 0);
+ ASSERT_VK_SUCCESS(err);
- err = xglResetEvent(event);
- ASSERT_XGL_SUCCESS(err);
+ err = vkResetEvent(event);
+ ASSERT_VK_SUCCESS(err);
for (int i = 0; i < ARRAY_SIZE(bufs); i++) {
bufs[i].init(dev_, 4);
cmd_.begin();
- xglCmdFillBuffer(cmd_.obj(), bufs[0].obj(), 0, 4, 0x11111111);
+ vkCmdFillBuffer(cmd_.obj(), bufs[0].obj(), 0, 4, 0x11111111);
// is this necessary?
- XGL_BUFFER_MEMORY_BARRIER memory_barrier = bufs[0].buffer_memory_barrier(
- XGL_MEMORY_OUTPUT_COPY_BIT, XGL_MEMORY_INPUT_COPY_BIT, 0, 4);
- XGL_BUFFER_MEMORY_BARRIER *pmemory_barrier = &memory_barrier;
+ VK_BUFFER_MEMORY_BARRIER memory_barrier = bufs[0].buffer_memory_barrier(
+ VK_MEMORY_OUTPUT_COPY_BIT, VK_MEMORY_INPUT_COPY_BIT, 0, 4);
+ VK_BUFFER_MEMORY_BARRIER *pmemory_barrier = &memory_barrier;
- XGL_PIPE_EVENT set_events[] = { XGL_PIPE_EVENT_TRANSFER_COMPLETE };
- XGL_PIPELINE_BARRIER pipeline_barrier = {};
- pipeline_barrier.sType = XGL_STRUCTURE_TYPE_PIPELINE_BARRIER;
+ VK_PIPE_EVENT set_events[] = { VK_PIPE_EVENT_TRANSFER_COMPLETE };
+ VK_PIPELINE_BARRIER pipeline_barrier = {};
+ pipeline_barrier.sType = VK_STRUCTURE_TYPE_PIPELINE_BARRIER;
pipeline_barrier.eventCount = 1;
pipeline_barrier.pEvents = set_events;
- pipeline_barrier.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ pipeline_barrier.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
pipeline_barrier.memBarrierCount = 1;
pipeline_barrier.ppMemBarriers = (const void **)&pmemory_barrier;
- xglCmdPipelineBarrier(cmd_.obj(), &pipeline_barrier);
+ vkCmdPipelineBarrier(cmd_.obj(), &pipeline_barrier);
- XGL_BUFFER_COPY region = {};
+ VK_BUFFER_COPY region = {};
region.copySize = 4;
-    xglCmdCopyBuffer(cmd_.obj(), bufs[0].obj(), bufs[1].obj(), 1, &region);
+    vkCmdCopyBuffer(cmd_.obj(), bufs[0].obj(), bufs[1].obj(), 1, &region);
memory_barrier = bufs[1].buffer_memory_barrier(
- XGL_MEMORY_OUTPUT_COPY_BIT, XGL_MEMORY_INPUT_COPY_BIT, 0, 4);
+ VK_MEMORY_OUTPUT_COPY_BIT, VK_MEMORY_INPUT_COPY_BIT, 0, 4);
pmemory_barrier = &memory_barrier;
- pipeline_barrier.sType = XGL_STRUCTURE_TYPE_PIPELINE_BARRIER;
+ pipeline_barrier.sType = VK_STRUCTURE_TYPE_PIPELINE_BARRIER;
pipeline_barrier.eventCount = 1;
pipeline_barrier.pEvents = set_events;
- pipeline_barrier.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ pipeline_barrier.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
pipeline_barrier.memBarrierCount = 1;
pipeline_barrier.ppMemBarriers = (const void **)&pmemory_barrier;
- xglCmdPipelineBarrier(cmd_.obj(), &pipeline_barrier);
+ vkCmdPipelineBarrier(cmd_.obj(), &pipeline_barrier);
-    xglCmdCopyBuffer(cmd_.obj(), bufs[1].obj(), bufs[2].obj(), 1, &region);
+    vkCmdCopyBuffer(cmd_.obj(), bufs[1].obj(), bufs[2].obj(), 1, &region);
- /* Use xglCmdSetEvent and xglCmdWaitEvents to test them.
- * This could be xglCmdPipelineBarrier.
+ /* Use vkCmdSetEvent and vkCmdWaitEvents to test them.
+ * This could be vkCmdPipelineBarrier.
*/
- xglCmdSetEvent(cmd_.obj(), event, XGL_PIPE_EVENT_TRANSFER_COMPLETE);
+ vkCmdSetEvent(cmd_.obj(), event, VK_PIPE_EVENT_TRANSFER_COMPLETE);
// Additional commands could go into the buffer here before the wait.
memory_barrier = bufs[1].buffer_memory_barrier(
- XGL_MEMORY_OUTPUT_COPY_BIT, XGL_MEMORY_INPUT_CPU_READ_BIT, 0, 4);
+ VK_MEMORY_OUTPUT_COPY_BIT, VK_MEMORY_INPUT_CPU_READ_BIT, 0, 4);
pmemory_barrier = &memory_barrier;
- XGL_EVENT_WAIT_INFO wait_info = {};
- wait_info.sType = XGL_STRUCTURE_TYPE_EVENT_WAIT_INFO;
+ VK_EVENT_WAIT_INFO wait_info = {};
+ wait_info.sType = VK_STRUCTURE_TYPE_EVENT_WAIT_INFO;
wait_info.eventCount = 1;
wait_info.pEvents = &event;
- wait_info.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ wait_info.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
wait_info.memBarrierCount = 1;
wait_info.ppMemBarriers = (const void **)&pmemory_barrier;
- xglCmdWaitEvents(cmd_.obj(), &wait_info);
+ vkCmdWaitEvents(cmd_.obj(), &wait_info);
cmd_.end();
bufs[2].unmap();
// All done with event memory, clean up
- err = xglBindObjectMemory(event, 0, XGL_NULL_HANDLE, 0);
- ASSERT_XGL_SUCCESS(err);
+ err = vkBindObjectMemory(event, 0, VK_NULL_HANDLE, 0);
+ ASSERT_VK_SUCCESS(err);
- err = xglDestroyObject(event);
- ASSERT_XGL_SUCCESS(err);
+ err = vkDestroyObject(event);
+ ASSERT_VK_SUCCESS(err);
- err = xglFreeMemory(event_mem);
- ASSERT_XGL_SUCCESS(err);
+ err = vkFreeMemory(event_mem);
+ ASSERT_VK_SUCCESS(err);
}
class XglCmdBlitImageTest : public XglCmdBlitTest {
protected:
- void init_test_formats(XGL_FLAGS features)
+ void init_test_formats(VK_FLAGS features)
{
- first_linear_format_ = XGL_FMT_UNDEFINED;
- first_optimal_format_ = XGL_FMT_UNDEFINED;
+ first_linear_format_ = VK_FMT_UNDEFINED;
+ first_optimal_format_ = VK_FMT_UNDEFINED;
- for (std::vector<xgl_testing::Device::Format>::const_iterator it = dev_.formats().begin();
+ for (std::vector<vk_testing::Device::Format>::const_iterator it = dev_.formats().begin();
it != dev_.formats().end(); it++) {
if (it->features & features) {
test_formats_.push_back(*it);
- if (it->tiling == XGL_LINEAR_TILING &&
- first_linear_format_ == XGL_FMT_UNDEFINED)
+ if (it->tiling == VK_LINEAR_TILING &&
+ first_linear_format_ == VK_FMT_UNDEFINED)
first_linear_format_ = it->format;
- if (it->tiling == XGL_OPTIMAL_TILING &&
- first_optimal_format_ == XGL_FMT_UNDEFINED)
+ if (it->tiling == VK_OPTIMAL_TILING &&
+ first_optimal_format_ == VK_FMT_UNDEFINED)
first_optimal_format_ = it->format;
}
}
void init_test_formats()
{
- init_test_formats(static_cast<XGL_FLAGS>(-1));
+ init_test_formats(static_cast<VK_FLAGS>(-1));
}
- void fill_src(xgl_testing::Image &img, const xgl_testing::ImageChecker &checker)
+ void fill_src(vk_testing::Image &img, const vk_testing::ImageChecker &checker)
{
if (img.transparent()) {
checker.fill(img);
ASSERT_EQ(true, img.copyable());
- xgl_testing::Buffer in_buf;
+ vk_testing::Buffer in_buf;
in_buf.init(dev_, checker.buffer_size());
checker.fill(in_buf);
// copy in and tile
cmd_.begin();
- xglCmdCopyBufferToImage(cmd_.obj(), in_buf.obj(),
- img.obj(), XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
+ vkCmdCopyBufferToImage(cmd_.obj(), in_buf.obj(),
+ img.obj(), VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
checker.regions().size(), &checker.regions()[0]);
cmd_.end();
submit_and_done();
}
- void check_dst(xgl_testing::Image &img, const xgl_testing::ImageChecker &checker)
+ void check_dst(vk_testing::Image &img, const vk_testing::ImageChecker &checker)
{
if (img.transparent()) {
DO(checker.check(img));
ASSERT_EQ(true, img.copyable());
- xgl_testing::Buffer out_buf;
+ vk_testing::Buffer out_buf;
out_buf.init(dev_, checker.buffer_size());
add_memory_ref(img);
// copy out and linearize
cmd_.begin();
- xglCmdCopyImageToBuffer(cmd_.obj(),
- img.obj(), XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL,
+ vkCmdCopyImageToBuffer(cmd_.obj(),
+ img.obj(), VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL,
out_buf.obj(),
checker.regions().size(), &checker.regions()[0]);
cmd_.end();
DO(checker.check(out_buf));
}
- std::vector<xgl_testing::Device::Format> test_formats_;
- XGL_FORMAT first_linear_format_;
- XGL_FORMAT first_optimal_format_;
+ std::vector<vk_testing::Device::Format> test_formats_;
+ VK_FORMAT first_linear_format_;
+ VK_FORMAT first_optimal_format_;
};
class XglCmdCopyBufferToImageTest : public XglCmdBlitImageTest {
virtual void SetUp()
{
XglCmdBlitTest::SetUp();
- init_test_formats(XGL_FORMAT_IMAGE_COPY_BIT);
+ init_test_formats(VK_FORMAT_IMAGE_COPY_BIT);
ASSERT_NE(true, test_formats_.empty());
}
- void test_copy_memory_to_image(const XGL_IMAGE_CREATE_INFO &img_info, const xgl_testing::ImageChecker &checker)
+ void test_copy_memory_to_image(const VK_IMAGE_CREATE_INFO &img_info, const vk_testing::ImageChecker &checker)
{
- xgl_testing::Buffer buf;
- xgl_testing::Image img;
+ vk_testing::Buffer buf;
+ vk_testing::Image img;
buf.init(dev_, checker.buffer_size());
checker.fill(buf);
add_memory_ref(img);
cmd_.begin();
- xglCmdCopyBufferToImage(cmd_.obj(),
+ vkCmdCopyBufferToImage(cmd_.obj(),
buf.obj(),
- img.obj(), XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
+ img.obj(), VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
checker.regions().size(), &checker.regions()[0]);
cmd_.end();
check_dst(img, checker);
}
- void test_copy_memory_to_image(const XGL_IMAGE_CREATE_INFO &img_info, const std::vector<XGL_BUFFER_IMAGE_COPY> &regions)
+ void test_copy_memory_to_image(const VK_IMAGE_CREATE_INFO &img_info, const std::vector<VK_BUFFER_IMAGE_COPY> &regions)
{
- xgl_testing::ImageChecker checker(img_info, regions);
+ vk_testing::ImageChecker checker(img_info, regions);
test_copy_memory_to_image(img_info, checker);
}
- void test_copy_memory_to_image(const XGL_IMAGE_CREATE_INFO &img_info)
+ void test_copy_memory_to_image(const VK_IMAGE_CREATE_INFO &img_info)
{
- xgl_testing::ImageChecker checker(img_info);
+ vk_testing::ImageChecker checker(img_info);
test_copy_memory_to_image(img_info, checker);
}
};
TEST_F(XglCmdCopyBufferToImageTest, Basic)
{
- for (std::vector<xgl_testing::Device::Format>::const_iterator it = test_formats_.begin();
+ for (std::vector<vk_testing::Device::Format>::const_iterator it = test_formats_.begin();
it != test_formats_.end(); it++) {
// not sure what to do here
- if (it->format == XGL_FMT_UNDEFINED ||
- (it->format >= XGL_FMT_B8G8R8_UNORM &&
- it->format <= XGL_FMT_B8G8R8_SRGB))
+ if (it->format == VK_FMT_UNDEFINED ||
+ (it->format >= VK_FMT_B8G8R8_UNORM &&
+ it->format <= VK_FMT_B8G8R8_SRGB))
continue;
- XGL_IMAGE_CREATE_INFO img_info = xgl_testing::Image::create_info();
- img_info.imageType = XGL_IMAGE_2D;
+ VK_IMAGE_CREATE_INFO img_info = vk_testing::Image::create_info();
+ img_info.imageType = VK_IMAGE_2D;
img_info.format = it->format;
img_info.extent.width = 64;
img_info.extent.height = 64;
virtual void SetUp()
{
XglCmdBlitTest::SetUp();
- init_test_formats(XGL_FORMAT_IMAGE_COPY_BIT);
+ init_test_formats(VK_FORMAT_IMAGE_COPY_BIT);
ASSERT_NE(true, test_formats_.empty());
}
- void test_copy_image_to_memory(const XGL_IMAGE_CREATE_INFO &img_info, const xgl_testing::ImageChecker &checker)
+ void test_copy_image_to_memory(const VK_IMAGE_CREATE_INFO &img_info, const vk_testing::ImageChecker &checker)
{
- xgl_testing::Image img;
- xgl_testing::Buffer buf;
+ vk_testing::Image img;
+ vk_testing::Buffer buf;
img.init(dev_, img_info);
fill_src(img, checker);
add_memory_ref(buf);
cmd_.begin();
- xglCmdCopyImageToBuffer(cmd_.obj(),
- img.obj(), XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL,
+ vkCmdCopyImageToBuffer(cmd_.obj(),
+ img.obj(), VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL,
buf.obj(),
checker.regions().size(), &checker.regions()[0]);
cmd_.end();
checker.check(buf);
}
- void test_copy_image_to_memory(const XGL_IMAGE_CREATE_INFO &img_info, const std::vector<XGL_BUFFER_IMAGE_COPY> &regions)
+ void test_copy_image_to_memory(const VK_IMAGE_CREATE_INFO &img_info, const std::vector<VK_BUFFER_IMAGE_COPY> &regions)
{
- xgl_testing::ImageChecker checker(img_info, regions);
+ vk_testing::ImageChecker checker(img_info, regions);
test_copy_image_to_memory(img_info, checker);
}
- void test_copy_image_to_memory(const XGL_IMAGE_CREATE_INFO &img_info)
+ void test_copy_image_to_memory(const VK_IMAGE_CREATE_INFO &img_info)
{
- xgl_testing::ImageChecker checker(img_info);
+ vk_testing::ImageChecker checker(img_info);
test_copy_image_to_memory(img_info, checker);
}
};
TEST_F(XglCmdCopyImageToBufferTest, Basic)
{
- for (std::vector<xgl_testing::Device::Format>::const_iterator it = test_formats_.begin();
+ for (std::vector<vk_testing::Device::Format>::const_iterator it = test_formats_.begin();
it != test_formats_.end(); it++) {
// not sure what to do here
- if (it->format == XGL_FMT_UNDEFINED ||
- (it->format >= XGL_FMT_B8G8R8_UNORM &&
- it->format <= XGL_FMT_B8G8R8_SRGB))
+ if (it->format == VK_FMT_UNDEFINED ||
+ (it->format >= VK_FMT_B8G8R8_UNORM &&
+ it->format <= VK_FMT_B8G8R8_SRGB))
continue;
- XGL_IMAGE_CREATE_INFO img_info = xgl_testing::Image::create_info();
- img_info.imageType = XGL_IMAGE_2D;
+ VK_IMAGE_CREATE_INFO img_info = vk_testing::Image::create_info();
+ img_info.imageType = VK_IMAGE_2D;
img_info.format = it->format;
img_info.extent.width = 64;
img_info.extent.height = 64;
virtual void SetUp()
{
XglCmdBlitTest::SetUp();
- init_test_formats(XGL_FORMAT_IMAGE_COPY_BIT);
+ init_test_formats(VK_FORMAT_IMAGE_COPY_BIT);
ASSERT_NE(true, test_formats_.empty());
}
- void test_copy_image(const XGL_IMAGE_CREATE_INFO &src_info, const XGL_IMAGE_CREATE_INFO &dst_info,
- const std::vector<XGL_IMAGE_COPY> &copies)
+ void test_copy_image(const VK_IMAGE_CREATE_INFO &src_info, const VK_IMAGE_CREATE_INFO &dst_info,
+ const std::vector<VK_IMAGE_COPY> &copies)
{
- // convert XGL_IMAGE_COPY to two sets of XGL_BUFFER_IMAGE_COPY
- std::vector<XGL_BUFFER_IMAGE_COPY> src_regions, dst_regions;
- XGL_GPU_SIZE src_offset = 0, dst_offset = 0;
- for (std::vector<XGL_IMAGE_COPY>::const_iterator it = copies.begin(); it != copies.end(); it++) {
- XGL_BUFFER_IMAGE_COPY src_region = {}, dst_region = {};
+ // convert VK_IMAGE_COPY to two sets of VK_BUFFER_IMAGE_COPY
+ std::vector<VK_BUFFER_IMAGE_COPY> src_regions, dst_regions;
+ VK_GPU_SIZE src_offset = 0, dst_offset = 0;
+ for (std::vector<VK_IMAGE_COPY>::const_iterator it = copies.begin(); it != copies.end(); it++) {
+ VK_BUFFER_IMAGE_COPY src_region = {}, dst_region = {};
src_region.bufferOffset = src_offset;
src_region.imageSubresource = it->srcSubresource;
dst_region.imageExtent = it->extent;
dst_regions.push_back(dst_region);
- const XGL_GPU_SIZE size = it->extent.width * it->extent.height * it->extent.depth;
- src_offset += xgl_testing::get_format_size(src_info.format) * size;
- dst_offset += xgl_testing::get_format_size(dst_info.format) * size;
+ const VK_GPU_SIZE size = it->extent.width * it->extent.height * it->extent.depth;
+ src_offset += vk_testing::get_format_size(src_info.format) * size;
+ dst_offset += vk_testing::get_format_size(dst_info.format) * size;
}
- xgl_testing::ImageChecker src_checker(src_info, src_regions);
- xgl_testing::ImageChecker dst_checker(dst_info, dst_regions);
+ vk_testing::ImageChecker src_checker(src_info, src_regions);
+ vk_testing::ImageChecker dst_checker(dst_info, dst_regions);
- xgl_testing::Image src;
+ vk_testing::Image src;
src.init(dev_, src_info);
fill_src(src, src_checker);
add_memory_ref(src);
- xgl_testing::Image dst;
+ vk_testing::Image dst;
dst.init(dev_, dst_info);
add_memory_ref(dst);
cmd_.begin();
- xglCmdCopyImage(cmd_.obj(),
- src.obj(), XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL,
- dst.obj(), XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
+ vkCmdCopyImage(cmd_.obj(),
+ src.obj(), VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL,
+ dst.obj(), VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
copies.size(), &copies[0]);
cmd_.end();
TEST_F(XglCmdCopyImageTest, Basic)
{
- for (std::vector<xgl_testing::Device::Format>::const_iterator it = test_formats_.begin();
+ for (std::vector<vk_testing::Device::Format>::const_iterator it = test_formats_.begin();
it != test_formats_.end(); it++) {
// not sure what to do here
- if (it->format == XGL_FMT_UNDEFINED ||
- (it->format >= XGL_FMT_B8G8R8_UNORM &&
- it->format <= XGL_FMT_B8G8R8_SRGB))
+ if (it->format == VK_FMT_UNDEFINED ||
+ (it->format >= VK_FMT_B8G8R8_UNORM &&
+ it->format <= VK_FMT_B8G8R8_SRGB))
continue;
- XGL_IMAGE_CREATE_INFO img_info = xgl_testing::Image::create_info();
- img_info.imageType = XGL_IMAGE_2D;
+ VK_IMAGE_CREATE_INFO img_info = vk_testing::Image::create_info();
+ img_info.imageType = VK_IMAGE_2D;
img_info.format = it->format;
img_info.extent.width = 64;
img_info.extent.height = 64;
img_info.tiling = it->tiling;
- XGL_IMAGE_COPY copy = {};
- copy.srcSubresource = xgl_testing::Image::subresource(XGL_IMAGE_ASPECT_COLOR, 0, 0);
+ VK_IMAGE_COPY copy = {};
+ copy.srcSubresource = vk_testing::Image::subresource(VK_IMAGE_ASPECT_COLOR, 0, 0);
copy.destSubresource = copy.srcSubresource;
copy.extent = img_info.extent;
- test_copy_image(img_info, img_info, std::vector<XGL_IMAGE_COPY>(&copy, &copy + 1));
+ test_copy_image(img_info, img_info, std::vector<VK_IMAGE_COPY>(&copy, &copy + 1));
}
}
ASSERT_NE(true, test_formats_.empty());
}
- void test_clone_image_data(const XGL_IMAGE_CREATE_INFO &img_info)
+ void test_clone_image_data(const VK_IMAGE_CREATE_INFO &img_info)
{
- xgl_testing::ImageChecker checker(img_info);
- xgl_testing::Image src, dst;
+ vk_testing::ImageChecker checker(img_info);
+ vk_testing::Image src, dst;
src.init(dev_, img_info);
if (src.transparent() || src.copyable())
dst.init(dev_, img_info);
add_memory_ref(dst);
- const XGL_IMAGE_LAYOUT layout = XGL_IMAGE_LAYOUT_GENERAL;
+ const VK_IMAGE_LAYOUT layout = VK_IMAGE_LAYOUT_GENERAL;
cmd_.begin();
- xglCmdCloneImageData(cmd_.obj(), src.obj(), layout, dst.obj(), layout);
+ vkCmdCloneImageData(cmd_.obj(), src.obj(), layout, dst.obj(), layout);
cmd_.end();
submit_and_done();
TEST_F(XglCmdCloneImageDataTest, Basic)
{
- for (std::vector<xgl_testing::Device::Format>::const_iterator it = test_formats_.begin();
+ for (std::vector<vk_testing::Device::Format>::const_iterator it = test_formats_.begin();
it != test_formats_.end(); it++) {
// not sure what to do here
- if (it->format == XGL_FMT_UNDEFINED ||
- (it->format >= XGL_FMT_R32G32B32_UINT &&
- it->format <= XGL_FMT_R32G32B32_SFLOAT) ||
- (it->format >= XGL_FMT_B8G8R8_UNORM &&
- it->format <= XGL_FMT_B8G8R8_SRGB) ||
- (it->format >= XGL_FMT_BC1_RGB_UNORM &&
- it->format <= XGL_FMT_ASTC_12x12_SRGB) ||
- (it->format >= XGL_FMT_D16_UNORM &&
- it->format <= XGL_FMT_D32_SFLOAT_S8_UINT) ||
- it->format == XGL_FMT_R64G64B64_SFLOAT ||
- it->format == XGL_FMT_R64G64B64A64_SFLOAT)
+ if (it->format == VK_FMT_UNDEFINED ||
+ (it->format >= VK_FMT_R32G32B32_UINT &&
+ it->format <= VK_FMT_R32G32B32_SFLOAT) ||
+ (it->format >= VK_FMT_B8G8R8_UNORM &&
+ it->format <= VK_FMT_B8G8R8_SRGB) ||
+ (it->format >= VK_FMT_BC1_RGB_UNORM &&
+ it->format <= VK_FMT_ASTC_12x12_SRGB) ||
+ (it->format >= VK_FMT_D16_UNORM &&
+ it->format <= VK_FMT_D32_SFLOAT_S8_UINT) ||
+ it->format == VK_FMT_R64G64B64_SFLOAT ||
+ it->format == VK_FMT_R64G64B64A64_SFLOAT)
continue;
- XGL_IMAGE_CREATE_INFO img_info = xgl_testing::Image::create_info();
- img_info.imageType = XGL_IMAGE_2D;
+ VK_IMAGE_CREATE_INFO img_info = vk_testing::Image::create_info();
+ img_info.imageType = VK_IMAGE_2D;
img_info.format = it->format;
img_info.extent.width = 64;
img_info.extent.height = 64;
img_info.tiling = it->tiling;
- img_info.flags = XGL_IMAGE_CREATE_CLONEABLE_BIT;
+ img_info.flags = VK_IMAGE_CREATE_CLONEABLE_BIT;
- const XGL_IMAGE_SUBRESOURCE_RANGE range =
- xgl_testing::Image::subresource_range(img_info, XGL_IMAGE_ASPECT_COLOR);
- std::vector<XGL_IMAGE_SUBRESOURCE_RANGE> ranges(&range, &range + 1);
+ const VK_IMAGE_SUBRESOURCE_RANGE range =
+ vk_testing::Image::subresource_range(img_info, VK_IMAGE_ASPECT_COLOR);
+ std::vector<VK_IMAGE_SUBRESOURCE_RANGE> ranges(&range, &range + 1);
test_clone_image_data(img_info);
}
if (test_raw_)
init_test_formats();
else
- init_test_formats(XGL_FORMAT_CONVERSION_BIT);
+ init_test_formats(VK_FORMAT_CONVERSION_BIT);
ASSERT_NE(true, test_formats_.empty());
}
bool test_raw_;
- std::vector<uint8_t> color_to_raw(XGL_FORMAT format, const float color[4])
+ std::vector<uint8_t> color_to_raw(VK_FORMAT format, const float color[4])
{
std::vector<uint8_t> raw;
// TODO support all formats
switch (format) {
- case XGL_FMT_R8G8B8A8_UNORM:
+ case VK_FMT_R8G8B8A8_UNORM:
raw.push_back(color[0] * 255.0f);
raw.push_back(color[1] * 255.0f);
raw.push_back(color[2] * 255.0f);
raw.push_back(color[3] * 255.0f);
break;
- case XGL_FMT_B8G8R8A8_UNORM:
+ case VK_FMT_B8G8R8A8_UNORM:
raw.push_back(color[2] * 255.0f);
raw.push_back(color[1] * 255.0f);
raw.push_back(color[0] * 255.0f);
return raw;
}
- std::vector<uint8_t> color_to_raw(XGL_FORMAT format, const uint32_t color[4])
+ std::vector<uint8_t> color_to_raw(VK_FORMAT format, const uint32_t color[4])
{
std::vector<uint8_t> raw;
// TODO support all formats
switch (format) {
- case XGL_FMT_R8G8B8A8_UNORM:
+ case VK_FMT_R8G8B8A8_UNORM:
raw.push_back(static_cast<uint8_t>(color[0]));
raw.push_back(static_cast<uint8_t>(color[1]));
raw.push_back(static_cast<uint8_t>(color[2]));
raw.push_back(static_cast<uint8_t>(color[3]));
break;
- case XGL_FMT_B8G8R8A8_UNORM:
+ case VK_FMT_B8G8R8A8_UNORM:
raw.push_back(static_cast<uint8_t>(color[2]));
raw.push_back(static_cast<uint8_t>(color[1]));
raw.push_back(static_cast<uint8_t>(color[0]));
return raw;
}
- std::vector<uint8_t> color_to_raw(XGL_FORMAT format, const XGL_CLEAR_COLOR &color)
+ std::vector<uint8_t> color_to_raw(VK_FORMAT format, const VK_CLEAR_COLOR &color)
{
if (color.useRawValue)
return color_to_raw(format, color.color.rawColor);
return color_to_raw(format, color.color.floatColor);
}
- void test_clear_color_image(const XGL_IMAGE_CREATE_INFO &img_info,
- const XGL_CLEAR_COLOR &clear_color,
- const std::vector<XGL_IMAGE_SUBRESOURCE_RANGE> &ranges)
+ void test_clear_color_image(const VK_IMAGE_CREATE_INFO &img_info,
+ const VK_CLEAR_COLOR &clear_color,
+ const std::vector<VK_IMAGE_SUBRESOURCE_RANGE> &ranges)
{
- xgl_testing::Image img;
+ vk_testing::Image img;
img.init(dev_, img_info);
add_memory_ref(img);
- const XGL_FLAGS all_cache_outputs =
- XGL_MEMORY_OUTPUT_CPU_WRITE_BIT |
- XGL_MEMORY_OUTPUT_SHADER_WRITE_BIT |
- XGL_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_COPY_BIT;
- const XGL_FLAGS all_cache_inputs =
- XGL_MEMORY_INPUT_CPU_READ_BIT |
- XGL_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
- XGL_MEMORY_INPUT_INDEX_FETCH_BIT |
- XGL_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
- XGL_MEMORY_INPUT_UNIFORM_READ_BIT |
- XGL_MEMORY_INPUT_SHADER_READ_BIT |
- XGL_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_INPUT_COPY_BIT;
-
- std::vector<XGL_IMAGE_MEMORY_BARRIER> to_clear;
- std::vector<XGL_IMAGE_MEMORY_BARRIER *> p_to_clear;
- std::vector<XGL_IMAGE_MEMORY_BARRIER> to_xfer;
- std::vector<XGL_IMAGE_MEMORY_BARRIER *> p_to_xfer;
-
- for (std::vector<XGL_IMAGE_SUBRESOURCE_RANGE>::const_iterator it = ranges.begin();
+ const VK_FLAGS all_cache_outputs =
+ VK_MEMORY_OUTPUT_CPU_WRITE_BIT |
+ VK_MEMORY_OUTPUT_SHADER_WRITE_BIT |
+ VK_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_COPY_BIT;
+ const VK_FLAGS all_cache_inputs =
+ VK_MEMORY_INPUT_CPU_READ_BIT |
+ VK_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
+ VK_MEMORY_INPUT_INDEX_FETCH_BIT |
+ VK_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
+ VK_MEMORY_INPUT_UNIFORM_READ_BIT |
+ VK_MEMORY_INPUT_SHADER_READ_BIT |
+ VK_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_INPUT_COPY_BIT;
+
+ std::vector<VK_IMAGE_MEMORY_BARRIER> to_clear;
+ std::vector<VK_IMAGE_MEMORY_BARRIER *> p_to_clear;
+ std::vector<VK_IMAGE_MEMORY_BARRIER> to_xfer;
+ std::vector<VK_IMAGE_MEMORY_BARRIER *> p_to_xfer;
+
+ for (std::vector<VK_IMAGE_SUBRESOURCE_RANGE>::const_iterator it = ranges.begin();
it != ranges.end(); it++) {
to_clear.push_back(img.image_memory_barrier(all_cache_outputs, all_cache_inputs,
- XGL_IMAGE_LAYOUT_GENERAL,
- XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ VK_IMAGE_LAYOUT_GENERAL,
+ VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
*it));
p_to_clear.push_back(&to_clear.back());
to_xfer.push_back(img.image_memory_barrier(all_cache_outputs, all_cache_inputs,
- XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
- XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL, *it));
+ VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL, *it));
p_to_xfer.push_back(&to_xfer.back());
}
cmd_.begin();
- XGL_PIPE_EVENT set_events[] = { XGL_PIPE_EVENT_GPU_COMMANDS_COMPLETE };
- XGL_PIPELINE_BARRIER pipeline_barrier = {};
- pipeline_barrier.sType = XGL_STRUCTURE_TYPE_PIPELINE_BARRIER;
+ VK_PIPE_EVENT set_events[] = { VK_PIPE_EVENT_GPU_COMMANDS_COMPLETE };
+ VK_PIPELINE_BARRIER pipeline_barrier = {};
+ pipeline_barrier.sType = VK_STRUCTURE_TYPE_PIPELINE_BARRIER;
pipeline_barrier.eventCount = 1;
pipeline_barrier.pEvents = set_events;
- pipeline_barrier.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ pipeline_barrier.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
pipeline_barrier.memBarrierCount = to_clear.size();
pipeline_barrier.ppMemBarriers = (const void **)&p_to_clear[0];
- xglCmdPipelineBarrier(cmd_.obj(), &pipeline_barrier);
+ vkCmdPipelineBarrier(cmd_.obj(), &pipeline_barrier);
- xglCmdClearColorImage(cmd_.obj(),
- img.obj(), XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ vkCmdClearColorImage(cmd_.obj(),
+ img.obj(), VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
clear_color, ranges.size(), &ranges[0]);
- pipeline_barrier.sType = XGL_STRUCTURE_TYPE_PIPELINE_BARRIER;
+ pipeline_barrier.sType = VK_STRUCTURE_TYPE_PIPELINE_BARRIER;
pipeline_barrier.eventCount = 1;
pipeline_barrier.pEvents = set_events;
- pipeline_barrier.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ pipeline_barrier.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
pipeline_barrier.memBarrierCount = to_xfer.size();
pipeline_barrier.ppMemBarriers = (const void **)&p_to_xfer[0];
- xglCmdPipelineBarrier(cmd_.obj(), &pipeline_barrier);
+ vkCmdPipelineBarrier(cmd_.obj(), &pipeline_barrier);
cmd_.end();
if (!img.transparent() && !img.copyable())
return;
- xgl_testing::ImageChecker checker(img_info, ranges);
+ vk_testing::ImageChecker checker(img_info, ranges);
const std::vector<uint8_t> solid_pattern = color_to_raw(img_info.format, clear_color);
if (solid_pattern.empty())
check_dst(img, checker);
}
- void test_clear_color_image(const XGL_IMAGE_CREATE_INFO &img_info,
+ void test_clear_color_image(const VK_IMAGE_CREATE_INFO &img_info,
const float color[4],
- const std::vector<XGL_IMAGE_SUBRESOURCE_RANGE> &ranges)
+ const std::vector<VK_IMAGE_SUBRESOURCE_RANGE> &ranges)
{
- XGL_CLEAR_COLOR c = {};
+ VK_CLEAR_COLOR c = {};
memcpy(c.color.floatColor, color, sizeof(c.color.floatColor));
test_clear_color_image(img_info, c, ranges);
}
TEST_F(XglCmdClearColorImageTest, Basic)
{
- for (std::vector<xgl_testing::Device::Format>::const_iterator it = test_formats_.begin();
+ for (std::vector<vk_testing::Device::Format>::const_iterator it = test_formats_.begin();
it != test_formats_.end(); it++) {
const float color[4] = { 0.0f, 1.0f, 0.0f, 1.0f };
- XGL_IMAGE_CREATE_INFO img_info = xgl_testing::Image::create_info();
- img_info.imageType = XGL_IMAGE_2D;
+ VK_IMAGE_CREATE_INFO img_info = vk_testing::Image::create_info();
+ img_info.imageType = VK_IMAGE_2D;
img_info.format = it->format;
img_info.extent.width = 64;
img_info.extent.height = 64;
img_info.tiling = it->tiling;
- const XGL_IMAGE_SUBRESOURCE_RANGE range =
- xgl_testing::Image::subresource_range(img_info, XGL_IMAGE_ASPECT_COLOR);
- std::vector<XGL_IMAGE_SUBRESOURCE_RANGE> ranges(&range, &range + 1);
+ const VK_IMAGE_SUBRESOURCE_RANGE range =
+ vk_testing::Image::subresource_range(img_info, VK_IMAGE_ASPECT_COLOR);
+ std::vector<VK_IMAGE_SUBRESOURCE_RANGE> ranges(&range, &range + 1);
test_clear_color_image(img_info, color, ranges);
}
protected:
XglCmdClearColorImageRawTest() : XglCmdClearColorImageTest(true) {}
- void test_clear_color_image_raw(const XGL_IMAGE_CREATE_INFO &img_info,
+ void test_clear_color_image_raw(const VK_IMAGE_CREATE_INFO &img_info,
const uint32_t color[4],
- const std::vector<XGL_IMAGE_SUBRESOURCE_RANGE> &ranges)
+ const std::vector<VK_IMAGE_SUBRESOURCE_RANGE> &ranges)
{
- XGL_CLEAR_COLOR c = {};
+ VK_CLEAR_COLOR c = {};
c.useRawValue = true;
memcpy(c.color.rawColor, color, sizeof(c.color.rawColor));
test_clear_color_image(img_info, c, ranges);
TEST_F(XglCmdClearColorImageRawTest, Basic)
{
- for (std::vector<xgl_testing::Device::Format>::const_iterator it = test_formats_.begin();
+ for (std::vector<vk_testing::Device::Format>::const_iterator it = test_formats_.begin();
it != test_formats_.end(); it++) {
const uint32_t color[4] = { 0x11111111, 0x22222222, 0x33333333, 0x44444444 };
// not sure what to do here
- if (it->format == XGL_FMT_UNDEFINED ||
- (it->format >= XGL_FMT_R8G8B8_UNORM &&
- it->format <= XGL_FMT_R8G8B8_SRGB) ||
- (it->format >= XGL_FMT_B8G8R8_UNORM &&
- it->format <= XGL_FMT_B8G8R8_SRGB) ||
- (it->format >= XGL_FMT_R16G16B16_UNORM &&
- it->format <= XGL_FMT_R16G16B16_SFLOAT) ||
- (it->format >= XGL_FMT_R32G32B32_UINT &&
- it->format <= XGL_FMT_R32G32B32_SFLOAT) ||
- it->format == XGL_FMT_R64G64B64_SFLOAT ||
- it->format == XGL_FMT_R64G64B64A64_SFLOAT ||
- (it->format >= XGL_FMT_D16_UNORM &&
- it->format <= XGL_FMT_D32_SFLOAT_S8_UINT))
+ if (it->format == VK_FMT_UNDEFINED ||
+ (it->format >= VK_FMT_R8G8B8_UNORM &&
+ it->format <= VK_FMT_R8G8B8_SRGB) ||
+ (it->format >= VK_FMT_B8G8R8_UNORM &&
+ it->format <= VK_FMT_B8G8R8_SRGB) ||
+ (it->format >= VK_FMT_R16G16B16_UNORM &&
+ it->format <= VK_FMT_R16G16B16_SFLOAT) ||
+ (it->format >= VK_FMT_R32G32B32_UINT &&
+ it->format <= VK_FMT_R32G32B32_SFLOAT) ||
+ it->format == VK_FMT_R64G64B64_SFLOAT ||
+ it->format == VK_FMT_R64G64B64A64_SFLOAT ||
+ (it->format >= VK_FMT_D16_UNORM &&
+ it->format <= VK_FMT_D32_SFLOAT_S8_UINT))
continue;
- XGL_IMAGE_CREATE_INFO img_info = xgl_testing::Image::create_info();
- img_info.imageType = XGL_IMAGE_2D;
+ VK_IMAGE_CREATE_INFO img_info = vk_testing::Image::create_info();
+ img_info.imageType = VK_IMAGE_2D;
img_info.format = it->format;
img_info.extent.width = 64;
img_info.extent.height = 64;
img_info.tiling = it->tiling;
- const XGL_IMAGE_SUBRESOURCE_RANGE range =
- xgl_testing::Image::subresource_range(img_info, XGL_IMAGE_ASPECT_COLOR);
- std::vector<XGL_IMAGE_SUBRESOURCE_RANGE> ranges(&range, &range + 1);
+ const VK_IMAGE_SUBRESOURCE_RANGE range =
+ vk_testing::Image::subresource_range(img_info, VK_IMAGE_ASPECT_COLOR);
+ std::vector<VK_IMAGE_SUBRESOURCE_RANGE> ranges(&range, &range + 1);
test_clear_color_image_raw(img_info, color, ranges);
}
virtual void SetUp()
{
XglCmdBlitTest::SetUp();
- init_test_formats(XGL_FORMAT_DEPTH_ATTACHMENT_BIT |
- XGL_FORMAT_STENCIL_ATTACHMENT_BIT);
+ init_test_formats(VK_FORMAT_DEPTH_ATTACHMENT_BIT |
+ VK_FORMAT_STENCIL_ATTACHMENT_BIT);
ASSERT_NE(true, test_formats_.empty());
}
- std::vector<uint8_t> ds_to_raw(XGL_FORMAT format, float depth, uint32_t stencil)
+ std::vector<uint8_t> ds_to_raw(VK_FORMAT format, float depth, uint32_t stencil)
{
std::vector<uint8_t> raw;
// depth
switch (format) {
- case XGL_FMT_D16_UNORM:
- case XGL_FMT_D16_UNORM_S8_UINT:
+ case VK_FMT_D16_UNORM:
+ case VK_FMT_D16_UNORM_S8_UINT:
{
const uint16_t unorm = depth * 65535.0f;
raw.push_back(unorm & 0xff);
raw.push_back(unorm >> 8);
}
break;
- case XGL_FMT_D32_SFLOAT:
- case XGL_FMT_D32_SFLOAT_S8_UINT:
+ case VK_FMT_D32_SFLOAT:
+ case VK_FMT_D32_SFLOAT_S8_UINT:
{
const union {
float depth;
// stencil
switch (format) {
- case XGL_FMT_S8_UINT:
+ case VK_FMT_S8_UINT:
raw.push_back(stencil);
break;
- case XGL_FMT_D16_UNORM_S8_UINT:
+ case VK_FMT_D16_UNORM_S8_UINT:
raw.push_back(stencil);
raw.push_back(0);
break;
- case XGL_FMT_D32_SFLOAT_S8_UINT:
+ case VK_FMT_D32_SFLOAT_S8_UINT:
raw.push_back(stencil);
raw.push_back(0);
raw.push_back(0);
return raw;
}
- void test_clear_depth_stencil(const XGL_IMAGE_CREATE_INFO &img_info,
+ void test_clear_depth_stencil(const VK_IMAGE_CREATE_INFO &img_info,
float depth, uint32_t stencil,
- const std::vector<XGL_IMAGE_SUBRESOURCE_RANGE> &ranges)
+ const std::vector<VK_IMAGE_SUBRESOURCE_RANGE> &ranges)
{
- xgl_testing::Image img;
+ vk_testing::Image img;
img.init(dev_, img_info);
add_memory_ref(img);
- const XGL_FLAGS all_cache_outputs =
- XGL_MEMORY_OUTPUT_CPU_WRITE_BIT |
- XGL_MEMORY_OUTPUT_SHADER_WRITE_BIT |
- XGL_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_COPY_BIT;
- const XGL_FLAGS all_cache_inputs =
- XGL_MEMORY_INPUT_CPU_READ_BIT |
- XGL_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
- XGL_MEMORY_INPUT_INDEX_FETCH_BIT |
- XGL_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
- XGL_MEMORY_INPUT_UNIFORM_READ_BIT |
- XGL_MEMORY_INPUT_SHADER_READ_BIT |
- XGL_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_INPUT_COPY_BIT;
-
- std::vector<XGL_IMAGE_MEMORY_BARRIER> to_clear;
- std::vector<XGL_IMAGE_MEMORY_BARRIER *> p_to_clear;
- std::vector<XGL_IMAGE_MEMORY_BARRIER> to_xfer;
- std::vector<XGL_IMAGE_MEMORY_BARRIER *> p_to_xfer;
-
- for (std::vector<XGL_IMAGE_SUBRESOURCE_RANGE>::const_iterator it = ranges.begin();
+ const VK_FLAGS all_cache_outputs =
+ VK_MEMORY_OUTPUT_CPU_WRITE_BIT |
+ VK_MEMORY_OUTPUT_SHADER_WRITE_BIT |
+ VK_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_COPY_BIT;
+ const VK_FLAGS all_cache_inputs =
+ VK_MEMORY_INPUT_CPU_READ_BIT |
+ VK_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
+ VK_MEMORY_INPUT_INDEX_FETCH_BIT |
+ VK_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
+ VK_MEMORY_INPUT_UNIFORM_READ_BIT |
+ VK_MEMORY_INPUT_SHADER_READ_BIT |
+ VK_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_INPUT_COPY_BIT;
+
+ std::vector<VK_IMAGE_MEMORY_BARRIER> to_clear;
+ std::vector<VK_IMAGE_MEMORY_BARRIER *> p_to_clear;
+ std::vector<VK_IMAGE_MEMORY_BARRIER> to_xfer;
+ std::vector<VK_IMAGE_MEMORY_BARRIER *> p_to_xfer;
+
+ for (std::vector<VK_IMAGE_SUBRESOURCE_RANGE>::const_iterator it = ranges.begin();
it != ranges.end(); it++) {
to_clear.push_back(img.image_memory_barrier(all_cache_outputs, all_cache_inputs,
- XGL_IMAGE_LAYOUT_GENERAL,
- XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ VK_IMAGE_LAYOUT_GENERAL,
+ VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
*it));
p_to_clear.push_back(&to_clear.back());
to_xfer.push_back(img.image_memory_barrier(all_cache_outputs, all_cache_inputs,
- XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
- XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL, *it));
+ VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL, *it));
p_to_xfer.push_back(&to_xfer.back());
}
cmd_.begin();
- XGL_PIPE_EVENT set_events[] = { XGL_PIPE_EVENT_GPU_COMMANDS_COMPLETE };
- XGL_PIPELINE_BARRIER pipeline_barrier = {};
- pipeline_barrier.sType = XGL_STRUCTURE_TYPE_PIPELINE_BARRIER;
+ VK_PIPE_EVENT set_events[] = { VK_PIPE_EVENT_GPU_COMMANDS_COMPLETE };
+ VK_PIPELINE_BARRIER pipeline_barrier = {};
+ pipeline_barrier.sType = VK_STRUCTURE_TYPE_PIPELINE_BARRIER;
pipeline_barrier.eventCount = 1;
pipeline_barrier.pEvents = set_events;
- pipeline_barrier.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ pipeline_barrier.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
pipeline_barrier.memBarrierCount = to_clear.size();
pipeline_barrier.ppMemBarriers = (const void **)&p_to_clear[0];
- xglCmdPipelineBarrier(cmd_.obj(), &pipeline_barrier);
+ vkCmdPipelineBarrier(cmd_.obj(), &pipeline_barrier);
- xglCmdClearDepthStencil(cmd_.obj(),
- img.obj(), XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ vkCmdClearDepthStencil(cmd_.obj(),
+ img.obj(), VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
depth, stencil,
ranges.size(), &ranges[0]);
- pipeline_barrier.sType = XGL_STRUCTURE_TYPE_PIPELINE_BARRIER;
+ pipeline_barrier.sType = VK_STRUCTURE_TYPE_PIPELINE_BARRIER;
pipeline_barrier.eventCount = 1;
pipeline_barrier.pEvents = set_events;
- pipeline_barrier.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ pipeline_barrier.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
pipeline_barrier.memBarrierCount = to_xfer.size();
pipeline_barrier.ppMemBarriers = (const void **)&p_to_xfer[0];
- xglCmdPipelineBarrier(cmd_.obj(), &pipeline_barrier);
+ vkCmdPipelineBarrier(cmd_.obj(), &pipeline_barrier);
cmd_.end();
if (!img.transparent() && !img.copyable())
return;
- xgl_testing::ImageChecker checker(img_info, ranges);
+ vk_testing::ImageChecker checker(img_info, ranges);
checker.set_solid_pattern(ds_to_raw(img_info.format, depth, stencil));
check_dst(img, checker);
TEST_F(XglCmdClearDepthStencilTest, Basic)
{
- for (std::vector<xgl_testing::Device::Format>::const_iterator it = test_formats_.begin();
+ for (std::vector<vk_testing::Device::Format>::const_iterator it = test_formats_.begin();
it != test_formats_.end(); it++) {
// known driver issues
- if (it->format == XGL_FMT_S8_UINT ||
- it->format == XGL_FMT_D24_UNORM ||
- it->format == XGL_FMT_D16_UNORM_S8_UINT ||
- it->format == XGL_FMT_D24_UNORM_S8_UINT)
+ if (it->format == VK_FMT_S8_UINT ||
+ it->format == VK_FMT_D24_UNORM ||
+ it->format == VK_FMT_D16_UNORM_S8_UINT ||
+ it->format == VK_FMT_D24_UNORM_S8_UINT)
continue;
- XGL_IMAGE_CREATE_INFO img_info = xgl_testing::Image::create_info();
- img_info.imageType = XGL_IMAGE_2D;
+ VK_IMAGE_CREATE_INFO img_info = vk_testing::Image::create_info();
+ img_info.imageType = VK_IMAGE_2D;
img_info.format = it->format;
img_info.extent.width = 64;
img_info.extent.height = 64;
img_info.tiling = it->tiling;
- img_info.usage = XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT;
+ img_info.usage = VK_IMAGE_USAGE_DEPTH_STENCIL_BIT;
- const XGL_IMAGE_SUBRESOURCE_RANGE range =
- xgl_testing::Image::subresource_range(img_info, XGL_IMAGE_ASPECT_DEPTH);
- std::vector<XGL_IMAGE_SUBRESOURCE_RANGE> ranges(&range, &range + 1);
+ const VK_IMAGE_SUBRESOURCE_RANGE range =
+ vk_testing::Image::subresource_range(img_info, VK_IMAGE_ASPECT_DEPTH);
+ std::vector<VK_IMAGE_SUBRESOURCE_RANGE> ranges(&range, &range + 1);
test_clear_depth_stencil(img_info, 0.25f, 63, ranges);
}
{
::testing::InitGoogleTest(&argc, argv);
- xgl_testing::set_error_callback(test_error_callback);
+ vk_testing::set_error_callback(test_error_callback);
- environment = new xgl_testing::Environment();
+ environment = new vk_testing::Environment();
if (!environment->parse_args(argc, argv))
return -1;
--- /dev/null
+P6
+256
+256
+255
+[binary PPM pixel data omitted: 256x256 RGB image]
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
\ No newline at end of file
--- /dev/null
+P6
+256
+256
+255
+@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
\ No newline at end of file
--- /dev/null
+P6
+256
+256
+255
+[binary PPM pixel data omitted]
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-// XGL tests
+// VK tests
//
// Copyright (C) 2014 LunarG, Inc.
//
// DEALINGS IN THE SOFTWARE.
-// Verify XGL driver initialization
+// Verify VK driver initialization
#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
#include <string.h>
-#include <xgl.h>
+#include <vulkan.h>
#include "gtest-1.7.0/include/gtest/gtest.h"
-#include "xgltestbinding.h"
+#include "vktestbinding.h"
#include "test_common.h"
class XglImageTest : public ::testing::Test {
void CreateImage(uint32_t w, uint32_t h);
void DestroyImage();
- void CreateImageView(XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
- XGL_IMAGE_VIEW* pView);
- void DestroyImageView(XGL_IMAGE_VIEW imageView);
- XGL_DEVICE device() {return m_device->obj();}
+ void CreateImageView(VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
+ VK_IMAGE_VIEW* pView);
+ void DestroyImageView(VK_IMAGE_VIEW imageView);
+ VK_DEVICE device() {return m_device->obj();}
protected:
- xgl_testing::Device *m_device;
- XGL_APPLICATION_INFO app_info;
- XGL_PHYSICAL_GPU objs[XGL_MAX_PHYSICAL_GPUS];
+ vk_testing::Device *m_device;
+ VK_APPLICATION_INFO app_info;
+ VK_PHYSICAL_GPU objs[VK_MAX_PHYSICAL_GPUS];
uint32_t gpu_count;
- XGL_INSTANCE inst;
- XGL_IMAGE m_image;
- XGL_GPU_MEMORY *m_image_mem;
+ VK_INSTANCE inst;
+ VK_IMAGE m_image;
+ VK_GPU_MEMORY *m_image_mem;
uint32_t m_num_mem;
virtual void SetUp() {
- XGL_RESULT err;
+ VK_RESULT err;
- this->app_info.sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO;
+ this->app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
this->app_info.pNext = NULL;
this->app_info.pAppName = "base";
this->app_info.appVersion = 1;
this->app_info.pEngineName = "unittest";
this->app_info.engineVersion = 1;
- this->app_info.apiVersion = XGL_API_VERSION;
- XGL_INSTANCE_CREATE_INFO inst_info = {};
- inst_info.sType = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
+ this->app_info.apiVersion = VK_API_VERSION;
+ VK_INSTANCE_CREATE_INFO inst_info = {};
+ inst_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
inst_info.pNext = NULL;
inst_info.pAppInfo = &app_info;
inst_info.pAllocCb = NULL;
inst_info.extensionCount = 0;
inst_info.ppEnabledExtensionNames = NULL;
- err = xglCreateInstance(&inst_info, &this->inst);
- ASSERT_XGL_SUCCESS(err);
- err = xglEnumerateGpus(this->inst, XGL_MAX_PHYSICAL_GPUS,
+ err = vkCreateInstance(&inst_info, &this->inst);
+ ASSERT_VK_SUCCESS(err);
+ err = vkEnumerateGpus(this->inst, VK_MAX_PHYSICAL_GPUS,
&this->gpu_count, objs);
- ASSERT_XGL_SUCCESS(err);
+ ASSERT_VK_SUCCESS(err);
ASSERT_GE(this->gpu_count, 1) << "No GPU available";
- this->m_device = new xgl_testing::Device(objs[0]);
+ this->m_device = new vk_testing::Device(objs[0]);
this->m_device->init();
}
virtual void TearDown() {
- xglDestroyInstance(this->inst);
+ vkDestroyInstance(this->inst);
}
};
void XglImageTest::CreateImage(uint32_t w, uint32_t h)
{
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t mipCount;
size_t size;
- XGL_FORMAT fmt;
- XGL_FORMAT_PROPERTIES image_fmt;
+ VK_FORMAT fmt;
+ VK_FORMAT_PROPERTIES image_fmt;
mipCount = 0;
mipCount++;
}
- fmt = XGL_FMT_R8G8B8A8_UINT;
+ fmt = VK_FMT_R8G8B8A8_UINT;
// TODO: Pick known good format rather than just expect common format
/*
* XXX: What should happen if given NULL HANDLE for the pData argument?
- * We're not requesting XGL_INFO_TYPE_MEMORY_REQUIREMENTS so there is
+ * We're not requesting VK_INFO_TYPE_MEMORY_REQUIREMENTS so there is
* an expectation that pData is a valid pointer.
* However, why include a returned size value? That implies that the
* amount of data may vary and that doesn't work well for using a
* fixed structure.
*/
size = sizeof(image_fmt);
- err = xglGetFormatInfo(this->device(), fmt,
- XGL_INFO_TYPE_FORMAT_PROPERTIES,
+ err = vkGetFormatInfo(this->device(), fmt,
+ VK_INFO_TYPE_FORMAT_PROPERTIES,
&size, &image_fmt);
- ASSERT_XGL_SUCCESS(err);
+ ASSERT_VK_SUCCESS(err);
- // typedef struct _XGL_IMAGE_CREATE_INFO
+ // typedef struct _VK_IMAGE_CREATE_INFO
// {
- // XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO
+ // VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO
// const void* pNext; // Pointer to next structure.
- // XGL_IMAGE_TYPE imageType;
- // XGL_FORMAT format;
- // XGL_EXTENT3D extent;
+ // VK_IMAGE_TYPE imageType;
+ // VK_FORMAT format;
+ // VK_EXTENT3D extent;
// uint32_t mipLevels;
// uint32_t arraySize;
// uint32_t samples;
- // XGL_IMAGE_TILING tiling;
- // XGL_FLAGS usage; // XGL_IMAGE_USAGE_FLAGS
- // XGL_FLAGS flags; // XGL_IMAGE_CREATE_FLAGS
- // } XGL_IMAGE_CREATE_INFO;
+ // VK_IMAGE_TILING tiling;
+ // VK_FLAGS usage; // VK_IMAGE_USAGE_FLAGS
+ // VK_FLAGS flags; // VK_IMAGE_CREATE_FLAGS
+ // } VK_IMAGE_CREATE_INFO;
- XGL_IMAGE_CREATE_INFO imageCreateInfo = {};
- imageCreateInfo.sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
- imageCreateInfo.imageType = XGL_IMAGE_2D;
+ VK_IMAGE_CREATE_INFO imageCreateInfo = {};
+ imageCreateInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
+ imageCreateInfo.imageType = VK_IMAGE_2D;
imageCreateInfo.format = fmt;
imageCreateInfo.arraySize = 1;
imageCreateInfo.extent.width = w;
imageCreateInfo.extent.depth = 1;
imageCreateInfo.mipLevels = mipCount;
imageCreateInfo.samples = 1;
- if (image_fmt.linearTilingFeatures & XGL_FORMAT_IMAGE_SHADER_READ_BIT) {
- imageCreateInfo.tiling = XGL_LINEAR_TILING;
+ if (image_fmt.linearTilingFeatures & VK_FORMAT_IMAGE_SHADER_READ_BIT) {
+ imageCreateInfo.tiling = VK_LINEAR_TILING;
}
- else if (image_fmt.optimalTilingFeatures & XGL_FORMAT_IMAGE_SHADER_READ_BIT) {
- imageCreateInfo.tiling = XGL_OPTIMAL_TILING;
+ else if (image_fmt.optimalTilingFeatures & VK_FORMAT_IMAGE_SHADER_READ_BIT) {
+ imageCreateInfo.tiling = VK_OPTIMAL_TILING;
}
else {
ASSERT_TRUE(false) << "Cannot find supported tiling format - Exiting";
}
// Image usage flags
- // typedef enum _XGL_IMAGE_USAGE_FLAGS
+ // typedef enum _VK_IMAGE_USAGE_FLAGS
// {
- // XGL_IMAGE_USAGE_SHADER_ACCESS_READ_BIT = 0x00000001,
- // XGL_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT = 0x00000002,
- // XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT = 0x00000004,
- // XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT = 0x00000008,
- // } XGL_IMAGE_USAGE_FLAGS;
- imageCreateInfo.usage = XGL_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT | XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
-
- // XGL_RESULT XGLAPI xglCreateImage(
- // XGL_DEVICE device,
- // const XGL_IMAGE_CREATE_INFO* pCreateInfo,
- // XGL_IMAGE* pImage);
- err = xglCreateImage(device(), &imageCreateInfo, &m_image);
- ASSERT_XGL_SUCCESS(err);
-
- XGL_MEMORY_REQUIREMENTS *mem_req;
- size_t mem_reqs_size = sizeof(XGL_MEMORY_REQUIREMENTS);
- XGL_IMAGE_MEMORY_REQUIREMENTS img_reqs;
- size_t img_reqs_size = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
+ // VK_IMAGE_USAGE_SHADER_ACCESS_READ_BIT = 0x00000001,
+ // VK_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT = 0x00000002,
+ // VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT = 0x00000004,
+ // VK_IMAGE_USAGE_DEPTH_STENCIL_BIT = 0x00000008,
+ // } VK_IMAGE_USAGE_FLAGS;
+ imageCreateInfo.usage = VK_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
+
+ // VK_RESULT VKAPI vkCreateImage(
+ // VK_DEVICE device,
+ // const VK_IMAGE_CREATE_INFO* pCreateInfo,
+ // VK_IMAGE* pImage);
+ err = vkCreateImage(device(), &imageCreateInfo, &m_image);
+ ASSERT_VK_SUCCESS(err);
+
+ VK_MEMORY_REQUIREMENTS *mem_req;
+ size_t mem_reqs_size = sizeof(VK_MEMORY_REQUIREMENTS);
+ VK_IMAGE_MEMORY_REQUIREMENTS img_reqs;
+ size_t img_reqs_size = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
- XGL_MEMORY_ALLOC_IMAGE_INFO img_alloc = {};
- img_alloc.sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO;
+ VK_MEMORY_ALLOC_IMAGE_INFO img_alloc = {};
+ img_alloc.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO;
img_alloc.pNext = NULL;
- XGL_MEMORY_ALLOC_INFO mem_info = {};
- mem_info.sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
+ VK_MEMORY_ALLOC_INFO mem_info = {};
+ mem_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
mem_info.pNext = &img_alloc;
- err = xglGetObjectInfo(m_image, XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
+ err = vkGetObjectInfo(m_image, VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
&num_alloc_size, &num_allocations);
- ASSERT_XGL_SUCCESS(err);
+ ASSERT_VK_SUCCESS(err);
ASSERT_EQ(num_alloc_size,sizeof(num_allocations));
- mem_req = (XGL_MEMORY_REQUIREMENTS *) malloc(num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- m_image_mem = (XGL_GPU_MEMORY *) malloc(num_allocations * sizeof(XGL_GPU_MEMORY));
+ mem_req = (VK_MEMORY_REQUIREMENTS *) malloc(num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ m_image_mem = (VK_GPU_MEMORY *) malloc(num_allocations * sizeof(VK_GPU_MEMORY));
m_num_mem = num_allocations;
- err = xglGetObjectInfo(m_image,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(m_image,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&mem_reqs_size, mem_req);
- ASSERT_XGL_SUCCESS(err);
- ASSERT_EQ(mem_reqs_size, num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- err = xglGetObjectInfo(m_image,
- XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
+ ASSERT_VK_SUCCESS(err);
+ ASSERT_EQ(mem_reqs_size, num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ err = vkGetObjectInfo(m_image,
+ VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
&img_reqs_size, &img_reqs);
- ASSERT_XGL_SUCCESS(err);
- ASSERT_EQ(img_reqs_size, sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS));
+ ASSERT_VK_SUCCESS(err);
+ ASSERT_EQ(img_reqs_size, sizeof(VK_IMAGE_MEMORY_REQUIREMENTS));
img_alloc.usage = img_reqs.usage;
img_alloc.formatClass = img_reqs.formatClass;
img_alloc.samples = img_reqs.samples;
for (uint32_t i = 0; i < num_allocations; i ++) {
- ASSERT_NE(0, mem_req[i].size) << "xglGetObjectInfo (Image): Failed - expect images to require memory";
+ ASSERT_NE(0, mem_req[i].size) << "vkGetObjectInfo (Image): Failed - expect images to require memory";
mem_info.allocationSize = mem_req[i].size;
- mem_info.memProps = XGL_MEMORY_PROPERTY_SHAREABLE_BIT;
- mem_info.memType = XGL_MEMORY_TYPE_IMAGE;
- mem_info.memPriority = XGL_MEMORY_PRIORITY_NORMAL;
+ mem_info.memProps = VK_MEMORY_PROPERTY_SHAREABLE_BIT;
+ mem_info.memType = VK_MEMORY_TYPE_IMAGE;
+ mem_info.memPriority = VK_MEMORY_PRIORITY_NORMAL;
/* allocate memory */
- err = xglAllocMemory(device(), &mem_info, &m_image_mem[i]);
- ASSERT_XGL_SUCCESS(err);
+ err = vkAllocMemory(device(), &mem_info, &m_image_mem[i]);
+ ASSERT_VK_SUCCESS(err);
/* bind memory */
- err = xglBindObjectMemory(m_image, i, m_image_mem[i], 0);
- ASSERT_XGL_SUCCESS(err);
+ err = vkBindObjectMemory(m_image, i, m_image_mem[i], 0);
+ ASSERT_VK_SUCCESS(err);
}
}
void XglImageTest::DestroyImage()
{
- XGL_RESULT err;
+ VK_RESULT err;
// All done with image memory, clean up
- ASSERT_XGL_SUCCESS(xglBindObjectMemory(m_image, 0, XGL_NULL_HANDLE, 0));
+ ASSERT_VK_SUCCESS(vkBindObjectMemory(m_image, 0, VK_NULL_HANDLE, 0));
for (uint32_t i = 0 ; i < m_num_mem; i++) {
- err = xglFreeMemory(m_image_mem[i]);
- ASSERT_XGL_SUCCESS(err);
+ err = vkFreeMemory(m_image_mem[i]);
+ ASSERT_VK_SUCCESS(err);
}
- ASSERT_XGL_SUCCESS(xglDestroyObject(m_image));
+ ASSERT_VK_SUCCESS(vkDestroyObject(m_image));
}
-void XglImageTest::CreateImageView(XGL_IMAGE_VIEW_CREATE_INFO *pCreateInfo,
- XGL_IMAGE_VIEW *pView)
+void XglImageTest::CreateImageView(VK_IMAGE_VIEW_CREATE_INFO *pCreateInfo,
+ VK_IMAGE_VIEW *pView)
{
pCreateInfo->image = this->m_image;
- ASSERT_XGL_SUCCESS(xglCreateImageView(device(), pCreateInfo, pView));
+ ASSERT_VK_SUCCESS(vkCreateImageView(device(), pCreateInfo, pView));
}
-void XglImageTest::DestroyImageView(XGL_IMAGE_VIEW imageView)
+void XglImageTest::DestroyImageView(VK_IMAGE_VIEW imageView)
{
- ASSERT_XGL_SUCCESS(xglDestroyObject(imageView));
+ ASSERT_VK_SUCCESS(vkDestroyObject(imageView));
}
TEST_F(XglImageTest, CreateImageViewTest) {
- XGL_FORMAT fmt;
- XGL_IMAGE_VIEW imageView;
+ VK_FORMAT fmt;
+ VK_IMAGE_VIEW imageView;
- fmt = XGL_FMT_R8G8B8A8_UINT;
+ fmt = VK_FMT_R8G8B8A8_UINT;
CreateImage(512, 256);
- // typedef struct _XGL_IMAGE_VIEW_CREATE_INFO
+ // typedef struct _VK_IMAGE_VIEW_CREATE_INFO
// {
- // XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO
+ // VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO
// const void* pNext; // Pointer to next structure
- // XGL_IMAGE image;
- // XGL_IMAGE_VIEW_TYPE viewType;
- // XGL_FORMAT format;
- // XGL_CHANNEL_MAPPING channels;
- // XGL_IMAGE_SUBRESOURCE_RANGE subresourceRange;
+ // VK_IMAGE image;
+ // VK_IMAGE_VIEW_TYPE viewType;
+ // VK_FORMAT format;
+ // VK_CHANNEL_MAPPING channels;
+ // VK_IMAGE_SUBRESOURCE_RANGE subresourceRange;
// float minLod;
- // } XGL_IMAGE_VIEW_CREATE_INFO;
- XGL_IMAGE_VIEW_CREATE_INFO viewInfo = {};
- viewInfo.sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
- viewInfo.viewType = XGL_IMAGE_VIEW_2D;
+ // } VK_IMAGE_VIEW_CREATE_INFO;
+ VK_IMAGE_VIEW_CREATE_INFO viewInfo = {};
+ viewInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
+ viewInfo.viewType = VK_IMAGE_VIEW_2D;
viewInfo.format = fmt;
- viewInfo.channels.r = XGL_CHANNEL_SWIZZLE_R;
- viewInfo.channels.g = XGL_CHANNEL_SWIZZLE_G;
- viewInfo.channels.b = XGL_CHANNEL_SWIZZLE_B;
- viewInfo.channels.a = XGL_CHANNEL_SWIZZLE_A;
+ viewInfo.channels.r = VK_CHANNEL_SWIZZLE_R;
+ viewInfo.channels.g = VK_CHANNEL_SWIZZLE_G;
+ viewInfo.channels.b = VK_CHANNEL_SWIZZLE_B;
+ viewInfo.channels.a = VK_CHANNEL_SWIZZLE_A;
viewInfo.subresourceRange.baseArraySlice = 0;
viewInfo.subresourceRange.arraySize = 1;
viewInfo.subresourceRange.baseMipLevel = 0;
viewInfo.subresourceRange.mipLevels = 1;
- viewInfo.subresourceRange.aspect = XGL_IMAGE_ASPECT_COLOR;
+ viewInfo.subresourceRange.aspect = VK_IMAGE_ASPECT_COLOR;
- // XGL_RESULT XGLAPI xglCreateImageView(
- // XGL_DEVICE device,
- // const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
- // XGL_IMAGE_VIEW* pView);
+ // VK_RESULT VKAPI vkCreateImageView(
+ // VK_DEVICE device,
+ // const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
+ // VK_IMAGE_VIEW* pView);
CreateImageView(&viewInfo, &imageView);
int main(int argc, char **argv) {
::testing::InitGoogleTest(&argc, argv);
- xgl_testing::set_error_callback(test_error_callback);
+ vk_testing::set_error_callback(test_error_callback);
return RUN_ALL_TESTS();
}
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-// XGL tests
+// VK tests
//
// Copyright (C) 2014 LunarG, Inc.
//
// DEALINGS IN THE SOFTWARE.
-// Verify XGL driver initialization
+// Verify VK driver initialization
#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
#include <string.h>
-#include <xgl.h>
+#include <vulkan.h>
#include "gtest-1.7.0/include/gtest/gtest.h"
-#include "xgltestbinding.h"
+#include "vktestbinding.h"
#include "test_common.h"
#include "icd-spv.h"
void CreateCommandBufferTest();
void CreatePipelineTest();
void CreateShaderTest();
- void CreateShader(XGL_SHADER *pshader);
+ void CreateShader(VK_SHADER *pshader);
- XGL_DEVICE device() {return m_device->obj();}
+ VK_DEVICE device() {return m_device->obj();}
protected:
- XGL_APPLICATION_INFO app_info;
- XGL_INSTANCE inst;
- XGL_PHYSICAL_GPU objs[XGL_MAX_PHYSICAL_GPUS];
+ VK_APPLICATION_INFO app_info;
+ VK_INSTANCE inst;
+ VK_PHYSICAL_GPU objs[VK_MAX_PHYSICAL_GPUS];
uint32_t gpu_count;
uint32_t m_device_id;
- xgl_testing::Device *m_device;
- XGL_PHYSICAL_GPU_PROPERTIES props;
- std::vector<XGL_PHYSICAL_GPU_QUEUE_PROPERTIES> queue_props;
+ vk_testing::Device *m_device;
+ VK_PHYSICAL_GPU_PROPERTIES props;
+ std::vector<VK_PHYSICAL_GPU_QUEUE_PROPERTIES> queue_props;
uint32_t graphics_queue_node_index;
virtual void SetUp() {
- XGL_RESULT err;
+ VK_RESULT err;
int i;
- this->app_info.sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO;
+ this->app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
this->app_info.pNext = NULL;
this->app_info.pAppName = "base";
this->app_info.appVersion = 1;
this->app_info.pEngineName = "unittest";
this->app_info.engineVersion = 1;
- this->app_info.apiVersion = XGL_API_VERSION;
- XGL_INSTANCE_CREATE_INFO inst_info = {};
- inst_info.sType = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
+ this->app_info.apiVersion = VK_API_VERSION;
+ VK_INSTANCE_CREATE_INFO inst_info = {};
+ inst_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
inst_info.pNext = NULL;
inst_info.pAppInfo = &app_info;
inst_info.pAllocCb = NULL;
inst_info.extensionCount = 0;
inst_info.ppEnabledExtensionNames = NULL;
- err = xglCreateInstance(&inst_info, &inst);
- ASSERT_XGL_SUCCESS(err);
- err = xglEnumerateGpus(inst, XGL_MAX_PHYSICAL_GPUS, &this->gpu_count,
+ err = vkCreateInstance(&inst_info, &inst);
+ ASSERT_VK_SUCCESS(err);
+ err = vkEnumerateGpus(inst, VK_MAX_PHYSICAL_GPUS, &this->gpu_count,
objs);
- ASSERT_XGL_SUCCESS(err);
+ ASSERT_VK_SUCCESS(err);
ASSERT_GE(this->gpu_count, 1) << "No GPU available";
m_device_id = 0;
- this->m_device = new xgl_testing::Device(objs[m_device_id]);
+ this->m_device = new vk_testing::Device(objs[m_device_id]);
this->m_device->init();
props = m_device->gpu().properties();
queue_props = this->m_device->gpu().queue_properties();
for (i = 0; i < queue_props.size(); i++) {
- if (queue_props[i].queueFlags & XGL_QUEUE_GRAPHICS_BIT) {
+ if (queue_props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) {
graphics_queue_node_index = i;
break;
}
}
virtual void TearDown() {
- xglDestroyInstance(inst);
+ vkDestroyInstance(inst);
}
};
-TEST(Initialization, xglEnumerateGpus) {
- XGL_APPLICATION_INFO app_info = {};
- XGL_INSTANCE inst;
- XGL_PHYSICAL_GPU objs[XGL_MAX_PHYSICAL_GPUS];
+TEST(Initialization, vkEnumerateGpus) {
+ VK_APPLICATION_INFO app_info = {};
+ VK_INSTANCE inst;
+ VK_PHYSICAL_GPU objs[VK_MAX_PHYSICAL_GPUS];
uint32_t gpu_count;
- XGL_RESULT err;
- xgl_testing::PhysicalGpu *gpu;
+ VK_RESULT err;
+ vk_testing::PhysicalGpu *gpu;
char *layers[16];
size_t layer_count;
char layer_buf[16][256];
- XGL_INSTANCE_CREATE_INFO inst_info = {};
- inst_info.sType = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
+ VK_INSTANCE_CREATE_INFO inst_info = {};
+ inst_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
inst_info.pNext = NULL;
inst_info.pAppInfo = &app_info;
inst_info.pAllocCb = NULL;
inst_info.extensionCount = 0;
inst_info.ppEnabledExtensionNames = NULL;
- app_info.sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO;
+ app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
app_info.pNext = NULL;
app_info.pAppName = "base";
app_info.appVersion = 1;
app_info.pEngineName = "unittest";
app_info.engineVersion = 1;
- app_info.apiVersion = XGL_API_VERSION;
+ app_info.apiVersion = VK_API_VERSION;
- err = xglCreateInstance(&inst_info, &inst);
- ASSERT_XGL_SUCCESS(err);
- err = xglEnumerateGpus(inst, XGL_MAX_PHYSICAL_GPUS, &gpu_count, objs);
- ASSERT_XGL_SUCCESS(err);
+ err = vkCreateInstance(&inst_info, &inst);
+ ASSERT_VK_SUCCESS(err);
+ err = vkEnumerateGpus(inst, VK_MAX_PHYSICAL_GPUS, &gpu_count, objs);
+ ASSERT_VK_SUCCESS(err);
ASSERT_GE(gpu_count, 1) << "No GPU available";
for (int i = 0; i < 16; i++)
layers[i] = &layer_buf[i][0];
- err = xglEnumerateLayers(objs[0], 16, 256, &layer_count, (char * const *) layers, NULL);
- ASSERT_XGL_SUCCESS(err);
+ err = vkEnumerateLayers(objs[0], 16, 256, &layer_count, (char * const *) layers, NULL);
+ ASSERT_VK_SUCCESS(err);
for (int i = 0; i < layer_count; i++) {
printf("Enumerated layers: %s ", layers[i]);
}
printf("\n");
// TODO: Iterate over all GPUs
- gpu = new xgl_testing::PhysicalGpu(objs[0]);
+ gpu = new vk_testing::PhysicalGpu(objs[0]);
delete gpu;
// TODO: Verify destroy functions
- err = xglDestroyInstance(inst);
- ASSERT_XGL_SUCCESS(err);
+ err = vkDestroyInstance(inst);
+ ASSERT_VK_SUCCESS(err);
}
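The `char layer_buf[16][256]` / `char *layers[16]` marshalling above — building the `char * const *` the API wants from a fixed 2-D buffer — is easy to get wrong. A minimal standalone sketch, with a hypothetical `fill_names` helper standing in for `vkEnumerateLayers` (the helper and its fake layer names are not part of the source):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Hypothetical stand-in for vkEnumerateLayers: writes up to max_names
// NUL-terminated strings of at most max_len bytes each into the caller's
// buffers and reports how many were written.
static void fill_names(size_t max_names, size_t max_len,
                       size_t *count, char *const *names) {
    const char *fake[] = {"APIDump", "DrawState"};
    size_t n = sizeof(fake) / sizeof(fake[0]);
    *count = (n < max_names) ? n : max_names;
    for (size_t i = 0; i < *count; i++) {
        strncpy(names[i], fake[i], max_len - 1);
        names[i][max_len - 1] = '\0';
    }
}

static size_t enumerate_layers_demo(char out[16][256], size_t *count) {
    char *layers[16];
    for (int i = 0; i < 16; i++)
        layers[i] = &out[i][0];          // each pointer targets one row
    fill_names(16, 256, count, layers);  // callee sees a char * const *
    return *count;
}
```

The same two-array shape appears verbatim in the test: the row buffers own the storage, the pointer array is just a view the API can walk.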
TEST_F(XglTest, AllocMemory) {
- XGL_RESULT err;
- XGL_MEMORY_ALLOC_INFO alloc_info = {};
- XGL_GPU_MEMORY gpu_mem;
+ VK_RESULT err;
+ VK_MEMORY_ALLOC_INFO alloc_info = {};
+ VK_GPU_MEMORY gpu_mem;
uint8_t *pData;
- alloc_info.sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
+ alloc_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
alloc_info.allocationSize = 1024 * 1024; // 1MB
- alloc_info.memProps = XGL_MEMORY_PROPERTY_SHAREABLE_BIT |
- XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT;
- alloc_info.memType = XGL_MEMORY_TYPE_OTHER;
+ alloc_info.memProps = VK_MEMORY_PROPERTY_SHAREABLE_BIT |
+ VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT;
+ alloc_info.memType = VK_MEMORY_TYPE_OTHER;
// TODO: Try variety of memory priorities
- alloc_info.memPriority = XGL_MEMORY_PRIORITY_NORMAL;
+ alloc_info.memPriority = VK_MEMORY_PRIORITY_NORMAL;
- err = xglAllocMemory(device(), &alloc_info, &gpu_mem);
- ASSERT_XGL_SUCCESS(err);
+ err = vkAllocMemory(device(), &alloc_info, &gpu_mem);
+ ASSERT_VK_SUCCESS(err);
- err = xglMapMemory(gpu_mem, 0, (void **) &pData);
- ASSERT_XGL_SUCCESS(err);
+ err = vkMapMemory(gpu_mem, 0, (void **) &pData);
+ ASSERT_VK_SUCCESS(err);
memset(pData, 0x55, alloc_info.allocationSize);
EXPECT_EQ(0x55, pData[0]) << "Memory read not same as write";
- err = xglUnmapMemory(gpu_mem);
- ASSERT_XGL_SUCCESS(err);
+ err = vkUnmapMemory(gpu_mem);
+ ASSERT_VK_SUCCESS(err);
- err = xglFreeMemory(gpu_mem);
- ASSERT_XGL_SUCCESS(err);
+ err = vkFreeMemory(gpu_mem);
+ ASSERT_VK_SUCCESS(err);
}
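The round trip above (allocate, map, fill, verify) has a direct host-side analogue; a sketch with plain heap memory replacing the driver allocation, so only the pattern survives, not the API:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Mirrors the test: allocate a buffer ("vkAllocMemory"), take a CPU
// pointer to it ("vkMapMemory"), fill with a pattern, and confirm the
// write is visible on read-back.
static bool fill_and_verify(size_t size, uint8_t pattern) {
    std::vector<uint8_t> mem(size);
    uint8_t *pData = mem.data();
    memset(pData, pattern, size);
    return pData[0] == pattern && pData[size - 1] == pattern;
}
```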
TEST_F(XglTest, Event) {
- XGL_EVENT_CREATE_INFO event_info;
- XGL_EVENT event;
- XGL_MEMORY_REQUIREMENTS mem_req;
+ VK_EVENT_CREATE_INFO event_info;
+ VK_EVENT event;
+ VK_MEMORY_REQUIREMENTS mem_req;
size_t data_size = sizeof(mem_req);
- XGL_RESULT err;
+ VK_RESULT err;
- // typedef struct _XGL_EVENT_CREATE_INFO
+ // typedef struct _VK_EVENT_CREATE_INFO
// {
- // XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_EVENT_CREATE_INFO
+ // VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_EVENT_CREATE_INFO
// const void* pNext; // Pointer to next structure
- // XGL_FLAGS flags; // Reserved
- // } XGL_EVENT_CREATE_INFO;
+ // VK_FLAGS flags; // Reserved
+ // } VK_EVENT_CREATE_INFO;
memset(&event_info, 0, sizeof(event_info));
- event_info.sType = XGL_STRUCTURE_TYPE_EVENT_CREATE_INFO;
+ event_info.sType = VK_STRUCTURE_TYPE_EVENT_CREATE_INFO;
- err = xglCreateEvent(device(), &event_info, &event);
- ASSERT_XGL_SUCCESS(err);
+ err = vkCreateEvent(device(), &event_info, &event);
+ ASSERT_VK_SUCCESS(err);
- err = xglGetObjectInfo(event, XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(event, VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&data_size, &mem_req);
- ASSERT_XGL_SUCCESS(err);
+ ASSERT_VK_SUCCESS(err);
- // XGL_RESULT XGLAPI xglAllocMemory(
- // XGL_DEVICE device,
- // const XGL_MEMORY_ALLOC_INFO* pAllocInfo,
- // XGL_GPU_MEMORY* pMem);
- XGL_MEMORY_ALLOC_INFO mem_info;
- XGL_GPU_MEMORY event_mem;
+ // VK_RESULT VKAPI vkAllocMemory(
+ // VK_DEVICE device,
+ // const VK_MEMORY_ALLOC_INFO* pAllocInfo,
+ // VK_GPU_MEMORY* pMem);
+ VK_MEMORY_ALLOC_INFO mem_info;
+ VK_GPU_MEMORY event_mem;
- ASSERT_NE(0, mem_req.size) << "xglGetObjectInfo (Event): Failed - expect events to require memory";
+ ASSERT_NE(0, mem_req.size) << "vkGetObjectInfo (Event): Failed - expect events to require memory";
memset(&mem_info, 0, sizeof(mem_info));
- mem_info.sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
+ mem_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
mem_info.allocationSize = mem_req.size;
- mem_info.memProps = XGL_MEMORY_PROPERTY_SHAREABLE_BIT;
- mem_info.memPriority = XGL_MEMORY_PRIORITY_NORMAL;
- mem_info.memType = XGL_MEMORY_TYPE_OTHER;
- err = xglAllocMemory(device(), &mem_info, &event_mem);
- ASSERT_XGL_SUCCESS(err);
+ mem_info.memProps = VK_MEMORY_PROPERTY_SHAREABLE_BIT;
+ mem_info.memPriority = VK_MEMORY_PRIORITY_NORMAL;
+ mem_info.memType = VK_MEMORY_TYPE_OTHER;
+ err = vkAllocMemory(device(), &mem_info, &event_mem);
+ ASSERT_VK_SUCCESS(err);
- err = xglBindObjectMemory(event, 0, event_mem, 0);
- ASSERT_XGL_SUCCESS(err);
+ err = vkBindObjectMemory(event, 0, event_mem, 0);
+ ASSERT_VK_SUCCESS(err);
- err = xglResetEvent(event);
- ASSERT_XGL_SUCCESS(err);
+ err = vkResetEvent(event);
+ ASSERT_VK_SUCCESS(err);
- err = xglGetEventStatus(event);
- ASSERT_EQ(XGL_EVENT_RESET, err);
+ err = vkGetEventStatus(event);
+ ASSERT_EQ(VK_EVENT_RESET, err);
- err = xglSetEvent(event);
- ASSERT_XGL_SUCCESS(err);
+ err = vkSetEvent(event);
+ ASSERT_VK_SUCCESS(err);
- err = xglGetEventStatus(event);
- ASSERT_EQ(XGL_EVENT_SET, err);
+ err = vkGetEventStatus(event);
+ ASSERT_EQ(VK_EVENT_SET, err);
// TODO: Test actual synchronization with command buffer event.
// All done with event memory, clean up
- err = xglBindObjectMemory(event, 0, XGL_NULL_HANDLE, 0);
- ASSERT_XGL_SUCCESS(err);
+ err = vkBindObjectMemory(event, 0, VK_NULL_HANDLE, 0);
+ ASSERT_VK_SUCCESS(err);
- err = xglDestroyObject(event);
- ASSERT_XGL_SUCCESS(err);
+ err = vkDestroyObject(event);
+ ASSERT_VK_SUCCESS(err);
}
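The reset → status → set → status sequence exercised above can be modeled as a two-state machine; in the API the status call returns the state as its result code, which is exactly what the test's `ASSERT_EQ` checks. A mock (names are illustrative, not from the source):

```cpp
#include <cassert>

// Two states mirroring the VK_EVENT_RESET / VK_EVENT_SET results
// the test compares against.
enum EventStatus { EVENT_RESET, EVENT_SET };

class MockEvent {
    EventStatus state_;
public:
    MockEvent() : state_(EVENT_RESET) {}
    void reset() { state_ = EVENT_RESET; }
    void set() { state_ = EVENT_SET; }
    EventStatus status() const { return state_; }
};
```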
TEST_F(XglTest, Fence) {
- XGL_RESULT err;
- XGL_FENCE_CREATE_INFO fence_info;
- XGL_FENCE fence;
+ VK_RESULT err;
+ VK_FENCE_CREATE_INFO fence_info;
+ VK_FENCE fence;
memset(&fence_info, 0, sizeof(fence_info));
- // typedef struct _XGL_FENCE_CREATE_INFO
+ // typedef struct _VK_FENCE_CREATE_INFO
// {
- // XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO
+ // VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_FENCE_CREATE_INFO
// const void* pNext; // Pointer to next structure
- // XGL_FLAGS flags; // Reserved
- fence_info.sType = XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO;
+ // VK_FLAGS flags; // Reserved
+ fence_info.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
- err = xglCreateFence(device(), &fence_info, &fence);
- ASSERT_XGL_SUCCESS(err);
+ err = vkCreateFence(device(), &fence_info, &fence);
+ ASSERT_VK_SUCCESS(err);
- err = xglGetFenceStatus(fence);
+ err = vkGetFenceStatus(fence);
// We've not submitted this fence on a command buffer so should get
- // XGL_ERROR_UNAVAILABLE
- EXPECT_EQ(XGL_ERROR_UNAVAILABLE, err);
+ // VK_ERROR_UNAVAILABLE
+ EXPECT_EQ(VK_ERROR_UNAVAILABLE, err);
// Test vkWaitForFences
- // XGL_RESULT XGLAPI xglWaitForFences(
- // XGL_DEVICE device,
+ // VK_RESULT VKAPI vkWaitForFences(
+ // VK_DEVICE device,
// uint32_t fenceCount,
- // const XGL_FENCE* pFences,
+ // const VK_FENCE* pFences,
// bool32_t waitAll,
// uint64_t timeout);
- err = xglWaitForFences(device(), 1, &fence, XGL_TRUE, 0);
- EXPECT_EQ(XGL_ERROR_UNAVAILABLE, err);
+ err = vkWaitForFences(device(), 1, &fence, VK_TRUE, 0);
+ EXPECT_EQ(VK_ERROR_UNAVAILABLE, err);
// TODO: Attached to command buffer and test GetFenceStatus
// TODO: Add some commands and submit the command buffer
- err = xglDestroyObject(fence);
- ASSERT_XGL_SUCCESS(err);
+ err = vkDestroyObject(fence);
+ ASSERT_VK_SUCCESS(err);
}
#define MAX_QUERY_SLOTS 10
TEST_F(XglTest, Query) {
- XGL_QUERY_POOL_CREATE_INFO query_info;
- XGL_QUERY_POOL query_pool;
+ VK_QUERY_POOL_CREATE_INFO query_info;
+ VK_QUERY_POOL query_pool;
size_t data_size;
- XGL_MEMORY_REQUIREMENTS mem_req;
+ VK_MEMORY_REQUIREMENTS mem_req;
size_t query_result_size;
uint32_t *query_result_data;
- XGL_RESULT err;
+ VK_RESULT err;
- // typedef enum _XGL_QUERY_TYPE
+ // typedef enum _VK_QUERY_TYPE
// {
- // XGL_QUERY_OCCLUSION = 0x00000000,
- // XGL_QUERY_PIPELINE_STATISTICS = 0x00000001,
+ // VK_QUERY_OCCLUSION = 0x00000000,
+ // VK_QUERY_PIPELINE_STATISTICS = 0x00000001,
- // XGL_QUERY_TYPE_BEGIN_RANGE = XGL_QUERY_OCCLUSION,
- // XGL_QUERY_TYPE_END_RANGE = XGL_QUERY_PIPELINE_STATISTICS,
- // XGL_NUM_QUERY_TYPE = (XGL_QUERY_TYPE_END_RANGE - XGL_QUERY_TYPE_BEGIN_RANGE + 1),
- // XGL_MAX_ENUM(_XGL_QUERY_TYPE)
- // } XGL_QUERY_TYPE;
+ // VK_QUERY_TYPE_BEGIN_RANGE = VK_QUERY_OCCLUSION,
+ // VK_QUERY_TYPE_END_RANGE = VK_QUERY_PIPELINE_STATISTICS,
+ // VK_NUM_QUERY_TYPE = (VK_QUERY_TYPE_END_RANGE - VK_QUERY_TYPE_BEGIN_RANGE + 1),
+ // VK_MAX_ENUM(_VK_QUERY_TYPE)
+ // } VK_QUERY_TYPE;
- // typedef struct _XGL_QUERY_POOL_CREATE_INFO
+ // typedef struct _VK_QUERY_POOL_CREATE_INFO
// {
- // XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO
+ // VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO
// const void* pNext; // Pointer to next structure
- // XGL_QUERY_TYPE queryType;
+ // VK_QUERY_TYPE queryType;
// uint32_t slots;
- // } XGL_QUERY_POOL_CREATE_INFO;
+ // } VK_QUERY_POOL_CREATE_INFO;
memset(&query_info, 0, sizeof(query_info));
- query_info.sType = XGL_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO;
- query_info.queryType = XGL_QUERY_OCCLUSION;
+ query_info.sType = VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO;
+ query_info.queryType = VK_QUERY_OCCLUSION;
query_info.slots = MAX_QUERY_SLOTS;
- // XGL_RESULT XGLAPI xglCreateQueryPool(
- // XGL_DEVICE device,
- // const XGL_QUERY_POOL_CREATE_INFO* pCreateInfo,
- // XGL_QUERY_POOL* pQueryPool);
+ // VK_RESULT VKAPI vkCreateQueryPool(
+ // VK_DEVICE device,
+ // const VK_QUERY_POOL_CREATE_INFO* pCreateInfo,
+ // VK_QUERY_POOL* pQueryPool);
- err = xglCreateQueryPool(device(), &query_info, &query_pool);
- ASSERT_XGL_SUCCESS(err);
+ err = vkCreateQueryPool(device(), &query_info, &query_pool);
+ ASSERT_VK_SUCCESS(err);
data_size = sizeof(mem_req);
- err = xglGetObjectInfo(query_pool, XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(query_pool, VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&data_size, &mem_req);
- ASSERT_XGL_SUCCESS(err);
+ ASSERT_VK_SUCCESS(err);
ASSERT_NE(0, data_size) << "Invalid data_size";
- // XGL_RESULT XGLAPI xglAllocMemory(
- // XGL_DEVICE device,
- // const XGL_MEMORY_ALLOC_INFO* pAllocInfo,
- // XGL_GPU_MEMORY* pMem);
- XGL_MEMORY_ALLOC_INFO mem_info;
- XGL_GPU_MEMORY query_mem;
+ // VK_RESULT VKAPI vkAllocMemory(
+ // VK_DEVICE device,
+ // const VK_MEMORY_ALLOC_INFO* pAllocInfo,
+ // VK_GPU_MEMORY* pMem);
+ VK_MEMORY_ALLOC_INFO mem_info;
+ VK_GPU_MEMORY query_mem;
memset(&mem_info, 0, sizeof(mem_info));
- mem_info.sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
+ mem_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
// TODO: Is a simple multiple all that's needed here?
mem_info.allocationSize = mem_req.size * MAX_QUERY_SLOTS;
- mem_info.memProps = XGL_MEMORY_PROPERTY_SHAREABLE_BIT;
- mem_info.memType = XGL_MEMORY_TYPE_OTHER;
- mem_info.memPriority = XGL_MEMORY_PRIORITY_NORMAL;
+ mem_info.memProps = VK_MEMORY_PROPERTY_SHAREABLE_BIT;
+ mem_info.memType = VK_MEMORY_TYPE_OTHER;
+ mem_info.memPriority = VK_MEMORY_PRIORITY_NORMAL;
// TODO: Should this be pinned? Or maybe a separate test with pinned.
- err = xglAllocMemory(device(), &mem_info, &query_mem);
- ASSERT_XGL_SUCCESS(err);
+ err = vkAllocMemory(device(), &mem_info, &query_mem);
+ ASSERT_VK_SUCCESS(err);
- err = xglBindObjectMemory(query_pool, 0, query_mem, 0);
- ASSERT_XGL_SUCCESS(err);
+ err = vkBindObjectMemory(query_pool, 0, query_mem, 0);
+ ASSERT_VK_SUCCESS(err);
// TODO: Test actual synchronization with command buffer event.
// TODO: Create command buffer
- // TODO: xglCmdResetQueryPool
- // TODO: xglCmdBeginQuery
+ // TODO: vkCmdResetQueryPool
+ // TODO: vkCmdBeginQuery
// TODO: commands
- // TOOD: xglCmdEndQuery
+ // TODO: vkCmdEndQuery
- err = xglGetQueryPoolResults(query_pool, 0, MAX_QUERY_SLOTS,
- &query_result_size, XGL_NULL_HANDLE);
- ASSERT_XGL_SUCCESS(err);
+ err = vkGetQueryPoolResults(query_pool, 0, MAX_QUERY_SLOTS,
+ &query_result_size, VK_NULL_HANDLE);
+ ASSERT_VK_SUCCESS(err);
if (query_result_size > 0) {
query_result_data = new uint32_t [query_result_size];
- err = xglGetQueryPoolResults(query_pool, 0, MAX_QUERY_SLOTS,
+ err = vkGetQueryPoolResults(query_pool, 0, MAX_QUERY_SLOTS,
&query_result_size, query_result_data);
- ASSERT_XGL_SUCCESS(err);
+ ASSERT_VK_SUCCESS(err);
// TODO: Test Query result data.
}
// All done with QueryPool memory, clean up
- err = xglBindObjectMemory(query_pool, 0, XGL_NULL_HANDLE, 0);
- ASSERT_XGL_SUCCESS(err);
+ err = vkBindObjectMemory(query_pool, 0, VK_NULL_HANDLE, 0);
+ ASSERT_VK_SUCCESS(err);
- err = xglDestroyObject(query_pool);
- ASSERT_XGL_SUCCESS(err);
+ err = vkDestroyObject(query_pool);
+ ASSERT_VK_SUCCESS(err);
}
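The two-call idiom above — first call with a null buffer to learn `query_result_size`, then allocate and call again to fetch the data — is worth isolating. A sketch with a hypothetical `get_results` standing in for `vkGetQueryPoolResults` (one `uint32_t` per slot is an assumption for the demo):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical stand-in: with data == nullptr it only reports the
// required size; otherwise it fills the caller's buffer.
static void get_results(size_t slots, size_t *size, uint32_t *data) {
    *size = slots * sizeof(uint32_t);
    if (data != nullptr)
        for (size_t i = 0; i < slots; i++)
            data[i] = 0; // no samples recorded in this sketch
}

static std::vector<uint32_t> fetch_query_results(size_t slots) {
    size_t size = 0;
    get_results(slots, &size, nullptr);            // 1st call: size only
    std::vector<uint32_t> results(size / sizeof(uint32_t));
    if (size > 0)
        get_results(slots, &size, results.data()); // 2nd call: fill buffer
    return results;
}
```

The `if (query_result_size > 0)` guard in the test serves the same purpose as the guard here: never touch the data pointer when the pool reports nothing to read.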
-void getQueue(xgl_testing::Device *device, uint32_t queue_node_index, const char *qname)
+void getQueue(vk_testing::Device *device, uint32_t queue_node_index, const char *qname)
{
int que_idx;
- XGL_RESULT err;
- XGL_QUEUE queue;
+ VK_RESULT err;
+ VK_QUEUE queue;
- const XGL_PHYSICAL_GPU_QUEUE_PROPERTIES props = device->gpu().queue_properties()[queue_node_index];
+ const VK_PHYSICAL_GPU_QUEUE_PROPERTIES props = device->gpu().queue_properties()[queue_node_index];
for (que_idx = 0; que_idx < props.queueCount; que_idx++) {
- err = xglGetDeviceQueue(device->obj(), queue_node_index, que_idx, &queue);
- ASSERT_EQ(XGL_SUCCESS, err) << "xglGetDeviceQueue: " << qname << " queue #" << que_idx << ": Failed with error: " << xgl_result_string(err);
+ err = vkGetDeviceQueue(device->obj(), queue_node_index, que_idx, &queue);
+ ASSERT_EQ(VK_SUCCESS, err) << "vkGetDeviceQueue: " << qname << " queue #" << que_idx << ": Failed with error: " << vk_result_string(err);
}
}
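`print_queue_info` below selects queues one capability bit at a time. With the flag values copied from the `VK_QUEUE_FLAGS` enum quoted in that function, the dispatch can be sketched standalone (the `queue_caps` helper is illustrative, not from the source):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Flag values copied from the VK_QUEUE_FLAGS enum quoted in the test.
enum QueueFlagBits : uint32_t {
    QUEUE_GRAPHICS_BIT = 0x00000001,
    QUEUE_COMPUTE_BIT  = 0x00000002,
    QUEUE_DMA_BIT      = 0x00000004,
};

// Returns a comma-separated list of the capabilities set in queueFlags,
// testing each bit independently just as the test's if-chain does.
static std::string queue_caps(uint32_t queueFlags) {
    std::string caps;
    auto add = [&caps](const char *name) {
        if (!caps.empty())
            caps += ",";
        caps += name;
    };
    if (queueFlags & QUEUE_GRAPHICS_BIT) add("Graphics");
    if (queueFlags & QUEUE_COMPUTE_BIT) add("Compute");
    if (queueFlags & QUEUE_DMA_BIT) add("DMA");
    return caps;
}
```

Because the flags are independent bits, a single queue node can report several capabilities at once, which is why the test uses a chain of `if`s rather than a `switch`.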
-void print_queue_info(xgl_testing::Device *device, uint32_t queue_node_index)
+void print_queue_info(vk_testing::Device *device, uint32_t queue_node_index)
{
uint32_t que_idx;
- XGL_PHYSICAL_GPU_QUEUE_PROPERTIES queue_props;
- XGL_PHYSICAL_GPU_PROPERTIES props;
+ VK_PHYSICAL_GPU_QUEUE_PROPERTIES queue_props;
+ VK_PHYSICAL_GPU_PROPERTIES props;
props = device->gpu().properties();
queue_props = device->gpu().queue_properties()[queue_node_index];
ASSERT_NE(0, queue_props.queueCount) << "No Queues available at Node Index #" << queue_node_index << " GPU: " << props.gpuName;
-// XGL_RESULT XGLAPI xglGetDeviceQueue(
-// XGL_DEVICE device,
+// VK_RESULT VKAPI vkGetDeviceQueue(
+// VK_DEVICE device,
// uint32_t queueNodeIndex,
// uint32_t queueIndex,
-// XGL_QUEUE* pQueue);
+// VK_QUEUE* pQueue);
/*
* queue handles are retrieved from the device by calling
- * xglGetDeviceQueue() with a queue node index and a requested logical
+ * vkGetDeviceQueue() with a queue node index and a requested logical
* queue ID. The queue node index is the index into the array of
- * XGL_PHYSICAL_GPU_QUEUE_PROPERTIES returned by GetGpuInfo. Each
- * queue node index has different attributes specified by the XGL_QUEUE_FLAGS property.
+ * VK_PHYSICAL_GPU_QUEUE_PROPERTIES returned by GetGpuInfo. Each
+ * queue node index has different attributes specified by the VK_QUEUE_FLAGS property.
* The logical queue ID is a sequential number starting from zero
* and referencing up to the number of queues supported of that node index
* at device creation.
*/
for (que_idx = 0; que_idx < queue_props.queueCount; que_idx++) {
-// typedef enum _XGL_QUEUE_FLAGS
+// typedef enum _VK_QUEUE_FLAGS
// {
-// XGL_QUEUE_GRAPHICS_BIT = 0x00000001, // Queue supports graphics operations
-// XGL_QUEUE_COMPUTE_BIT = 0x00000002, // Queue supports compute operations
-// XGL_QUEUE_DMA_BIT = 0x00000004, // Queue supports DMA operations
-// XGL_QUEUE_EXTENDED_BIT = 0x80000000 // Extended queue
-// } XGL_QUEUE_FLAGS;
+// VK_QUEUE_GRAPHICS_BIT = 0x00000001, // Queue supports graphics operations
+// VK_QUEUE_COMPUTE_BIT = 0x00000002, // Queue supports compute operations
+// VK_QUEUE_DMA_BIT = 0x00000004, // Queue supports DMA operations
+// VK_QUEUE_EXTENDED_BIT = 0x80000000 // Extended queue
+// } VK_QUEUE_FLAGS;
- if (queue_props.queueFlags & XGL_QUEUE_GRAPHICS_BIT) {
+ if (queue_props.queueFlags & VK_QUEUE_GRAPHICS_BIT) {
getQueue(device, queue_node_index, "Graphics");
}
- if (queue_props.queueFlags & XGL_QUEUE_COMPUTE_BIT) {
+ if (queue_props.queueFlags & VK_QUEUE_COMPUTE_BIT) {
getQueue(device, queue_node_index, "Compute");
}
- if (queue_props.queueFlags & XGL_QUEUE_DMA_BIT) {
+ if (queue_props.queueFlags & VK_QUEUE_DMA_BIT) {
getQueue(device, queue_node_index, "DMA");
}
void XglTest::CreateImageTest()
{
- XGL_RESULT err;
- XGL_IMAGE image;
+ VK_RESULT err;
+ VK_IMAGE image;
uint32_t w, h, mipCount;
size_t size;
- XGL_FORMAT fmt;
- XGL_FORMAT_PROPERTIES image_fmt;
+ VK_FORMAT fmt;
+ VK_FORMAT_PROPERTIES image_fmt;
size_t data_size;
w = 512;
mipCount++;
}
- fmt = XGL_FMT_R8G8B8A8_UINT;
+ fmt = VK_FMT_R8G8B8A8_UINT;
// TODO: Pick known good format rather than just expect common format
/*
* XXX: What should happen if given NULL HANDLE for the pData argument?
- * We're not requesting XGL_INFO_TYPE_MEMORY_REQUIREMENTS so there is
+ * We're not requesting VK_INFO_TYPE_MEMORY_REQUIREMENTS so there is
* an expectation that pData is a valid pointer.
* However, why include a returned size value? That implies that the
* amount of data may vary and that doesn't work well for using a
*/
size = sizeof(image_fmt);
- err = xglGetFormatInfo(device(), fmt,
- XGL_INFO_TYPE_FORMAT_PROPERTIES,
+ err = vkGetFormatInfo(device(), fmt,
+ VK_INFO_TYPE_FORMAT_PROPERTIES,
&size, &image_fmt);
- ASSERT_XGL_SUCCESS(err);
+ ASSERT_VK_SUCCESS(err);
-// typedef struct _XGL_IMAGE_CREATE_INFO
+// typedef struct _VK_IMAGE_CREATE_INFO
// {
-// XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO
+// VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO
// const void* pNext; // Pointer to next structure.
-// XGL_IMAGE_TYPE imageType;
-// XGL_FORMAT format;
-// XGL_EXTENT3D extent;
+// VK_IMAGE_TYPE imageType;
+// VK_FORMAT format;
+// VK_EXTENT3D extent;
// uint32_t mipLevels;
// uint32_t arraySize;
// uint32_t samples;
-// XGL_IMAGE_TILING tiling;
-// XGL_FLAGS usage; // XGL_IMAGE_USAGE_FLAGS
-// XGL_FLAGS flags; // XGL_IMAGE_CREATE_FLAGS
-// } XGL_IMAGE_CREATE_INFO;
+// VK_IMAGE_TILING tiling;
+// VK_FLAGS usage; // VK_IMAGE_USAGE_FLAGS
+// VK_FLAGS flags; // VK_IMAGE_CREATE_FLAGS
+// } VK_IMAGE_CREATE_INFO;
- XGL_IMAGE_CREATE_INFO imageCreateInfo = {};
- imageCreateInfo.sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
- imageCreateInfo.imageType = XGL_IMAGE_2D;
+ VK_IMAGE_CREATE_INFO imageCreateInfo = {};
+ imageCreateInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
+ imageCreateInfo.imageType = VK_IMAGE_2D;
imageCreateInfo.format = fmt;
imageCreateInfo.arraySize = 1;
imageCreateInfo.extent.width = w;
imageCreateInfo.extent.depth = 1;
imageCreateInfo.mipLevels = mipCount;
imageCreateInfo.samples = 1;
- imageCreateInfo.tiling = XGL_LINEAR_TILING;
+ imageCreateInfo.tiling = VK_LINEAR_TILING;
// Image usage flags
-// typedef enum _XGL_IMAGE_USAGE_FLAGS
+// typedef enum _VK_IMAGE_USAGE_FLAGS
// {
-// XGL_IMAGE_USAGE_SHADER_ACCESS_READ_BIT = 0x00000001,
-// XGL_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT = 0x00000002,
-// XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT = 0x00000004,
-// XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT = 0x00000008,
-// } XGL_IMAGE_USAGE_FLAGS;
- imageCreateInfo.usage = XGL_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT | XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
-
-// XGL_RESULT XGLAPI xglCreateImage(
-// XGL_DEVICE device,
-// const XGL_IMAGE_CREATE_INFO* pCreateInfo,
-// XGL_IMAGE* pImage);
- err = xglCreateImage(device(), &imageCreateInfo, &image);
- ASSERT_XGL_SUCCESS(err);
+// VK_IMAGE_USAGE_SHADER_ACCESS_READ_BIT = 0x00000001,
+// VK_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT = 0x00000002,
+// VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT = 0x00000004,
+// VK_IMAGE_USAGE_DEPTH_STENCIL_BIT = 0x00000008,
+// } VK_IMAGE_USAGE_FLAGS;
+ imageCreateInfo.usage = VK_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
+
+// VK_RESULT VKAPI vkCreateImage(
+// VK_DEVICE device,
+// const VK_IMAGE_CREATE_INFO* pCreateInfo,
+// VK_IMAGE* pImage);
+ err = vkCreateImage(device(), &imageCreateInfo, &image);
+ ASSERT_VK_SUCCESS(err);
// Verify image resources
-// XGL_RESULT XGLAPI xglGetImageSubresourceInfo(
-// XGL_IMAGE image,
-// const XGL_IMAGE_SUBRESOURCE* pSubresource,
-// XGL_SUBRESOURCE_INFO_TYPE infoType,
+// VK_RESULT VKAPI vkGetImageSubresourceInfo(
+// VK_IMAGE image,
+// const VK_IMAGE_SUBRESOURCE* pSubresource,
+// VK_SUBRESOURCE_INFO_TYPE infoType,
// size_t* pDataSize,
// void* pData);
-// typedef struct _XGL_SUBRESOURCE_LAYOUT
+// typedef struct _VK_SUBRESOURCE_LAYOUT
// {
-// XGL_GPU_SIZE offset; // Specified in bytes
-// XGL_GPU_SIZE size; // Specified in bytes
-// XGL_GPU_SIZE rowPitch; // Specified in bytes
-// XGL_GPU_SIZE depthPitch; // Specified in bytes
-// } XGL_SUBRESOURCE_LAYOUT;
+// VK_GPU_SIZE offset; // Specified in bytes
+// VK_GPU_SIZE size; // Specified in bytes
+// VK_GPU_SIZE rowPitch; // Specified in bytes
+// VK_GPU_SIZE depthPitch; // Specified in bytes
+// } VK_SUBRESOURCE_LAYOUT;
-// typedef struct _XGL_IMAGE_SUBRESOURCE
+// typedef struct _VK_IMAGE_SUBRESOURCE
// {
-// XGL_IMAGE_ASPECT aspect;
+// VK_IMAGE_ASPECT aspect;
// uint32_t mipLevel;
// uint32_t arraySlice;
-// } XGL_IMAGE_SUBRESOURCE;
-// typedef enum _XGL_SUBRESOURCE_INFO_TYPE
+// } VK_IMAGE_SUBRESOURCE;
+// typedef enum _VK_SUBRESOURCE_INFO_TYPE
// {
-// // Info type for xglGetImageSubresourceInfo()
-// XGL_INFO_TYPE_SUBRESOURCE_LAYOUT = 0x00000000,
+// // Info type for vkGetImageSubresourceInfo()
+// VK_INFO_TYPE_SUBRESOURCE_LAYOUT = 0x00000000,
-// XGL_MAX_ENUM(_XGL_SUBRESOURCE_INFO_TYPE)
-// } XGL_SUBRESOURCE_INFO_TYPE;
- XGL_IMAGE_SUBRESOURCE subresource = {};
- subresource.aspect = XGL_IMAGE_ASPECT_COLOR;
+// VK_MAX_ENUM(_VK_SUBRESOURCE_INFO_TYPE)
+// } VK_SUBRESOURCE_INFO_TYPE;
+ VK_IMAGE_SUBRESOURCE subresource = {};
+ subresource.aspect = VK_IMAGE_ASPECT_COLOR;
subresource.arraySlice = 0;
_w = w;
_h = h;
while( ( _w > 0 ) || ( _h > 0 ) )
{
- XGL_SUBRESOURCE_LAYOUT layout = {};
+ VK_SUBRESOURCE_LAYOUT layout = {};
data_size = sizeof(layout);
- err = xglGetImageSubresourceInfo(image, &subresource, XGL_INFO_TYPE_SUBRESOURCE_LAYOUT,
+ err = vkGetImageSubresourceInfo(image, &subresource, VK_INFO_TYPE_SUBRESOURCE_LAYOUT,
&data_size, &layout);
- ASSERT_XGL_SUCCESS(err);
- ASSERT_EQ(sizeof(XGL_SUBRESOURCE_LAYOUT), data_size) << "Invalid structure (XGL_SUBRESOURCE_LAYOUT) size";
+ ASSERT_VK_SUCCESS(err);
+ ASSERT_EQ(sizeof(VK_SUBRESOURCE_LAYOUT), data_size) << "Invalid structure (VK_SUBRESOURCE_LAYOUT) size";
// TODO: 4 should be replaced with pixel size for given format
EXPECT_LE(_w * 4, layout.rowPitch) << "Pitch does not match expected image pitch";
subresource.mipLevel++;
}
- XGL_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
+ VK_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
.pNext = NULL,
};
- XGL_MEMORY_REQUIREMENTS mem_req;
- XGL_IMAGE_MEMORY_REQUIREMENTS img_reqs;
- size_t img_reqs_size = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
+ VK_MEMORY_REQUIREMENTS mem_req;
+ VK_IMAGE_MEMORY_REQUIREMENTS img_reqs;
+ size_t img_reqs_size = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
data_size = sizeof(mem_req);
- err = xglGetObjectInfo(image, XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(image, VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&data_size, &mem_req);
- ASSERT_XGL_SUCCESS(err);
+ ASSERT_VK_SUCCESS(err);
ASSERT_EQ(data_size, sizeof(mem_req));
- ASSERT_NE(0, mem_req.size) << "xglGetObjectInfo (Event): Failed - expect images to require memory";
- err = xglGetObjectInfo(image, XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
+ ASSERT_NE(0, mem_req.size) << "vkGetObjectInfo (Image): Failed - expect images to require memory";
+ err = vkGetObjectInfo(image, VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
&img_reqs_size, &img_reqs);
- ASSERT_XGL_SUCCESS(err);
- ASSERT_EQ(img_reqs_size, sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS));
+ ASSERT_VK_SUCCESS(err);
+ ASSERT_EQ(img_reqs_size, sizeof(VK_IMAGE_MEMORY_REQUIREMENTS));
img_alloc.usage = img_reqs.usage;
img_alloc.formatClass = img_reqs.formatClass;
img_alloc.samples = img_reqs.samples;
- // XGL_RESULT XGLAPI xglAllocMemory(
- // XGL_DEVICE device,
- // const XGL_MEMORY_ALLOC_INFO* pAllocInfo,
- // XGL_GPU_MEMORY* pMem);
- XGL_MEMORY_ALLOC_INFO mem_info = {};
- XGL_GPU_MEMORY image_mem;
-
- mem_info.sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
+ // VK_RESULT VKAPI vkAllocMemory(
+ // VK_DEVICE device,
+ // const VK_MEMORY_ALLOC_INFO* pAllocInfo,
+ // VK_GPU_MEMORY* pMem);
+ VK_MEMORY_ALLOC_INFO mem_info = {};
+ VK_GPU_MEMORY image_mem;
+
+ mem_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
mem_info.pNext = &img_alloc;
mem_info.allocationSize = mem_req.size;
- mem_info.memProps = XGL_MEMORY_PROPERTY_SHAREABLE_BIT;
- mem_info.memType = XGL_MEMORY_TYPE_IMAGE;
- mem_info.memPriority = XGL_MEMORY_PRIORITY_NORMAL;
- err = xglAllocMemory(device(), &mem_info, &image_mem);
- ASSERT_XGL_SUCCESS(err);
+ mem_info.memProps = VK_MEMORY_PROPERTY_SHAREABLE_BIT;
+ mem_info.memType = VK_MEMORY_TYPE_IMAGE;
+ mem_info.memPriority = VK_MEMORY_PRIORITY_NORMAL;
+ err = vkAllocMemory(device(), &mem_info, &image_mem);
+ ASSERT_VK_SUCCESS(err);
- err = xglBindObjectMemory(image, 0, image_mem, 0);
- ASSERT_XGL_SUCCESS(err);
+ err = vkBindObjectMemory(image, 0, image_mem, 0);
+ ASSERT_VK_SUCCESS(err);
-// typedef struct _XGL_IMAGE_VIEW_CREATE_INFO
+// typedef struct _VK_IMAGE_VIEW_CREATE_INFO
// {
-// XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO
+// VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO
// const void* pNext; // Pointer to next structure
-// XGL_IMAGE image;
-// XGL_IMAGE_VIEW_TYPE viewType;
-// XGL_FORMAT format;
-// XGL_CHANNEL_MAPPING channels;
-// XGL_IMAGE_SUBRESOURCE_RANGE subresourceRange;
+// VK_IMAGE image;
+// VK_IMAGE_VIEW_TYPE viewType;
+// VK_FORMAT format;
+// VK_CHANNEL_MAPPING channels;
+// VK_IMAGE_SUBRESOURCE_RANGE subresourceRange;
// float minLod;
-// } XGL_IMAGE_VIEW_CREATE_INFO;
- XGL_IMAGE_VIEW_CREATE_INFO viewInfo = {};
- XGL_IMAGE_VIEW view;
- viewInfo.sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
+// } VK_IMAGE_VIEW_CREATE_INFO;
+ VK_IMAGE_VIEW_CREATE_INFO viewInfo = {};
+ VK_IMAGE_VIEW view;
+ viewInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
viewInfo.image = image;
- viewInfo.viewType = XGL_IMAGE_VIEW_2D;
+ viewInfo.viewType = VK_IMAGE_VIEW_2D;
viewInfo.format = fmt;
- viewInfo.channels.r = XGL_CHANNEL_SWIZZLE_R;
- viewInfo.channels.g = XGL_CHANNEL_SWIZZLE_G;
- viewInfo.channels.b = XGL_CHANNEL_SWIZZLE_B;
- viewInfo.channels.a = XGL_CHANNEL_SWIZZLE_A;
+ viewInfo.channels.r = VK_CHANNEL_SWIZZLE_R;
+ viewInfo.channels.g = VK_CHANNEL_SWIZZLE_G;
+ viewInfo.channels.b = VK_CHANNEL_SWIZZLE_B;
+ viewInfo.channels.a = VK_CHANNEL_SWIZZLE_A;
viewInfo.subresourceRange.baseArraySlice = 0;
viewInfo.subresourceRange.arraySize = 1;
viewInfo.subresourceRange.baseMipLevel = 0;
viewInfo.subresourceRange.mipLevels = 1;
- viewInfo.subresourceRange.aspect = XGL_IMAGE_ASPECT_COLOR;
+ viewInfo.subresourceRange.aspect = VK_IMAGE_ASPECT_COLOR;
-// XGL_RESULT XGLAPI xglCreateImageView(
-// XGL_DEVICE device,
-// const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
-// XGL_IMAGE_VIEW* pView);
+// VK_RESULT VKAPI vkCreateImageView(
+// VK_DEVICE device,
+// const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
+// VK_IMAGE_VIEW* pView);
- err = xglCreateImageView(device(), &viewInfo, &view);
- ASSERT_XGL_SUCCESS(err) << "xglCreateImageView failed";
+ err = vkCreateImageView(device(), &viewInfo, &view);
+ ASSERT_VK_SUCCESS(err) << "vkCreateImageView failed";
// TODO: Test image memory.
// All done with image memory, clean up
- ASSERT_XGL_SUCCESS(xglBindObjectMemory(image, 0, XGL_NULL_HANDLE, 0));
+ ASSERT_VK_SUCCESS(vkBindObjectMemory(image, 0, VK_NULL_HANDLE, 0));
- ASSERT_XGL_SUCCESS(xglFreeMemory(image_mem));
+ ASSERT_VK_SUCCESS(vkFreeMemory(image_mem));
- ASSERT_XGL_SUCCESS(xglDestroyObject(image));
+ ASSERT_VK_SUCCESS(vkDestroyObject(image));
}
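`CreateImageTest` uses the same halving loop twice: once to pick `mipCount` before creating the image, and again (`while( ( _w > 0 ) || ( _h > 0 ) )`) to walk every subresource level. The computation stands alone:

```cpp
#include <cassert>
#include <cstdint>

// Same halving loop as the test: count levels until both dimensions
// have shifted down to zero. For a 512x256 image this yields 10 levels
// (512x256 down to 1x1).
static uint32_t mip_count(uint32_t w, uint32_t h) {
    uint32_t count = 0;
    while (w > 0 || h > 0) {
        w >>= 1;
        h >>= 1;
        count++;
    }
    return count;
}
```

Note the `||`: the chain continues until the larger dimension is exhausted, so a non-square image still gets a full chain down to 1x1.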
TEST_F(XglTest, CreateImage) {
void XglTest::CreateCommandBufferTest()
{
- XGL_RESULT err;
- XGL_CMD_BUFFER_CREATE_INFO info = {};
- XGL_CMD_BUFFER cmdBuffer;
+ VK_RESULT err;
+ VK_CMD_BUFFER_CREATE_INFO info = {};
+ VK_CMD_BUFFER cmdBuffer;
-// typedef struct _XGL_CMD_BUFFER_CREATE_INFO
+// typedef struct _VK_CMD_BUFFER_CREATE_INFO
// {
-// XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO
+// VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO
// const void* pNext;
-// XGL_QUEUE_TYPE queueType;
-// XGL_FLAGS flags;
-// } XGL_CMD_BUFFER_CREATE_INFO;
+// VK_QUEUE_TYPE queueType;
+// VK_FLAGS flags;
+// } VK_CMD_BUFFER_CREATE_INFO;
- info.sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO;
+ info.sType = VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO;
info.queueNodeIndex = graphics_queue_node_index;
- err = xglCreateCommandBuffer(device(), &info, &cmdBuffer);
- ASSERT_XGL_SUCCESS(err) << "xglCreateCommandBuffer failed";
+ err = vkCreateCommandBuffer(device(), &info, &cmdBuffer);
+ ASSERT_VK_SUCCESS(err) << "vkCreateCommandBuffer failed";
- ASSERT_XGL_SUCCESS(xglDestroyObject(cmdBuffer));
+ ASSERT_VK_SUCCESS(vkDestroyObject(cmdBuffer));
}
TEST_F(XglTest, TestCommandBuffer) {
CreateCommandBufferTest();
}
-void XglTest::CreateShader(XGL_SHADER *pshader)
+void XglTest::CreateShader(VK_SHADER *pshader)
{
void *code;
uint32_t codeSize;
struct icd_spv_header *pSPV;
- XGL_RESULT err;
+ VK_RESULT err;
codeSize = sizeof(struct icd_spv_header) + 100;
code = malloc(codeSize);
pSPV = (struct icd_spv_header *) code;
pSPV->magic = ICD_SPV_MAGIC;
pSPV->version = ICD_SPV_VERSION;
-// typedef struct _XGL_SHADER_CREATE_INFO
+// typedef struct _VK_SHADER_CREATE_INFO
// {
-// XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_SHADER_CREATE_INFO
+// VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_SHADER_CREATE_INFO
// const void* pNext; // Pointer to next structure
// size_t codeSize; // Specified in bytes
// const void* pCode;
-// XGL_FLAGS flags; // Reserved
-// } XGL_SHADER_CREATE_INFO;
+// VK_FLAGS flags; // Reserved
+// } VK_SHADER_CREATE_INFO;
- XGL_SHADER_CREATE_INFO createInfo;
- XGL_SHADER shader;
+ VK_SHADER_CREATE_INFO createInfo;
+ VK_SHADER shader;
- createInfo.sType = XGL_STRUCTURE_TYPE_SHADER_CREATE_INFO;
+ createInfo.sType = VK_STRUCTURE_TYPE_SHADER_CREATE_INFO;
createInfo.pNext = NULL;
createInfo.pCode = code;
createInfo.codeSize = codeSize;
createInfo.flags = 0;
- err = xglCreateShader(device(), &createInfo, &shader);
- ASSERT_XGL_SUCCESS(err);
+ err = vkCreateShader(device(), &createInfo, &shader);
+ ASSERT_VK_SUCCESS(err);
*pshader = shader;
}
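`CreateShader` above prefixes the code blob with an `icd_spv_header` carrying a magic and version. A standalone sketch of building such a blob — the two-field layout shown here and the use of the standard SPIR-V magic `0x07230203` are assumptions; the real header lives in the repo's icd headers and may differ:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Assumed minimal layout of the pre-release icd_spv_header.
struct spv_header {
    uint32_t magic;
    uint32_t version;
};

// Build a shader blob: header first, then body_bytes of zeroed payload,
// mirroring codeSize = sizeof(struct icd_spv_header) + 100 in the test.
static std::vector<uint8_t> make_shader_blob(size_t body_bytes,
                                             uint32_t magic,
                                             uint32_t version) {
    std::vector<uint8_t> code(sizeof(spv_header) + body_bytes, 0);
    spv_header hdr;
    hdr.magic = magic;
    hdr.version = version;
    memcpy(code.data(), &hdr, sizeof(hdr)); // header precedes the body
    return code;
}
```

A loader can then validate a blob by reading back the first word and comparing it against the expected magic before touching the payload.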
int main(int argc, char **argv) {
::testing::InitGoogleTest(&argc, argv);
- xgl_testing::set_error_callback(test_error_callback);
+ vk_testing::set_error_callback(test_error_callback);
return RUN_ALL_TESTS();
}
#!/usr/bin/env python3
#
-# XGL
+# VK
#
# Copyright (C) 2014 LunarG, Inc.
#
'{OBJTRACK}ERROR : OBJ ERROR : GPU_MEMORY',
'{OBJTRACK}ERROR : OBJ ERROR : IMAGE'],
'XglTest.Fence' : ['{OBJTRACK}ERROR : OBJECT VALIDATION WARNING: FENCE'],
- #'XglRenderTest.XGLTriangle_OutputLocation' : ['{OBJTRACK}ERROR : xglQueueSubmit Memory reference count'],
+ #'XglRenderTest.VKTriangle_OutputLocation' : ['{OBJTRACK}ERROR : vkQueueSubmit Memory reference count'],
'XglRenderTest.TriangleWithVertexFetch' : ['{OBJTRACK}ERROR : OBJ ERROR : CMD_BUFFER'],
'XglRenderTest.TriangleMRT' : ['{OBJTRACK}ERROR : OBJ ERROR : CMD_BUFFER'],
'XglRenderTest.QuadWithIndexedVertexFetch' : ['{OBJTRACK}ERROR : OBJ ERROR : CMD_BUFFER', '{OBJTRACK}ERROR : OBJ ERROR : CMD_BUFFER'],
-#include <xgl.h>
-#include <xglDbg.h>
+#include <vulkan.h>
+#include <vkDbg.h>
#include "gtest-1.7.0/include/gtest/gtest.h"
-#include "xglrenderframework.h"
+#include "vkrenderframework.h"
-void XGLAPI myDbgFunc(
- XGL_DBG_MSG_TYPE msgType,
- XGL_VALIDATION_LEVEL validationLevel,
- XGL_BASE_OBJECT srcObject,
+void VKAPI myDbgFunc(
+ VK_DBG_MSG_TYPE msgType,
+ VK_VALIDATION_LEVEL validationLevel,
+ VK_BASE_OBJECT srcObject,
size_t location,
int32_t msgCode,
const char* pMsg,
class ErrorMonitor {
public:
- ErrorMonitor(XGL_INSTANCE inst)
+ ErrorMonitor(VK_INSTANCE inst)
{
- xglDbgRegisterMsgCallback(inst, myDbgFunc, this);
- m_msgType = XGL_DBG_MSG_UNKNOWN;
+ vkDbgRegisterMsgCallback(inst, myDbgFunc, this);
+ m_msgType = VK_DBG_MSG_UNKNOWN;
}
void ClearState()
{
- m_msgType = XGL_DBG_MSG_UNKNOWN;
+ m_msgType = VK_DBG_MSG_UNKNOWN;
m_msgString.clear();
}
- XGL_DBG_MSG_TYPE GetState(std::string *msgString)
+ VK_DBG_MSG_TYPE GetState(std::string *msgString)
{
*msgString = m_msgString;
return m_msgType;
}
- void SetState(XGL_DBG_MSG_TYPE msgType, const char *msgString)
+ void SetState(VK_DBG_MSG_TYPE msgType, const char *msgString)
{
m_msgType = msgType;
        m_msgString = msgString;
}
private:
- XGL_DBG_MSG_TYPE m_msgType;
+ VK_DBG_MSG_TYPE m_msgType;
std::string m_msgString;
};
-void XGLAPI myDbgFunc(
- XGL_DBG_MSG_TYPE msgType,
- XGL_VALIDATION_LEVEL validationLevel,
- XGL_BASE_OBJECT srcObject,
+void VKAPI myDbgFunc(
+ VK_DBG_MSG_TYPE msgType,
+ VK_VALIDATION_LEVEL validationLevel,
+ VK_BASE_OBJECT srcObject,
size_t location,
int32_t msgCode,
const char* pMsg,
class XglLayerTest : public XglRenderFramework
{
public:
- XGL_RESULT BeginCommandBuffer(XglCommandBufferObj &cmdBuffer);
- XGL_RESULT EndCommandBuffer(XglCommandBufferObj &cmdBuffer);
+ VK_RESULT BeginCommandBuffer(XglCommandBufferObj &cmdBuffer);
+ VK_RESULT EndCommandBuffer(XglCommandBufferObj &cmdBuffer);
protected:
XglMemoryRefManager m_memoryRefManager;
virtual void SetUp() {
- this->app_info.sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO;
+ this->app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
this->app_info.pNext = NULL;
this->app_info.pAppName = "layer_tests";
this->app_info.appVersion = 1;
this->app_info.pEngineName = "unittest";
this->app_info.engineVersion = 1;
- this->app_info.apiVersion = XGL_API_VERSION;
+ this->app_info.apiVersion = VK_API_VERSION;
InitFramework();
m_errorMonitor = new ErrorMonitor(inst);
ShutdownFramework();
}
};
-XGL_RESULT XglLayerTest::BeginCommandBuffer(XglCommandBufferObj &cmdBuffer)
+VK_RESULT XglLayerTest::BeginCommandBuffer(XglCommandBufferObj &cmdBuffer)
{
- XGL_RESULT result;
+ VK_RESULT result;
result = cmdBuffer.BeginCommandBuffer();
    /*
     * For render test all drawing happens in a single render pass
     * on a single command buffer.
     */
- if (XGL_SUCCESS == result) {
+ if (VK_SUCCESS == result) {
cmdBuffer.BeginRenderPass(renderPass(), framebuffer());
}
return result;
}
-XGL_RESULT XglLayerTest::EndCommandBuffer(XglCommandBufferObj &cmdBuffer)
+VK_RESULT XglLayerTest::EndCommandBuffer(XglCommandBufferObj &cmdBuffer)
{
- XGL_RESULT result;
+ VK_RESULT result;
cmdBuffer.EndRenderPass(renderPass());
TEST_F(XglLayerTest, UseSignaledFence)
{
- xgl_testing::Fence testFence;
- XGL_DBG_MSG_TYPE msgType;
+ vk_testing::Fence testFence;
+ VK_DBG_MSG_TYPE msgType;
std::string msgString;
- const XGL_FENCE_CREATE_INFO fenceInfo = {
- .sType = XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO,
+ const VK_FENCE_CREATE_INFO fenceInfo = {
+ .sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,
.pNext = NULL,
- .flags = XGL_FENCE_CREATE_SIGNALED_BIT,
+ .flags = VK_FENCE_CREATE_SIGNALED_BIT,
};
// Register error callback to catch errors and record parameters
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-// XGL tests
+// VK tests
//
// Copyright (C) 2014 LunarG, Inc.
//
#include <fstream>
using namespace std;
-#include <xgl.h>
+#include <vulkan.h>
#ifdef DUMP_STATE_DOT
#include "../layers/draw_state.h"
#endif
#include "../layers/object_track.h"
#endif
#ifdef DEBUG_CALLBACK
-#include <xglDbg.h>
+#include <vkDbg.h>
#endif
#include "gtest-1.7.0/include/gtest/gtest.h"
#include "glm/glm.hpp"
#include <glm/gtc/matrix_transform.hpp>
-#include "xglrenderframework.h"
+#include "vkrenderframework.h"
#ifdef DEBUG_CALLBACK
-void XGLAPI myDbgFunc(
- XGL_DBG_MSG_TYPE msgType,
- XGL_VALIDATION_LEVEL validationLevel,
- XGL_BASE_OBJECT srcObject,
+void VKAPI myDbgFunc(
+ VK_DBG_MSG_TYPE msgType,
+ VK_VALIDATION_LEVEL validationLevel,
+ VK_BASE_OBJECT srcObject,
size_t location,
int32_t msgCode,
const char* pMsg,
{
switch (msgType)
{
- case XGL_DBG_MSG_WARNING:
+ case VK_DBG_MSG_WARNING:
printf("CALLBACK WARNING : %s\n", pMsg);
break;
- case XGL_DBG_MSG_ERROR:
+ case VK_DBG_MSG_ERROR:
printf("CALLBACK ERROR : %s\n", pMsg);
break;
default:
XglConstantBufferObj *constantBuffer, XglCommandBufferObj *cmdBuffer);
void GenericDrawPreparation(XglCommandBufferObj *cmdBuffer, XglPipelineObj &pipelineobj, XglDescriptorSetObj &descriptorSet);
void InitDepthStencil();
- void XGLTriangleTest(const char *vertShaderText, const char *fragShaderText, const bool rotate);
+ void VKTriangleTest(const char *vertShaderText, const char *fragShaderText, const bool rotate);
- XGL_RESULT BeginCommandBuffer(XglCommandBufferObj &cmdBuffer);
- XGL_RESULT EndCommandBuffer(XglCommandBufferObj &cmdBuffer);
+ VK_RESULT BeginCommandBuffer(XglCommandBufferObj &cmdBuffer);
+ VK_RESULT EndCommandBuffer(XglCommandBufferObj &cmdBuffer);
protected:
- XGL_IMAGE m_texture;
- XGL_IMAGE_VIEW m_textureView;
- XGL_IMAGE_VIEW_ATTACH_INFO m_textureViewInfo;
- XGL_GPU_MEMORY m_textureMem;
+ VK_IMAGE m_texture;
+ VK_IMAGE_VIEW m_textureView;
+ VK_IMAGE_VIEW_ATTACH_INFO m_textureViewInfo;
+ VK_GPU_MEMORY m_textureMem;
- XGL_SAMPLER m_sampler;
+ VK_SAMPLER m_sampler;
virtual void SetUp() {
- this->app_info.sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO;
+ this->app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
this->app_info.pNext = NULL;
this->app_info.pAppName = "render_tests";
this->app_info.appVersion = 1;
this->app_info.pEngineName = "unittest";
this->app_info.engineVersion = 1;
- this->app_info.apiVersion = XGL_API_VERSION;
+ this->app_info.apiVersion = VK_API_VERSION;
memset(&m_textureViewInfo, 0, sizeof(m_textureViewInfo));
- m_textureViewInfo.sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
+ m_textureViewInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
InitFramework();
}
}
};
-XGL_RESULT XglRenderTest::BeginCommandBuffer(XglCommandBufferObj &cmdBuffer)
+VK_RESULT XglRenderTest::BeginCommandBuffer(XglCommandBufferObj &cmdBuffer)
{
- XGL_RESULT result;
+ VK_RESULT result;
result = cmdBuffer.BeginCommandBuffer();
    /*
     * For render test all drawing happens in a single render pass
     * on a single command buffer.
     */
- if (XGL_SUCCESS == result) {
+ if (VK_SUCCESS == result) {
cmdBuffer.BeginRenderPass(renderPass(), framebuffer());
}
return result;
}
-XGL_RESULT XglRenderTest::EndCommandBuffer(XglCommandBufferObj &cmdBuffer)
+VK_RESULT XglRenderTest::EndCommandBuffer(XglCommandBufferObj &cmdBuffer)
{
- XGL_RESULT result;
+ VK_RESULT result;
cmdBuffer.EndRenderPass(renderPass());
}
cmdBuffer->PrepareAttachments();
- cmdBuffer->BindStateObject(XGL_STATE_BIND_RASTER, m_stateRaster);
- cmdBuffer->BindStateObject(XGL_STATE_BIND_VIEWPORT, m_stateViewport);
- cmdBuffer->BindStateObject(XGL_STATE_BIND_COLOR_BLEND, m_colorBlend);
- cmdBuffer->BindStateObject(XGL_STATE_BIND_DEPTH_STENCIL, m_stateDepthStencil);
- descriptorSet.CreateXGLDescriptorSet(cmdBuffer);
- pipelineobj.CreateXGLPipeline(descriptorSet);
+ cmdBuffer->BindStateObject(VK_STATE_BIND_RASTER, m_stateRaster);
+ cmdBuffer->BindStateObject(VK_STATE_BIND_VIEWPORT, m_stateViewport);
+ cmdBuffer->BindStateObject(VK_STATE_BIND_COLOR_BLEND, m_colorBlend);
+ cmdBuffer->BindStateObject(VK_STATE_BIND_DEPTH_STENCIL, m_stateDepthStencil);
+ descriptorSet.CreateVKDescriptorSet(cmdBuffer);
+ pipelineobj.CreateVKPipeline(descriptorSet);
cmdBuffer->BindPipeline(pipelineobj);
cmdBuffer->BindDescriptorSet(descriptorSet);
}
int i;
glm::mat4 MVP;
int matrixSize = sizeof(MVP);
- XGL_RESULT err;
+ VK_RESULT err;
for (i = 0; i < 8; i++) {
void *pData = constantBuffer->map();
// submit the command buffer to the universal queue
cmdBuffer->QueueCommandBuffer();
- err = xglQueueWaitIdle( m_device->m_queue );
- ASSERT_XGL_SUCCESS( err );
+ err = vkQueueWaitIdle( m_device->m_queue );
+ ASSERT_VK_SUCCESS( err );
// Wait for work to finish before cleaning up.
- xglDeviceWaitIdle(m_device->device());
+ vkDeviceWaitIdle(m_device->device());
assert(m_renderTargets.size() == 1);
RecordImage(m_renderTargets[0]);
fflush(stdout);
}
-struct xgltriangle_vs_uniform {
+struct vktriangle_vs_uniform {
// Must start with MVP
float mvp[4][4];
float position[3][4];
float color[3][4];
};
-void XglRenderTest::XGLTriangleTest(const char *vertShaderText, const char *fragShaderText, const bool rotate)
+void XglRenderTest::VKTriangleTest(const char *vertShaderText, const char *fragShaderText, const bool rotate)
{
#ifdef DEBUG_CALLBACK
- xglDbgRegisterMsgCallback(inst, myDbgFunc, NULL);
+ vkDbgRegisterMsgCallback(inst, myDbgFunc, NULL);
#endif
// Create identity matrix
int i;
- struct xgltriangle_vs_uniform data;
+ struct vktriangle_vs_uniform data;
glm::mat4 Projection = glm::mat4(1.0f);
glm::mat4 View = glm::mat4(1.0f);
glm::mat4 Model = glm::mat4(1.0f);
glm::mat4 MVP = Projection * View * Model;
const int matrixSize = sizeof(MVP);
- const int bufSize = sizeof(xgltriangle_vs_uniform) / sizeof(float);
+ const int bufSize = sizeof(vktriangle_vs_uniform) / sizeof(float);
memcpy(&data.mvp, &MVP[0][0], matrixSize);
static const Vertex tri_data[] =
XglConstantBufferObj constantBuffer(m_device, bufSize*2, sizeof(float), (const void*) &data);
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglPipelineObj pipelineobj(m_device);
pipelineobj.AddShader(&vs);
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, constantBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, constantBuffer);
ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
RotateTriangleVSUniform(Projection, View, Model, &constantBuffer, &cmdBuffer);
#ifdef PRINT_OBJECTS
- //uint64_t objTrackGetObjectCount(XGL_OBJECT_TYPE type)
- OBJ_TRACK_GET_OBJECT_COUNT pObjTrackGetObjectCount = (OBJ_TRACK_GET_OBJECT_COUNT)xglGetProcAddr(gpu(), (char*)"objTrackGetObjectCount");
- uint64_t numObjects = pObjTrackGetObjectCount(XGL_OBJECT_TYPE_ANY);
- //OBJ_TRACK_GET_OBJECTS pGetObjsFunc = xglGetProcAddr(gpu(), (char*)"objTrackGetObjects");
+ //uint64_t objTrackGetObjectCount(VK_OBJECT_TYPE type)
+ OBJ_TRACK_GET_OBJECT_COUNT pObjTrackGetObjectCount = (OBJ_TRACK_GET_OBJECT_COUNT)vkGetProcAddr(gpu(), (char*)"objTrackGetObjectCount");
+ uint64_t numObjects = pObjTrackGetObjectCount(VK_OBJECT_TYPE_ANY);
+ //OBJ_TRACK_GET_OBJECTS pGetObjsFunc = vkGetProcAddr(gpu(), (char*)"objTrackGetObjects");
printf("DEBUG : Number of Objects : %lu\n", numObjects);
- OBJ_TRACK_GET_OBJECTS pObjTrackGetObjs = (OBJ_TRACK_GET_OBJECTS)xglGetProcAddr(gpu(), (char*)"objTrackGetObjects");
+ OBJ_TRACK_GET_OBJECTS pObjTrackGetObjs = (OBJ_TRACK_GET_OBJECTS)vkGetProcAddr(gpu(), (char*)"objTrackGetObjects");
OBJTRACK_NODE* pObjNodeArray = (OBJTRACK_NODE*)malloc(sizeof(OBJTRACK_NODE)*numObjects);
- pObjTrackGetObjs(XGL_OBJECT_TYPE_ANY, numObjects, pObjNodeArray);
+ pObjTrackGetObjs(VK_OBJECT_TYPE_ANY, numObjects, pObjNodeArray);
for (i=0; i < numObjects; i++) {
- printf("Object %i of type %s has objID (%p) and %lu uses\n", i, string_XGL_OBJECT_TYPE(pObjNodeArray[i].objType), pObjNodeArray[i].pObj, pObjNodeArray[i].numUses);
+ printf("Object %i of type %s has objID (%p) and %lu uses\n", i, string_VK_OBJECT_TYPE(pObjNodeArray[i].objType), pObjNodeArray[i].pObj, pObjNodeArray[i].numUses);
}
free(pObjNodeArray);
#endif
}
-TEST_F(XglRenderTest, XGLTriangle_FragColor)
+TEST_F(XglRenderTest, VKTriangle_FragColor)
{
static const char *vertShaderText =
"#version 140\n"
" gl_FragColor = inColor;\n"
"}\n";
- TEST_DESCRIPTION("XGL-style shaders where fragment shader outputs to GLSL built-in gl_FragColor");
- XGLTriangleTest(vertShaderText, fragShaderText, true);
+ TEST_DESCRIPTION("VK-style shaders where fragment shader outputs to GLSL built-in gl_FragColor");
+ VKTriangleTest(vertShaderText, fragShaderText, true);
}
-TEST_F(XglRenderTest, XGLTriangle_OutputLocation)
+TEST_F(XglRenderTest, VKTriangle_OutputLocation)
{
static const char *vertShaderText =
"#version 140\n"
" outColor = inColor;\n"
"}\n";
- TEST_DESCRIPTION("XGL-style shaders where fragment shader outputs to output location 0, which should be the same as gl_FragColor");
+ TEST_DESCRIPTION("VK-style shaders where fragment shader outputs to output location 0, which should be the same as gl_FragColor");
- XGLTriangleTest(vertShaderText, fragShaderText, true);
+ VKTriangleTest(vertShaderText, fragShaderText, true);
}
#ifndef _WIN32 // Implicit (for now at least) in WIN32 is that we are using the Nvidia driver and it won't consume SPIRV yet
-TEST_F(XglRenderTest, SPV_XGLTriangle)
+TEST_F(XglRenderTest, SPV_VKTriangle)
{
bool saved_use_spv = XglTestFramework::m_use_spv;
" gl_FragColor = inColor;\n"
"}\n";
- TEST_DESCRIPTION("XGL-style shaders, but force test framework to compile shader to SPV and pass SPV to driver.");
+ TEST_DESCRIPTION("VK-style shaders, but force test framework to compile shader to SPV and pass SPV to driver.");
XglTestFramework::m_use_spv = true;
- XGLTriangleTest(vertShaderText, fragShaderText, true);
+ VKTriangleTest(vertShaderText, fragShaderText, true);
XglTestFramework::m_use_spv = saved_use_spv;
}
TEST_DESCRIPTION("Basic shader that renders a fixed Green triangle coded as part of the vertex shader.");
- XGLTriangleTest(vertShaderText, fragShaderText, false);
+ VKTriangleTest(vertShaderText, fragShaderText, false);
}
#ifndef _WIN32 // Implicit (for now at least) in WIN32 is that we are using the Nvidia driver and it won't consume SPIRV yet
TEST_F(XglRenderTest, SPV_GreenTriangle)
TEST_DESCRIPTION("Same shader as GreenTriangle, but compiles shader to SPV and gives SPV to driver.");
XglTestFramework::m_use_spv = true;
- XGLTriangleTest(vertShaderText, fragShaderText, false);
+ VKTriangleTest(vertShaderText, fragShaderText, false);
XglTestFramework::m_use_spv = saved_use_spv;
}
#endif
" gl_FragColor = vec4(1.0, 1.0, 0.0, 1.0);\n"
"}\n";
- XGLTriangleTest(vertShaderText, fragShaderText, false);
+ VKTriangleTest(vertShaderText, fragShaderText, false);
}
TEST_F(XglRenderTest, QuadWithVertexFetch)
XglConstantBufferObj meshBuffer(m_device,sizeof(g_vbData)/sizeof(g_vbData[0]),sizeof(g_vbData[0]), g_vbData);
meshBuffer.BufferMemoryBarrier();
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglPipelineObj pipelineobj(m_device);
pipelineobj.AddShader(&vs);
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
#define MESH_BIND_ID 0
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
MESH_BIND_ID, // binding ID
sizeof(g_vbData[0]), // strideInBytes; Distance between vertices in bytes (0 = no advancement)
- XGL_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
+ VK_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
};
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[2];
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[2];
vi_attribs[0].binding = MESH_BIND_ID; // Binding ID
vi_attribs[0].location = 0; // location, position
- vi_attribs[0].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[0].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[0].offsetInBytes = 0; // Offset of first element in bytes from base of vertex
vi_attribs[1].binding = MESH_BIND_ID; // Binding ID
vi_attribs[1].location = 1; // location, color
- vi_attribs[1].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[1].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[1].offsetInBytes = 1*sizeof(float)*4; // Offset of first element in bytes from base of vertex
pipelineobj.AddVertexInputAttribs(vi_attribs,2);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
XglConstantBufferObj meshBuffer(m_device, sizeof(vb_data) / sizeof(vb_data[0]), sizeof(vb_data[0]), vb_data);
meshBuffer.BufferMemoryBarrier();
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglPipelineObj pipelineobj(m_device);
pipelineobj.AddShader(&vs);
pipelineobj.AddShader(&ps);
#define MESH_BUF_ID 0
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
MESH_BUF_ID, // Binding ID
sizeof(vb_data[0]), // strideInBytes; Distance between vertices in bytes (0 = no advancement)
- XGL_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
+ VK_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
};
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attrib;
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attrib;
vi_attrib.binding = MESH_BUF_ID; // index into vertexBindingDescriptions
vi_attrib.location = 0;
- vi_attrib.format = XGL_FMT_R32G32_SFLOAT; // format of source data
+ vi_attrib.format = VK_FMT_R32G32_SFLOAT; // format of source data
vi_attrib.offsetInBytes = 0; // Offset of first element in bytes from base of vertex
pipelineobj.AddVertexInputAttribs(&vi_attrib, 1);
pipelineobj.AddVertexDataBuffer(&meshBuffer, MESH_BUF_ID);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
ASSERT_NO_FATAL_FAILURE(InitRenderTarget(2));
- XGL_PIPELINE_CB_ATTACHMENT_STATE att = {};
- att.blendEnable = XGL_FALSE;
+ VK_PIPELINE_CB_ATTACHMENT_STATE att = {};
+ att.blendEnable = VK_FALSE;
att.format = m_render_target_fmt;
att.channelWriteMask = 0xf;
pipelineobj.AddColorAttachment(1, &att);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
cmdBuffer.AddRenderTarget(m_renderTargets[1]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
cmdBuffer.BindVertexBuffer(&meshBuffer, 0, 0);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
meshBuffer.BufferMemoryBarrier();
XglIndexBufferObj indexBuffer(m_device);
- indexBuffer.CreateAndInitBuffer(sizeof(g_idxData)/sizeof(g_idxData[0]), XGL_INDEX_16, g_idxData);
+ indexBuffer.CreateAndInitBuffer(sizeof(g_idxData)/sizeof(g_idxData[0]), VK_INDEX_16, g_idxData);
indexBuffer.BufferMemoryBarrier();
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglPipelineObj pipelineobj(m_device);
pipelineobj.AddShader(&vs);
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, indexBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, indexBuffer);
#define MESH_BIND_ID 0
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
MESH_BIND_ID, // binding ID
sizeof(g_vbData[0]), // strideInBytes; Distance between vertices in bytes (0 = no advancement)
- XGL_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
+ VK_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
};
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[2];
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[2];
vi_attribs[0].binding = MESH_BIND_ID; // binding ID from BINDING_DESCRIPTION array to use for this attribute
vi_attribs[0].location = 0; // layout location of vertex attribute
- vi_attribs[0].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[0].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[0].offsetInBytes = 0; // Offset of first element in bytes from base of vertex
vi_attribs[1].binding = MESH_BIND_ID; // binding ID from BINDING_DESCRIPTION array to use for this attribute
vi_attribs[1].location = 1; // layout location of vertex attribute
- vi_attribs[1].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[1].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[1].offsetInBytes = 16; // Offset of first element in bytes from base of vertex
pipelineobj.AddVertexInputAttribs(vi_attribs,2);
ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
XglConstantBufferObj meshBuffer(m_device,sizeof(g_vbData)/sizeof(g_vbData[0]),sizeof(g_vbData[0]), g_vbData);
meshBuffer.BufferMemoryBarrier();
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglPipelineObj pipelineobj(m_device);
pipelineobj.AddShader(&vs);
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
#define MESH_BIND_ID 0
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
MESH_BIND_ID, // binding ID
sizeof(g_vbData[0]), // strideInBytes; Distance between vertices in bytes (0 = no advancement)
- XGL_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
+ VK_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
};
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[1];
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[1];
vi_attribs[0].binding = MESH_BIND_ID; // binding ID
vi_attribs[0].location = 0;
- vi_attribs[0].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[0].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[0].offsetInBytes = 0; // Offset of first element in bytes from base of vertex
pipelineobj.AddVertexInputAttribs(vi_attribs,1);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
cmdBuffer.BindVertexBuffer(&meshBuffer, 0, 0);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
XglConstantBufferObj meshBuffer(m_device,sizeof(g_vbData)/sizeof(g_vbData[0]),sizeof(g_vbData[0]), g_vbData);
meshBuffer.BufferMemoryBarrier();
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglPipelineObj pipelineobj(m_device);
pipelineobj.AddShader(&vs);
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
#define MESH_BIND_ID 0
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
MESH_BIND_ID, // binding ID
sizeof(g_vbData[0]), // strideInBytes; Distance between vertices in bytes (0 = no advancement)
- XGL_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
+ VK_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
};
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[1];
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[1];
vi_attribs[0].binding = MESH_BIND_ID; // binding ID
vi_attribs[0].location = 0;
- vi_attribs[0].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[0].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[0].offsetInBytes = 0; // Offset of first element in bytes from base of vertex
pipelineobj.AddVertexInputAttribs(vi_attribs,1);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
cmdBuffer.BindVertexBuffer(&meshBuffer, 0, 0);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
// render two triangles
XglConstantBufferObj meshBuffer(m_device,sizeof(g_vbData)/sizeof(g_vbData[0]),sizeof(g_vbData[0]), g_vbData);
meshBuffer.BufferMemoryBarrier();
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglPipelineObj pipelineobj(m_device);
pipelineobj.AddShader(&vs);
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
#define MESH_BIND_ID 0
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
MESH_BIND_ID, // binding ID
sizeof(g_vbData[0]), // strideInBytes; Distance between vertices in bytes (0 = no advancement)
- XGL_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
+ VK_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
};
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[1];
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[1];
vi_attribs[0].binding = MESH_BIND_ID; // binding ID
vi_attribs[0].location = 0;
- vi_attribs[0].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[0].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[0].offsetInBytes = 0; // Offset of first element in bytes from base of vertex
pipelineobj.AddVertexInputAttribs(vi_attribs,1);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
cmdBuffer.BindVertexBuffer(&meshBuffer, 0, 0);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
XglConstantBufferObj meshBuffer(m_device,sizeof(g_vbData)/sizeof(g_vbData[0]),sizeof(g_vbData[0]), g_vbData);
meshBuffer.BufferMemoryBarrier();
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglPipelineObj pipelineobj(m_device);
pipelineobj.AddShader(&vs);
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
#define MESH_BIND_ID 0
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
MESH_BIND_ID, // binding ID
sizeof(g_vbData[0]), // strideInBytes; Distance between vertices in bytes (0 = no advancement)
- XGL_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
+ VK_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
};
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[1];
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[1];
vi_attribs[0].binding = MESH_BIND_ID; // binding ID
vi_attribs[0].location = 0;
- vi_attribs[0].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[0].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[0].offsetInBytes = 0; // Offset of first element in bytes from base of vertex
pipelineobj.AddVertexInputAttribs(vi_attribs,1);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
cmdBuffer.BindVertexBuffer(&meshBuffer, 0, 0);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
const int matrixSize = sizeof(MVP) / sizeof(MVP[0]);
XglConstantBufferObj MVPBuffer(m_device, matrixSize, sizeof(MVP[0]), (const void*) &MVP[0][0]);
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglPipelineObj pipelineobj(m_device);
pipelineobj.AddShader(&vs);
// Create descriptor set and attach the constant buffer to it
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, MVPBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, MVPBuffer);
ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
// cmdBuffer.BindVertexBuffer(&meshBuffer, 0, 0);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
ASSERT_NO_FATAL_FAILURE(InitState());
ASSERT_NO_FATAL_FAILURE(InitViewport());
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglPipelineObj pipelineobj(m_device);
pipelineobj.AddShader(&vs);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
XglConstantBufferObj meshBuffer(m_device,sizeof(g_vbData)/sizeof(g_vbData[0]),sizeof(g_vbData[0]), g_vbData);
meshBuffer.BufferMemoryBarrier();
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglPipelineObj pipelineobj(m_device);
pipelineobj.AddShader(&vs);
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
#define MESH_BUF_ID 0
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
MESH_BUF_ID, // Binding ID
sizeof(g_vbData[0]), // strideInBytes; Distance between vertices in bytes (0 = no advancement)
- XGL_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
+ VK_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
};
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[2];
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[2];
vi_attribs[0].binding = MESH_BUF_ID; // binding ID
vi_attribs[0].location = 0;
- vi_attribs[0].format = XGL_FMT_R32G32_SFLOAT; // format of source data
+ vi_attribs[0].format = VK_FMT_R32G32_SFLOAT; // format of source data
vi_attribs[0].offsetInBytes = 0; // Offset of first element in bytes from base of vertex
vi_attribs[1].binding = MESH_BUF_ID; // binding ID
vi_attribs[1].location = 1;
- vi_attribs[1].format = XGL_FMT_R32G32_SFLOAT; // format of source data
+ vi_attribs[1].format = VK_FMT_R32G32_SFLOAT; // format of source data
vi_attribs[1].offsetInBytes = 16; // Offset of first element in bytes from base of vertex
pipelineobj.AddVertexInputAttribs(vi_attribs, 2);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
cmdBuffer.BindVertexBuffer(&meshBuffer, 0, 0);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
XglConstantBufferObj meshBuffer(m_device,sizeof(vData)/sizeof(vData[0]),sizeof(vData[0]), vData);
meshBuffer.BufferMemoryBarrier();
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglPipelineObj pipelineobj(m_device);
pipelineobj.AddShader(&vs);
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
#define MESH_BUF_ID 0
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
MESH_BUF_ID, // Binding ID
sizeof(vData[0]), // strideInBytes; Distance between vertices in bytes (0 = no advancement)
- XGL_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
+ VK_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
};
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[2];
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[2];
vi_attribs[0].binding = MESH_BUF_ID; // binding ID
vi_attribs[0].location = 4;
- vi_attribs[0].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[0].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[0].offsetInBytes = sizeof(float) * 4 * 2; // Offset of first element in bytes from base of vertex
vi_attribs[1].binding = MESH_BUF_ID; // binding ID
vi_attribs[1].location = 1;
- vi_attribs[1].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[1].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[1].offsetInBytes = sizeof(float) * 4 * 1; // Offset of first element in bytes from base of vertex
pipelineobj.AddVertexInputAttribs(vi_attribs, 2);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
cmdBuffer.BindVertexBuffer(&meshBuffer, 0, MESH_BUF_ID);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
XglConstantBufferObj meshBuffer(m_device,sizeof(g_vbData)/sizeof(g_vbData[0]),sizeof(g_vbData[0]), g_vbData);
meshBuffer.BufferMemoryBarrier();
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglPipelineObj pipelineobj(m_device);
pipelineobj.AddShader(&vs);
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, meshBuffer);
#define MESH_BUF_ID 0
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
MESH_BUF_ID, // Binding ID
sizeof(g_vbData[0]), // strideInBytes; Distance between vertices in bytes (0 = no advancement)
- XGL_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
+ VK_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
};
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[2];
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[2];
vi_attribs[0].binding = MESH_BUF_ID; // binding ID
vi_attribs[0].location = 0;
- vi_attribs[0].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[0].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[0].offsetInBytes = 0; // Offset of first element in bytes from base of vertex
vi_attribs[1].binding = MESH_BUF_ID; // binding ID
vi_attribs[1].location = 1;
- vi_attribs[1].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[1].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[1].offsetInBytes = 16; // Offset of first element in bytes from base of vertex
pipelineobj.AddVertexInputAttribs(vi_attribs, 2);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
cmdBuffer.BindVertexBuffer(&meshBuffer, 0, 0);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
const int buf_size = sizeof(MVP) / sizeof(float);
XglConstantBufferObj MVPBuffer(m_device, buf_size, sizeof(MVP[0]), (const void*) &MVP[0][0]);
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglPipelineObj pipelineobj(m_device);
pipelineobj.AddShader(&vs);
pipelineobj.AddShader(&ps);
- XGL_PIPELINE_DS_STATE_CREATE_INFO ds_state;
- ds_state.depthTestEnable = XGL_TRUE;
- ds_state.depthWriteEnable = XGL_TRUE;
- ds_state.depthFunc = XGL_COMPARE_LESS_EQUAL;
- ds_state.depthBoundsEnable = XGL_FALSE;
- ds_state.stencilTestEnable = XGL_FALSE;
- ds_state.back.stencilDepthFailOp = XGL_STENCIL_OP_KEEP;
- ds_state.back.stencilFailOp = XGL_STENCIL_OP_KEEP;
- ds_state.back.stencilPassOp = XGL_STENCIL_OP_KEEP;
- ds_state.back.stencilFunc = XGL_COMPARE_ALWAYS;
- ds_state.format = XGL_FMT_D32_SFLOAT;
+ VK_PIPELINE_DS_STATE_CREATE_INFO ds_state;
+ ds_state.depthTestEnable = VK_TRUE;
+ ds_state.depthWriteEnable = VK_TRUE;
+ ds_state.depthFunc = VK_COMPARE_LESS_EQUAL;
+ ds_state.depthBoundsEnable = VK_FALSE;
+ ds_state.stencilTestEnable = VK_FALSE;
+ ds_state.back.stencilDepthFailOp = VK_STENCIL_OP_KEEP;
+ ds_state.back.stencilFailOp = VK_STENCIL_OP_KEEP;
+ ds_state.back.stencilPassOp = VK_STENCIL_OP_KEEP;
+ ds_state.back.stencilFunc = VK_COMPARE_ALWAYS;
+ ds_state.format = VK_FMT_D32_SFLOAT;
ds_state.front = ds_state.back;
pipelineobj.SetDepthStencil(&ds_state);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, MVPBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, MVPBuffer);
#define MESH_BUF_ID 0
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
MESH_BUF_ID, // Binding ID
sizeof(g_vbData[0]), // strideInBytes; Distance between vertices in bytes (0 = no advancement)
- XGL_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
+ VK_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
};
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[2];
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[2];
vi_attribs[0].binding = MESH_BUF_ID; // binding ID
vi_attribs[0].location = 0;
- vi_attribs[0].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[0].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[0].offsetInBytes = 0; // Offset of first element in bytes from base of vertex
vi_attribs[1].binding = MESH_BUF_ID; // binding ID
vi_attribs[1].location = 1;
- vi_attribs[1].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[1].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[1].offsetInBytes = 16; // Offset of first element in bytes from base of vertex
pipelineobj.AddVertexInputAttribs(vi_attribs, 2);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
cmdBuffer.BindVertexBuffer(&meshBuffer, 0, 0);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
ASSERT_NO_FATAL_FAILURE(InitState());
ASSERT_NO_FATAL_FAILURE(InitViewport());
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglSamplerObj sampler(m_device);
XglTextureObj texture(m_device);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
ASSERT_NO_FATAL_FAILURE(InitState());
ASSERT_NO_FATAL_FAILURE(InitViewport());
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglSamplerObj sampler(m_device);
XglTextureObj texture(m_device);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
// render triangle
ASSERT_NO_FATAL_FAILURE(InitState());
ASSERT_NO_FATAL_FAILURE(InitViewport());
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglSamplerObj sampler(m_device);
XglTextureObj texture(m_device);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
// render triangle
ASSERT_NO_FATAL_FAILURE(InitState());
ASSERT_NO_FATAL_FAILURE(InitViewport());
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglSamplerObj sampler(m_device);
XglTextureObj texture(m_device);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
// render triangle
ASSERT_NO_FATAL_FAILURE(InitState());
ASSERT_NO_FATAL_FAILURE(InitViewport());
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglSamplerObj sampler1(m_device);
XglSamplerObj sampler2(m_device);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
// render triangle
ASSERT_NO_FATAL_FAILURE(InitState());
ASSERT_NO_FATAL_FAILURE(InitViewport());
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
// Let's populate our buffer with the following:
// vec4 red;
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, colorBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, colorBuffer);
ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
// render triangle
ASSERT_NO_FATAL_FAILURE(InitState());
ASSERT_NO_FATAL_FAILURE(InitViewport());
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
// We're going to create a number of uniform buffers, and then allow
// the shader to select which it wants to read from with a binding
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, redBuffer);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, greenBuffer);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, blueBuffer);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, whiteBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, redBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, greenBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, blueBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, whiteBuffer);
ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
// render triangle
ASSERT_NO_FATAL_FAILURE(InitState());
ASSERT_NO_FATAL_FAILURE(InitViewport());
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
// We're going to create a number of uniform buffers, and then allow
// the shader to select which it wants to read from with a binding
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, redBuffer);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, greenBuffer);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, blueBuffer);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, whiteBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, redBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, greenBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, blueBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, whiteBuffer);
ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
// render triangle
const int buf_size = sizeof(MVP) / sizeof(float);
XglConstantBufferObj mvpBuffer(m_device, buf_size, sizeof(MVP[0]), (const void*) &MVP[0][0]);
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglSamplerObj sampler(m_device);
XglTextureObj texture(m_device);
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, mvpBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, mvpBuffer);
descriptorSet.AppendSamplerTexture(&sampler, &texture);
#define MESH_BIND_ID 0
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vi_binding = {
MESH_BIND_ID, // binding ID
sizeof(g_vbData[0]), // strideInBytes; Distance between vertices in bytes (0 = no advancement)
- XGL_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
+ VK_VERTEX_INPUT_STEP_RATE_VERTEX // stepRate; // Rate at which binding is incremented
};
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[2];
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attribs[2];
vi_attribs[0].binding = MESH_BIND_ID; // Binding ID
vi_attribs[0].location = 0; // location
- vi_attribs[0].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[0].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[0].offsetInBytes = 0; // Offset of first element in bytes from base of vertex
vi_attribs[1].binding = MESH_BIND_ID; // Binding ID
vi_attribs[1].location = 1; // location
- vi_attribs[1].format = XGL_FMT_R32G32B32A32_SFLOAT; // format of source data
+ vi_attribs[1].format = VK_FMT_R32G32B32A32_SFLOAT; // format of source data
vi_attribs[1].offsetInBytes = 16; // Offset of first element in bytes from base of vertex
pipelineobj.AddVertexInputAttribs(vi_attribs,2);
pipelineobj.AddVertexInputBindings(&vi_binding,1);
pipelineobj.AddVertexDataBuffer(&meshBuffer, MESH_BIND_ID);
- XGL_PIPELINE_DS_STATE_CREATE_INFO ds_state;
- ds_state.depthTestEnable = XGL_TRUE;
- ds_state.depthWriteEnable = XGL_TRUE;
- ds_state.depthFunc = XGL_COMPARE_LESS_EQUAL;
- ds_state.depthBoundsEnable = XGL_FALSE;
- ds_state.stencilTestEnable = XGL_FALSE;
- ds_state.back.stencilDepthFailOp = XGL_STENCIL_OP_KEEP;
- ds_state.back.stencilFailOp = XGL_STENCIL_OP_KEEP;
- ds_state.back.stencilPassOp = XGL_STENCIL_OP_KEEP;
- ds_state.back.stencilFunc = XGL_COMPARE_ALWAYS;
- ds_state.format = XGL_FMT_D32_SFLOAT;
+ VK_PIPELINE_DS_STATE_CREATE_INFO ds_state;
+ ds_state.depthTestEnable = VK_TRUE;
+ ds_state.depthWriteEnable = VK_TRUE;
+ ds_state.depthFunc = VK_COMPARE_LESS_EQUAL;
+ ds_state.depthBoundsEnable = VK_FALSE;
+ ds_state.stencilTestEnable = VK_FALSE;
+ ds_state.back.stencilDepthFailOp = VK_STENCIL_OP_KEEP;
+ ds_state.back.stencilFailOp = VK_STENCIL_OP_KEEP;
+ ds_state.back.stencilPassOp = VK_STENCIL_OP_KEEP;
+ ds_state.back.stencilFunc = VK_COMPARE_ALWAYS;
+ ds_state.format = VK_FMT_D32_SFLOAT;
ds_state.front = ds_state.back;
pipelineobj.SetDepthStencil(&ds_state);
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
cmdBuffer.BindVertexBuffer(&meshBuffer, 0, 0);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
// render triangle
ASSERT_NO_FATAL_FAILURE(InitState());
ASSERT_NO_FATAL_FAILURE(InitViewport());
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
const float redVals[4] = { 1.0, 0.0, 0.0, 1.0 };
const float greenVals[4] = { 0.0, 1.0, 0.0, 1.0 };
descriptorSet.AppendSamplerTexture(&sampler2, &texture2);
descriptorSet.AppendSamplerTexture(&sampler4, &texture4);
descriptorSet.AppendSamplerTexture(&sampler7, &texture7);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, redBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, redBuffer);
// swap blue and green
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, blueBuffer);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, greenBuffer);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, whiteBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, blueBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, greenBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, whiteBuffer);
ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
// render triangle
ASSERT_NO_FATAL_FAILURE(InitState());
ASSERT_NO_FATAL_FAILURE(InitViewport());
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
const float redVals[4] = { 1.0, 0.0, 0.0, 1.0 };
const float greenVals[4] = { 0.0, 1.0, 0.0, 1.0 };
descriptorSet.AppendSamplerTexture(&sampler2, &texture2);
descriptorSet.AppendSamplerTexture(&sampler4, &texture4);
descriptorSet.AppendSamplerTexture(&sampler7, &texture7);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, redBuffer);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, greenBuffer);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, blueBuffer);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, whiteBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, redBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, greenBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, blueBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, whiteBuffer);
ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
// render triangle
const int constCount = sizeof(mixedVals) / sizeof(float);
- XglShaderObj vs(m_device,vertShaderText,XGL_SHADER_STAGE_VERTEX, this);
- XglShaderObj ps(m_device,fragShaderText, XGL_SHADER_STAGE_FRAGMENT, this);
+ XglShaderObj vs(m_device,vertShaderText,VK_SHADER_STAGE_VERTEX, this);
+ XglShaderObj ps(m_device,fragShaderText, VK_SHADER_STAGE_FRAGMENT, this);
XglConstantBufferObj mixedBuffer(m_device, constCount, sizeof(mixedVals[0]), (const void*) mixedVals);
pipelineobj.AddShader(&ps);
XglDescriptorSetObj descriptorSet(m_device);
- descriptorSet.AppendBuffer(XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER, mixedBuffer);
+ descriptorSet.AppendBuffer(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, mixedBuffer);
ASSERT_NO_FATAL_FAILURE(InitRenderTarget());
XglCommandBufferObj cmdBuffer(m_device);
cmdBuffer.AddRenderTarget(m_renderTargets[0]);
- ASSERT_XGL_SUCCESS(BeginCommandBuffer(cmdBuffer));
+ ASSERT_VK_SUCCESS(BeginCommandBuffer(cmdBuffer));
GenericDrawPreparation(&cmdBuffer, pipelineobj, descriptorSet);
#ifdef DUMP_STATE_DOT
- DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)xglGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
+ DRAW_STATE_DUMP_DOT_FILE pDSDumpDot = (DRAW_STATE_DUMP_DOT_FILE)vkGetProcAddr(gpu(), (char*)"drawStateDumpDotFile");
pDSDumpDot((char*)"triTest2.dot");
#endif
// render triangle
#
# Run all the regression tests
-# xglbase tests that basic XGL calls are working (don't return an error).
-./xglbase
+# vkbase tests that basic VK calls are working (i.e., don't return an error).
+./vkbase
-# xgl_blit_tests test Fill/Copy Memory, Clears, CopyMemoryToImage
-./xgl_blit_tests
+# vk_blit_tests tests Fill/Copy Memory, Clears, and CopyMemoryToImage
+./vk_blit_tests
-# xgl_image_tests check that image can be allocated and bound.
-./xgl_image_tests
+# vk_image_tests checks that images can be allocated and bound.
+./vk_image_tests
-#xgl_render_tests tests a variety of features using rendered images
+# vk_render_tests tests a variety of features using rendered images
# --compare-images will cause the test to check the resulting image against
# a saved "golden" image and will report an error if there is any difference
-./xgl_render_tests --compare-images
+./vk_render_tests --compare-images
# Run all the regression tests with validation layers enabled
# enable layers
-export LIBXGL_LAYER_NAMES=DrawState:MemTracker:ParamChecker:ObjectTracker
+export LIBVK_LAYER_NAMES=DrawState:MemTracker:ParamChecker:ObjectTracker
# Save any existing settings file
RESTORE_SETTINGS="false"
-SETTINGS_NAME="xgl_layer_settings.txt"
+SETTINGS_NAME="vk_layer_settings.txt"
TMP_SETTINGS_NAME="xls.txt"
-OUTPUT_LEVEL="XGL_DBG_LAYER_LEVEL_ERROR"
+OUTPUT_LEVEL="VK_DBG_LAYER_LEVEL_ERROR"
if [ -f $SETTINGS_NAME ]; then
echo Saving $SETTINGS_NAME to $TMP_SETTINGS_NAME
RESTORE_SETTINGS="true"
echo "ObjectTrackerReportLevel = $OUTPUT_LEVEL" >> $SETTINGS_NAME
echo "ParamCheckerReportLevel = $OUTPUT_LEVEL" >> $SETTINGS_NAME
-# xglbase tests that basic XGL calls are working (don't return an error).
-./xglbase
+# vkbase tests that basic VK calls are working (i.e., don't return an error).
+./vkbase
-# xgl_blit_tests test Fill/Copy Memory, Clears, CopyMemoryToImage
-./xgl_blit_tests
+# vk_blit_tests tests Fill/Copy Memory, Clears, and CopyMemoryToImage
+./vk_blit_tests
-# xgl_image_tests check that image can be allocated and bound.
-./xgl_image_tests
+# vk_image_tests checks that images can be allocated and bound.
+./vk_image_tests
-#xgl_render_tests tests a variety of features using rendered images
+# vk_render_tests tests a variety of features using rendered images
# --compare-images will cause the test to check the resulting image against
# a saved "golden" image and will report an error if there is any difference
-./xgl_render_tests --compare-images
+./vk_render_tests --compare-images
if [ "$RESTORE_SETTINGS" = "true" ]; then
echo Restore $SETTINGS_NAME from $TMP_SETTINGS_NAME
#include <string.h>
#include <assert.h>
-#include <xgl.h>
+#include <vulkan.h>
#include "gtest/gtest.h"
-#include "xgltestbinding.h"
+#include "vktestbinding.h"
-#define ASSERT_XGL_SUCCESS(err) ASSERT_EQ(XGL_SUCCESS, err) << xgl_result_string(err)
+#define ASSERT_VK_SUCCESS(err) ASSERT_EQ(VK_SUCCESS, err) << vk_result_string(err)
-static inline const char *xgl_result_string(XGL_RESULT err)
+static inline const char *vk_result_string(VK_RESULT err)
{
switch (err) {
#define STR(r) case r: return #r
- STR(XGL_SUCCESS);
- STR(XGL_UNSUPPORTED);
- STR(XGL_NOT_READY);
- STR(XGL_TIMEOUT);
- STR(XGL_EVENT_SET);
- STR(XGL_EVENT_RESET);
- STR(XGL_ERROR_UNKNOWN);
- STR(XGL_ERROR_UNAVAILABLE);
- STR(XGL_ERROR_INITIALIZATION_FAILED);
- STR(XGL_ERROR_OUT_OF_MEMORY);
- STR(XGL_ERROR_OUT_OF_GPU_MEMORY);
- STR(XGL_ERROR_DEVICE_ALREADY_CREATED);
- STR(XGL_ERROR_DEVICE_LOST);
- STR(XGL_ERROR_INVALID_POINTER);
- STR(XGL_ERROR_INVALID_VALUE);
- STR(XGL_ERROR_INVALID_HANDLE);
- STR(XGL_ERROR_INVALID_ORDINAL);
- STR(XGL_ERROR_INVALID_MEMORY_SIZE);
- STR(XGL_ERROR_INVALID_EXTENSION);
- STR(XGL_ERROR_INVALID_FLAGS);
- STR(XGL_ERROR_INVALID_ALIGNMENT);
- STR(XGL_ERROR_INVALID_FORMAT);
- STR(XGL_ERROR_INVALID_IMAGE);
- STR(XGL_ERROR_INVALID_DESCRIPTOR_SET_DATA);
- STR(XGL_ERROR_INVALID_QUEUE_TYPE);
- STR(XGL_ERROR_INVALID_OBJECT_TYPE);
- STR(XGL_ERROR_UNSUPPORTED_SHADER_IL_VERSION);
- STR(XGL_ERROR_BAD_SHADER_CODE);
- STR(XGL_ERROR_BAD_PIPELINE_DATA);
- STR(XGL_ERROR_TOO_MANY_MEMORY_REFERENCES);
- STR(XGL_ERROR_NOT_MAPPABLE);
- STR(XGL_ERROR_MEMORY_MAP_FAILED);
- STR(XGL_ERROR_MEMORY_UNMAP_FAILED);
- STR(XGL_ERROR_INCOMPATIBLE_DEVICE);
- STR(XGL_ERROR_INCOMPATIBLE_DRIVER);
- STR(XGL_ERROR_INCOMPLETE_COMMAND_BUFFER);
- STR(XGL_ERROR_BUILDING_COMMAND_BUFFER);
- STR(XGL_ERROR_MEMORY_NOT_BOUND);
- STR(XGL_ERROR_INCOMPATIBLE_QUEUE);
- STR(XGL_ERROR_NOT_SHAREABLE);
+ STR(VK_SUCCESS);
+ STR(VK_UNSUPPORTED);
+ STR(VK_NOT_READY);
+ STR(VK_TIMEOUT);
+ STR(VK_EVENT_SET);
+ STR(VK_EVENT_RESET);
+ STR(VK_ERROR_UNKNOWN);
+ STR(VK_ERROR_UNAVAILABLE);
+ STR(VK_ERROR_INITIALIZATION_FAILED);
+ STR(VK_ERROR_OUT_OF_MEMORY);
+ STR(VK_ERROR_OUT_OF_GPU_MEMORY);
+ STR(VK_ERROR_DEVICE_ALREADY_CREATED);
+ STR(VK_ERROR_DEVICE_LOST);
+ STR(VK_ERROR_INVALID_POINTER);
+ STR(VK_ERROR_INVALID_VALUE);
+ STR(VK_ERROR_INVALID_HANDLE);
+ STR(VK_ERROR_INVALID_ORDINAL);
+ STR(VK_ERROR_INVALID_MEMORY_SIZE);
+ STR(VK_ERROR_INVALID_EXTENSION);
+ STR(VK_ERROR_INVALID_FLAGS);
+ STR(VK_ERROR_INVALID_ALIGNMENT);
+ STR(VK_ERROR_INVALID_FORMAT);
+ STR(VK_ERROR_INVALID_IMAGE);
+ STR(VK_ERROR_INVALID_DESCRIPTOR_SET_DATA);
+ STR(VK_ERROR_INVALID_QUEUE_TYPE);
+ STR(VK_ERROR_INVALID_OBJECT_TYPE);
+ STR(VK_ERROR_UNSUPPORTED_SHADER_IL_VERSION);
+ STR(VK_ERROR_BAD_SHADER_CODE);
+ STR(VK_ERROR_BAD_PIPELINE_DATA);
+ STR(VK_ERROR_TOO_MANY_MEMORY_REFERENCES);
+ STR(VK_ERROR_NOT_MAPPABLE);
+ STR(VK_ERROR_MEMORY_MAP_FAILED);
+ STR(VK_ERROR_MEMORY_UNMAP_FAILED);
+ STR(VK_ERROR_INCOMPATIBLE_DEVICE);
+ STR(VK_ERROR_INCOMPATIBLE_DRIVER);
+ STR(VK_ERROR_INCOMPLETE_COMMAND_BUFFER);
+ STR(VK_ERROR_BUILDING_COMMAND_BUFFER);
+ STR(VK_ERROR_MEMORY_NOT_BOUND);
+ STR(VK_ERROR_INCOMPATIBLE_QUEUE);
+ STR(VK_ERROR_NOT_SHAREABLE);
#undef STR
default: return "UNKNOWN_RESULT";
}
#include "test_common.h"
-#include "xgltestbinding.h"
+#include "vktestbinding.h"
#include "test_environment.h"
#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
-namespace xgl_testing {
+namespace vk_testing {
Environment::Environment() :
m_connection(NULL), default_dev_(0)
{
- app_.sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO;
- app_.pAppName = "xgl_testing";
+ app_.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
+ app_.pAppName = "vk_testing";
app_.appVersion = 1;
- app_.pEngineName = "xgl_testing";
+ app_.pEngineName = "vk_testing";
app_.engineVersion = 1;
- app_.apiVersion = XGL_API_VERSION;
+ app_.apiVersion = VK_API_VERSION;
}
bool Environment::parse_args(int argc, char **argv)
{
uint32_t count;
- XGL_RESULT err;
- XGL_INSTANCE_CREATE_INFO inst_info = {};
- inst_info.sType = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
+ VK_RESULT err;
+ VK_INSTANCE_CREATE_INFO inst_info = {};
+ inst_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
inst_info.pNext = NULL;
inst_info.pAppInfo = &app_;
inst_info.pAllocCb = NULL;
inst_info.extensionCount = 0;
inst_info.ppEnabledExtensionNames = NULL;
- err = xglCreateInstance(&inst_info, &inst);
- ASSERT_EQ(XGL_SUCCESS, err);
- err = xglEnumerateGpus(inst, ARRAY_SIZE(gpus), &count, gpus);
- ASSERT_EQ(XGL_SUCCESS, err);
+ err = vkCreateInstance(&inst_info, &inst);
+ ASSERT_EQ(VK_SUCCESS, err);
+ err = vkEnumerateGpus(inst, ARRAY_SIZE(gpus), &count, gpus);
+ ASSERT_EQ(VK_SUCCESS, err);
ASSERT_GT(count, default_dev_);
devs_.reserve(count);
{
uint32_t count;
- XGL_RESULT err;
+ VK_RESULT err;
const xcb_setup_t *setup;
xcb_screen_iterator_t iter;
int scr;
- XGL_INSTANCE_CREATE_INFO instInfo = {};
- instInfo.sType = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
+ VK_INSTANCE_CREATE_INFO instInfo = {};
+ instInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
instInfo.pNext = NULL;
instInfo.pAppInfo = &app_;
instInfo.pAllocCb = NULL;
instInfo.extensionCount = 0;
instInfo.ppEnabledExtensionNames = NULL;
- err = xglCreateInstance(&instInfo, &inst);
- ASSERT_EQ(XGL_SUCCESS, err);
- err = xglEnumerateGpus(inst, ARRAY_SIZE(gpus), &count, gpus);
- ASSERT_EQ(XGL_SUCCESS, err);
+ err = vkCreateInstance(&instInfo, &inst);
+ ASSERT_EQ(VK_SUCCESS, err);
+ err = vkEnumerateGpus(inst, ARRAY_SIZE(gpus), &count, gpus);
+ ASSERT_EQ(VK_SUCCESS, err);
ASSERT_GT(count, default_dev_);
m_connection = xcb_connect(NULL, &scr);
m_screen = iter.data;
- XGL_WSI_X11_CONNECTION_INFO connection_info = {};
+ VK_WSI_X11_CONNECTION_INFO connection_info = {};
connection_info.pConnection = m_connection;
connection_info.root = m_screen->root;
connection_info.provider = 0;
- err = xglWsiX11AssociateConnection(gpus[0], &connection_info);
+ err = vkWsiX11AssociateConnection(gpus[0], &connection_info);
assert(!err);
delete *it;
devs_.clear();
- xglDestroyInstance(inst);
+ vkDestroyInstance(inst);
}
-} // xgl_testing namespace
+} // vk_testing namespace
#ifndef TEST_ENVIRONMENT_H
#define TEST_ENVIRONMENT_H
-#include "xgltestbinding.h"
-#include <xglWsiX11Ext.h>
+#include "vktestbinding.h"
+#include <vkWsiX11Ext.h>
-namespace xgl_testing {
+namespace vk_testing {
class Environment : public ::testing::Environment {
public:
Environment();
const std::vector<Device *> &devices() { return devs_; }
Device &default_device() { return *(devs_[default_dev_]); }
- XGL_PHYSICAL_GPU gpus[XGL_MAX_PHYSICAL_GPUS];
+ VK_PHYSICAL_GPU gpus[VK_MAX_PHYSICAL_GPUS];
private:
- XGL_APPLICATION_INFO app_;
+ VK_APPLICATION_INFO app_;
int default_dev_;
- XGL_INSTANCE inst;
+ VK_INSTANCE inst;
std::vector<Device *> devs_;
};
--- /dev/null
+/*
+ * Vulkan Tests
+ *
+ * Copyright (C) 2014 LunarG, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Courtney Goeltzenleuchter <courtney@lunarg.com>
+ */
+
+#ifndef VKRENDERFRAMEWORK_H
+#define VKRENDERFRAMEWORK_H
+
+#include "vktestframework.h"
+
+
+class XglDevice : public vk_testing::Device
+{
+public:
+ XglDevice(uint32_t id, VK_PHYSICAL_GPU obj);
+
+ VK_DEVICE device() { return obj(); }
+ void get_device_queue();
+
+ uint32_t id;
+ VK_PHYSICAL_GPU_PROPERTIES props;
+ const VK_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
+
+ VK_QUEUE m_queue;
+};
+
+class XglMemoryRefManager
+{
+public:
+ void AddMemoryRefs(vk_testing::Object &vkObject);
+ void AddMemoryRefs(vector<VK_GPU_MEMORY> mem);
+ void EmitAddMemoryRefs(VK_QUEUE queue);
+ void EmitRemoveMemoryRefs(VK_QUEUE queue);
+ vector<VK_GPU_MEMORY> mem_refs() const;
+
+protected:
+ vector<VK_GPU_MEMORY> mem_refs_;
+
+};
+
+class XglDepthStencilObj : public vk_testing::Image
+{
+public:
+ XglDepthStencilObj();
+ void Init(XglDevice *device, int32_t width, int32_t height);
+ bool Initialized();
+ VK_DEPTH_STENCIL_BIND_INFO* BindInfo();
+
+protected:
+ XglDevice *m_device;
+ bool m_initialized;
+ vk_testing::DepthStencilView m_depthStencilView;
+ VK_FORMAT m_depth_stencil_fmt;
+ VK_DEPTH_STENCIL_BIND_INFO m_depthStencilBindInfo;
+};
+
+class XglRenderFramework : public XglTestFramework
+{
+public:
+ XglRenderFramework();
+ ~XglRenderFramework();
+
+ VK_DEVICE device() {return m_device->device();}
+ VK_PHYSICAL_GPU gpu() {return objs[0];}
+ VK_RENDER_PASS renderPass() {return m_renderPass;}
+ VK_FRAMEBUFFER framebuffer() {return m_framebuffer;}
+ void InitViewport(float width, float height);
+ void InitViewport();
+ void InitRenderTarget();
+ void InitRenderTarget(uint32_t targets);
+ void InitRenderTarget(VK_DEPTH_STENCIL_BIND_INFO *dsBinding);
+ void InitRenderTarget(uint32_t targets, VK_DEPTH_STENCIL_BIND_INFO *dsBinding);
+ void InitFramework();
+ void ShutdownFramework();
+ void InitState();
+
+
+protected:
+ VK_APPLICATION_INFO app_info;
+ VK_INSTANCE inst;
+ VK_PHYSICAL_GPU objs[VK_MAX_PHYSICAL_GPUS];
+ uint32_t gpu_count;
+ XglDevice *m_device;
+ VK_CMD_BUFFER m_cmdBuffer;
+ VK_RENDER_PASS m_renderPass;
+ VK_FRAMEBUFFER m_framebuffer;
+ VK_DYNAMIC_RS_STATE_OBJECT m_stateRaster;
+ VK_DYNAMIC_CB_STATE_OBJECT m_colorBlend;
+ VK_DYNAMIC_VP_STATE_OBJECT m_stateViewport;
+ VK_DYNAMIC_DS_STATE_OBJECT m_stateDepthStencil;
+ vector<XglImage*> m_renderTargets;
+ float m_width, m_height;
+ VK_FORMAT m_render_target_fmt;
+ VK_FORMAT m_depth_stencil_fmt;
+ VK_COLOR_ATTACHMENT_BIND_INFO m_colorBindings[8];
+ VK_CLEAR_COLOR m_clear_color;
+ float m_depth_clear_color;
+ uint32_t m_stencil_clear_color;
+ XglDepthStencilObj *m_depthStencil;
+ XglMemoryRefManager m_mem_ref_mgr;
+
+ /*
+ * SetUp and TearDown are called by the Google Test framework
+ * to initialize a test framework based on this class.
+ */
+ virtual void SetUp() {
+ this->app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
+ this->app_info.pNext = NULL;
+ this->app_info.pAppName = "base";
+ this->app_info.appVersion = 1;
+ this->app_info.pEngineName = "unittest";
+ this->app_info.engineVersion = 1;
+ this->app_info.apiVersion = VK_API_VERSION;
+
+ InitFramework();
+ }
+
+ virtual void TearDown() {
+ ShutdownFramework();
+ }
+};
+
+class XglDescriptorSetObj;
+class XglIndexBufferObj;
+class XglConstantBufferObj;
+class XglPipelineObj;
+class XglDescriptorSetObj;
+
+class XglCommandBufferObj : public vk_testing::CmdBuffer
+{
+public:
+ XglCommandBufferObj(XglDevice *device);
+ VK_CMD_BUFFER GetBufferHandle();
+ VK_RESULT BeginCommandBuffer();
+ VK_RESULT BeginCommandBuffer(VK_CMD_BUFFER_BEGIN_INFO *pInfo);
+ VK_RESULT BeginCommandBuffer(VK_RENDER_PASS renderpass_obj, VK_FRAMEBUFFER framebuffer_obj);
+ VK_RESULT EndCommandBuffer();
+ void PipelineBarrier(VK_PIPELINE_BARRIER *barrierPtr);
+ void AddRenderTarget(XglImage *renderTarget);
+ void AddDepthStencil();
+ void ClearAllBuffers(VK_CLEAR_COLOR clear_color, float depth_clear_color, uint32_t stencil_clear_color, XglDepthStencilObj *depthStencilObj);
+ void PrepareAttachments();
+ void AddMemoryRefs(vk_testing::Object &vkObject);
+ void AddMemoryRefs(uint32_t ref_count, const VK_GPU_MEMORY *mem);
+ void AddMemoryRefs(vector<vk_testing::Object *> images);
+ void BindPipeline(XglPipelineObj &pipeline);
+ void BindDescriptorSet(XglDescriptorSetObj &descriptorSet);
+ void BindVertexBuffer(XglConstantBufferObj *vertexBuffer, uint32_t offset, uint32_t binding);
+ void BindIndexBuffer(XglIndexBufferObj *indexBuffer, uint32_t offset);
+ void BindStateObject(VK_STATE_BIND_POINT stateBindPoint, VK_DYNAMIC_STATE_OBJECT stateObject);
+ void BeginRenderPass(VK_RENDER_PASS renderpass, VK_FRAMEBUFFER framebuffer);
+ void EndRenderPass(VK_RENDER_PASS renderpass);
+ void Draw(uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount);
+ void DrawIndexed(uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount);
+ void QueueCommandBuffer();
+ void QueueCommandBuffer(VK_FENCE fence);
+
+ XglMemoryRefManager mem_ref_mgr;
+
+protected:
+ XglDevice *m_device;
+ vector<XglImage*> m_renderTargets;
+};
+
+class XglConstantBufferObj : public vk_testing::Buffer
+{
+public:
+ XglConstantBufferObj(XglDevice *device);
+ XglConstantBufferObj(XglDevice *device, int constantCount, int constantSize, const void* data);
+ ~XglConstantBufferObj();
+ void BufferMemoryBarrier(
+ VK_FLAGS outputMask =
+ VK_MEMORY_OUTPUT_CPU_WRITE_BIT |
+ VK_MEMORY_OUTPUT_SHADER_WRITE_BIT |
+ VK_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_COPY_BIT,
+ VK_FLAGS inputMask =
+ VK_MEMORY_INPUT_CPU_READ_BIT |
+ VK_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
+ VK_MEMORY_INPUT_INDEX_FETCH_BIT |
+ VK_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
+ VK_MEMORY_INPUT_UNIFORM_READ_BIT |
+ VK_MEMORY_INPUT_SHADER_READ_BIT |
+ VK_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_INPUT_COPY_BIT);
+
+ void Bind(VK_CMD_BUFFER cmdBuffer, VK_GPU_SIZE offset, uint32_t binding);
+
+ VK_BUFFER_VIEW_ATTACH_INFO m_bufferViewInfo;
+
+protected:
+ XglDevice *m_device;
+ vk_testing::BufferView m_bufferView;
+ int m_numVertices;
+ int m_stride;
+ XglCommandBufferObj *m_commandBuffer;
+ vk_testing::Fence m_fence;
+};
+
+class XglIndexBufferObj : public XglConstantBufferObj
+{
+public:
+ XglIndexBufferObj(XglDevice *device);
+ void CreateAndInitBuffer(int numIndexes, VK_INDEX_TYPE dataFormat, const void* data);
+ void Bind(VK_CMD_BUFFER cmdBuffer, VK_GPU_SIZE offset);
+ VK_INDEX_TYPE GetIndexType();
+
+protected:
+ VK_INDEX_TYPE m_indexType;
+};
+
+class XglImage : public vk_testing::Image
+{
+public:
+ XglImage(XglDevice *dev);
+ bool IsCompatible(VK_FLAGS usage, VK_FLAGS features);
+
+public:
+ void init(uint32_t w, uint32_t h,
+ VK_FORMAT fmt, VK_FLAGS usage,
+ VK_IMAGE_TILING tiling=VK_LINEAR_TILING);
+
+ // void clear( CommandBuffer*, uint32_t[4] );
+
+ void layout( VK_IMAGE_LAYOUT layout )
+ {
+ m_imageInfo.layout = layout;
+ }
+
+ VK_GPU_MEMORY memory() const
+ {
+ const std::vector<VK_GPU_MEMORY> mems = memories();
+ return mems.empty() ? VK_NULL_HANDLE : mems[0];
+ }
+
+ void ImageMemoryBarrier(XglCommandBufferObj *cmd,
+ VK_IMAGE_ASPECT aspect,
+ VK_FLAGS output_mask,
+ VK_FLAGS input_mask,
+ VK_IMAGE_LAYOUT image_layout);
+
+ VK_RESULT CopyImage(XglImage &src_image);
+
+ VK_IMAGE image() const
+ {
+ return obj();
+ }
+
+ VK_COLOR_ATTACHMENT_VIEW targetView()
+ {
+ if (!m_targetView.initialized())
+ {
+ VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO createView = {
+ VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
+ VK_NULL_HANDLE,
+ obj(),
+ VK_FMT_B8G8R8A8_UNORM,
+ 0,
+ 0,
+ 1
+ };
+ m_targetView.init(*m_device, createView);
+ }
+ return m_targetView.obj();
+ }
+
+ void SetLayout(XglCommandBufferObj *cmd_buf, VK_IMAGE_ASPECT aspect, VK_IMAGE_LAYOUT image_layout);
+ void SetLayout(VK_IMAGE_ASPECT aspect, VK_IMAGE_LAYOUT image_layout);
+
+ VK_IMAGE_LAYOUT layout() const
+ {
+ return ( VK_IMAGE_LAYOUT )m_imageInfo.layout;
+ }
+ uint32_t width() const
+ {
+ return extent().width;
+ }
+ uint32_t height() const
+ {
+ return extent().height;
+ }
+ XglDevice* device() const
+ {
+ return m_device;
+ }
+
+ VK_RESULT MapMemory(void** ptr);
+ VK_RESULT UnmapMemory();
+
+protected:
+ XglDevice *m_device;
+
+ vk_testing::ColorAttachmentView m_targetView;
+ VK_IMAGE_VIEW_ATTACH_INFO m_imageInfo;
+};
+
+class XglTextureObj : public XglImage
+{
+public:
+ XglTextureObj(XglDevice *device, uint32_t *colors = NULL);
+ VK_IMAGE_VIEW_ATTACH_INFO m_textureViewInfo;
+
+
+protected:
+ XglDevice *m_device;
+ vk_testing::ImageView m_textureView;
+ VK_GPU_SIZE m_rowPitch;
+};
+
+class XglSamplerObj : public vk_testing::Sampler
+{
+public:
+ XglSamplerObj(XglDevice *device);
+
+protected:
+ XglDevice *m_device;
+
+};
+
+class XglDescriptorSetObj : public vk_testing::DescriptorPool
+{
+public:
+ XglDescriptorSetObj(XglDevice *device);
+ ~XglDescriptorSetObj();
+
+ int AppendDummy();
+ int AppendBuffer(VK_DESCRIPTOR_TYPE type, XglConstantBufferObj &constantBuffer);
+ int AppendSamplerTexture(XglSamplerObj* sampler, XglTextureObj* texture);
+ void CreateVKDescriptorSet(XglCommandBufferObj *cmdBuffer);
+
+ VK_DESCRIPTOR_SET GetDescriptorSetHandle() const;
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN GetLayoutChain() const;
+
+ XglMemoryRefManager mem_ref_mgr;
+
+protected:
+ XglDevice *m_device;
+ vector<VK_DESCRIPTOR_TYPE_COUNT> m_type_counts;
+ int m_nextSlot;
+
+ vector<VK_UPDATE_BUFFERS> m_updateBuffers;
+
+ vector<VK_SAMPLER_IMAGE_VIEW_INFO> m_samplerTextureInfo;
+ vector<VK_UPDATE_SAMPLER_TEXTURES> m_updateSamplerTextures;
+
+ vk_testing::DescriptorSetLayout m_layout;
+ vk_testing::DescriptorSetLayoutChain m_layout_chain;
+ vk_testing::DescriptorSet *m_set;
+};
+
+
+class XglShaderObj : public vk_testing::Shader
+{
+public:
+ XglShaderObj(XglDevice *device, const char * shaderText, VK_PIPELINE_SHADER_STAGE stage, XglRenderFramework *framework);
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO* GetStageCreateInfo();
+
+protected:
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO stage_info;
+ VK_PIPELINE_SHADER_STAGE m_stage;
+ XglDevice *m_device;
+
+};
+
+class XglPipelineObj : public vk_testing::Pipeline
+{
+public:
+ XglPipelineObj(XglDevice *device);
+ void AddShader(XglShaderObj* shaderObj);
+ void AddVertexInputAttribs(VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION* vi_attrib, int count);
+ void AddVertexInputBindings(VK_VERTEX_INPUT_BINDING_DESCRIPTION* vi_binding, int count);
+ void AddVertexDataBuffer(XglConstantBufferObj* vertexDataBuffer, int binding);
+ void AddColorAttachment(uint32_t binding, const VK_PIPELINE_CB_ATTACHMENT_STATE *att);
+ void SetDepthStencil(VK_PIPELINE_DS_STATE_CREATE_INFO *);
+ void CreateVKPipeline(XglDescriptorSetObj &descriptorSet);
+
+protected:
+ VK_PIPELINE_VERTEX_INPUT_CREATE_INFO m_vi_state;
+ VK_PIPELINE_IA_STATE_CREATE_INFO m_ia_state;
+ VK_PIPELINE_RS_STATE_CREATE_INFO m_rs_state;
+ VK_PIPELINE_CB_STATE_CREATE_INFO m_cb_state;
+ VK_PIPELINE_DS_STATE_CREATE_INFO m_ds_state;
+ VK_PIPELINE_MS_STATE_CREATE_INFO m_ms_state;
+ XglDevice *m_device;
+ vector<XglShaderObj*> m_shaderObjs;
+ vector<XglConstantBufferObj*> m_vertexBufferObjs;
+ vector<int> m_vertexBufferBindings;
+ vector<VK_PIPELINE_CB_ATTACHMENT_STATE> m_colorAttachments;
+ int m_vertexBufferCount;
+
+};
+
+#endif // VKRENDERFRAMEWORK_H
--- /dev/null
+// VK tests
+//
+// Copyright (C) 2014 LunarG, Inc.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a
+// copy of this software and associated documentation files (the "Software"),
+// to deal in the Software without restriction, including without limitation
+// the rights to use, copy, modify, merge, publish, distribute, sublicense,
+// and/or sell copies of the Software, and to permit persons to whom the
+// Software is furnished to do so, subject to the following conditions:
+//
+// The above copyright notice and this permission notice shall be included
+// in all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+// DEALINGS IN THE SOFTWARE.
+
+#ifndef VKTESTBINDING_H
+#define VKTESTBINDING_H
+
+#include <vector>
+
+#include "vulkan.h"
+
+namespace vk_testing {
+
+typedef void (*ErrorCallback)(const char *expr, const char *file, unsigned int line, const char *function);
+void set_error_callback(ErrorCallback callback);
+
+class PhysicalGpu;
+class BaseObject;
+class Object;
+class DynamicStateObject;
+class Device;
+class Queue;
+class GpuMemory;
+class Fence;
+class Semaphore;
+class Event;
+class QueryPool;
+class Buffer;
+class BufferView;
+class Image;
+class ImageView;
+class ColorAttachmentView;
+class DepthStencilView;
+class Shader;
+class Pipeline;
+class PipelineDelta;
+class Sampler;
+class DescriptorSetLayout;
+class DescriptorSetLayoutChain;
+class DescriptorSetPool;
+class DescriptorSet;
+class DynamicVpStateObject;
+class DynamicRsStateObject;
+class DynamicMsaaStateObject;
+class DynamicCbStateObject;
+class DynamicDsStateObject;
+class CmdBuffer;
+
+class PhysicalGpu {
+public:
+ explicit PhysicalGpu(VK_PHYSICAL_GPU gpu) : gpu_(gpu) {}
+
+ const VK_PHYSICAL_GPU &obj() const { return gpu_; }
+
+ // vkGetGpuInfo()
+ VK_PHYSICAL_GPU_PROPERTIES properties() const;
+ VK_PHYSICAL_GPU_PERFORMANCE performance() const;
+ VK_PHYSICAL_GPU_MEMORY_PROPERTIES memory_properties() const;
+ std::vector<VK_PHYSICAL_GPU_QUEUE_PROPERTIES> queue_properties() const;
+
+ // vkGetProcAddr()
+ void *get_proc(const char *name) const { return vkGetProcAddr(gpu_, name); }
+
+ // vkGetExtensionSupport()
+ bool has_extension(const char *ext) const { return (vkGetExtensionSupport(gpu_, ext) == VK_SUCCESS); }
+ std::vector<const char *> extensions() const;
+
+ // vkEnumerateLayers()
+ std::vector<const char *> layers(std::vector<char> &buf) const;
+
+ // vkGetMultiGpuCompatibility()
+ VK_GPU_COMPATIBILITY_INFO compatibility(const PhysicalGpu &other) const;
+
+private:
+ VK_PHYSICAL_GPU gpu_;
+};
+
+class BaseObject {
+public:
+ const VK_BASE_OBJECT &obj() const { return obj_; }
+ bool initialized() const { return (obj_ != VK_NULL_HANDLE); }
+
+ // vkGetObjectInfo()
+ uint32_t memory_allocation_count() const;
+ std::vector<VK_MEMORY_REQUIREMENTS> memory_requirements() const;
+
+protected:
+ explicit BaseObject() : obj_(VK_NULL_HANDLE), own_obj_(false) {}
+ explicit BaseObject(VK_BASE_OBJECT obj) : obj_(VK_NULL_HANDLE), own_obj_(false) { init(obj); }
+
+ void init(VK_BASE_OBJECT obj, bool own);
+ void init(VK_BASE_OBJECT obj) { init(obj, true); }
+
+ void reinit(VK_BASE_OBJECT obj, bool own);
+ void reinit(VK_BASE_OBJECT obj) { reinit(obj, true); }
+
+ bool own() const { return own_obj_; }
+
+private:
+ // base objects are non-copyable
+ BaseObject(const BaseObject &);
+ BaseObject &operator=(const BaseObject &);
+
+ VK_BASE_OBJECT obj_;
+ bool own_obj_;
+};
+
+class Object : public BaseObject {
+public:
+ const VK_OBJECT &obj() const { return reinterpret_cast<const VK_OBJECT &>(BaseObject::obj()); }
+
+ // vkBindObjectMemory()
+ void bind_memory(uint32_t alloc_idx, const GpuMemory &mem, VK_GPU_SIZE mem_offset);
+ void unbind_memory(uint32_t alloc_idx);
+ void unbind_memory();
+
+ // vkBindObjectMemoryRange()
+ void bind_memory(uint32_t alloc_idx, VK_GPU_SIZE offset, VK_GPU_SIZE size,
+ const GpuMemory &mem, VK_GPU_SIZE mem_offset);
+
+ // Unless an object is initialized with init_no_mem(), memories are
+ // automatically allocated and bound. These methods can be used to get
+ // the memories (for vkQueueAddMemReference), or to map/unmap the primary memory.
+ std::vector<VK_GPU_MEMORY> memories() const;
+
+ const void *map(VK_FLAGS flags) const;
+ void *map(VK_FLAGS flags);
+ const void *map() const { return map(0); }
+ void *map() { return map(0); }
+
+ void unmap() const;
+
+protected:
+ explicit Object() : mem_alloc_count_(0), internal_mems_(NULL), primary_mem_(NULL) {}
+ explicit Object(VK_OBJECT obj) : mem_alloc_count_(0), internal_mems_(NULL), primary_mem_(NULL) { init(obj); }
+ ~Object() { cleanup(); }
+
+ void init(VK_OBJECT obj, bool own);
+ void init(VK_OBJECT obj) { init(obj, true); }
+
+ void reinit(VK_OBJECT obj, bool own);
+    void reinit(VK_OBJECT obj) { reinit(obj, true); }
+
+ // allocate and bind internal memories
+ void alloc_memory(const Device &dev, bool for_linear_img, bool for_img);
+ void alloc_memory(const Device &dev) { alloc_memory(dev, false, false); }
+ void alloc_memory(const std::vector<VK_GPU_MEMORY> &mems);
+
+private:
+ void cleanup();
+
+ uint32_t mem_alloc_count_;
+ GpuMemory *internal_mems_;
+ GpuMemory *primary_mem_;
+};
+
+class DynamicStateObject : public Object {
+public:
+ const VK_DYNAMIC_STATE_OBJECT &obj() const { return reinterpret_cast<const VK_DYNAMIC_STATE_OBJECT &>(Object::obj()); }
+
+protected:
+ explicit DynamicStateObject() {}
+ explicit DynamicStateObject(VK_DYNAMIC_STATE_OBJECT obj) : Object(obj) {}
+};
+
+template<typename T, class C>
+class DerivedObject : public C {
+public:
+ const T &obj() const { return reinterpret_cast<const T &>(C::obj()); }
+
+protected:
+ typedef T obj_type;
+ typedef C base_type;
+
+ explicit DerivedObject() {}
+ explicit DerivedObject(T obj) : C(obj) {}
+};
+
+class Device : public DerivedObject<VK_DEVICE, BaseObject> {
+public:
+ explicit Device(VK_PHYSICAL_GPU gpu) : gpu_(gpu) {}
+ ~Device();
+
+ // vkCreateDevice()
+ void init(const VK_DEVICE_CREATE_INFO &info);
+ void init(bool enable_layers); // all queues, all extensions, etc
+    void init() { init(false); }
+
+ const PhysicalGpu &gpu() const { return gpu_; }
+
+ // vkGetDeviceQueue()
+ const std::vector<Queue *> &graphics_queues() { return queues_[GRAPHICS]; }
+ const std::vector<Queue *> &compute_queues() { return queues_[COMPUTE]; }
+ const std::vector<Queue *> &dma_queues() { return queues_[DMA]; }
+ uint32_t graphics_queue_node_index_;
+
+ struct Format {
+ VK_FORMAT format;
+ VK_IMAGE_TILING tiling;
+ VK_FLAGS features;
+ };
+ // vkGetFormatInfo()
+ VK_FORMAT_PROPERTIES format_properties(VK_FORMAT format);
+ const std::vector<Format> &formats() const { return formats_; }
+
+ // vkDeviceWaitIdle()
+ void wait();
+
+ // vkWaitForFences()
+ VK_RESULT wait(const std::vector<const Fence *> &fences, bool wait_all, uint64_t timeout);
+ VK_RESULT wait(const Fence &fence) { return wait(std::vector<const Fence *>(1, &fence), true, (uint64_t) -1); }
+
+ // vkBeginDescriptorPoolUpdate()
+ // vkEndDescriptorPoolUpdate()
+ void begin_descriptor_pool_update(VK_DESCRIPTOR_UPDATE_MODE mode);
+ void end_descriptor_pool_update(CmdBuffer &cmd);
+
+private:
+ enum QueueIndex {
+ GRAPHICS,
+ COMPUTE,
+ DMA,
+ QUEUE_COUNT,
+ };
+
+ void init_queues();
+ void init_formats();
+
+ PhysicalGpu gpu_;
+
+ std::vector<Queue *> queues_[QUEUE_COUNT];
+ std::vector<Format> formats_;
+};
+
+class Queue : public DerivedObject<VK_QUEUE, BaseObject> {
+public:
+ explicit Queue(VK_QUEUE queue) : DerivedObject(queue) {}
+
+ // vkQueueSubmit()
+ void submit(const std::vector<const CmdBuffer *> &cmds, Fence &fence);
+ void submit(const CmdBuffer &cmd, Fence &fence);
+ void submit(const CmdBuffer &cmd);
+
+ // vkQueueAddMemReference()
+ // vkQueueRemoveMemReference()
+ void add_mem_references(const std::vector<VK_GPU_MEMORY> &mem_refs);
+ void remove_mem_references(const std::vector<VK_GPU_MEMORY> &mem_refs);
+
+ // vkQueueWaitIdle()
+ void wait();
+
+ // vkQueueSignalSemaphore()
+ // vkQueueWaitSemaphore()
+ void signal_semaphore(Semaphore &sem);
+ void wait_semaphore(Semaphore &sem);
+};
+
+class GpuMemory : public DerivedObject<VK_GPU_MEMORY, BaseObject> {
+public:
+ ~GpuMemory();
+
+ // vkAllocMemory()
+ void init(const Device &dev, const VK_MEMORY_ALLOC_INFO &info);
+ // vkPinSystemMemory()
+ void init(const Device &dev, size_t size, const void *data);
+ // vkOpenSharedMemory()
+ void init(const Device &dev, const VK_MEMORY_OPEN_INFO &info);
+ // vkOpenPeerMemory()
+ void init(const Device &dev, const VK_PEER_MEMORY_OPEN_INFO &info);
+
+ void init(VK_GPU_MEMORY mem) { BaseObject::init(mem, false); }
+
+ // vkSetMemoryPriority()
+ void set_priority(VK_MEMORY_PRIORITY priority);
+
+ // vkMapMemory()
+ const void *map(VK_FLAGS flags) const;
+ void *map(VK_FLAGS flags);
+ const void *map() const { return map(0); }
+ void *map() { return map(0); }
+
+ // vkUnmapMemory()
+ void unmap() const;
+
+ static VK_MEMORY_ALLOC_INFO alloc_info(const VK_MEMORY_REQUIREMENTS &reqs,
+ const VK_MEMORY_ALLOC_INFO *next_info);
+};
+
+class Fence : public DerivedObject<VK_FENCE, Object> {
+public:
+ // vkCreateFence()
+ void init(const Device &dev, const VK_FENCE_CREATE_INFO &info);
+
+ // vkGetFenceStatus()
+ VK_RESULT status() const { return vkGetFenceStatus(obj()); }
+
+ static VK_FENCE_CREATE_INFO create_info(VK_FENCE_CREATE_FLAGS flags);
+ static VK_FENCE_CREATE_INFO create_info();
+};
+
+class Semaphore : public DerivedObject<VK_SEMAPHORE, Object> {
+public:
+ // vkCreateSemaphore()
+ void init(const Device &dev, const VK_SEMAPHORE_CREATE_INFO &info);
+ // vkOpenSharedSemaphore()
+ void init(const Device &dev, const VK_SEMAPHORE_OPEN_INFO &info);
+
+ static VK_SEMAPHORE_CREATE_INFO create_info(uint32_t init_count, VK_FLAGS flags);
+};
+
+class Event : public DerivedObject<VK_EVENT, Object> {
+public:
+ // vkCreateEvent()
+ void init(const Device &dev, const VK_EVENT_CREATE_INFO &info);
+
+ // vkGetEventStatus()
+ // vkSetEvent()
+ // vkResetEvent()
+ VK_RESULT status() const { return vkGetEventStatus(obj()); }
+ void set();
+ void reset();
+
+ static VK_EVENT_CREATE_INFO create_info(VK_FLAGS flags);
+};
+
+class QueryPool : public DerivedObject<VK_QUERY_POOL, Object> {
+public:
+ // vkCreateQueryPool()
+ void init(const Device &dev, const VK_QUERY_POOL_CREATE_INFO &info);
+
+ // vkGetQueryPoolResults()
+ VK_RESULT results(uint32_t start, uint32_t count, size_t size, void *data);
+
+ static VK_QUERY_POOL_CREATE_INFO create_info(VK_QUERY_TYPE type, uint32_t slot_count);
+};
+
+class Buffer : public DerivedObject<VK_BUFFER, Object> {
+public:
+ explicit Buffer() {}
+ explicit Buffer(const Device &dev, const VK_BUFFER_CREATE_INFO &info) { init(dev, info); }
+ explicit Buffer(const Device &dev, VK_GPU_SIZE size) { init(dev, size); }
+
+ // vkCreateBuffer()
+ void init(const Device &dev, const VK_BUFFER_CREATE_INFO &info);
+ void init(const Device &dev, VK_GPU_SIZE size) { init(dev, create_info(size, 0)); }
+ void init_no_mem(const Device &dev, const VK_BUFFER_CREATE_INFO &info);
+
+ static VK_BUFFER_CREATE_INFO create_info(VK_GPU_SIZE size, VK_FLAGS usage);
+
+ VK_BUFFER_MEMORY_BARRIER buffer_memory_barrier(VK_FLAGS output_mask, VK_FLAGS input_mask,
+ VK_GPU_SIZE offset, VK_GPU_SIZE size) const
+ {
+ VK_BUFFER_MEMORY_BARRIER barrier = {};
+ barrier.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER;
+ barrier.buffer = obj();
+ barrier.outputMask = output_mask;
+ barrier.inputMask = input_mask;
+ barrier.offset = offset;
+ barrier.size = size;
+ return barrier;
+ }
+private:
+ VK_BUFFER_CREATE_INFO create_info_;
+};
+
+class BufferView : public DerivedObject<VK_BUFFER_VIEW, Object> {
+public:
+ // vkCreateBufferView()
+ void init(const Device &dev, const VK_BUFFER_VIEW_CREATE_INFO &info);
+};
+
+class Image : public DerivedObject<VK_IMAGE, Object> {
+public:
+ explicit Image() : format_features_(0) {}
+ explicit Image(const Device &dev, const VK_IMAGE_CREATE_INFO &info) : format_features_(0) { init(dev, info); }
+
+ // vkCreateImage()
+ void init(const Device &dev, const VK_IMAGE_CREATE_INFO &info);
+ void init_no_mem(const Device &dev, const VK_IMAGE_CREATE_INFO &info);
+ // vkOpenPeerImage()
+ void init(const Device &dev, const VK_PEER_IMAGE_OPEN_INFO &info, const VK_IMAGE_CREATE_INFO &original_info);
+
+ // vkBindImageMemoryRange()
+ void bind_memory(uint32_t alloc_idx, const VK_IMAGE_MEMORY_BIND_INFO &info,
+ const GpuMemory &mem, VK_GPU_SIZE mem_offset);
+
+ // vkGetImageSubresourceInfo()
+ VK_SUBRESOURCE_LAYOUT subresource_layout(const VK_IMAGE_SUBRESOURCE &subres) const;
+
+ bool transparent() const;
+ bool copyable() const { return (format_features_ & VK_FORMAT_IMAGE_COPY_BIT); }
+
+ VK_IMAGE_SUBRESOURCE_RANGE subresource_range(VK_IMAGE_ASPECT aspect) const { return subresource_range(create_info_, aspect); }
+ VK_EXTENT3D extent() const { return create_info_.extent; }
+ VK_EXTENT3D extent(uint32_t mip_level) const { return extent(create_info_.extent, mip_level); }
+ VK_FORMAT format() const { return create_info_.format; }
+
+ VK_IMAGE_MEMORY_BARRIER image_memory_barrier(VK_FLAGS output_mask, VK_FLAGS input_mask,
+ VK_IMAGE_LAYOUT old_layout,
+ VK_IMAGE_LAYOUT new_layout,
+ const VK_IMAGE_SUBRESOURCE_RANGE &range) const
+ {
+ VK_IMAGE_MEMORY_BARRIER barrier = {};
+ barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
+ barrier.outputMask = output_mask;
+ barrier.inputMask = input_mask;
+ barrier.oldLayout = old_layout;
+ barrier.newLayout = new_layout;
+ barrier.image = obj();
+ barrier.subresourceRange = range;
+ return barrier;
+ }
+
+ static VK_IMAGE_CREATE_INFO create_info();
+ static VK_IMAGE_SUBRESOURCE subresource(VK_IMAGE_ASPECT aspect, uint32_t mip_level, uint32_t array_slice);
+ static VK_IMAGE_SUBRESOURCE subresource(const VK_IMAGE_SUBRESOURCE_RANGE &range, uint32_t mip_level, uint32_t array_slice);
+ static VK_IMAGE_SUBRESOURCE_RANGE subresource_range(VK_IMAGE_ASPECT aspect, uint32_t base_mip_level, uint32_t mip_levels,
+ uint32_t base_array_slice, uint32_t array_size);
+ static VK_IMAGE_SUBRESOURCE_RANGE subresource_range(const VK_IMAGE_CREATE_INFO &info, VK_IMAGE_ASPECT aspect);
+ static VK_IMAGE_SUBRESOURCE_RANGE subresource_range(const VK_IMAGE_SUBRESOURCE &subres);
+
+ static VK_EXTENT2D extent(int32_t width, int32_t height);
+ static VK_EXTENT2D extent(const VK_EXTENT2D &extent, uint32_t mip_level);
+ static VK_EXTENT2D extent(const VK_EXTENT3D &extent);
+
+ static VK_EXTENT3D extent(int32_t width, int32_t height, int32_t depth);
+ static VK_EXTENT3D extent(const VK_EXTENT3D &extent, uint32_t mip_level);
+
+private:
+ void init_info(const Device &dev, const VK_IMAGE_CREATE_INFO &info);
+
+ VK_IMAGE_CREATE_INFO create_info_;
+ VK_FLAGS format_features_;
+};
+
+class ImageView : public DerivedObject<VK_IMAGE_VIEW, Object> {
+public:
+ // vkCreateImageView()
+ void init(const Device &dev, const VK_IMAGE_VIEW_CREATE_INFO &info);
+};
+
+class ColorAttachmentView : public DerivedObject<VK_COLOR_ATTACHMENT_VIEW, Object> {
+public:
+ // vkCreateColorAttachmentView()
+ void init(const Device &dev, const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO &info);
+};
+
+class DepthStencilView : public DerivedObject<VK_DEPTH_STENCIL_VIEW, Object> {
+public:
+ // vkCreateDepthStencilView()
+ void init(const Device &dev, const VK_DEPTH_STENCIL_VIEW_CREATE_INFO &info);
+};
+
+class Shader : public DerivedObject<VK_SHADER, Object> {
+public:
+ // vkCreateShader()
+ void init(const Device &dev, const VK_SHADER_CREATE_INFO &info);
+ VK_RESULT init_try(const Device &dev, const VK_SHADER_CREATE_INFO &info);
+
+ static VK_SHADER_CREATE_INFO create_info(size_t code_size, const void *code, VK_FLAGS flags);
+};
+
+class Pipeline : public DerivedObject<VK_PIPELINE, Object> {
+public:
+ // vkCreateGraphicsPipeline()
+ void init(const Device &dev, const VK_GRAPHICS_PIPELINE_CREATE_INFO &info);
+ // vkCreateGraphicsPipelineDerivative()
+ void init(const Device &dev, const VK_GRAPHICS_PIPELINE_CREATE_INFO &info, const VK_PIPELINE basePipeline);
+ // vkCreateComputePipeline()
+ void init(const Device &dev, const VK_COMPUTE_PIPELINE_CREATE_INFO &info);
+ // vkLoadPipeline()
+ void init(const Device &dev, size_t size, const void *data);
+ // vkLoadPipelineDerivative()
+ void init(const Device &dev, size_t size, const void *data, VK_PIPELINE basePipeline);
+
+ // vkStorePipeline()
+ size_t store(size_t size, void *data);
+};
+
+class Sampler : public DerivedObject<VK_SAMPLER, Object> {
+public:
+ // vkCreateSampler()
+ void init(const Device &dev, const VK_SAMPLER_CREATE_INFO &info);
+};
+
+class DescriptorSetLayout : public DerivedObject<VK_DESCRIPTOR_SET_LAYOUT, Object> {
+public:
+ // vkCreateDescriptorSetLayout()
+ void init(const Device &dev, const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO &info);
+};
+
+class DescriptorSetLayoutChain : public DerivedObject<VK_DESCRIPTOR_SET_LAYOUT_CHAIN, Object> {
+public:
+ // vkCreateDescriptorSetLayoutChain()
+ void init(const Device &dev, const std::vector<const DescriptorSetLayout *> &layouts);
+};
+
+class DescriptorPool : public DerivedObject<VK_DESCRIPTOR_POOL, Object> {
+public:
+ // vkCreateDescriptorPool()
+ void init(const Device &dev, VK_DESCRIPTOR_POOL_USAGE usage,
+ uint32_t max_sets, const VK_DESCRIPTOR_POOL_CREATE_INFO &info);
+
+ // vkResetDescriptorPool()
+ void reset();
+
+ // vkAllocDescriptorSets()
+ std::vector<DescriptorSet *> alloc_sets(VK_DESCRIPTOR_SET_USAGE usage, const std::vector<const DescriptorSetLayout *> &layouts);
+ std::vector<DescriptorSet *> alloc_sets(VK_DESCRIPTOR_SET_USAGE usage, const DescriptorSetLayout &layout, uint32_t count);
+ DescriptorSet *alloc_sets(VK_DESCRIPTOR_SET_USAGE usage, const DescriptorSetLayout &layout);
+
+ // vkClearDescriptorSets()
+ void clear_sets(const std::vector<DescriptorSet *> &sets);
+ void clear_sets(DescriptorSet &set) { clear_sets(std::vector<DescriptorSet *>(1, &set)); }
+};
+
+class DescriptorSet : public DerivedObject<VK_DESCRIPTOR_SET, Object> {
+public:
+ explicit DescriptorSet(VK_DESCRIPTOR_SET set) : DerivedObject(set) {}
+
+ // vkUpdateDescriptors()
+ void update(const std::vector<const void *> &update_array);
+
+ static VK_UPDATE_SAMPLERS update(uint32_t binding, uint32_t index, uint32_t count, const VK_SAMPLER *samplers);
+ static VK_UPDATE_SAMPLERS update(uint32_t binding, uint32_t index, const std::vector<VK_SAMPLER> &samplers);
+
+ static VK_UPDATE_SAMPLER_TEXTURES update(uint32_t binding, uint32_t index, uint32_t count, const VK_SAMPLER_IMAGE_VIEW_INFO *textures);
+ static VK_UPDATE_SAMPLER_TEXTURES update(uint32_t binding, uint32_t index, const std::vector<VK_SAMPLER_IMAGE_VIEW_INFO> &textures);
+
+ static VK_UPDATE_IMAGES update(VK_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count, const VK_IMAGE_VIEW_ATTACH_INFO *views);
+ static VK_UPDATE_IMAGES update(VK_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, const std::vector<VK_IMAGE_VIEW_ATTACH_INFO> &views);
+
+ static VK_UPDATE_BUFFERS update(VK_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count, const VK_BUFFER_VIEW_ATTACH_INFO *views);
+ static VK_UPDATE_BUFFERS update(VK_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, const std::vector<VK_BUFFER_VIEW_ATTACH_INFO> &views);
+
+ static VK_UPDATE_AS_COPY update(VK_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count, const DescriptorSet &set);
+
+ static VK_BUFFER_VIEW_ATTACH_INFO attach_info(const BufferView &view);
+ static VK_IMAGE_VIEW_ATTACH_INFO attach_info(const ImageView &view, VK_IMAGE_LAYOUT layout);
+};
+
+class DynamicVpStateObject : public DerivedObject<VK_DYNAMIC_VP_STATE_OBJECT, DynamicStateObject> {
+public:
+ // vkCreateDynamicViewportState()
+ void init(const Device &dev, const VK_DYNAMIC_VP_STATE_CREATE_INFO &info);
+};
+
+class DynamicRsStateObject : public DerivedObject<VK_DYNAMIC_RS_STATE_OBJECT, DynamicStateObject> {
+public:
+ // vkCreateDynamicRasterState()
+ void init(const Device &dev, const VK_DYNAMIC_RS_STATE_CREATE_INFO &info);
+};
+
+class DynamicCbStateObject : public DerivedObject<VK_DYNAMIC_CB_STATE_OBJECT, DynamicStateObject> {
+public:
+ // vkCreateDynamicColorBlendState()
+ void init(const Device &dev, const VK_DYNAMIC_CB_STATE_CREATE_INFO &info);
+};
+
+class DynamicDsStateObject : public DerivedObject<VK_DYNAMIC_DS_STATE_OBJECT, DynamicStateObject> {
+public:
+ // vkCreateDynamicDepthStencilState()
+ void init(const Device &dev, const VK_DYNAMIC_DS_STATE_CREATE_INFO &info);
+};
+
+class CmdBuffer : public DerivedObject<VK_CMD_BUFFER, Object> {
+public:
+ explicit CmdBuffer() {}
+ explicit CmdBuffer(const Device &dev, const VK_CMD_BUFFER_CREATE_INFO &info) { init(dev, info); }
+
+ // vkCreateCommandBuffer()
+ void init(const Device &dev, const VK_CMD_BUFFER_CREATE_INFO &info);
+
+ // vkBeginCommandBuffer()
+ void begin(const VK_CMD_BUFFER_BEGIN_INFO *info);
+ void begin(VK_RENDER_PASS renderpass_obj, VK_FRAMEBUFFER framebuffer_obj);
+ void begin();
+
+ // vkEndCommandBuffer()
+ // vkResetCommandBuffer()
+ void end();
+ void reset();
+
+ static VK_CMD_BUFFER_CREATE_INFO create_info(uint32_t queueNodeIndex);
+};
+
+inline const void *Object::map(VK_FLAGS flags) const
+{
+ return (primary_mem_) ? primary_mem_->map(flags) : NULL;
+}
+
+inline void *Object::map(VK_FLAGS flags)
+{
+ return (primary_mem_) ? primary_mem_->map(flags) : NULL;
+}
+
+inline void Object::unmap() const
+{
+ if (primary_mem_)
+ primary_mem_->unmap();
+}
+
+inline VK_MEMORY_ALLOC_INFO GpuMemory::alloc_info(const VK_MEMORY_REQUIREMENTS &reqs,
+ const VK_MEMORY_ALLOC_INFO *next_info)
+{
+ VK_MEMORY_ALLOC_INFO info = {};
+ info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
+ if (next_info != NULL)
+ info.pNext = (void *) next_info;
+
+ info.allocationSize = reqs.size;
+ info.memProps = reqs.memProps;
+ info.memType = reqs.memType;
+ info.memPriority = VK_MEMORY_PRIORITY_NORMAL;
+ return info;
+}
+
+inline VK_BUFFER_CREATE_INFO Buffer::create_info(VK_GPU_SIZE size, VK_FLAGS usage)
+{
+ VK_BUFFER_CREATE_INFO info = {};
+ info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
+ info.size = size;
+ info.usage = usage;
+ return info;
+}
+
+inline VK_FENCE_CREATE_INFO Fence::create_info(VK_FENCE_CREATE_FLAGS flags)
+{
+ VK_FENCE_CREATE_INFO info = {};
+ info.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
+ info.flags = flags;
+ return info;
+}
+
+inline VK_FENCE_CREATE_INFO Fence::create_info()
+{
+ VK_FENCE_CREATE_INFO info = {};
+ info.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
+ return info;
+}
+
+inline VK_SEMAPHORE_CREATE_INFO Semaphore::create_info(uint32_t init_count, VK_FLAGS flags)
+{
+ VK_SEMAPHORE_CREATE_INFO info = {};
+ info.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO;
+ info.initialCount = init_count;
+ info.flags = flags;
+ return info;
+}
+
+inline VK_EVENT_CREATE_INFO Event::create_info(VK_FLAGS flags)
+{
+ VK_EVENT_CREATE_INFO info = {};
+ info.sType = VK_STRUCTURE_TYPE_EVENT_CREATE_INFO;
+ info.flags = flags;
+ return info;
+}
+
+inline VK_QUERY_POOL_CREATE_INFO QueryPool::create_info(VK_QUERY_TYPE type, uint32_t slot_count)
+{
+ VK_QUERY_POOL_CREATE_INFO info = {};
+ info.sType = VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO;
+ info.queryType = type;
+ info.slots = slot_count;
+ return info;
+}
+
+inline VK_IMAGE_CREATE_INFO Image::create_info()
+{
+ VK_IMAGE_CREATE_INFO info = {};
+ info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
+ info.extent.width = 1;
+ info.extent.height = 1;
+ info.extent.depth = 1;
+ info.mipLevels = 1;
+ info.arraySize = 1;
+ info.samples = 1;
+ return info;
+}
+
+inline VK_IMAGE_SUBRESOURCE Image::subresource(VK_IMAGE_ASPECT aspect, uint32_t mip_level, uint32_t array_slice)
+{
+ VK_IMAGE_SUBRESOURCE subres = {};
+ subres.aspect = aspect;
+ subres.mipLevel = mip_level;
+ subres.arraySlice = array_slice;
+ return subres;
+}
+
+inline VK_IMAGE_SUBRESOURCE Image::subresource(const VK_IMAGE_SUBRESOURCE_RANGE &range, uint32_t mip_level, uint32_t array_slice)
+{
+ return subresource(range.aspect, range.baseMipLevel + mip_level, range.baseArraySlice + array_slice);
+}
+
+inline VK_IMAGE_SUBRESOURCE_RANGE Image::subresource_range(VK_IMAGE_ASPECT aspect, uint32_t base_mip_level, uint32_t mip_levels,
+ uint32_t base_array_slice, uint32_t array_size)
+{
+ VK_IMAGE_SUBRESOURCE_RANGE range = {};
+ range.aspect = aspect;
+ range.baseMipLevel = base_mip_level;
+ range.mipLevels = mip_levels;
+ range.baseArraySlice = base_array_slice;
+ range.arraySize = array_size;
+ return range;
+}
+
+inline VK_IMAGE_SUBRESOURCE_RANGE Image::subresource_range(const VK_IMAGE_CREATE_INFO &info, VK_IMAGE_ASPECT aspect)
+{
+ return subresource_range(aspect, 0, info.mipLevels, 0, info.arraySize);
+}
+
+inline VK_IMAGE_SUBRESOURCE_RANGE Image::subresource_range(const VK_IMAGE_SUBRESOURCE &subres)
+{
+ return subresource_range(subres.aspect, subres.mipLevel, 1, subres.arraySlice, 1);
+}
+
+inline VK_EXTENT2D Image::extent(int32_t width, int32_t height)
+{
+ VK_EXTENT2D extent = {};
+ extent.width = width;
+ extent.height = height;
+ return extent;
+}
+
+inline VK_EXTENT2D Image::extent(const VK_EXTENT2D &extent, uint32_t mip_level)
+{
+ const int32_t width = (extent.width >> mip_level) ? extent.width >> mip_level : 1;
+ const int32_t height = (extent.height >> mip_level) ? extent.height >> mip_level : 1;
+ return Image::extent(width, height);
+}
+
+inline VK_EXTENT2D Image::extent(const VK_EXTENT3D &extent)
+{
+ return Image::extent(extent.width, extent.height);
+}
+
+inline VK_EXTENT3D Image::extent(int32_t width, int32_t height, int32_t depth)
+{
+ VK_EXTENT3D extent = {};
+ extent.width = width;
+ extent.height = height;
+ extent.depth = depth;
+ return extent;
+}
+
+inline VK_EXTENT3D Image::extent(const VK_EXTENT3D &extent, uint32_t mip_level)
+{
+ const int32_t width = (extent.width >> mip_level) ? extent.width >> mip_level : 1;
+ const int32_t height = (extent.height >> mip_level) ? extent.height >> mip_level : 1;
+ const int32_t depth = (extent.depth >> mip_level) ? extent.depth >> mip_level : 1;
+ return Image::extent(width, height, depth);
+}
+
+inline VK_SHADER_CREATE_INFO Shader::create_info(size_t code_size, const void *code, VK_FLAGS flags)
+{
+ VK_SHADER_CREATE_INFO info = {};
+ info.sType = VK_STRUCTURE_TYPE_SHADER_CREATE_INFO;
+ info.codeSize = code_size;
+ info.pCode = code;
+ info.flags = flags;
+ return info;
+}
+
+inline VK_BUFFER_VIEW_ATTACH_INFO DescriptorSet::attach_info(const BufferView &view)
+{
+ VK_BUFFER_VIEW_ATTACH_INFO info = {};
+ info.sType = VK_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO;
+ info.view = view.obj();
+ return info;
+}
+
+inline VK_IMAGE_VIEW_ATTACH_INFO DescriptorSet::attach_info(const ImageView &view, VK_IMAGE_LAYOUT layout)
+{
+ VK_IMAGE_VIEW_ATTACH_INFO info = {};
+ info.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
+ info.view = view.obj();
+ info.layout = layout;
+ return info;
+}
+
+inline VK_UPDATE_SAMPLERS DescriptorSet::update(uint32_t binding, uint32_t index, uint32_t count, const VK_SAMPLER *samplers)
+{
+ VK_UPDATE_SAMPLERS info = {};
+ info.sType = VK_STRUCTURE_TYPE_UPDATE_SAMPLERS;
+ info.binding = binding;
+ info.arrayIndex = index;
+ info.count = count;
+ info.pSamplers = samplers;
+ return info;
+}
+
+inline VK_UPDATE_SAMPLERS DescriptorSet::update(uint32_t binding, uint32_t index, const std::vector<VK_SAMPLER> &samplers)
+{
+ return update(binding, index, samplers.size(), &samplers[0]);
+}
+
+inline VK_UPDATE_SAMPLER_TEXTURES DescriptorSet::update(uint32_t binding, uint32_t index, uint32_t count, const VK_SAMPLER_IMAGE_VIEW_INFO *textures)
+{
+ VK_UPDATE_SAMPLER_TEXTURES info = {};
+ info.sType = VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES;
+ info.binding = binding;
+ info.arrayIndex = index;
+ info.count = count;
+ info.pSamplerImageViews = textures;
+ return info;
+}
+
+inline VK_UPDATE_SAMPLER_TEXTURES DescriptorSet::update(uint32_t binding, uint32_t index, const std::vector<VK_SAMPLER_IMAGE_VIEW_INFO> &textures)
+{
+ return update(binding, index, textures.size(), &textures[0]);
+}
+
+inline VK_UPDATE_IMAGES DescriptorSet::update(VK_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count,
+ const VK_IMAGE_VIEW_ATTACH_INFO *views)
+{
+ VK_UPDATE_IMAGES info = {};
+ info.sType = VK_STRUCTURE_TYPE_UPDATE_IMAGES;
+ info.descriptorType = type;
+ info.binding = binding;
+ info.arrayIndex = index;
+ info.count = count;
+ info.pImageViews = views;
+ return info;
+}
+
+inline VK_UPDATE_IMAGES DescriptorSet::update(VK_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index,
+ const std::vector<VK_IMAGE_VIEW_ATTACH_INFO> &views)
+{
+ return update(type, binding, index, views.size(), &views[0]);
+}
+
+inline VK_UPDATE_BUFFERS DescriptorSet::update(VK_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count,
+ const VK_BUFFER_VIEW_ATTACH_INFO *views)
+{
+ VK_UPDATE_BUFFERS info = {};
+ info.sType = VK_STRUCTURE_TYPE_UPDATE_BUFFERS;
+ info.descriptorType = type;
+ info.binding = binding;
+ info.arrayIndex = index;
+ info.count = count;
+ info.pBufferViews = views;
+ return info;
+}
+
+inline VK_UPDATE_BUFFERS DescriptorSet::update(VK_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index,
+ const std::vector<VK_BUFFER_VIEW_ATTACH_INFO> &views)
+{
+ return update(type, binding, index, views.size(), &views[0]);
+}
+
+inline VK_UPDATE_AS_COPY DescriptorSet::update(VK_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count, const DescriptorSet &set)
+{
+ VK_UPDATE_AS_COPY info = {};
+ info.sType = VK_STRUCTURE_TYPE_UPDATE_AS_COPY;
+ info.descriptorType = type;
+ info.binding = binding;
+ info.arrayElement = index;
+ info.count = count;
+ info.descriptorSet = set.obj();
+ return info;
+}
+
+inline VK_CMD_BUFFER_CREATE_INFO CmdBuffer::create_info(uint32_t queueNodeIndex)
+{
+ VK_CMD_BUFFER_CREATE_INFO info = {};
+ info.sType = VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO;
+ info.queueNodeIndex = queueNodeIndex;
+ return info;
+}
+
+}; // namespace vk_testing
+
+#endif // VKTESTBINDING_H
-// XGL tests
+// VK tests
//
// Copyright (C) 2014 LunarG, Inc.
//
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
-#ifndef XGLTESTFRAMEWORK_H
-#define XGLTESTFRAMEWORK_H
+#ifndef VKTESTFRAMEWORK_H
+#define VKTESTFRAMEWORK_H
#include "gtest-1.7.0/include/gtest/gtest.h"
#include "ShaderLang.h"
#include "GLSL450Lib.h"
#include "icd-spv.h"
#include "test_common.h"
-#include "xgltestbinding.h"
+#include "vktestbinding.h"
#include "test_environment.h"
#include <stdlib.h>
#include <iostream>
#include <fstream>
#include <list>
-#include <xglWsiX11Ext.h>
+#include <vkWsiX11Ext.h>
// Can be used by tests to record additional details / description of test
#define TEST_DESCRIPTION(desc) RecordProperty("description", desc)
int m_width;
int m_height;
void *m_data;
- XGL_IMAGE m_presentableImage;
- XGL_GPU_MEMORY m_presentableMemory;
+ VK_IMAGE m_presentableImage;
+ VK_GPU_MEMORY m_presentableMemory;
unsigned m_data_size;
};
void Compare(const char *comment, XglImage *image);
void RecordImage(XglImage * image);
void RecordImages(vector<XglImage *> image);
- bool GLSLtoSPV(const XGL_PIPELINE_SHADER_STAGE shader_type,
+ bool GLSLtoSPV(const VK_PIPELINE_SHADER_STAGE shader_type,
const char *pshader,
std::vector<unsigned int> &spv);
static bool m_use_spv;
void SetMessageOptions(EShMessages& messages);
void ProcessConfigFile();
EShLanguage FindLanguage(const std::string& name);
- EShLanguage FindLanguage(const XGL_PIPELINE_SHADER_STAGE shader_type);
+ EShLanguage FindLanguage(const VK_PIPELINE_SHADER_STAGE shader_type);
std::string ConfigFile;
bool SetConfigFile(const std::string& name);
void TearDown();
protected:
- xgl_testing::Device &m_device;
- xgl_testing::Queue &m_queue;
- xgl_testing::CmdBuffer m_cmdbuf;
+ vk_testing::Device &m_device;
+ vk_testing::Queue &m_queue;
+ vk_testing::CmdBuffer m_cmdbuf;
private:
xcb_window_t m_window;
void TearDown();
};
-#endif // XGLTESTFRAMEWORK_H
+#endif // VKTESTFRAMEWORK_H
/*
- * XGL Tests
+ * Vulkan Tests
*
* Copyright (C) 2014 LunarG, Inc.
*
* Courtney Goeltzenleuchter <courtney@lunarg.com>
*/
-#include "xglrenderframework.h"
+#include "vkrenderframework.h"
XglRenderFramework::XglRenderFramework() :
- m_cmdBuffer( XGL_NULL_HANDLE ),
- m_stateRaster( XGL_NULL_HANDLE ),
- m_colorBlend( XGL_NULL_HANDLE ),
- m_stateViewport( XGL_NULL_HANDLE ),
- m_stateDepthStencil( XGL_NULL_HANDLE ),
+ m_cmdBuffer( VK_NULL_HANDLE ),
+ m_stateRaster( VK_NULL_HANDLE ),
+ m_colorBlend( VK_NULL_HANDLE ),
+ m_stateViewport( VK_NULL_HANDLE ),
+ m_stateDepthStencil( VK_NULL_HANDLE ),
m_width( 256.0 ), // default window width
m_height( 256.0 ), // default window height
- m_render_target_fmt( XGL_FMT_R8G8B8A8_UNORM ),
- m_depth_stencil_fmt( XGL_FMT_UNDEFINED ),
+ m_render_target_fmt( VK_FMT_R8G8B8A8_UNORM ),
+ m_depth_stencil_fmt( VK_FMT_UNDEFINED ),
m_depth_clear_color( 1.0 ),
m_stencil_clear_color( 0 )
{
void XglRenderFramework::InitFramework()
{
- XGL_RESULT err;
- XGL_INSTANCE_CREATE_INFO instInfo = {};
- instInfo.sType = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
+ VK_RESULT err;
+ VK_INSTANCE_CREATE_INFO instInfo = {};
+ instInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
instInfo.pNext = NULL;
instInfo.pAppInfo = &app_info;
instInfo.pAllocCb = NULL;
instInfo.extensionCount = 0;
instInfo.ppEnabledExtensionNames = NULL;
- err = xglCreateInstance(&instInfo, &this->inst);
- ASSERT_XGL_SUCCESS(err);
- err = xglEnumerateGpus(inst, XGL_MAX_PHYSICAL_GPUS, &this->gpu_count,
+ err = vkCreateInstance(&instInfo, &this->inst);
+ ASSERT_VK_SUCCESS(err);
+ err = vkEnumerateGpus(inst, VK_MAX_PHYSICAL_GPUS, &this->gpu_count,
objs);
- ASSERT_XGL_SUCCESS(err);
+ ASSERT_VK_SUCCESS(err);
ASSERT_GE(this->gpu_count, 1) << "No GPU available";
m_device = new XglDevice(0, objs[0]);
void XglRenderFramework::ShutdownFramework()
{
- if (m_colorBlend) xglDestroyObject(m_colorBlend);
- if (m_stateDepthStencil) xglDestroyObject(m_stateDepthStencil);
- if (m_stateRaster) xglDestroyObject(m_stateRaster);
- if (m_cmdBuffer) xglDestroyObject(m_cmdBuffer);
- if (m_framebuffer) xglDestroyObject(m_framebuffer);
- if (m_renderPass) xglDestroyObject(m_renderPass);
+ if (m_colorBlend) vkDestroyObject(m_colorBlend);
+ if (m_stateDepthStencil) vkDestroyObject(m_stateDepthStencil);
+ if (m_stateRaster) vkDestroyObject(m_stateRaster);
+ if (m_cmdBuffer) vkDestroyObject(m_cmdBuffer);
+ if (m_framebuffer) vkDestroyObject(m_framebuffer);
+ if (m_renderPass) vkDestroyObject(m_renderPass);
if (m_stateViewport) {
- xglDestroyObject(m_stateViewport);
+ vkDestroyObject(m_stateViewport);
}
while (!m_renderTargets.empty()) {
- xglDestroyObject(m_renderTargets.back()->targetView());
- xglBindObjectMemory(m_renderTargets.back()->image(), 0, XGL_NULL_HANDLE, 0);
- xglDestroyObject(m_renderTargets.back()->image());
- xglFreeMemory(m_renderTargets.back()->memory());
+ vkDestroyObject(m_renderTargets.back()->targetView());
+ vkBindObjectMemory(m_renderTargets.back()->image(), 0, VK_NULL_HANDLE, 0);
+ vkDestroyObject(m_renderTargets.back()->image());
+ vkFreeMemory(m_renderTargets.back()->memory());
m_renderTargets.pop_back();
}
// reset the driver
delete m_device;
- xglDestroyInstance(this->inst);
+ vkDestroyInstance(this->inst);
}
void XglRenderFramework::InitState()
{
- XGL_RESULT err;
+ VK_RESULT err;
- m_render_target_fmt = XGL_FMT_B8G8R8A8_UNORM;
+ m_render_target_fmt = VK_FMT_B8G8R8A8_UNORM;
// create a raster state (solid, back-face culling)
- XGL_DYNAMIC_RS_STATE_CREATE_INFO raster = {};
- raster.sType = XGL_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO;
+ VK_DYNAMIC_RS_STATE_CREATE_INFO raster = {};
+ raster.sType = VK_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO;
raster.pointSize = 1.0;
- err = xglCreateDynamicRasterState( device(), &raster, &m_stateRaster );
- ASSERT_XGL_SUCCESS(err);
+ err = vkCreateDynamicRasterState( device(), &raster, &m_stateRaster );
+ ASSERT_VK_SUCCESS(err);
- XGL_DYNAMIC_CB_STATE_CREATE_INFO blend = {};
- blend.sType = XGL_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO;
- err = xglCreateDynamicColorBlendState(device(), &blend, &m_colorBlend);
- ASSERT_XGL_SUCCESS( err );
+ VK_DYNAMIC_CB_STATE_CREATE_INFO blend = {};
+ blend.sType = VK_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO;
+ err = vkCreateDynamicColorBlendState(device(), &blend, &m_colorBlend);
+ ASSERT_VK_SUCCESS( err );
- XGL_DYNAMIC_DS_STATE_CREATE_INFO depthStencil = {};
- depthStencil.sType = XGL_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO;
+ VK_DYNAMIC_DS_STATE_CREATE_INFO depthStencil = {};
+ depthStencil.sType = VK_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO;
depthStencil.minDepth = 0.f;
depthStencil.maxDepth = 1.f;
depthStencil.stencilFrontRef = 0;
depthStencil.stencilBackRef = 0;
- err = xglCreateDynamicDepthStencilState( device(), &depthStencil, &m_stateDepthStencil );
- ASSERT_XGL_SUCCESS( err );
+ err = vkCreateDynamicDepthStencilState( device(), &depthStencil, &m_stateDepthStencil );
+ ASSERT_VK_SUCCESS( err );
- XGL_CMD_BUFFER_CREATE_INFO cmdInfo = {};
+ VK_CMD_BUFFER_CREATE_INFO cmdInfo = {};
- cmdInfo.sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO;
+ cmdInfo.sType = VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO;
cmdInfo.queueNodeIndex = m_device->graphics_queue_node_index_;
- err = xglCreateCommandBuffer(device(), &cmdInfo, &m_cmdBuffer);
- ASSERT_XGL_SUCCESS(err) << "xglCreateCommandBuffer failed";
+ err = vkCreateCommandBuffer(device(), &cmdInfo, &m_cmdBuffer);
+ ASSERT_VK_SUCCESS(err) << "vkCreateCommandBuffer failed";
}
void XglRenderFramework::InitViewport(float width, float height)
{
- XGL_RESULT err;
+ VK_RESULT err;
- XGL_VIEWPORT viewport;
- XGL_RECT scissor;
+ VK_VIEWPORT viewport;
+ VK_RECT scissor;
- XGL_DYNAMIC_VP_STATE_CREATE_INFO viewportCreate = {};
- viewportCreate.sType = XGL_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO;
+ VK_DYNAMIC_VP_STATE_CREATE_INFO viewportCreate = {};
+ viewportCreate.sType = VK_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO;
viewportCreate.viewportAndScissorCount = 1;
viewport.originX = 0;
viewport.originY = 0;
viewportCreate.pViewports = &viewport;
viewportCreate.pScissors = &scissor;
- err = xglCreateDynamicViewportState( device(), &viewportCreate, &m_stateViewport );
- ASSERT_XGL_SUCCESS( err );
+ err = vkCreateDynamicViewportState( device(), &viewportCreate, &m_stateViewport );
+ ASSERT_VK_SUCCESS( err );
m_width = width;
m_height = height;
}
InitRenderTarget(targets, NULL);
}
-void XglRenderFramework::InitRenderTarget(XGL_DEPTH_STENCIL_BIND_INFO *dsBinding)
+void XglRenderFramework::InitRenderTarget(VK_DEPTH_STENCIL_BIND_INFO *dsBinding)
{
InitRenderTarget(1, dsBinding);
}
-void XglRenderFramework::InitRenderTarget(uint32_t targets, XGL_DEPTH_STENCIL_BIND_INFO *dsBinding)
+void XglRenderFramework::InitRenderTarget(uint32_t targets, VK_DEPTH_STENCIL_BIND_INFO *dsBinding)
{
- std::vector<XGL_ATTACHMENT_LOAD_OP> load_ops;
- std::vector<XGL_ATTACHMENT_STORE_OP> store_ops;
- std::vector<XGL_CLEAR_COLOR> clear_colors;
+ std::vector<VK_ATTACHMENT_LOAD_OP> load_ops;
+ std::vector<VK_ATTACHMENT_STORE_OP> store_ops;
+ std::vector<VK_CLEAR_COLOR> clear_colors;
uint32_t i;
for (i = 0; i < targets; i++) {
XglImage *img = new XglImage(m_device);
img->init(m_width, m_height, m_render_target_fmt,
- XGL_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT |
- XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT);
+ VK_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT |
+ VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT);
m_colorBindings[i].view = img->targetView();
- m_colorBindings[i].layout = XGL_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
+ m_colorBindings[i].layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
m_renderTargets.push_back(img);
- load_ops.push_back(XGL_ATTACHMENT_LOAD_OP_LOAD);
- store_ops.push_back(XGL_ATTACHMENT_STORE_OP_STORE);
+ load_ops.push_back(VK_ATTACHMENT_LOAD_OP_LOAD);
+ store_ops.push_back(VK_ATTACHMENT_STORE_OP_STORE);
clear_colors.push_back(m_clear_color);
// m_mem_ref_mgr.AddMemoryRefs(*img);
}
// Create Framebuffer and RenderPass with color attachments and any depth/stencil attachment
- XGL_FRAMEBUFFER_CREATE_INFO fb_info = {};
- fb_info.sType = XGL_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
+ VK_FRAMEBUFFER_CREATE_INFO fb_info = {};
+ fb_info.sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
fb_info.pNext = NULL;
fb_info.colorAttachmentCount = m_renderTargets.size();
fb_info.pColorAttachments = m_colorBindings;
fb_info.height = (uint32_t)m_height;
fb_info.layers = 1;
- xglCreateFramebuffer(device(), &fb_info, &m_framebuffer);
+ vkCreateFramebuffer(device(), &fb_info, &m_framebuffer);
- XGL_RENDER_PASS_CREATE_INFO rp_info = {};
- rp_info.sType = XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
+ VK_RENDER_PASS_CREATE_INFO rp_info = {};
+ rp_info.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
rp_info.renderArea.extent.width = m_width;
rp_info.renderArea.extent.height = m_height;
if (dsBinding) {
rp_info.depthStencilLayout = dsBinding->layout;
}
- rp_info.depthLoadOp = XGL_ATTACHMENT_LOAD_OP_LOAD;
+ rp_info.depthLoadOp = VK_ATTACHMENT_LOAD_OP_LOAD;
rp_info.depthLoadClearValue = m_depth_clear_color;
- rp_info.depthStoreOp = XGL_ATTACHMENT_STORE_OP_STORE;
- rp_info.stencilLoadOp = XGL_ATTACHMENT_LOAD_OP_LOAD;
+ rp_info.depthStoreOp = VK_ATTACHMENT_STORE_OP_STORE;
+ rp_info.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_LOAD;
rp_info.stencilLoadClearValue = m_stencil_clear_color;
- rp_info.stencilStoreOp = XGL_ATTACHMENT_STORE_OP_STORE;
- xglCreateRenderPass(device(), &rp_info, &m_renderPass);
+ rp_info.stencilStoreOp = VK_ATTACHMENT_STORE_OP_STORE;
+ vkCreateRenderPass(device(), &rp_info, &m_renderPass);
}
-XglDevice::XglDevice(uint32_t id, XGL_PHYSICAL_GPU obj) :
- xgl_testing::Device(obj), id(id)
+XglDevice::XglDevice(uint32_t id, VK_PHYSICAL_GPU obj) :
+ vk_testing::Device(obj), id(id)
{
init();
int XglDescriptorSetObj::AppendDummy()
{
/* request a descriptor but do not update it */
- XGL_DESCRIPTOR_TYPE_COUNT tc = {};
- tc.type = XGL_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER;
+ VK_DESCRIPTOR_TYPE_COUNT tc = {};
+ tc.type = VK_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER;
tc.count = 1;
m_type_counts.push_back(tc);
return m_nextSlot++;
}
-int XglDescriptorSetObj::AppendBuffer(XGL_DESCRIPTOR_TYPE type, XglConstantBufferObj &constantBuffer)
+int XglDescriptorSetObj::AppendBuffer(VK_DESCRIPTOR_TYPE type, XglConstantBufferObj &constantBuffer)
{
- XGL_DESCRIPTOR_TYPE_COUNT tc = {};
+ VK_DESCRIPTOR_TYPE_COUNT tc = {};
tc.type = type;
tc.count = 1;
m_type_counts.push_back(tc);
- m_updateBuffers.push_back(xgl_testing::DescriptorSet::update(type,
+ m_updateBuffers.push_back(vk_testing::DescriptorSet::update(type,
m_nextSlot, 0, 1, &constantBuffer.m_bufferViewInfo));
// Track mem references for this descriptor set object
int XglDescriptorSetObj::AppendSamplerTexture( XglSamplerObj* sampler, XglTextureObj* texture)
{
- XGL_DESCRIPTOR_TYPE_COUNT tc = {};
- tc.type = XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE;
+ VK_DESCRIPTOR_TYPE_COUNT tc = {};
+ tc.type = VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE;
tc.count = 1;
m_type_counts.push_back(tc);
- XGL_SAMPLER_IMAGE_VIEW_INFO tmp = {};
+ VK_SAMPLER_IMAGE_VIEW_INFO tmp = {};
tmp.sampler = sampler->obj();
tmp.pImageView = &texture->m_textureViewInfo;
m_samplerTextureInfo.push_back(tmp);
- m_updateSamplerTextures.push_back(xgl_testing::DescriptorSet::update(m_nextSlot, 0, 1,
- (const XGL_SAMPLER_IMAGE_VIEW_INFO *) NULL));
+ m_updateSamplerTextures.push_back(vk_testing::DescriptorSet::update(m_nextSlot, 0, 1,
+ (const VK_SAMPLER_IMAGE_VIEW_INFO *) NULL));
// Track mem references for the texture referenced here
mem_ref_mgr.AddMemoryRefs(*texture);
return m_nextSlot++;
}
-XGL_DESCRIPTOR_SET_LAYOUT_CHAIN XglDescriptorSetObj::GetLayoutChain() const
+VK_DESCRIPTOR_SET_LAYOUT_CHAIN XglDescriptorSetObj::GetLayoutChain() const
{
return m_layout_chain.obj();
}
-XGL_DESCRIPTOR_SET XglDescriptorSetObj::GetDescriptorSetHandle() const
+VK_DESCRIPTOR_SET XglDescriptorSetObj::GetDescriptorSetHandle() const
{
return m_set->obj();
}
-void XglDescriptorSetObj::CreateXGLDescriptorSet(XglCommandBufferObj *cmdBuffer)
+void XglDescriptorSetObj::CreateVKDescriptorSet(XglCommandBufferObj *cmdBuffer)
{
- // create XGL_DESCRIPTOR_POOL
- XGL_DESCRIPTOR_POOL_CREATE_INFO pool = {};
- pool.sType = XGL_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
+ // create VK_DESCRIPTOR_POOL
+ VK_DESCRIPTOR_POOL_CREATE_INFO pool = {};
+ pool.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
pool.count = m_type_counts.size();
pool.pTypeCount = &m_type_counts[0];
- init(*m_device, XGL_DESCRIPTOR_POOL_USAGE_ONE_SHOT, 1, pool);
+ init(*m_device, VK_DESCRIPTOR_POOL_USAGE_ONE_SHOT, 1, pool);
- // create XGL_DESCRIPTOR_SET_LAYOUT
- vector<XGL_DESCRIPTOR_SET_LAYOUT_BINDING> bindings;
+ // create VK_DESCRIPTOR_SET_LAYOUT
+ vector<VK_DESCRIPTOR_SET_LAYOUT_BINDING> bindings;
bindings.resize(m_type_counts.size());
for (int i = 0; i < m_type_counts.size(); i++) {
bindings[i].descriptorType = m_type_counts[i].type;
bindings[i].count = m_type_counts[i].count;
- bindings[i].stageFlags = XGL_SHADER_STAGE_FLAGS_ALL;
+ bindings[i].stageFlags = VK_SHADER_STAGE_FLAGS_ALL;
bindings[i].pImmutableSamplers = NULL;
}
- // create XGL_DESCRIPTOR_SET_LAYOUT
- XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO layout = {};
- layout.sType = XGL_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
+ // create VK_DESCRIPTOR_SET_LAYOUT
+ VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO layout = {};
+ layout.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
layout.count = bindings.size();
layout.pBinding = &bindings[0];
m_layout.init(*m_device, layout);
- vector<const xgl_testing::DescriptorSetLayout *> layouts;
+ vector<const vk_testing::DescriptorSetLayout *> layouts;
layouts.push_back(&m_layout);
m_layout_chain.init(*m_device, layouts);
- // create XGL_DESCRIPTOR_SET
- m_set = alloc_sets(XGL_DESCRIPTOR_SET_USAGE_STATIC, m_layout);
+ // create VK_DESCRIPTOR_SET
+ m_set = alloc_sets(VK_DESCRIPTOR_SET_USAGE_STATIC, m_layout);
// build the update array
vector<const void *> update_array;
}
// do the updates
- m_device->begin_descriptor_pool_update(XGL_DESCRIPTOR_UPDATE_MODE_FASTEST);
+ m_device->begin_descriptor_pool_update(VK_DESCRIPTOR_UPDATE_MODE_FASTEST);
clear_sets(*m_set);
m_set->update(update_array);
m_device->end_descriptor_pool_update(*cmdBuffer);
XglImage::XglImage(XglDevice *dev)
{
m_device = dev;
- m_imageInfo.view = XGL_NULL_HANDLE;
- m_imageInfo.layout = XGL_IMAGE_LAYOUT_GENERAL;
+ m_imageInfo.view = VK_NULL_HANDLE;
+ m_imageInfo.layout = VK_IMAGE_LAYOUT_GENERAL;
}
void XglImage::ImageMemoryBarrier(
XglCommandBufferObj *cmd_buf,
- XGL_IMAGE_ASPECT aspect,
- XGL_FLAGS output_mask /*=
- XGL_MEMORY_OUTPUT_CPU_WRITE_BIT |
- XGL_MEMORY_OUTPUT_SHADER_WRITE_BIT |
- XGL_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_COPY_BIT*/,
- XGL_FLAGS input_mask /*=
- XGL_MEMORY_INPUT_CPU_READ_BIT |
- XGL_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
- XGL_MEMORY_INPUT_INDEX_FETCH_BIT |
- XGL_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
- XGL_MEMORY_INPUT_UNIFORM_READ_BIT |
- XGL_MEMORY_INPUT_SHADER_READ_BIT |
- XGL_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_INPUT_COPY_BIT*/,
- XGL_IMAGE_LAYOUT image_layout)
-{
- const XGL_IMAGE_SUBRESOURCE_RANGE subresourceRange = subresource_range(aspect, 0, 1, 0, 1);
- XGL_IMAGE_MEMORY_BARRIER barrier;
+ VK_IMAGE_ASPECT aspect,
+ VK_FLAGS output_mask /*=
+ VK_MEMORY_OUTPUT_CPU_WRITE_BIT |
+ VK_MEMORY_OUTPUT_SHADER_WRITE_BIT |
+ VK_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_COPY_BIT*/,
+ VK_FLAGS input_mask /*=
+ VK_MEMORY_INPUT_CPU_READ_BIT |
+ VK_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
+ VK_MEMORY_INPUT_INDEX_FETCH_BIT |
+ VK_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
+ VK_MEMORY_INPUT_UNIFORM_READ_BIT |
+ VK_MEMORY_INPUT_SHADER_READ_BIT |
+ VK_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_INPUT_COPY_BIT*/,
+ VK_IMAGE_LAYOUT image_layout)
+{
+ const VK_IMAGE_SUBRESOURCE_RANGE subresourceRange = subresource_range(aspect, 0, 1, 0, 1);
+ VK_IMAGE_MEMORY_BARRIER barrier;
barrier = image_memory_barrier(output_mask, input_mask, layout(), image_layout,
subresourceRange);
- XGL_IMAGE_MEMORY_BARRIER *pmemory_barrier = &barrier;
+ VK_IMAGE_MEMORY_BARRIER *pmemory_barrier = &barrier;
- XGL_PIPE_EVENT pipe_events[] = { XGL_PIPE_EVENT_GPU_COMMANDS_COMPLETE };
- XGL_PIPELINE_BARRIER pipeline_barrier = {};
- pipeline_barrier.sType = XGL_STRUCTURE_TYPE_PIPELINE_BARRIER;
+ VK_PIPE_EVENT pipe_events[] = { VK_PIPE_EVENT_GPU_COMMANDS_COMPLETE };
+ VK_PIPELINE_BARRIER pipeline_barrier = {};
+ pipeline_barrier.sType = VK_STRUCTURE_TYPE_PIPELINE_BARRIER;
pipeline_barrier.pNext = NULL;
pipeline_barrier.eventCount = 1;
pipeline_barrier.pEvents = pipe_events;
- pipeline_barrier.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ pipeline_barrier.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
pipeline_barrier.memBarrierCount = 1;
pipeline_barrier.ppMemBarriers = (const void **)&pmemory_barrier;
// write barrier to the command buffer
- xglCmdPipelineBarrier(cmd_buf->obj(), &pipeline_barrier);
+ vkCmdPipelineBarrier(cmd_buf->obj(), &pipeline_barrier);
}
void XglImage::SetLayout(XglCommandBufferObj *cmd_buf,
- XGL_IMAGE_ASPECT aspect,
- XGL_IMAGE_LAYOUT image_layout)
-{
- XGL_FLAGS output_mask, input_mask;
- const XGL_FLAGS all_cache_outputs =
- XGL_MEMORY_OUTPUT_CPU_WRITE_BIT |
- XGL_MEMORY_OUTPUT_SHADER_WRITE_BIT |
- XGL_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_COPY_BIT;
- const XGL_FLAGS all_cache_inputs =
- XGL_MEMORY_INPUT_CPU_READ_BIT |
- XGL_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
- XGL_MEMORY_INPUT_INDEX_FETCH_BIT |
- XGL_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
- XGL_MEMORY_INPUT_UNIFORM_READ_BIT |
- XGL_MEMORY_INPUT_SHADER_READ_BIT |
- XGL_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_INPUT_COPY_BIT;
+ VK_IMAGE_ASPECT aspect,
+ VK_IMAGE_LAYOUT image_layout)
+{
+ VK_FLAGS output_mask, input_mask;
+ const VK_FLAGS all_cache_outputs =
+ VK_MEMORY_OUTPUT_CPU_WRITE_BIT |
+ VK_MEMORY_OUTPUT_SHADER_WRITE_BIT |
+ VK_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_COPY_BIT;
+ const VK_FLAGS all_cache_inputs =
+ VK_MEMORY_INPUT_CPU_READ_BIT |
+ VK_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
+ VK_MEMORY_INPUT_INDEX_FETCH_BIT |
+ VK_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
+ VK_MEMORY_INPUT_UNIFORM_READ_BIT |
+ VK_MEMORY_INPUT_SHADER_READ_BIT |
+ VK_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_INPUT_COPY_BIT;
if (image_layout == m_imageInfo.layout) {
return;
}
switch (image_layout) {
- case XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL:
- output_mask = XGL_MEMORY_OUTPUT_COPY_BIT;
- input_mask = XGL_MEMORY_INPUT_SHADER_READ_BIT | XGL_MEMORY_INPUT_COPY_BIT;
+ case VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL:
+ output_mask = VK_MEMORY_OUTPUT_COPY_BIT;
+ input_mask = VK_MEMORY_INPUT_SHADER_READ_BIT | VK_MEMORY_INPUT_COPY_BIT;
break;
- case XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL:
- output_mask = XGL_MEMORY_OUTPUT_COPY_BIT;
- input_mask = XGL_MEMORY_INPUT_SHADER_READ_BIT | XGL_MEMORY_INPUT_COPY_BIT;
+ case VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL:
+ output_mask = VK_MEMORY_OUTPUT_COPY_BIT;
+ input_mask = VK_MEMORY_INPUT_SHADER_READ_BIT | VK_MEMORY_INPUT_COPY_BIT;
break;
- case XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL:
+ case VK_IMAGE_LAYOUT_CLEAR_OPTIMAL:
break;
- case XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL:
- output_mask = XGL_MEMORY_OUTPUT_COPY_BIT;
- input_mask = XGL_MEMORY_INPUT_SHADER_READ_BIT | XGL_MEMORY_INPUT_COPY_BIT;
+ case VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL:
+ output_mask = VK_MEMORY_OUTPUT_COPY_BIT;
+ input_mask = VK_MEMORY_INPUT_SHADER_READ_BIT | VK_MEMORY_INPUT_COPY_BIT;
break;
default:
m_imageInfo.layout = image_layout;
}
-void XglImage::SetLayout(XGL_IMAGE_ASPECT aspect,
- XGL_IMAGE_LAYOUT image_layout)
+void XglImage::SetLayout(VK_IMAGE_ASPECT aspect,
+ VK_IMAGE_LAYOUT image_layout)
{
- XGL_RESULT err;
+ VK_RESULT err;
XglCommandBufferObj cmd_buf(m_device);
/* Build command buffer to set image layout in the driver */
cmd_buf.QueueCommandBuffer();
}
-bool XglImage::IsCompatible(XGL_FLAGS usage, XGL_FLAGS features)
+bool XglImage::IsCompatible(VK_FLAGS usage, VK_FLAGS features)
{
- if ((usage & XGL_IMAGE_USAGE_SHADER_ACCESS_READ_BIT) &&
- !(features & XGL_FORMAT_IMAGE_SHADER_READ_BIT))
+ if ((usage & VK_IMAGE_USAGE_SHADER_ACCESS_READ_BIT) &&
+ !(features & VK_FORMAT_IMAGE_SHADER_READ_BIT))
return false;
- if ((usage & XGL_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT) &&
- !(features & XGL_FORMAT_IMAGE_SHADER_WRITE_BIT))
+ if ((usage & VK_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT) &&
+ !(features & VK_FORMAT_IMAGE_SHADER_WRITE_BIT))
return false;
return true;
}
void XglImage::init(uint32_t w, uint32_t h,
- XGL_FORMAT fmt, XGL_FLAGS usage,
- XGL_IMAGE_TILING requested_tiling)
+ VK_FORMAT fmt, VK_FLAGS usage,
+ VK_IMAGE_TILING requested_tiling)
{
uint32_t mipCount;
- XGL_FORMAT_PROPERTIES image_fmt;
- XGL_IMAGE_TILING tiling;
- XGL_RESULT err;
+ VK_FORMAT_PROPERTIES image_fmt;
+ VK_IMAGE_TILING tiling;
+ VK_RESULT err;
size_t size;
mipCount = 0;
}
size = sizeof(image_fmt);
- err = xglGetFormatInfo(m_device->obj(), fmt,
- XGL_INFO_TYPE_FORMAT_PROPERTIES,
+ err = vkGetFormatInfo(m_device->obj(), fmt,
+ VK_INFO_TYPE_FORMAT_PROPERTIES,
&size, &image_fmt);
- ASSERT_XGL_SUCCESS(err);
+ ASSERT_VK_SUCCESS(err);
- if (requested_tiling == XGL_LINEAR_TILING) {
+ if (requested_tiling == VK_LINEAR_TILING) {
if (IsCompatible(usage, image_fmt.linearTilingFeatures)) {
- tiling = XGL_LINEAR_TILING;
+ tiling = VK_LINEAR_TILING;
} else if (IsCompatible(usage, image_fmt.optimalTilingFeatures)) {
- tiling = XGL_OPTIMAL_TILING;
+ tiling = VK_OPTIMAL_TILING;
} else {
ASSERT_TRUE(false) << "Error: Cannot find requested tiling configuration";
}
} else if (IsCompatible(usage, image_fmt.optimalTilingFeatures)) {
- tiling = XGL_OPTIMAL_TILING;
+ tiling = VK_OPTIMAL_TILING;
} else if (IsCompatible(usage, image_fmt.linearTilingFeatures)) {
- tiling = XGL_LINEAR_TILING;
+ tiling = VK_LINEAR_TILING;
} else {
ASSERT_TRUE(false) << "Error: Cannot find requested tiling configuration";
}
- XGL_IMAGE_CREATE_INFO imageCreateInfo = xgl_testing::Image::create_info();
- imageCreateInfo.imageType = XGL_IMAGE_2D;
+ VK_IMAGE_CREATE_INFO imageCreateInfo = vk_testing::Image::create_info();
+ imageCreateInfo.imageType = VK_IMAGE_2D;
imageCreateInfo.format = fmt;
imageCreateInfo.extent.width = w;
imageCreateInfo.extent.height = h;
imageCreateInfo.usage = usage;
- xgl_testing::Image::init(*m_device, imageCreateInfo);
+ vk_testing::Image::init(*m_device, imageCreateInfo);
- if (usage & XGL_IMAGE_USAGE_SHADER_ACCESS_READ_BIT) {
- SetLayout(XGL_IMAGE_ASPECT_COLOR, XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL);
+ if (usage & VK_IMAGE_USAGE_SHADER_ACCESS_READ_BIT) {
+ SetLayout(VK_IMAGE_ASPECT_COLOR, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL);
} else {
- SetLayout(XGL_IMAGE_ASPECT_COLOR, XGL_IMAGE_LAYOUT_GENERAL);
+ SetLayout(VK_IMAGE_ASPECT_COLOR, VK_IMAGE_LAYOUT_GENERAL);
}
}
-XGL_RESULT XglImage::MapMemory(void** ptr)
+VK_RESULT XglImage::MapMemory(void** ptr)
{
*ptr = map();
- return (*ptr) ? XGL_SUCCESS : XGL_ERROR_UNKNOWN;
+ return (*ptr) ? VK_SUCCESS : VK_ERROR_UNKNOWN;
}
-XGL_RESULT XglImage::UnmapMemory()
+VK_RESULT XglImage::UnmapMemory()
{
unmap();
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-XGL_RESULT XglImage::CopyImage(XglImage &src_image)
+VK_RESULT XglImage::CopyImage(XglImage &src_image)
{
- XGL_RESULT err;
+ VK_RESULT err;
XglCommandBufferObj cmd_buf(m_device);
- XGL_IMAGE_LAYOUT src_image_layout, dest_image_layout;
+ VK_IMAGE_LAYOUT src_image_layout, dest_image_layout;
/* Build command buffer to copy staging texture to usable texture */
err = cmd_buf.BeginCommandBuffer();
/* TODO: Can we determine image aspect from image object? */
src_image_layout = src_image.layout();
- src_image.SetLayout(&cmd_buf, XGL_IMAGE_ASPECT_COLOR, XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL);
+ src_image.SetLayout(&cmd_buf, VK_IMAGE_ASPECT_COLOR, VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL);
dest_image_layout = this->layout();
- this->SetLayout(&cmd_buf, XGL_IMAGE_ASPECT_COLOR, XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL);
+ this->SetLayout(&cmd_buf, VK_IMAGE_ASPECT_COLOR, VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL);
- XGL_IMAGE_COPY copy_region = {};
- copy_region.srcSubresource.aspect = XGL_IMAGE_ASPECT_COLOR;
+ VK_IMAGE_COPY copy_region = {};
+ copy_region.srcSubresource.aspect = VK_IMAGE_ASPECT_COLOR;
copy_region.srcSubresource.arraySlice = 0;
copy_region.srcSubresource.mipLevel = 0;
copy_region.srcOffset.x = 0;
copy_region.srcOffset.y = 0;
copy_region.srcOffset.z = 0;
- copy_region.destSubresource.aspect = XGL_IMAGE_ASPECT_COLOR;
+ copy_region.destSubresource.aspect = VK_IMAGE_ASPECT_COLOR;
copy_region.destSubresource.arraySlice = 0;
copy_region.destSubresource.mipLevel = 0;
copy_region.destOffset.x = 0;
copy_region.destOffset.z = 0;
copy_region.extent = src_image.extent();
- xglCmdCopyImage(cmd_buf.obj(),
+ vkCmdCopyImage(cmd_buf.obj(),
src_image.obj(), src_image.layout(),
obj(), layout(),
                    1, &copy_region);
cmd_buf.mem_ref_mgr.AddMemoryRefs(src_image);
cmd_buf.mem_ref_mgr.AddMemoryRefs(*this);
- src_image.SetLayout(&cmd_buf, XGL_IMAGE_ASPECT_COLOR, src_image_layout);
+ src_image.SetLayout(&cmd_buf, VK_IMAGE_ASPECT_COLOR, src_image_layout);
- this->SetLayout(&cmd_buf, XGL_IMAGE_ASPECT_COLOR, dest_image_layout);
+ this->SetLayout(&cmd_buf, VK_IMAGE_ASPECT_COLOR, dest_image_layout);
err = cmd_buf.EndCommandBuffer();
assert(!err);
cmd_buf.QueueCommandBuffer();
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
XglTextureObj::XglTextureObj(XglDevice *device, uint32_t *colors)
:XglImage(device)
{
m_device = device;
- const XGL_FORMAT tex_format = XGL_FMT_B8G8R8A8_UNORM;
+ const VK_FORMAT tex_format = VK_FMT_B8G8R8A8_UNORM;
uint32_t tex_colors[2] = { 0xffff0000, 0xff00ff00 };
void *data;
int32_t x, y;
XglImage stagingImage(device);
- stagingImage.init(16, 16, tex_format, 0, XGL_LINEAR_TILING);
- XGL_SUBRESOURCE_LAYOUT layout = stagingImage.subresource_layout(subresource(XGL_IMAGE_ASPECT_COLOR, 0, 0));
+ stagingImage.init(16, 16, tex_format, 0, VK_LINEAR_TILING);
+ VK_SUBRESOURCE_LAYOUT layout = stagingImage.subresource_layout(subresource(VK_IMAGE_ASPECT_COLOR, 0, 0));
if (colors == NULL)
colors = tex_colors;
memset(&m_textureViewInfo,0,sizeof(m_textureViewInfo));
- m_textureViewInfo.sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
+ m_textureViewInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
- XGL_IMAGE_VIEW_CREATE_INFO view = {};
- view.sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
+ VK_IMAGE_VIEW_CREATE_INFO view = {};
+ view.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
view.pNext = NULL;
- view.image = XGL_NULL_HANDLE;
- view.viewType = XGL_IMAGE_VIEW_2D;
+ view.image = VK_NULL_HANDLE;
+ view.viewType = VK_IMAGE_VIEW_2D;
view.format = tex_format;
- view.channels.r = XGL_CHANNEL_SWIZZLE_R;
- view.channels.g = XGL_CHANNEL_SWIZZLE_G;
- view.channels.b = XGL_CHANNEL_SWIZZLE_B;
- view.channels.a = XGL_CHANNEL_SWIZZLE_A;
- view.subresourceRange.aspect = XGL_IMAGE_ASPECT_COLOR;
+ view.channels.r = VK_CHANNEL_SWIZZLE_R;
+ view.channels.g = VK_CHANNEL_SWIZZLE_G;
+ view.channels.b = VK_CHANNEL_SWIZZLE_B;
+ view.channels.a = VK_CHANNEL_SWIZZLE_A;
+ view.subresourceRange.aspect = VK_IMAGE_ASPECT_COLOR;
view.subresourceRange.baseMipLevel = 0;
view.subresourceRange.mipLevels = 1;
view.subresourceRange.baseArraySlice = 0;
view.minLod = 0.0f;
/* create image */
- init(16, 16, tex_format, XGL_IMAGE_USAGE_SHADER_ACCESS_READ_BIT, XGL_OPTIMAL_TILING);
+ init(16, 16, tex_format, VK_IMAGE_USAGE_SHADER_ACCESS_READ_BIT, VK_OPTIMAL_TILING);
/* create image view */
view.image = obj();
{
m_device = device;
- XGL_SAMPLER_CREATE_INFO samplerCreateInfo;
+ VK_SAMPLER_CREATE_INFO samplerCreateInfo;
memset(&samplerCreateInfo,0,sizeof(samplerCreateInfo));
- samplerCreateInfo.sType = XGL_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
- samplerCreateInfo.magFilter = XGL_TEX_FILTER_NEAREST;
- samplerCreateInfo.minFilter = XGL_TEX_FILTER_NEAREST;
- samplerCreateInfo.mipMode = XGL_TEX_MIPMAP_BASE;
- samplerCreateInfo.addressU = XGL_TEX_ADDRESS_WRAP;
- samplerCreateInfo.addressV = XGL_TEX_ADDRESS_WRAP;
- samplerCreateInfo.addressW = XGL_TEX_ADDRESS_WRAP;
+ samplerCreateInfo.sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
+ samplerCreateInfo.magFilter = VK_TEX_FILTER_NEAREST;
+ samplerCreateInfo.minFilter = VK_TEX_FILTER_NEAREST;
+ samplerCreateInfo.mipMode = VK_TEX_MIPMAP_BASE;
+ samplerCreateInfo.addressU = VK_TEX_ADDRESS_WRAP;
+ samplerCreateInfo.addressV = VK_TEX_ADDRESS_WRAP;
+ samplerCreateInfo.addressW = VK_TEX_ADDRESS_WRAP;
samplerCreateInfo.mipLodBias = 0.0;
samplerCreateInfo.maxAnisotropy = 0.0;
- samplerCreateInfo.compareFunc = XGL_COMPARE_NEVER;
+ samplerCreateInfo.compareFunc = VK_COMPARE_NEVER;
samplerCreateInfo.minLod = 0.0;
samplerCreateInfo.maxLod = 0.0;
- samplerCreateInfo.borderColorType = XGL_BORDER_COLOR_OPAQUE_WHITE;
+ samplerCreateInfo.borderColorType = VK_BORDER_COLOR_OPAQUE_WHITE;
init(*m_device, samplerCreateInfo);
}
unmap();
// set up the buffer view for the constant buffer
- XGL_BUFFER_VIEW_CREATE_INFO view_info = {};
- view_info.sType = XGL_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO;
+ VK_BUFFER_VIEW_CREATE_INFO view_info = {};
+ view_info.sType = VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO;
view_info.buffer = obj();
- view_info.viewType = XGL_BUFFER_VIEW_RAW;
+ view_info.viewType = VK_BUFFER_VIEW_RAW;
view_info.offset = 0;
view_info.range = allocationSize;
m_bufferView.init(*m_device, view_info);
- this->m_bufferViewInfo.sType = XGL_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO;
+ this->m_bufferViewInfo.sType = VK_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO;
this->m_bufferViewInfo.view = m_bufferView.obj();
}
-void XglConstantBufferObj::Bind(XGL_CMD_BUFFER cmdBuffer, XGL_GPU_SIZE offset, uint32_t binding)
+void XglConstantBufferObj::Bind(VK_CMD_BUFFER cmdBuffer, VK_GPU_SIZE offset, uint32_t binding)
{
- xglCmdBindVertexBuffer(cmdBuffer, obj(), offset, binding);
+ vkCmdBindVertexBuffer(cmdBuffer, obj(), offset, binding);
}
void XglConstantBufferObj::BufferMemoryBarrier(
- XGL_FLAGS outputMask /*=
- XGL_MEMORY_OUTPUT_CPU_WRITE_BIT |
- XGL_MEMORY_OUTPUT_SHADER_WRITE_BIT |
- XGL_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_COPY_BIT*/,
- XGL_FLAGS inputMask /*=
- XGL_MEMORY_INPUT_CPU_READ_BIT |
- XGL_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
- XGL_MEMORY_INPUT_INDEX_FETCH_BIT |
- XGL_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
- XGL_MEMORY_INPUT_UNIFORM_READ_BIT |
- XGL_MEMORY_INPUT_SHADER_READ_BIT |
- XGL_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_INPUT_COPY_BIT*/)
-{
- XGL_RESULT err = XGL_SUCCESS;
+ VK_FLAGS outputMask /*=
+ VK_MEMORY_OUTPUT_CPU_WRITE_BIT |
+ VK_MEMORY_OUTPUT_SHADER_WRITE_BIT |
+ VK_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_COPY_BIT*/,
+ VK_FLAGS inputMask /*=
+ VK_MEMORY_INPUT_CPU_READ_BIT |
+ VK_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
+ VK_MEMORY_INPUT_INDEX_FETCH_BIT |
+ VK_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
+ VK_MEMORY_INPUT_UNIFORM_READ_BIT |
+ VK_MEMORY_INPUT_SHADER_READ_BIT |
+ VK_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_INPUT_COPY_BIT*/)
+{
+ VK_RESULT err = VK_SUCCESS;
if (!m_commandBuffer)
{
- m_fence.init(*m_device, xgl_testing::Fence::create_info());
+ m_fence.init(*m_device, vk_testing::Fence::create_info());
m_commandBuffer = new XglCommandBufferObj(m_device);
}
}
// open the command buffer
- XGL_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {};
- cmd_buf_info.sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO;
+ VK_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {};
+ cmd_buf_info.sType = VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO;
cmd_buf_info.pNext = NULL;
cmd_buf_info.flags = 0;
err = m_commandBuffer->BeginCommandBuffer(&cmd_buf_info);
- ASSERT_XGL_SUCCESS(err);
+ ASSERT_VK_SUCCESS(err);
- XGL_BUFFER_MEMORY_BARRIER memory_barrier =
+ VK_BUFFER_MEMORY_BARRIER memory_barrier =
buffer_memory_barrier(outputMask, inputMask, 0, m_numVertices * m_stride);
- XGL_BUFFER_MEMORY_BARRIER *pmemory_barrier = &memory_barrier;
+ VK_BUFFER_MEMORY_BARRIER *pmemory_barrier = &memory_barrier;
- XGL_PIPE_EVENT set_events[] = { XGL_PIPE_EVENT_GPU_COMMANDS_COMPLETE };
- XGL_PIPELINE_BARRIER pipeline_barrier = {};
- pipeline_barrier.sType = XGL_STRUCTURE_TYPE_PIPELINE_BARRIER;
+ VK_PIPE_EVENT set_events[] = { VK_PIPE_EVENT_GPU_COMMANDS_COMPLETE };
+ VK_PIPELINE_BARRIER pipeline_barrier = {};
+ pipeline_barrier.sType = VK_STRUCTURE_TYPE_PIPELINE_BARRIER;
pipeline_barrier.eventCount = 1;
pipeline_barrier.pEvents = set_events;
- pipeline_barrier.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ pipeline_barrier.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
pipeline_barrier.memBarrierCount = 1;
pipeline_barrier.ppMemBarriers = (const void **)&pmemory_barrier;
// finish recording the command buffer
err = m_commandBuffer->EndCommandBuffer();
- ASSERT_XGL_SUCCESS(err);
+ ASSERT_VK_SUCCESS(err);
/*
* Tell driver about memory references made in this command buffer
m_commandBuffer->mem_ref_mgr.EmitAddMemoryRefs(m_device->m_queue);
// submit the command buffer to the universal queue
- XGL_CMD_BUFFER bufferArray[1];
+ VK_CMD_BUFFER bufferArray[1];
bufferArray[0] = m_commandBuffer->GetBufferHandle();
- err = xglQueueSubmit( m_device->m_queue, 1, bufferArray, m_fence.obj() );
- ASSERT_XGL_SUCCESS(err);
+ err = vkQueueSubmit( m_device->m_queue, 1, bufferArray, m_fence.obj() );
+ ASSERT_VK_SUCCESS(err);
}
XglIndexBufferObj::XglIndexBufferObj(XglDevice *device)
}
-void XglIndexBufferObj::CreateAndInitBuffer(int numIndexes, XGL_INDEX_TYPE indexType, const void* data)
+void XglIndexBufferObj::CreateAndInitBuffer(int numIndexes, VK_INDEX_TYPE indexType, const void* data)
{
- XGL_FORMAT viewFormat;
+ VK_FORMAT viewFormat;
m_numVertices = numIndexes;
m_indexType = indexType;
switch (indexType) {
- case XGL_INDEX_8:
+ case VK_INDEX_8:
m_stride = 1;
- viewFormat = XGL_FMT_R8_UINT;
+ viewFormat = VK_FMT_R8_UINT;
break;
- case XGL_INDEX_16:
+ case VK_INDEX_16:
m_stride = 2;
- viewFormat = XGL_FMT_R16_UINT;
+ viewFormat = VK_FMT_R16_UINT;
break;
- case XGL_INDEX_32:
+ case VK_INDEX_32:
m_stride = 4;
- viewFormat = XGL_FMT_R32_UINT;
+ viewFormat = VK_FMT_R32_UINT;
break;
default:
assert(!"unknown index type");
unmap();
// set up the buffer view for the constant buffer
- XGL_BUFFER_VIEW_CREATE_INFO view_info = {};
- view_info.sType = XGL_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO;
+ VK_BUFFER_VIEW_CREATE_INFO view_info = {};
+ view_info.sType = VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO;
view_info.buffer = obj();
- view_info.viewType = XGL_BUFFER_VIEW_TYPED;
+ view_info.viewType = VK_BUFFER_VIEW_TYPED;
view_info.format = viewFormat;
view_info.offset = 0;
view_info.range = allocationSize;
m_bufferView.init(*m_device, view_info);
- this->m_bufferViewInfo.sType = XGL_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO;
+ this->m_bufferViewInfo.sType = VK_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO;
this->m_bufferViewInfo.view = m_bufferView.obj();
}
-void XglIndexBufferObj::Bind(XGL_CMD_BUFFER cmdBuffer, XGL_GPU_SIZE offset)
+void XglIndexBufferObj::Bind(VK_CMD_BUFFER cmdBuffer, VK_GPU_SIZE offset)
{
- xglCmdBindIndexBuffer(cmdBuffer, obj(), offset, m_indexType);
+ vkCmdBindIndexBuffer(cmdBuffer, obj(), offset, m_indexType);
}
-XGL_INDEX_TYPE XglIndexBufferObj::GetIndexType()
+VK_INDEX_TYPE XglIndexBufferObj::GetIndexType()
{
return m_indexType;
}
-XGL_PIPELINE_SHADER_STAGE_CREATE_INFO* XglShaderObj::GetStageCreateInfo()
+VK_PIPELINE_SHADER_STAGE_CREATE_INFO* XglShaderObj::GetStageCreateInfo()
{
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO *stageInfo = (XGL_PIPELINE_SHADER_STAGE_CREATE_INFO*) calloc( 1,sizeof(XGL_PIPELINE_SHADER_STAGE_CREATE_INFO) );
- stageInfo->sType = XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO *stageInfo = (VK_PIPELINE_SHADER_STAGE_CREATE_INFO*) calloc( 1,sizeof(VK_PIPELINE_SHADER_STAGE_CREATE_INFO) );
+ stageInfo->sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
stageInfo->shader.stage = m_stage;
stageInfo->shader.shader = obj();
stageInfo->shader.linkConstBufferCount = 0;
- stageInfo->shader.pLinkConstBufferInfo = XGL_NULL_HANDLE;
+ stageInfo->shader.pLinkConstBufferInfo = VK_NULL_HANDLE;
return stageInfo;
}
-XglShaderObj::XglShaderObj(XglDevice *device, const char * shader_code, XGL_PIPELINE_SHADER_STAGE stage, XglRenderFramework *framework)
+XglShaderObj::XglShaderObj(XglDevice *device, const char * shader_code, VK_PIPELINE_SHADER_STAGE stage, XglRenderFramework *framework)
{
- XGL_RESULT err = XGL_SUCCESS;
+ VK_RESULT err = VK_SUCCESS;
std::vector<unsigned int> spv;
- XGL_SHADER_CREATE_INFO createInfo;
+ VK_SHADER_CREATE_INFO createInfo;
size_t shader_len;
m_stage = stage;
m_device = device;
- createInfo.sType = XGL_STRUCTURE_TYPE_SHADER_CREATE_INFO;
+ createInfo.sType = VK_STRUCTURE_TYPE_SHADER_CREATE_INFO;
createInfo.pNext = NULL;
if (!framework->m_use_spv) {
createInfo.pCode = malloc(createInfo.codeSize);
createInfo.flags = 0;
- /* try version 0 first: XGL_PIPELINE_SHADER_STAGE followed by GLSL */
+ /* try version 0 first: VK_PIPELINE_SHADER_STAGE followed by GLSL */
((uint32_t *) createInfo.pCode)[0] = ICD_SPV_MAGIC;
((uint32_t *) createInfo.pCode)[1] = 0;
((uint32_t *) createInfo.pCode)[2] = stage;
if (framework->m_use_spv || err) {
std::vector<unsigned int> spv;
- err = XGL_SUCCESS;
+ err = VK_SUCCESS;
// Use Reference GLSL to SPV compiler
framework->GLSLtoSPV(stage, shader_code, spv);
m_vi_state.attributeCount = m_vi_state.bindingCount = 0;
m_vertexBufferCount = 0;
- m_ia_state.sType = XGL_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO;
- m_ia_state.pNext = XGL_NULL_HANDLE;
- m_ia_state.topology = XGL_TOPOLOGY_TRIANGLE_LIST;
- m_ia_state.disableVertexReuse = XGL_FALSE;
- m_ia_state.primitiveRestartEnable = XGL_FALSE;
+ m_ia_state.sType = VK_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO;
+ m_ia_state.pNext = VK_NULL_HANDLE;
+ m_ia_state.topology = VK_TOPOLOGY_TRIANGLE_LIST;
+ m_ia_state.disableVertexReuse = VK_FALSE;
+ m_ia_state.primitiveRestartEnable = VK_FALSE;
m_ia_state.primitiveRestartIndex = 0;
- m_rs_state.sType = XGL_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO;
+ m_rs_state.sType = VK_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO;
m_rs_state.pNext = &m_ia_state;
- m_rs_state.depthClipEnable = XGL_FALSE;
- m_rs_state.rasterizerDiscardEnable = XGL_FALSE;
- m_rs_state.programPointSize = XGL_FALSE;
- m_rs_state.pointOrigin = XGL_COORDINATE_ORIGIN_UPPER_LEFT;
- m_rs_state.provokingVertex = XGL_PROVOKING_VERTEX_LAST;
- m_rs_state.fillMode = XGL_FILL_SOLID;
- m_rs_state.cullMode = XGL_CULL_NONE;
- m_rs_state.frontFace = XGL_FRONT_FACE_CCW;
+ m_rs_state.depthClipEnable = VK_FALSE;
+ m_rs_state.rasterizerDiscardEnable = VK_FALSE;
+ m_rs_state.programPointSize = VK_FALSE;
+ m_rs_state.pointOrigin = VK_COORDINATE_ORIGIN_UPPER_LEFT;
+ m_rs_state.provokingVertex = VK_PROVOKING_VERTEX_LAST;
+ m_rs_state.fillMode = VK_FILL_SOLID;
+ m_rs_state.cullMode = VK_CULL_NONE;
+ m_rs_state.frontFace = VK_FRONT_FACE_CCW;
memset(&m_cb_state,0,sizeof(m_cb_state));
- m_cb_state.sType = XGL_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO;
+ m_cb_state.sType = VK_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO;
m_cb_state.pNext = &m_rs_state;
- m_cb_state.alphaToCoverageEnable = XGL_FALSE;
- m_cb_state.logicOp = XGL_LOGIC_OP_COPY;
+ m_cb_state.alphaToCoverageEnable = VK_FALSE;
+ m_cb_state.logicOp = VK_LOGIC_OP_COPY;
m_ms_state.pNext = &m_cb_state;
- m_ms_state.sType = XGL_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO;
- m_ms_state.multisampleEnable = XGL_FALSE;
+ m_ms_state.sType = VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO;
+ m_ms_state.multisampleEnable = VK_FALSE;
m_ms_state.sampleMask = 1; // Do we have to specify MSAA even just to disable it?
m_ms_state.samples = 1;
m_ms_state.minSampleShading = 0;
m_ms_state.sampleShadingEnable = 0;
- m_ds_state.sType = XGL_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO;
+ m_ds_state.sType = VK_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO;
    m_ds_state.pNext = &m_ms_state;
- m_ds_state.format = XGL_FMT_D32_SFLOAT;
- m_ds_state.depthTestEnable = XGL_FALSE;
- m_ds_state.depthWriteEnable = XGL_FALSE;
- m_ds_state.depthBoundsEnable = XGL_FALSE;
- m_ds_state.depthFunc = XGL_COMPARE_LESS_EQUAL;
- m_ds_state.back.stencilDepthFailOp = XGL_STENCIL_OP_KEEP;
- m_ds_state.back.stencilFailOp = XGL_STENCIL_OP_KEEP;
- m_ds_state.back.stencilPassOp = XGL_STENCIL_OP_KEEP;
- m_ds_state.back.stencilFunc = XGL_COMPARE_ALWAYS;
- m_ds_state.stencilTestEnable = XGL_FALSE;
+ m_ds_state.format = VK_FMT_D32_SFLOAT;
+ m_ds_state.depthTestEnable = VK_FALSE;
+ m_ds_state.depthWriteEnable = VK_FALSE;
+ m_ds_state.depthBoundsEnable = VK_FALSE;
+ m_ds_state.depthFunc = VK_COMPARE_LESS_EQUAL;
+ m_ds_state.back.stencilDepthFailOp = VK_STENCIL_OP_KEEP;
+ m_ds_state.back.stencilFailOp = VK_STENCIL_OP_KEEP;
+ m_ds_state.back.stencilPassOp = VK_STENCIL_OP_KEEP;
+ m_ds_state.back.stencilFunc = VK_COMPARE_ALWAYS;
+ m_ds_state.stencilTestEnable = VK_FALSE;
m_ds_state.front = m_ds_state.back;
- XGL_PIPELINE_CB_ATTACHMENT_STATE att = {};
- att.blendEnable = XGL_FALSE;
- att.format = XGL_FMT_B8G8R8A8_UNORM;
+ VK_PIPELINE_CB_ATTACHMENT_STATE att = {};
+ att.blendEnable = VK_FALSE;
+ att.format = VK_FMT_B8G8R8A8_UNORM;
att.channelWriteMask = 0xf;
AddColorAttachment(0, &att);
m_shaderObjs.push_back(shader);
}
-void XglPipelineObj::AddVertexInputAttribs(XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION* vi_attrib, int count)
+void XglPipelineObj::AddVertexInputAttribs(VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION* vi_attrib, int count)
{
m_vi_state.pVertexAttributeDescriptions = vi_attrib;
m_vi_state.attributeCount = count;
}
-void XglPipelineObj::AddVertexInputBindings(XGL_VERTEX_INPUT_BINDING_DESCRIPTION* vi_binding, int count)
+void XglPipelineObj::AddVertexInputBindings(VK_VERTEX_INPUT_BINDING_DESCRIPTION* vi_binding, int count)
{
m_vi_state.pVertexBindingDescriptions = vi_binding;
m_vi_state.bindingCount = count;
m_vertexBufferCount++;
}
-void XglPipelineObj::AddColorAttachment(uint32_t binding, const XGL_PIPELINE_CB_ATTACHMENT_STATE *att)
+void XglPipelineObj::AddColorAttachment(uint32_t binding, const VK_PIPELINE_CB_ATTACHMENT_STATE *att)
{
if (binding+1 > m_colorAttachments.size())
{
    m_colorAttachments.resize(binding+1);
}

m_colorAttachments[binding] = *att;
}
-void XglPipelineObj::SetDepthStencil(XGL_PIPELINE_DS_STATE_CREATE_INFO *ds_state)
+void XglPipelineObj::SetDepthStencil(VK_PIPELINE_DS_STATE_CREATE_INFO *ds_state)
{
m_ds_state.format = ds_state->format;
m_ds_state.depthTestEnable = ds_state->depthTestEnable;
m_ds_state.front = ds_state->front;
}
-void XglPipelineObj::CreateXGLPipeline(XglDescriptorSetObj &descriptorSet)
+void XglPipelineObj::CreateVKPipeline(XglDescriptorSetObj &descriptorSet)
{
void* head_ptr = &m_ds_state;
- XGL_GRAPHICS_PIPELINE_CREATE_INFO info = {};
+ VK_GRAPHICS_PIPELINE_CREATE_INFO info = {};
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO* shaderCreateInfo;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO* shaderCreateInfo;
for (int i=0; i<m_shaderObjs.size(); i++)
{
    shaderCreateInfo = m_shaderObjs[i]->GetStageCreateInfo();
    shaderCreateInfo->pNext = head_ptr;
    head_ptr = shaderCreateInfo;
}
if (m_vi_state.attributeCount && m_vi_state.bindingCount)
{
- m_vi_state.sType = XGL_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO;
+ m_vi_state.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO;
m_vi_state.pNext = head_ptr;
head_ptr = &m_vi_state;
}
- info.sType = XGL_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
+ info.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
info.pNext = head_ptr;
info.flags = 0;
info.pSetLayoutChain = descriptorSet.GetLayoutChain();
init(*m_device, info);
}
-vector<XGL_GPU_MEMORY> XglMemoryRefManager::mem_refs() const
+vector<VK_GPU_MEMORY> XglMemoryRefManager::mem_refs() const
{
- std::vector<XGL_GPU_MEMORY> mems;
+ std::vector<VK_GPU_MEMORY> mems;
if (this->mem_refs_.size()) {
mems.reserve(this->mem_refs_.size());
    for (uint32_t i = 0; i < this->mem_refs_.size(); i++)
        mems.push_back(this->mem_refs_[i]);
}
return mems;
}
-void XglMemoryRefManager::AddMemoryRefs(xgl_testing::Object &xglObject)
+void XglMemoryRefManager::AddMemoryRefs(vk_testing::Object &vkObject)
{
- const std::vector<XGL_GPU_MEMORY> mems = xglObject.memories();
+ const std::vector<VK_GPU_MEMORY> mems = vkObject.memories();
AddMemoryRefs(mems);
}
-void XglMemoryRefManager::AddMemoryRefs(vector<XGL_GPU_MEMORY> mem)
+void XglMemoryRefManager::AddMemoryRefs(vector<VK_GPU_MEMORY> mem)
{
for (size_t i = 0; i < mem.size(); i++) {
    if (mem[i] != NULL) {
        mem_refs_.push_back(mem[i]);
    }
}
}
-void XglMemoryRefManager::EmitAddMemoryRefs(XGL_QUEUE queue)
+void XglMemoryRefManager::EmitAddMemoryRefs(VK_QUEUE queue)
{
for (uint32_t i = 0; i < mem_refs_.size(); i++) {
- xglQueueAddMemReference(queue, mem_refs_[i]);
+ vkQueueAddMemReference(queue, mem_refs_[i]);
}
}
-void XglMemoryRefManager::EmitRemoveMemoryRefs(XGL_QUEUE queue)
+void XglMemoryRefManager::EmitRemoveMemoryRefs(VK_QUEUE queue)
{
for (uint32_t i = 0; i < mem_refs_.size(); i++) {
- xglQueueRemoveMemReference(queue, mem_refs_[i]);
+ vkQueueRemoveMemReference(queue, mem_refs_[i]);
}
}
XglCommandBufferObj::XglCommandBufferObj(XglDevice *device)
- : xgl_testing::CmdBuffer(*device, xgl_testing::CmdBuffer::create_info(device->graphics_queue_node_index_))
+ : vk_testing::CmdBuffer(*device, vk_testing::CmdBuffer::create_info(device->graphics_queue_node_index_))
{
m_device = device;
}
-XGL_CMD_BUFFER XglCommandBufferObj::GetBufferHandle()
+VK_CMD_BUFFER XglCommandBufferObj::GetBufferHandle()
{
return obj();
}
-XGL_RESULT XglCommandBufferObj::BeginCommandBuffer(XGL_CMD_BUFFER_BEGIN_INFO *pInfo)
+VK_RESULT XglCommandBufferObj::BeginCommandBuffer(VK_CMD_BUFFER_BEGIN_INFO *pInfo)
{
begin(pInfo);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-XGL_RESULT XglCommandBufferObj::BeginCommandBuffer(XGL_RENDER_PASS renderpass_obj, XGL_FRAMEBUFFER framebuffer_obj)
+VK_RESULT XglCommandBufferObj::BeginCommandBuffer(VK_RENDER_PASS renderpass_obj, VK_FRAMEBUFFER framebuffer_obj)
{
begin(renderpass_obj, framebuffer_obj);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-XGL_RESULT XglCommandBufferObj::BeginCommandBuffer()
+VK_RESULT XglCommandBufferObj::BeginCommandBuffer()
{
begin();
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-XGL_RESULT XglCommandBufferObj::EndCommandBuffer()
+VK_RESULT XglCommandBufferObj::EndCommandBuffer()
{
end();
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-void XglCommandBufferObj::PipelineBarrier(XGL_PIPELINE_BARRIER *barrierPtr)
+void XglCommandBufferObj::PipelineBarrier(VK_PIPELINE_BARRIER *barrierPtr)
{
- xglCmdPipelineBarrier(obj(), barrierPtr);
+ vkCmdPipelineBarrier(obj(), barrierPtr);
}
-void XglCommandBufferObj::ClearAllBuffers(XGL_CLEAR_COLOR clear_color, float depth_clear_color, uint32_t stencil_clear_color,
+void XglCommandBufferObj::ClearAllBuffers(VK_CLEAR_COLOR clear_color, float depth_clear_color, uint32_t stencil_clear_color,
XglDepthStencilObj *depthStencilObj)
{
uint32_t i;
- const XGL_FLAGS output_mask =
- XGL_MEMORY_OUTPUT_CPU_WRITE_BIT |
- XGL_MEMORY_OUTPUT_SHADER_WRITE_BIT |
- XGL_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_COPY_BIT;
- const XGL_FLAGS input_mask = 0;
+ const VK_FLAGS output_mask =
+ VK_MEMORY_OUTPUT_CPU_WRITE_BIT |
+ VK_MEMORY_OUTPUT_SHADER_WRITE_BIT |
+ VK_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_COPY_BIT;
+ const VK_FLAGS input_mask = 0;
// whatever we want to do, we do it to the whole buffer
- XGL_IMAGE_SUBRESOURCE_RANGE srRange = {};
- srRange.aspect = XGL_IMAGE_ASPECT_COLOR;
+ VK_IMAGE_SUBRESOURCE_RANGE srRange = {};
+ srRange.aspect = VK_IMAGE_ASPECT_COLOR;
srRange.baseMipLevel = 0;
- srRange.mipLevels = XGL_LAST_MIP_OR_SLICE;
+ srRange.mipLevels = VK_LAST_MIP_OR_SLICE;
srRange.baseArraySlice = 0;
- srRange.arraySize = XGL_LAST_MIP_OR_SLICE;
+ srRange.arraySize = VK_LAST_MIP_OR_SLICE;
- XGL_IMAGE_MEMORY_BARRIER memory_barrier = {};
- memory_barrier.sType = XGL_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
+ VK_IMAGE_MEMORY_BARRIER memory_barrier = {};
+ memory_barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
memory_barrier.outputMask = output_mask;
memory_barrier.inputMask = input_mask;
- memory_barrier.newLayout = XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL;
+ memory_barrier.newLayout = VK_IMAGE_LAYOUT_CLEAR_OPTIMAL;
memory_barrier.subresourceRange = srRange;
- XGL_IMAGE_MEMORY_BARRIER *pmemory_barrier = &memory_barrier;
+ VK_IMAGE_MEMORY_BARRIER *pmemory_barrier = &memory_barrier;
- XGL_PIPE_EVENT set_events[] = { XGL_PIPE_EVENT_GPU_COMMANDS_COMPLETE };
- XGL_PIPELINE_BARRIER pipeline_barrier = {};
- pipeline_barrier.sType = XGL_STRUCTURE_TYPE_PIPELINE_BARRIER;
+ VK_PIPE_EVENT set_events[] = { VK_PIPE_EVENT_GPU_COMMANDS_COMPLETE };
+ VK_PIPELINE_BARRIER pipeline_barrier = {};
+ pipeline_barrier.sType = VK_STRUCTURE_TYPE_PIPELINE_BARRIER;
pipeline_barrier.eventCount = 1;
pipeline_barrier.pEvents = set_events;
- pipeline_barrier.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ pipeline_barrier.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
pipeline_barrier.memBarrierCount = 1;
pipeline_barrier.ppMemBarriers = (const void **)&pmemory_barrier;
for (i = 0; i < m_renderTargets.size(); i++) {
memory_barrier.image = m_renderTargets[i]->image();
memory_barrier.oldLayout = m_renderTargets[i]->layout();
- xglCmdPipelineBarrier( obj(), &pipeline_barrier);
+ vkCmdPipelineBarrier( obj(), &pipeline_barrier);
m_renderTargets[i]->layout(memory_barrier.newLayout);
- xglCmdClearColorImage(obj(),
- m_renderTargets[i]->image(), XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ vkCmdClearColorImage(obj(),
+ m_renderTargets[i]->image(), VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
clear_color, 1, &srRange );
mem_ref_mgr.AddMemoryRefs(*m_renderTargets[i]);
if (depthStencilObj)
{
- XGL_IMAGE_SUBRESOURCE_RANGE dsRange = {};
- dsRange.aspect = XGL_IMAGE_ASPECT_DEPTH;
+ VK_IMAGE_SUBRESOURCE_RANGE dsRange = {};
+ dsRange.aspect = VK_IMAGE_ASPECT_DEPTH;
dsRange.baseMipLevel = 0;
- dsRange.mipLevels = XGL_LAST_MIP_OR_SLICE;
+ dsRange.mipLevels = VK_LAST_MIP_OR_SLICE;
dsRange.baseArraySlice = 0;
- dsRange.arraySize = XGL_LAST_MIP_OR_SLICE;
+ dsRange.arraySize = VK_LAST_MIP_OR_SLICE;
// prepare the depth buffer for clear
memory_barrier.oldLayout = depthStencilObj->BindInfo()->layout;
- memory_barrier.newLayout = XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL;
+ memory_barrier.newLayout = VK_IMAGE_LAYOUT_CLEAR_OPTIMAL;
memory_barrier.image = depthStencilObj->obj();
memory_barrier.subresourceRange = dsRange;
- xglCmdPipelineBarrier( obj(), &pipeline_barrier);
+ vkCmdPipelineBarrier( obj(), &pipeline_barrier);
- xglCmdClearDepthStencil(obj(),
- depthStencilObj->obj(), XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ vkCmdClearDepthStencil(obj(),
+ depthStencilObj->obj(), VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
depth_clear_color, stencil_clear_color,
1, &dsRange);
mem_ref_mgr.AddMemoryRefs(*depthStencilObj);
// prepare depth buffer for rendering
memory_barrier.image = depthStencilObj->obj();
- memory_barrier.oldLayout = XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL;
+ memory_barrier.oldLayout = VK_IMAGE_LAYOUT_CLEAR_OPTIMAL;
memory_barrier.newLayout = depthStencilObj->BindInfo()->layout;
memory_barrier.subresourceRange = dsRange;
- xglCmdPipelineBarrier( obj(), &pipeline_barrier);
+ vkCmdPipelineBarrier( obj(), &pipeline_barrier);
}
}
void XglCommandBufferObj::PrepareAttachments()
{
uint32_t i;
- const XGL_FLAGS output_mask =
- XGL_MEMORY_OUTPUT_CPU_WRITE_BIT |
- XGL_MEMORY_OUTPUT_SHADER_WRITE_BIT |
- XGL_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_COPY_BIT;
- const XGL_FLAGS input_mask =
- XGL_MEMORY_INPUT_CPU_READ_BIT |
- XGL_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
- XGL_MEMORY_INPUT_INDEX_FETCH_BIT |
- XGL_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
- XGL_MEMORY_INPUT_UNIFORM_READ_BIT |
- XGL_MEMORY_INPUT_SHADER_READ_BIT |
- XGL_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_INPUT_COPY_BIT;
-
- XGL_IMAGE_SUBRESOURCE_RANGE srRange = {};
- srRange.aspect = XGL_IMAGE_ASPECT_COLOR;
+ const VK_FLAGS output_mask =
+ VK_MEMORY_OUTPUT_CPU_WRITE_BIT |
+ VK_MEMORY_OUTPUT_SHADER_WRITE_BIT |
+ VK_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_OUTPUT_COPY_BIT;
+ const VK_FLAGS input_mask =
+ VK_MEMORY_INPUT_CPU_READ_BIT |
+ VK_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
+ VK_MEMORY_INPUT_INDEX_FETCH_BIT |
+ VK_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
+ VK_MEMORY_INPUT_UNIFORM_READ_BIT |
+ VK_MEMORY_INPUT_SHADER_READ_BIT |
+ VK_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
+ VK_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
+ VK_MEMORY_INPUT_COPY_BIT;
+
+ VK_IMAGE_SUBRESOURCE_RANGE srRange = {};
+ srRange.aspect = VK_IMAGE_ASPECT_COLOR;
srRange.baseMipLevel = 0;
- srRange.mipLevels = XGL_LAST_MIP_OR_SLICE;
+ srRange.mipLevels = VK_LAST_MIP_OR_SLICE;
srRange.baseArraySlice = 0;
- srRange.arraySize = XGL_LAST_MIP_OR_SLICE;
+ srRange.arraySize = VK_LAST_MIP_OR_SLICE;
- XGL_IMAGE_MEMORY_BARRIER memory_barrier = {};
- memory_barrier.sType = XGL_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
+ VK_IMAGE_MEMORY_BARRIER memory_barrier = {};
+ memory_barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
memory_barrier.outputMask = output_mask;
memory_barrier.inputMask = input_mask;
- memory_barrier.newLayout = XGL_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
+ memory_barrier.newLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
memory_barrier.subresourceRange = srRange;
- XGL_IMAGE_MEMORY_BARRIER *pmemory_barrier = &memory_barrier;
+ VK_IMAGE_MEMORY_BARRIER *pmemory_barrier = &memory_barrier;
- XGL_PIPE_EVENT set_events[] = { XGL_PIPE_EVENT_GPU_COMMANDS_COMPLETE };
- XGL_PIPELINE_BARRIER pipeline_barrier = {};
- pipeline_barrier.sType = XGL_STRUCTURE_TYPE_PIPELINE_BARRIER;
+ VK_PIPE_EVENT set_events[] = { VK_PIPE_EVENT_GPU_COMMANDS_COMPLETE };
+ VK_PIPELINE_BARRIER pipeline_barrier = {};
+ pipeline_barrier.sType = VK_STRUCTURE_TYPE_PIPELINE_BARRIER;
pipeline_barrier.eventCount = 1;
pipeline_barrier.pEvents = set_events;
- pipeline_barrier.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ pipeline_barrier.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
pipeline_barrier.memBarrierCount = 1;
pipeline_barrier.ppMemBarriers = (const void **)&pmemory_barrier;
for (i = 0; i < m_renderTargets.size(); i++)
{
memory_barrier.image = m_renderTargets[i]->image();
memory_barrier.oldLayout = m_renderTargets[i]->layout();
- xglCmdPipelineBarrier( obj(), &pipeline_barrier);
+ vkCmdPipelineBarrier( obj(), &pipeline_barrier);
m_renderTargets[i]->layout(memory_barrier.newLayout);
}
}
-void XglCommandBufferObj::BeginRenderPass(XGL_RENDER_PASS renderpass, XGL_FRAMEBUFFER framebuffer)
+void XglCommandBufferObj::BeginRenderPass(VK_RENDER_PASS renderpass, VK_FRAMEBUFFER framebuffer)
{
- XGL_RENDER_PASS_BEGIN rp_begin = {
+ VK_RENDER_PASS_BEGIN rp_begin = {
renderpass,
framebuffer,
};
- xglCmdBeginRenderPass( obj(), &rp_begin);
+ vkCmdBeginRenderPass( obj(), &rp_begin);
}
-void XglCommandBufferObj::EndRenderPass(XGL_RENDER_PASS renderpass)
+void XglCommandBufferObj::EndRenderPass(VK_RENDER_PASS renderpass)
{
- xglCmdEndRenderPass( obj(), renderpass);
+ vkCmdEndRenderPass( obj(), renderpass);
}
-void XglCommandBufferObj::BindStateObject(XGL_STATE_BIND_POINT stateBindPoint, XGL_DYNAMIC_STATE_OBJECT stateObject)
+void XglCommandBufferObj::BindStateObject(VK_STATE_BIND_POINT stateBindPoint, VK_DYNAMIC_STATE_OBJECT stateObject)
{
- xglCmdBindDynamicStateObject( obj(), stateBindPoint, stateObject);
+ vkCmdBindDynamicStateObject( obj(), stateBindPoint, stateObject);
}
void XglCommandBufferObj::AddRenderTarget(XglImage *renderTarget)
{
    m_renderTargets.push_back(renderTarget);
}
void XglCommandBufferObj::DrawIndexed(uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount)
{
- xglCmdDrawIndexed(obj(), firstIndex, indexCount, vertexOffset, firstInstance, instanceCount);
+ vkCmdDrawIndexed(obj(), firstIndex, indexCount, vertexOffset, firstInstance, instanceCount);
}
void XglCommandBufferObj::Draw(uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount)
{
- xglCmdDraw(obj(), firstVertex, vertexCount, firstInstance, instanceCount);
+ vkCmdDraw(obj(), firstVertex, vertexCount, firstInstance, instanceCount);
}
void XglCommandBufferObj::QueueCommandBuffer()
{
    QueueCommandBuffer(NULL);
}
-void XglCommandBufferObj::QueueCommandBuffer(XGL_FENCE fence)
+void XglCommandBufferObj::QueueCommandBuffer(VK_FENCE fence)
{
- XGL_RESULT err = XGL_SUCCESS;
+ VK_RESULT err = VK_SUCCESS;
mem_ref_mgr.EmitAddMemoryRefs(m_device->m_queue);
// submit the command buffer to the universal queue
- err = xglQueueSubmit( m_device->m_queue, 1, &obj(), fence );
- ASSERT_XGL_SUCCESS( err );
+ err = vkQueueSubmit( m_device->m_queue, 1, &obj(), fence );
+ ASSERT_VK_SUCCESS( err );
- err = xglQueueWaitIdle( m_device->m_queue );
- ASSERT_XGL_SUCCESS( err );
+ err = vkQueueWaitIdle( m_device->m_queue );
+ ASSERT_VK_SUCCESS( err );
// Wait for work to finish before cleaning up.
- xglDeviceWaitIdle(m_device->device());
+ vkDeviceWaitIdle(m_device->device());
/*
* Now that processing on this command buffer is complete
void XglCommandBufferObj::BindPipeline(XglPipelineObj &pipeline)
{
- xglCmdBindPipeline( obj(), XGL_PIPELINE_BIND_POINT_GRAPHICS, pipeline.obj() );
+ vkCmdBindPipeline( obj(), VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline.obj() );
mem_ref_mgr.AddMemoryRefs(pipeline);
}
void XglCommandBufferObj::BindDescriptorSet(XglDescriptorSetObj &descriptorSet)
{
- XGL_DESCRIPTOR_SET set_obj = descriptorSet.GetDescriptorSetHandle();
+ VK_DESCRIPTOR_SET set_obj = descriptorSet.GetDescriptorSetHandle();
// bind pipeline, vertex buffer (descriptor set) and WVP (dynamic buffer view)
- xglCmdBindDescriptorSets(obj(), XGL_PIPELINE_BIND_POINT_GRAPHICS,
+ vkCmdBindDescriptorSets(obj(), VK_PIPELINE_BIND_POINT_GRAPHICS,
descriptorSet.GetLayoutChain(), 0, 1, &set_obj, NULL );
// Add descriptor set mem refs to command buffer's list
mem_ref_mgr.AddMemoryRefs(descriptorSet.mem_ref_mgr.mem_refs());
}
void XglCommandBufferObj::BindIndexBuffer(XglIndexBufferObj *indexBuffer, uint32_t offset)
{
- xglCmdBindIndexBuffer(obj(), indexBuffer->obj(), offset, indexBuffer->GetIndexType());
+ vkCmdBindIndexBuffer(obj(), indexBuffer->obj(), offset, indexBuffer->GetIndexType());
mem_ref_mgr.AddMemoryRefs(*indexBuffer);
}
void XglCommandBufferObj::BindVertexBuffer(XglConstantBufferObj *vertexBuffer, uint32_t offset, uint32_t binding)
{
- xglCmdBindVertexBuffer(obj(), vertexBuffer->obj(), offset, binding);
+ vkCmdBindVertexBuffer(obj(), vertexBuffer->obj(), offset, binding);
mem_ref_mgr.AddMemoryRefs(*vertexBuffer);
}
bool XglDepthStencilObj::Initialized()
{
    return m_initialized;
}
-XGL_DEPTH_STENCIL_BIND_INFO* XglDepthStencilObj::BindInfo()
+VK_DEPTH_STENCIL_BIND_INFO* XglDepthStencilObj::BindInfo()
{
return &m_depthStencilBindInfo;
}
void XglDepthStencilObj::Init(XglDevice *device, int32_t width, int32_t height)
{
- XGL_IMAGE_CREATE_INFO image_info;
- XGL_DEPTH_STENCIL_VIEW_CREATE_INFO view_info;
+ VK_IMAGE_CREATE_INFO image_info;
+ VK_DEPTH_STENCIL_VIEW_CREATE_INFO view_info;
m_device = device;
m_initialized = true;
- m_depth_stencil_fmt = XGL_FMT_D16_UNORM;
+ m_depth_stencil_fmt = VK_FMT_D16_UNORM;
- image_info.sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
+ image_info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
image_info.pNext = NULL;
- image_info.imageType = XGL_IMAGE_2D;
+ image_info.imageType = VK_IMAGE_2D;
image_info.format = m_depth_stencil_fmt;
image_info.extent.width = width;
image_info.extent.height = height;
image_info.mipLevels = 1;
image_info.arraySize = 1;
image_info.samples = 1;
- image_info.tiling = XGL_OPTIMAL_TILING;
- image_info.usage = XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT;
+ image_info.tiling = VK_OPTIMAL_TILING;
+ image_info.usage = VK_IMAGE_USAGE_DEPTH_STENCIL_BIT;
image_info.flags = 0;
init(*m_device, image_info);
- view_info.sType = XGL_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO;
+ view_info.sType = VK_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO;
view_info.pNext = NULL;
- view_info.image = XGL_NULL_HANDLE;
+ view_info.image = VK_NULL_HANDLE;
view_info.mipLevel = 0;
view_info.baseArraySlice = 0;
view_info.arraySize = 1;
m_depthStencilView.init(*m_device, view_info);
m_depthStencilBindInfo.view = m_depthStencilView.obj();
- m_depthStencilBindInfo.layout = XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
+ m_depthStencilBindInfo.layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
}
+++ /dev/null
-/*
- * XGL Tests
- *
- * Copyright (C) 2014 LunarG, Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included
- * in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- *
- * Authors:
- * Courtney Goeltzenleuchter <courtney@lunarg.com>
- */
-
-#ifndef XGLRENDERFRAMEWORK_H
-#define XGLRENDERFRAMEWORK_H
-
-#include "xgltestframework.h"
-
-
-class XglDevice : public xgl_testing::Device
-{
-public:
- XglDevice(uint32_t id, XGL_PHYSICAL_GPU obj);
-
- XGL_DEVICE device() { return obj(); }
- void get_device_queue();
-
- uint32_t id;
- XGL_PHYSICAL_GPU_PROPERTIES props;
- const XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
-
- XGL_QUEUE m_queue;
-};
-
-class XglMemoryRefManager
-{
-public:
- void AddMemoryRefs(xgl_testing::Object &xglObject);
- void AddMemoryRefs(vector<XGL_GPU_MEMORY> mem);
- void EmitAddMemoryRefs(XGL_QUEUE queue);
- void EmitRemoveMemoryRefs(XGL_QUEUE queue);
- vector<XGL_GPU_MEMORY> mem_refs() const;
-
-protected:
- vector<XGL_GPU_MEMORY> mem_refs_;
-
-};
-
-class XglDepthStencilObj : public xgl_testing::Image
-{
-public:
- XglDepthStencilObj();
- void Init(XglDevice *device, int32_t width, int32_t height);
- bool Initialized();
- XGL_DEPTH_STENCIL_BIND_INFO* BindInfo();
-
-protected:
- XglDevice *m_device;
- bool m_initialized;
- xgl_testing::DepthStencilView m_depthStencilView;
- XGL_FORMAT m_depth_stencil_fmt;
- XGL_DEPTH_STENCIL_BIND_INFO m_depthStencilBindInfo;
-};
-
-class XglRenderFramework : public XglTestFramework
-{
-public:
- XglRenderFramework();
- ~XglRenderFramework();
-
- XGL_DEVICE device() {return m_device->device();}
- XGL_PHYSICAL_GPU gpu() {return objs[0];}
- XGL_RENDER_PASS renderPass() {return m_renderPass;}
- XGL_FRAMEBUFFER framebuffer() {return m_framebuffer;}
- void InitViewport(float width, float height);
- void InitViewport();
- void InitRenderTarget();
- void InitRenderTarget(uint32_t targets);
- void InitRenderTarget(XGL_DEPTH_STENCIL_BIND_INFO *dsBinding);
- void InitRenderTarget(uint32_t targets, XGL_DEPTH_STENCIL_BIND_INFO *dsBinding);
- void InitFramework();
- void ShutdownFramework();
- void InitState();
-
-
-protected:
- XGL_APPLICATION_INFO app_info;
- XGL_INSTANCE inst;
- XGL_PHYSICAL_GPU objs[XGL_MAX_PHYSICAL_GPUS];
- uint32_t gpu_count;
- XglDevice *m_device;
- XGL_CMD_BUFFER m_cmdBuffer;
- XGL_RENDER_PASS m_renderPass;
- XGL_FRAMEBUFFER m_framebuffer;
- XGL_DYNAMIC_RS_STATE_OBJECT m_stateRaster;
- XGL_DYNAMIC_CB_STATE_OBJECT m_colorBlend;
- XGL_DYNAMIC_VP_STATE_OBJECT m_stateViewport;
- XGL_DYNAMIC_DS_STATE_OBJECT m_stateDepthStencil;
- vector<XglImage*> m_renderTargets;
- float m_width, m_height;
- XGL_FORMAT m_render_target_fmt;
- XGL_FORMAT m_depth_stencil_fmt;
- XGL_COLOR_ATTACHMENT_BIND_INFO m_colorBindings[8];
- XGL_CLEAR_COLOR m_clear_color;
- float m_depth_clear_color;
- uint32_t m_stencil_clear_color;
- XglDepthStencilObj *m_depthStencil;
- XglMemoryRefManager m_mem_ref_mgr;
-
- /*
- * SetUp and TearDown are called by the Google Test framework
- * to initialize a test framework based on this class.
- */
- virtual void SetUp() {
- this->app_info.sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO;
- this->app_info.pNext = NULL;
- this->app_info.pAppName = "base";
- this->app_info.appVersion = 1;
- this->app_info.pEngineName = "unittest";
- this->app_info.engineVersion = 1;
- this->app_info.apiVersion = XGL_API_VERSION;
-
- InitFramework();
- }
-
- virtual void TearDown() {
- ShutdownFramework();
- }
-};
-
-class XglDescriptorSetObj;
-class XglIndexBufferObj;
-class XglConstantBufferObj;
-class XglPipelineObj;
-class XglDescriptorSetObj;
-
-class XglCommandBufferObj : public xgl_testing::CmdBuffer
-{
-public:
- XglCommandBufferObj(XglDevice *device);
- XGL_CMD_BUFFER GetBufferHandle();
- XGL_RESULT BeginCommandBuffer();
- XGL_RESULT BeginCommandBuffer(XGL_CMD_BUFFER_BEGIN_INFO *pInfo);
- XGL_RESULT BeginCommandBuffer(XGL_RENDER_PASS renderpass_obj, XGL_FRAMEBUFFER framebuffer_obj);
- XGL_RESULT EndCommandBuffer();
- void PipelineBarrier(XGL_PIPELINE_BARRIER *barrierPtr);
- void AddRenderTarget(XglImage *renderTarget);
- void AddDepthStencil();
- void ClearAllBuffers(XGL_CLEAR_COLOR clear_color, float depth_clear_color, uint32_t stencil_clear_color, XglDepthStencilObj *depthStencilObj);
- void PrepareAttachments();
- void AddMemoryRefs(xgl_testing::Object &xglObject);
- void AddMemoryRefs(uint32_t ref_count, const XGL_GPU_MEMORY *mem);
- void AddMemoryRefs(vector<xgl_testing::Object *> images);
- void BindPipeline(XglPipelineObj &pipeline);
- void BindDescriptorSet(XglDescriptorSetObj &descriptorSet);
- void BindVertexBuffer(XglConstantBufferObj *vertexBuffer, uint32_t offset, uint32_t binding);
- void BindIndexBuffer(XglIndexBufferObj *indexBuffer, uint32_t offset);
- void BindStateObject(XGL_STATE_BIND_POINT stateBindPoint, XGL_DYNAMIC_STATE_OBJECT stateObject);
- void BeginRenderPass(XGL_RENDER_PASS renderpass, XGL_FRAMEBUFFER framebuffer);
- void EndRenderPass(XGL_RENDER_PASS renderpass);
- void Draw(uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount);
- void DrawIndexed(uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount);
- void QueueCommandBuffer();
- void QueueCommandBuffer(XGL_FENCE fence);
-
- XglMemoryRefManager mem_ref_mgr;
-
-protected:
- XglDevice *m_device;
- vector<XglImage*> m_renderTargets;
-};
-
-class XglConstantBufferObj : public xgl_testing::Buffer
-{
-public:
- XglConstantBufferObj(XglDevice *device);
- XglConstantBufferObj(XglDevice *device, int constantCount, int constantSize, const void* data);
- ~XglConstantBufferObj();
- void BufferMemoryBarrier(
- XGL_FLAGS outputMask =
- XGL_MEMORY_OUTPUT_CPU_WRITE_BIT |
- XGL_MEMORY_OUTPUT_SHADER_WRITE_BIT |
- XGL_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_OUTPUT_COPY_BIT,
- XGL_FLAGS inputMask =
- XGL_MEMORY_INPUT_CPU_READ_BIT |
- XGL_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
- XGL_MEMORY_INPUT_INDEX_FETCH_BIT |
- XGL_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
- XGL_MEMORY_INPUT_UNIFORM_READ_BIT |
- XGL_MEMORY_INPUT_SHADER_READ_BIT |
- XGL_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
- XGL_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
- XGL_MEMORY_INPUT_COPY_BIT);
-
- void Bind(XGL_CMD_BUFFER cmdBuffer, XGL_GPU_SIZE offset, uint32_t binding);
-
- XGL_BUFFER_VIEW_ATTACH_INFO m_bufferViewInfo;
-
-protected:
- XglDevice *m_device;
- xgl_testing::BufferView m_bufferView;
- int m_numVertices;
- int m_stride;
- XglCommandBufferObj *m_commandBuffer;
- xgl_testing::Fence m_fence;
-};
-
-class XglIndexBufferObj : public XglConstantBufferObj
-{
-public:
- XglIndexBufferObj(XglDevice *device);
- void CreateAndInitBuffer(int numIndexes, XGL_INDEX_TYPE dataFormat, const void* data);
- void Bind(XGL_CMD_BUFFER cmdBuffer, XGL_GPU_SIZE offset);
- XGL_INDEX_TYPE GetIndexType();
-
-protected:
- XGL_INDEX_TYPE m_indexType;
-};
-
-class XglImage : public xgl_testing::Image
-{
-public:
- XglImage(XglDevice *dev);
- bool IsCompatible(XGL_FLAGS usage, XGL_FLAGS features);
-
-public:
- void init(uint32_t w, uint32_t h,
- XGL_FORMAT fmt, XGL_FLAGS usage,
- XGL_IMAGE_TILING tiling=XGL_LINEAR_TILING);
-
- // void clear( CommandBuffer*, uint32_t[4] );
-
- void layout( XGL_IMAGE_LAYOUT layout )
- {
- m_imageInfo.layout = layout;
- }
-
- XGL_GPU_MEMORY memory() const
- {
- const std::vector<XGL_GPU_MEMORY> mems = memories();
- return mems.empty() ? XGL_NULL_HANDLE : mems[0];
- }
-
- void ImageMemoryBarrier(XglCommandBufferObj *cmd,
- XGL_IMAGE_ASPECT aspect,
- XGL_FLAGS output_mask,
- XGL_FLAGS input_mask,
- XGL_IMAGE_LAYOUT image_layout);
-
- XGL_RESULT CopyImage(XglImage &src_image);
-
- XGL_IMAGE image() const
- {
- return obj();
- }
-
- XGL_COLOR_ATTACHMENT_VIEW targetView()
- {
- if (!m_targetView.initialized())
- {
- XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO createView = {
- XGL_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
- XGL_NULL_HANDLE,
- obj(),
- XGL_FMT_B8G8R8A8_UNORM,
- 0,
- 0,
- 1
- };
- m_targetView.init(*m_device, createView);
- }
- return m_targetView.obj();
- }
-
- void SetLayout(XglCommandBufferObj *cmd_buf, XGL_IMAGE_ASPECT aspect, XGL_IMAGE_LAYOUT image_layout);
- void SetLayout(XGL_IMAGE_ASPECT aspect, XGL_IMAGE_LAYOUT image_layout);
-
- XGL_IMAGE_LAYOUT layout() const
- {
- return ( XGL_IMAGE_LAYOUT )m_imageInfo.layout;
- }
- uint32_t width() const
- {
- return extent().width;
- }
- uint32_t height() const
- {
- return extent().height;
- }
- XglDevice* device() const
- {
- return m_device;
- }
-
- XGL_RESULT MapMemory(void** ptr);
- XGL_RESULT UnmapMemory();
-
-protected:
- XglDevice *m_device;
-
- xgl_testing::ColorAttachmentView m_targetView;
- XGL_IMAGE_VIEW_ATTACH_INFO m_imageInfo;
-};
-
-class XglTextureObj : public XglImage
-{
-public:
- XglTextureObj(XglDevice *device, uint32_t *colors = NULL);
- XGL_IMAGE_VIEW_ATTACH_INFO m_textureViewInfo;
-
-
-protected:
- XglDevice *m_device;
- xgl_testing::ImageView m_textureView;
- XGL_GPU_SIZE m_rowPitch;
-};
-
-class XglSamplerObj : public xgl_testing::Sampler
-{
-public:
- XglSamplerObj(XglDevice *device);
-
-protected:
- XglDevice *m_device;
-
-};
-
-class XglDescriptorSetObj : public xgl_testing::DescriptorPool
-{
-public:
- XglDescriptorSetObj(XglDevice *device);
- ~XglDescriptorSetObj();
-
- int AppendDummy();
- int AppendBuffer(XGL_DESCRIPTOR_TYPE type, XglConstantBufferObj &constantBuffer);
- int AppendSamplerTexture(XglSamplerObj* sampler, XglTextureObj* texture);
- void CreateXGLDescriptorSet(XglCommandBufferObj *cmdBuffer);
-
- XGL_DESCRIPTOR_SET GetDescriptorSetHandle() const;
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN GetLayoutChain() const;
-
- XglMemoryRefManager mem_ref_mgr;
-
-protected:
- XglDevice *m_device;
- vector<XGL_DESCRIPTOR_TYPE_COUNT> m_type_counts;
- int m_nextSlot;
-
- vector<XGL_UPDATE_BUFFERS> m_updateBuffers;
-
- vector<XGL_SAMPLER_IMAGE_VIEW_INFO> m_samplerTextureInfo;
- vector<XGL_UPDATE_SAMPLER_TEXTURES> m_updateSamplerTextures;
-
- xgl_testing::DescriptorSetLayout m_layout;
- xgl_testing::DescriptorSetLayoutChain m_layout_chain;
- xgl_testing::DescriptorSet *m_set;
-};
-
-
-class XglShaderObj : public xgl_testing::Shader
-{
-public:
- XglShaderObj(XglDevice *device, const char * shaderText, XGL_PIPELINE_SHADER_STAGE stage, XglRenderFramework *framework);
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO* GetStageCreateInfo();
-
-protected:
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO stage_info;
- XGL_PIPELINE_SHADER_STAGE m_stage;
- XglDevice *m_device;
-
-};
-
-class XglPipelineObj : public xgl_testing::Pipeline
-{
-public:
- XglPipelineObj(XglDevice *device);
- void AddShader(XglShaderObj* shaderObj);
- void AddVertexInputAttribs(XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION* vi_attrib, int count);
- void AddVertexInputBindings(XGL_VERTEX_INPUT_BINDING_DESCRIPTION* vi_binding, int count);
- void AddVertexDataBuffer(XglConstantBufferObj* vertexDataBuffer, int binding);
- void AddColorAttachment(uint32_t binding, const XGL_PIPELINE_CB_ATTACHMENT_STATE *att);
- void SetDepthStencil(XGL_PIPELINE_DS_STATE_CREATE_INFO *);
- void CreateXGLPipeline(XglDescriptorSetObj &descriptorSet);
-
-protected:
- XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO m_vi_state;
- XGL_PIPELINE_IA_STATE_CREATE_INFO m_ia_state;
- XGL_PIPELINE_RS_STATE_CREATE_INFO m_rs_state;
- XGL_PIPELINE_CB_STATE_CREATE_INFO m_cb_state;
- XGL_PIPELINE_DS_STATE_CREATE_INFO m_ds_state;
- XGL_PIPELINE_MS_STATE_CREATE_INFO m_ms_state;
- XglDevice *m_device;
- vector<XglShaderObj*> m_shaderObjs;
- vector<XglConstantBufferObj*> m_vertexBufferObjs;
- vector<int> m_vertexBufferBindings;
- vector<XGL_PIPELINE_CB_ATTACHMENT_STATE> m_colorAttachments;
- int m_vertexBufferCount;
-
-};
-
-#endif // XGLRENDERFRAMEWORK_H
-// XGL tests
+// VK tests
//
// Copyright (C) 2014 LunarG, Inc.
//
#include <iostream>
#include <string.h> // memset(), memcmp()
-#include "xgltestbinding.h"
+#include "vktestbinding.h"
namespace {
#define DERIVED_OBJECT_INIT(create_func, ...) \
do { \
obj_type obj; \
- if (EXPECT(create_func(__VA_ARGS__, &obj) == XGL_SUCCESS)) \
+ if (EXPECT(create_func(__VA_ARGS__, &obj) == VK_SUCCESS)) \
base_type::init(obj); \
} while (0)
#define STRINGIFY(x) #x
#define EXPECT(expr) ((expr) ? true : expect_failure(STRINGIFY(expr), __FILE__, __LINE__, __FUNCTION__))
-xgl_testing::ErrorCallback error_callback;
+vk_testing::ErrorCallback error_callback;
bool expect_failure(const char *expr, const char *file, unsigned int line, const char *function)
{
    if (error_callback)
        error_callback(expr, file, line, function);
    else
        std::cerr << file << ":" << line << ": " << function << ": Expectation `" << expr << "' failed.\n";
    return false;
}
template<typename T>
-std::vector<T> get_info(XGL_PHYSICAL_GPU gpu, XGL_PHYSICAL_GPU_INFO_TYPE type, size_t min_elems)
+std::vector<T> get_info(VK_PHYSICAL_GPU gpu, VK_PHYSICAL_GPU_INFO_TYPE type, size_t min_elems)
{
std::vector<T> info;
size_t size;
- if (EXPECT(xglGetGpuInfo(gpu, type, &size, NULL) == XGL_SUCCESS && size % sizeof(T) == 0)) {
+ if (EXPECT(vkGetGpuInfo(gpu, type, &size, NULL) == VK_SUCCESS && size % sizeof(T) == 0)) {
info.resize(size / sizeof(T));
- if (!EXPECT(xglGetGpuInfo(gpu, type, &size, &info[0]) == XGL_SUCCESS && size == info.size() * sizeof(T)))
+ if (!EXPECT(vkGetGpuInfo(gpu, type, &size, &info[0]) == VK_SUCCESS && size == info.size() * sizeof(T)))
info.clear();
    }
    return info;
}
template<typename T>
-std::vector<T> get_info(XGL_BASE_OBJECT obj, XGL_OBJECT_INFO_TYPE type, size_t min_elems)
+std::vector<T> get_info(VK_BASE_OBJECT obj, VK_OBJECT_INFO_TYPE type, size_t min_elems)
{
std::vector<T> info;
size_t size;
- if (EXPECT(xglGetObjectInfo(obj, type, &size, NULL) == XGL_SUCCESS && size % sizeof(T) == 0)) {
+ if (EXPECT(vkGetObjectInfo(obj, type, &size, NULL) == VK_SUCCESS && size % sizeof(T) == 0)) {
info.resize(size / sizeof(T));
- if (!EXPECT(xglGetObjectInfo(obj, type, &size, &info[0]) == XGL_SUCCESS && size == info.size() * sizeof(T)))
+ if (!EXPECT(vkGetObjectInfo(obj, type, &size, &info[0]) == VK_SUCCESS && size == info.size() * sizeof(T)))
info.clear();
    }
    return info;
}
} // namespace
-namespace xgl_testing {
+namespace vk_testing {
void set_error_callback(ErrorCallback callback)
{
error_callback = callback;
}
-XGL_PHYSICAL_GPU_PROPERTIES PhysicalGpu::properties() const
+VK_PHYSICAL_GPU_PROPERTIES PhysicalGpu::properties() const
{
- return get_info<XGL_PHYSICAL_GPU_PROPERTIES>(gpu_, XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES, 1)[0];
+ return get_info<VK_PHYSICAL_GPU_PROPERTIES>(gpu_, VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES, 1)[0];
}
-XGL_PHYSICAL_GPU_PERFORMANCE PhysicalGpu::performance() const
+VK_PHYSICAL_GPU_PERFORMANCE PhysicalGpu::performance() const
{
- return get_info<XGL_PHYSICAL_GPU_PERFORMANCE>(gpu_, XGL_INFO_TYPE_PHYSICAL_GPU_PERFORMANCE, 1)[0];
+ return get_info<VK_PHYSICAL_GPU_PERFORMANCE>(gpu_, VK_INFO_TYPE_PHYSICAL_GPU_PERFORMANCE, 1)[0];
}
-std::vector<XGL_PHYSICAL_GPU_QUEUE_PROPERTIES> PhysicalGpu::queue_properties() const
+std::vector<VK_PHYSICAL_GPU_QUEUE_PROPERTIES> PhysicalGpu::queue_properties() const
{
- return get_info<XGL_PHYSICAL_GPU_QUEUE_PROPERTIES>(gpu_, XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES, 0);
+ return get_info<VK_PHYSICAL_GPU_QUEUE_PROPERTIES>(gpu_, VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES, 0);
}
-XGL_PHYSICAL_GPU_MEMORY_PROPERTIES PhysicalGpu::memory_properties() const
+VK_PHYSICAL_GPU_MEMORY_PROPERTIES PhysicalGpu::memory_properties() const
{
- return get_info<XGL_PHYSICAL_GPU_MEMORY_PROPERTIES>(gpu_, XGL_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES, 1)[0];
+ return get_info<VK_PHYSICAL_GPU_MEMORY_PROPERTIES>(gpu_, VK_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES, 1)[0];
}
std::vector<const char *> PhysicalGpu::layers(std::vector<char> &buf) const
char * const *out = const_cast<char * const *>(&layers[0]);
size_t count;
- if (!EXPECT(xglEnumerateLayers(gpu_, max_layer_count, max_string_size, &count, out, NULL) == XGL_SUCCESS))
+ if (!EXPECT(vkEnumerateLayers(gpu_, max_layer_count, max_string_size, &count, out, NULL) == VK_SUCCESS))
count = 0;
    layers.resize(count);
    return layers;
}
std::vector<const char *> PhysicalGpu::extensions() const
{
static const char *known_exts[] = {
- "XGL_WSI_X11",
+ "VK_WSI_X11",
};
std::vector<const char *> exts;
for (int i = 0; i < sizeof(known_exts) / sizeof(known_exts[0]); i++) {
- XGL_RESULT err = xglGetExtensionSupport(gpu_, known_exts[i]);
- if (err == XGL_SUCCESS)
+ VK_RESULT err = vkGetExtensionSupport(gpu_, known_exts[i]);
+ if (err == VK_SUCCESS)
exts.push_back(known_exts[i]);
}
return exts;
}
-XGL_GPU_COMPATIBILITY_INFO PhysicalGpu::compatibility(const PhysicalGpu &other) const
+VK_GPU_COMPATIBILITY_INFO PhysicalGpu::compatibility(const PhysicalGpu &other) const
{
- XGL_GPU_COMPATIBILITY_INFO data;
- if (!EXPECT(xglGetMultiGpuCompatibility(gpu_, other.gpu_, &data) == XGL_SUCCESS))
+ VK_GPU_COMPATIBILITY_INFO data;
+ if (!EXPECT(vkGetMultiGpuCompatibility(gpu_, other.gpu_, &data) == VK_SUCCESS))
memset(&data, 0, sizeof(data));
return data;
}
-void BaseObject::init(XGL_BASE_OBJECT obj, bool own)
+void BaseObject::init(VK_BASE_OBJECT obj, bool own)
{
EXPECT(!initialized());
reinit(obj, own);
}
-void BaseObject::reinit(XGL_BASE_OBJECT obj, bool own)
+void BaseObject::reinit(VK_BASE_OBJECT obj, bool own)
{
obj_ = obj;
    own_obj_ = own;
}
uint32_t BaseObject::memory_allocation_count() const
{
- return get_info<uint32_t>(obj_, XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT, 1)[0];
+ return get_info<uint32_t>(obj_, VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT, 1)[0];
}
-std::vector<XGL_MEMORY_REQUIREMENTS> BaseObject::memory_requirements() const
+std::vector<VK_MEMORY_REQUIREMENTS> BaseObject::memory_requirements() const
{
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
- err = xglGetObjectInfo(obj_, XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
+ err = vkGetObjectInfo(obj_, VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
&num_alloc_size, &num_allocations);
- EXPECT(err == XGL_SUCCESS && num_alloc_size == sizeof(num_allocations));
- std::vector<XGL_MEMORY_REQUIREMENTS> info =
- get_info<XGL_MEMORY_REQUIREMENTS>(obj_, XGL_INFO_TYPE_MEMORY_REQUIREMENTS, 0);
+ EXPECT(err == VK_SUCCESS && num_alloc_size == sizeof(num_allocations));
+ std::vector<VK_MEMORY_REQUIREMENTS> info =
+ get_info<VK_MEMORY_REQUIREMENTS>(obj_, VK_INFO_TYPE_MEMORY_REQUIREMENTS, 0);
EXPECT(info.size() == num_allocations);
if (info.size() == 1 && !info[0].size)
info.clear();
return info;
}
-void Object::init(XGL_OBJECT obj, bool own)
+void Object::init(VK_OBJECT obj, bool own)
{
BaseObject::init(obj, own);
mem_alloc_count_ = memory_allocation_count();
}
-void Object::reinit(XGL_OBJECT obj, bool own)
+void Object::reinit(VK_OBJECT obj, bool own)
{
    cleanup();
    BaseObject::reinit(obj, own);
}
void Object::cleanup()
{
    mem_alloc_count_ = 0;
    if (own())
-        EXPECT(xglDestroyObject(obj()) == XGL_SUCCESS);
+        EXPECT(vkDestroyObject(obj()) == VK_SUCCESS);
}
-void Object::bind_memory(uint32_t alloc_idx, const GpuMemory &mem, XGL_GPU_SIZE mem_offset)
+void Object::bind_memory(uint32_t alloc_idx, const GpuMemory &mem, VK_GPU_SIZE mem_offset)
{
- EXPECT(xglBindObjectMemory(obj(), alloc_idx, mem.obj(), mem_offset) == XGL_SUCCESS);
+ EXPECT(vkBindObjectMemory(obj(), alloc_idx, mem.obj(), mem_offset) == VK_SUCCESS);
}
-void Object::bind_memory(uint32_t alloc_idx, XGL_GPU_SIZE offset, XGL_GPU_SIZE size,
- const GpuMemory &mem, XGL_GPU_SIZE mem_offset)
+void Object::bind_memory(uint32_t alloc_idx, VK_GPU_SIZE offset, VK_GPU_SIZE size,
+ const GpuMemory &mem, VK_GPU_SIZE mem_offset)
{
- EXPECT(!alloc_idx && xglBindObjectMemoryRange(obj(), 0, offset, size, mem.obj(), mem_offset) == XGL_SUCCESS);
+ EXPECT(!alloc_idx && vkBindObjectMemoryRange(obj(), 0, offset, size, mem.obj(), mem_offset) == VK_SUCCESS);
}
void Object::unbind_memory(uint32_t alloc_idx)
{
- EXPECT(xglBindObjectMemory(obj(), alloc_idx, XGL_NULL_HANDLE, 0) == XGL_SUCCESS);
+ EXPECT(vkBindObjectMemory(obj(), alloc_idx, VK_NULL_HANDLE, 0) == VK_SUCCESS);
}
void Object::unbind_memory()
{
    for (uint32_t i = 0; i < mem_alloc_count_; i++)
        unbind_memory(i);
}
void Object::alloc_memory(const Device &dev, bool for_buf, bool for_img)
{
    if (!EXPECT(!internal_mems_) || !mem_alloc_count_)
        return;
    internal_mems_ = new GpuMemory[mem_alloc_count_];
- const std::vector<XGL_MEMORY_REQUIREMENTS> mem_reqs = memory_requirements();
- std::vector<XGL_IMAGE_MEMORY_REQUIREMENTS> img_reqs;
- std::vector<XGL_BUFFER_MEMORY_REQUIREMENTS> buf_reqs;
- XGL_MEMORY_ALLOC_IMAGE_INFO img_info;
- XGL_MEMORY_ALLOC_BUFFER_INFO buf_info;
- XGL_MEMORY_ALLOC_INFO info, *next_info = NULL;
+ const std::vector<VK_MEMORY_REQUIREMENTS> mem_reqs = memory_requirements();
+ std::vector<VK_IMAGE_MEMORY_REQUIREMENTS> img_reqs;
+ std::vector<VK_BUFFER_MEMORY_REQUIREMENTS> buf_reqs;
+ VK_MEMORY_ALLOC_IMAGE_INFO img_info;
+ VK_MEMORY_ALLOC_BUFFER_INFO buf_info;
+ VK_MEMORY_ALLOC_INFO info, *next_info = NULL;
if (for_img) {
- img_reqs = get_info<XGL_IMAGE_MEMORY_REQUIREMENTS>(obj(),
- XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS, 0);
+ img_reqs = get_info<VK_IMAGE_MEMORY_REQUIREMENTS>(obj(),
+ VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS, 0);
EXPECT(img_reqs.size() == 1);
- next_info = (XGL_MEMORY_ALLOC_INFO *) &img_info;
+ next_info = (VK_MEMORY_ALLOC_INFO *) &img_info;
img_info.pNext = NULL;
- img_info.sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO;
+ img_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO;
img_info.usage = img_reqs[0].usage;
img_info.formatClass = img_reqs[0].formatClass;
img_info.samples = img_reqs[0].samples;
    }
    if (for_buf) {
- buf_reqs = get_info<XGL_BUFFER_MEMORY_REQUIREMENTS>(obj(),
- XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS, 0);
+ buf_reqs = get_info<VK_BUFFER_MEMORY_REQUIREMENTS>(obj(),
+ VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS, 0);
if (for_img)
img_info.pNext = &buf_info;
else
- next_info = (XGL_MEMORY_ALLOC_INFO *) &buf_info;
+ next_info = (VK_MEMORY_ALLOC_INFO *) &buf_info;
buf_info.pNext = NULL;
- buf_info.sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO;
+ buf_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO;
buf_info.usage = buf_reqs[0].usage;
}
    for (int i = 0; i < mem_reqs.size(); i++) {
        info = GpuMemory::alloc_info(mem_reqs[i], next_info);
        switch (info.memType) {
-        case XGL_MEMORY_TYPE_BUFFER:
+        case VK_MEMORY_TYPE_BUFFER:
            EXPECT(for_buf);
-            info.memProps |= XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT;
+            info.memProps |= VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT;
            primary_mem_ = &internal_mems_[i];
            break;
-        case XGL_MEMORY_TYPE_IMAGE:
+        case VK_MEMORY_TYPE_IMAGE:
            EXPECT(for_img);
            primary_mem_ = &internal_mems_[i];
            break;
        }
        internal_mems_[i].init(dev, info);
        bind_memory(i, internal_mems_[i], 0);
    }
}
-void Object::alloc_memory(const std::vector<XGL_GPU_MEMORY> &mems)
+void Object::alloc_memory(const std::vector<VK_GPU_MEMORY> &mems)
{
if (!EXPECT(!internal_mems_) || !mem_alloc_count_)
return;
internal_mems_ = new GpuMemory[mem_alloc_count_];
- const std::vector<XGL_MEMORY_REQUIREMENTS> mem_reqs = memory_requirements();
+ const std::vector<VK_MEMORY_REQUIREMENTS> mem_reqs = memory_requirements();
if (!EXPECT(mem_reqs.size() == mems.size()))
return;
}
}
-std::vector<XGL_GPU_MEMORY> Object::memories() const
+std::vector<VK_GPU_MEMORY> Object::memories() const
{
- std::vector<XGL_GPU_MEMORY> mems;
+ std::vector<VK_GPU_MEMORY> mems;
    if (internal_mems_) {
        mems.reserve(mem_alloc_count_);
        for (uint32_t i = 0; i < mem_alloc_count_; i++)
            mems.push_back(internal_mems_[i].obj());
    }
    return mems;
}
Device::~Device()
{
    for (int i = 0; i < QUEUE_COUNT; i++)
        queues_[i].clear();
-    EXPECT(xglDestroyDevice(obj()) == XGL_SUCCESS);
+    EXPECT(vkDestroyDevice(obj()) == VK_SUCCESS);
}
void Device::init(bool enable_layers)
{
// request all queues
- const std::vector<XGL_PHYSICAL_GPU_QUEUE_PROPERTIES> queue_props = gpu_.queue_properties();
- std::vector<XGL_DEVICE_QUEUE_CREATE_INFO> queue_info;
+ const std::vector<VK_PHYSICAL_GPU_QUEUE_PROPERTIES> queue_props = gpu_.queue_properties();
+ std::vector<VK_DEVICE_QUEUE_CREATE_INFO> queue_info;
queue_info.reserve(queue_props.size());
for (int i = 0; i < queue_props.size(); i++) {
- XGL_DEVICE_QUEUE_CREATE_INFO qi = {};
+ VK_DEVICE_QUEUE_CREATE_INFO qi = {};
qi.queueNodeIndex = i;
qi.queueCount = queue_props[i].queueCount;
- if (queue_props[i].queueFlags & XGL_QUEUE_GRAPHICS_BIT) {
+ if (queue_props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) {
graphics_queue_node_index_ = i;
}
queue_info.push_back(qi);
}
- XGL_LAYER_CREATE_INFO layer_info = {};
- layer_info.sType = XGL_STRUCTURE_TYPE_LAYER_CREATE_INFO;
+ VK_LAYER_CREATE_INFO layer_info = {};
+ layer_info.sType = VK_STRUCTURE_TYPE_LAYER_CREATE_INFO;
std::vector<const char *> layers;
std::vector<char> layer_buf;
const std::vector<const char *> exts = gpu_.extensions();
- XGL_DEVICE_CREATE_INFO dev_info = {};
- dev_info.sType = XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
+ VK_DEVICE_CREATE_INFO dev_info = {};
+ dev_info.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
dev_info.pNext = (enable_layers) ? static_cast<void *>(&layer_info) : NULL;
dev_info.queueRecordCount = queue_info.size();
dev_info.pRequestedQueues = &queue_info[0];
dev_info.extensionCount = exts.size();
dev_info.ppEnabledExtensionNames = &exts[0];
- dev_info.maxValidationLevel = XGL_VALIDATION_LEVEL_END_RANGE;
- dev_info.flags = XGL_DEVICE_CREATE_VALIDATION_BIT;
+ dev_info.maxValidationLevel = VK_VALIDATION_LEVEL_END_RANGE;
+ dev_info.flags = VK_DEVICE_CREATE_VALIDATION_BIT;
init(dev_info);
}
-void Device::init(const XGL_DEVICE_CREATE_INFO &info)
+void Device::init(const VK_DEVICE_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateDevice, gpu_.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateDevice, gpu_.obj(), &info);
init_queues();
    init_formats();
}
void Device::init_queues()
{
- XGL_RESULT err;
+ VK_RESULT err;
size_t data_size;
uint32_t queue_node_count;
- err = xglGetGpuInfo(gpu_.obj(), XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
+ err = vkGetGpuInfo(gpu_.obj(), VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
&data_size, NULL);
- EXPECT(err == XGL_SUCCESS);
+ EXPECT(err == VK_SUCCESS);
- queue_node_count = data_size / sizeof(XGL_PHYSICAL_GPU_QUEUE_PROPERTIES);
+ queue_node_count = data_size / sizeof(VK_PHYSICAL_GPU_QUEUE_PROPERTIES);
EXPECT(queue_node_count >= 1);
- XGL_PHYSICAL_GPU_QUEUE_PROPERTIES queue_props[queue_node_count];
+ VK_PHYSICAL_GPU_QUEUE_PROPERTIES queue_props[queue_node_count];
- err = xglGetGpuInfo(gpu_.obj(), XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
+ err = vkGetGpuInfo(gpu_.obj(), VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
&data_size, queue_props);
- EXPECT(err == XGL_SUCCESS);
+ EXPECT(err == VK_SUCCESS);
for (int i = 0; i < queue_node_count; i++) {
- XGL_QUEUE queue;
+ VK_QUEUE queue;
for (int j = 0; j < queue_props[i].queueCount; j++) {
- err = xglGetDeviceQueue(obj(), i, j, &queue);
- EXPECT(err == XGL_SUCCESS);
+ err = vkGetDeviceQueue(obj(), i, j, &queue);
+ EXPECT(err == VK_SUCCESS);
- if (queue_props[i].queueFlags & XGL_QUEUE_GRAPHICS_BIT) {
+ if (queue_props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) {
queues_[GRAPHICS].push_back(new Queue(queue));
}
- if (queue_props[i].queueFlags & XGL_QUEUE_COMPUTE_BIT) {
+ if (queue_props[i].queueFlags & VK_QUEUE_COMPUTE_BIT) {
queues_[COMPUTE].push_back(new Queue(queue));
}
- if (queue_props[i].queueFlags & XGL_QUEUE_DMA_BIT) {
+ if (queue_props[i].queueFlags & VK_QUEUE_DMA_BIT) {
queues_[DMA].push_back(new Queue(queue));
}
}
void Device::init_formats()
{
- for (int f = XGL_FMT_BEGIN_RANGE; f <= XGL_FMT_END_RANGE; f++) {
- const XGL_FORMAT fmt = static_cast<XGL_FORMAT>(f);
- const XGL_FORMAT_PROPERTIES props = format_properties(fmt);
+ for (int f = VK_FMT_BEGIN_RANGE; f <= VK_FMT_END_RANGE; f++) {
+ const VK_FORMAT fmt = static_cast<VK_FORMAT>(f);
+ const VK_FORMAT_PROPERTIES props = format_properties(fmt);
if (props.linearTilingFeatures) {
- const Format tmp = { fmt, XGL_LINEAR_TILING, props.linearTilingFeatures };
+ const Format tmp = { fmt, VK_LINEAR_TILING, props.linearTilingFeatures };
formats_.push_back(tmp);
}
if (props.optimalTilingFeatures) {
- const Format tmp = { fmt, XGL_OPTIMAL_TILING, props.optimalTilingFeatures };
+ const Format tmp = { fmt, VK_OPTIMAL_TILING, props.optimalTilingFeatures };
formats_.push_back(tmp);
}
}
EXPECT(!formats_.empty());
}
-XGL_FORMAT_PROPERTIES Device::format_properties(XGL_FORMAT format)
+VK_FORMAT_PROPERTIES Device::format_properties(VK_FORMAT format)
{
- const XGL_FORMAT_INFO_TYPE type = XGL_INFO_TYPE_FORMAT_PROPERTIES;
- XGL_FORMAT_PROPERTIES data;
+ const VK_FORMAT_INFO_TYPE type = VK_INFO_TYPE_FORMAT_PROPERTIES;
+ VK_FORMAT_PROPERTIES data;
size_t size = sizeof(data);
- if (!EXPECT(xglGetFormatInfo(obj(), format, type, &size, &data) == XGL_SUCCESS && size == sizeof(data)))
+ if (!EXPECT(vkGetFormatInfo(obj(), format, type, &size, &data) == VK_SUCCESS && size == sizeof(data)))
memset(&data, 0, sizeof(data));
    return data;
}
void Device::wait()
{
- EXPECT(xglDeviceWaitIdle(obj()) == XGL_SUCCESS);
+ EXPECT(vkDeviceWaitIdle(obj()) == VK_SUCCESS);
}
-XGL_RESULT Device::wait(const std::vector<const Fence *> &fences, bool wait_all, uint64_t timeout)
+VK_RESULT Device::wait(const std::vector<const Fence *> &fences, bool wait_all, uint64_t timeout)
{
- const std::vector<XGL_FENCE> fence_objs = make_objects<XGL_FENCE>(fences);
- XGL_RESULT err = xglWaitForFences(obj(), fence_objs.size(), &fence_objs[0], wait_all, timeout);
- EXPECT(err == XGL_SUCCESS || err == XGL_TIMEOUT);
+ const std::vector<VK_FENCE> fence_objs = make_objects<VK_FENCE>(fences);
+ VK_RESULT err = vkWaitForFences(obj(), fence_objs.size(), &fence_objs[0], wait_all, timeout);
+ EXPECT(err == VK_SUCCESS || err == VK_TIMEOUT);
return err;
}
-void Device::begin_descriptor_pool_update(XGL_DESCRIPTOR_UPDATE_MODE mode)
+void Device::begin_descriptor_pool_update(VK_DESCRIPTOR_UPDATE_MODE mode)
{
- EXPECT(xglBeginDescriptorPoolUpdate(obj(), mode) == XGL_SUCCESS);
+ EXPECT(vkBeginDescriptorPoolUpdate(obj(), mode) == VK_SUCCESS);
}
void Device::end_descriptor_pool_update(CmdBuffer &cmd)
{
- EXPECT(xglEndDescriptorPoolUpdate(obj(), cmd.obj()) == XGL_SUCCESS);
+ EXPECT(vkEndDescriptorPoolUpdate(obj(), cmd.obj()) == VK_SUCCESS);
}
void Queue::submit(const std::vector<const CmdBuffer *> &cmds, Fence &fence)
{
- const std::vector<XGL_CMD_BUFFER> cmd_objs = make_objects<XGL_CMD_BUFFER>(cmds);
- EXPECT(xglQueueSubmit(obj(), cmd_objs.size(), &cmd_objs[0], fence.obj()) == XGL_SUCCESS);
+ const std::vector<VK_CMD_BUFFER> cmd_objs = make_objects<VK_CMD_BUFFER>(cmds);
+ EXPECT(vkQueueSubmit(obj(), cmd_objs.size(), &cmd_objs[0], fence.obj()) == VK_SUCCESS);
}
void Queue::submit(const CmdBuffer &cmd, Fence &fence)
{
    submit(std::vector<const CmdBuffer *>(1, &cmd), fence);
}
void Queue::submit(const CmdBuffer &cmd)
{
    Fence fence;
    submit(cmd, fence);
}
-void Queue::add_mem_references(const std::vector<XGL_GPU_MEMORY> &mem_refs)
+void Queue::add_mem_references(const std::vector<VK_GPU_MEMORY> &mem_refs)
{
for (int i = 0; i < mem_refs.size(); i++) {
- EXPECT(xglQueueAddMemReference(obj(), mem_refs[i]) == XGL_SUCCESS);
+ EXPECT(vkQueueAddMemReference(obj(), mem_refs[i]) == VK_SUCCESS);
}
}
-void Queue::remove_mem_references(const std::vector<XGL_GPU_MEMORY> &mem_refs)
+void Queue::remove_mem_references(const std::vector<VK_GPU_MEMORY> &mem_refs)
{
for (int i = 0; i < mem_refs.size(); i++) {
- EXPECT(xglQueueRemoveMemReference(obj(), mem_refs[i]) == XGL_SUCCESS);
+ EXPECT(vkQueueRemoveMemReference(obj(), mem_refs[i]) == VK_SUCCESS);
}
}
void Queue::wait()
{
- EXPECT(xglQueueWaitIdle(obj()) == XGL_SUCCESS);
+ EXPECT(vkQueueWaitIdle(obj()) == VK_SUCCESS);
}
void Queue::signal_semaphore(Semaphore &sem)
{
- EXPECT(xglQueueSignalSemaphore(obj(), sem.obj()) == XGL_SUCCESS);
+ EXPECT(vkQueueSignalSemaphore(obj(), sem.obj()) == VK_SUCCESS);
}
void Queue::wait_semaphore(Semaphore &sem)
{
- EXPECT(xglQueueWaitSemaphore(obj(), sem.obj()) == XGL_SUCCESS);
+ EXPECT(vkQueueWaitSemaphore(obj(), sem.obj()) == VK_SUCCESS);
}
GpuMemory::~GpuMemory()
{
if (initialized() && own())
- EXPECT(xglFreeMemory(obj()) == XGL_SUCCESS);
+ EXPECT(vkFreeMemory(obj()) == VK_SUCCESS);
}
-void GpuMemory::init(const Device &dev, const XGL_MEMORY_ALLOC_INFO &info)
+void GpuMemory::init(const Device &dev, const VK_MEMORY_ALLOC_INFO &info)
{
- DERIVED_OBJECT_INIT(xglAllocMemory, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkAllocMemory, dev.obj(), &info);
}
void GpuMemory::init(const Device &dev, size_t size, const void *data)
{
- DERIVED_OBJECT_INIT(xglPinSystemMemory, dev.obj(), data, size);
+ DERIVED_OBJECT_INIT(vkPinSystemMemory, dev.obj(), data, size);
}
-void GpuMemory::init(const Device &dev, const XGL_MEMORY_OPEN_INFO &info)
+void GpuMemory::init(const Device &dev, const VK_MEMORY_OPEN_INFO &info)
{
- DERIVED_OBJECT_INIT(xglOpenSharedMemory, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkOpenSharedMemory, dev.obj(), &info);
}
-void GpuMemory::init(const Device &dev, const XGL_PEER_MEMORY_OPEN_INFO &info)
+void GpuMemory::init(const Device &dev, const VK_PEER_MEMORY_OPEN_INFO &info)
{
- DERIVED_OBJECT_INIT(xglOpenPeerMemory, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkOpenPeerMemory, dev.obj(), &info);
}
-void GpuMemory::set_priority(XGL_MEMORY_PRIORITY priority)
+void GpuMemory::set_priority(VK_MEMORY_PRIORITY priority)
{
- EXPECT(xglSetMemoryPriority(obj(), priority) == XGL_SUCCESS);
+ EXPECT(vkSetMemoryPriority(obj(), priority) == VK_SUCCESS);
}
-const void *GpuMemory::map(XGL_FLAGS flags) const
+const void *GpuMemory::map(VK_FLAGS flags) const
{
void *data;
- if (!EXPECT(xglMapMemory(obj(), flags, &data) == XGL_SUCCESS))
+ if (!EXPECT(vkMapMemory(obj(), flags, &data) == VK_SUCCESS))
data = NULL;
return data;
}
-void *GpuMemory::map(XGL_FLAGS flags)
+void *GpuMemory::map(VK_FLAGS flags)
{
void *data;
- if (!EXPECT(xglMapMemory(obj(), flags, &data) == XGL_SUCCESS))
+ if (!EXPECT(vkMapMemory(obj(), flags, &data) == VK_SUCCESS))
data = NULL;
    return data;
}
void GpuMemory::unmap() const
{
- EXPECT(xglUnmapMemory(obj()) == XGL_SUCCESS);
+ EXPECT(vkUnmapMemory(obj()) == VK_SUCCESS);
}
-void Fence::init(const Device &dev, const XGL_FENCE_CREATE_INFO &info)
+void Fence::init(const Device &dev, const VK_FENCE_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateFence, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateFence, dev.obj(), &info);
alloc_memory(dev);
}
-void Semaphore::init(const Device &dev, const XGL_SEMAPHORE_CREATE_INFO &info)
+void Semaphore::init(const Device &dev, const VK_SEMAPHORE_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateSemaphore, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateSemaphore, dev.obj(), &info);
alloc_memory(dev);
}
-void Semaphore::init(const Device &dev, const XGL_SEMAPHORE_OPEN_INFO &info)
+void Semaphore::init(const Device &dev, const VK_SEMAPHORE_OPEN_INFO &info)
{
- DERIVED_OBJECT_INIT(xglOpenSharedSemaphore, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkOpenSharedSemaphore, dev.obj(), &info);
}
-void Event::init(const Device &dev, const XGL_EVENT_CREATE_INFO &info)
+void Event::init(const Device &dev, const VK_EVENT_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateEvent, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateEvent, dev.obj(), &info);
alloc_memory(dev);
}
void Event::set()
{
- EXPECT(xglSetEvent(obj()) == XGL_SUCCESS);
+ EXPECT(vkSetEvent(obj()) == VK_SUCCESS);
}
void Event::reset()
{
- EXPECT(xglResetEvent(obj()) == XGL_SUCCESS);
+ EXPECT(vkResetEvent(obj()) == VK_SUCCESS);
}
-void QueryPool::init(const Device &dev, const XGL_QUERY_POOL_CREATE_INFO &info)
+void QueryPool::init(const Device &dev, const VK_QUERY_POOL_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateQueryPool, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateQueryPool, dev.obj(), &info);
alloc_memory(dev);
}
-XGL_RESULT QueryPool::results(uint32_t start, uint32_t count, size_t size, void *data)
+VK_RESULT QueryPool::results(uint32_t start, uint32_t count, size_t size, void *data)
{
size_t tmp = size;
- XGL_RESULT err = xglGetQueryPoolResults(obj(), start, count, &tmp, data);
- if (err == XGL_SUCCESS) {
+ VK_RESULT err = vkGetQueryPoolResults(obj(), start, count, &tmp, data);
+ if (err == VK_SUCCESS) {
if (!EXPECT(tmp == size))
memset(data, 0, size);
} else {
- EXPECT(err == XGL_NOT_READY);
+ EXPECT(err == VK_NOT_READY);
}
return err;
}
-void Buffer::init(const Device &dev, const XGL_BUFFER_CREATE_INFO &info)
+void Buffer::init(const Device &dev, const VK_BUFFER_CREATE_INFO &info)
{
init_no_mem(dev, info);
alloc_memory(dev, true, false);
}
-void Buffer::init_no_mem(const Device &dev, const XGL_BUFFER_CREATE_INFO &info)
+void Buffer::init_no_mem(const Device &dev, const VK_BUFFER_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateBuffer, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateBuffer, dev.obj(), &info);
create_info_ = info;
}
-void BufferView::init(const Device &dev, const XGL_BUFFER_VIEW_CREATE_INFO &info)
+void BufferView::init(const Device &dev, const VK_BUFFER_VIEW_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateBufferView, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateBufferView, dev.obj(), &info);
alloc_memory(dev);
}
-void Image::init(const Device &dev, const XGL_IMAGE_CREATE_INFO &info)
+void Image::init(const Device &dev, const VK_IMAGE_CREATE_INFO &info)
{
init_no_mem(dev, info);
- alloc_memory(dev, info.tiling == XGL_LINEAR_TILING, true);
+ alloc_memory(dev, info.tiling == VK_LINEAR_TILING, true);
}
-void Image::init_no_mem(const Device &dev, const XGL_IMAGE_CREATE_INFO &info)
+void Image::init_no_mem(const Device &dev, const VK_IMAGE_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateImage, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateImage, dev.obj(), &info);
init_info(dev, info);
}
-void Image::init(const Device &dev, const XGL_PEER_IMAGE_OPEN_INFO &info, const XGL_IMAGE_CREATE_INFO &original_info)
+void Image::init(const Device &dev, const VK_PEER_IMAGE_OPEN_INFO &info, const VK_IMAGE_CREATE_INFO &original_info)
{
- XGL_IMAGE img;
- XGL_GPU_MEMORY mem;
- EXPECT(xglOpenPeerImage(dev.obj(), &info, &img, &mem) == XGL_SUCCESS);
+ VK_IMAGE img;
+ VK_GPU_MEMORY mem;
+ EXPECT(vkOpenPeerImage(dev.obj(), &info, &img, &mem) == VK_SUCCESS);
Object::init(img);
init_info(dev, original_info);
- alloc_memory(std::vector<XGL_GPU_MEMORY>(1, mem));
+ alloc_memory(std::vector<VK_GPU_MEMORY>(1, mem));
}
-void Image::init_info(const Device &dev, const XGL_IMAGE_CREATE_INFO &info)
+void Image::init_info(const Device &dev, const VK_IMAGE_CREATE_INFO &info)
{
create_info_ = info;
}
}
-void Image::bind_memory(uint32_t alloc_idx, const XGL_IMAGE_MEMORY_BIND_INFO &info,
- const GpuMemory &mem, XGL_GPU_SIZE mem_offset)
+void Image::bind_memory(uint32_t alloc_idx, const VK_IMAGE_MEMORY_BIND_INFO &info,
+ const GpuMemory &mem, VK_GPU_SIZE mem_offset)
{
- EXPECT(!alloc_idx && xglBindImageMemoryRange(obj(), 0, &info, mem.obj(), mem_offset) == XGL_SUCCESS);
+ EXPECT(!alloc_idx && vkBindImageMemoryRange(obj(), 0, &info, mem.obj(), mem_offset) == VK_SUCCESS);
}
-XGL_SUBRESOURCE_LAYOUT Image::subresource_layout(const XGL_IMAGE_SUBRESOURCE &subres) const
+VK_SUBRESOURCE_LAYOUT Image::subresource_layout(const VK_IMAGE_SUBRESOURCE &subres) const
{
- const XGL_SUBRESOURCE_INFO_TYPE type = XGL_INFO_TYPE_SUBRESOURCE_LAYOUT;
- XGL_SUBRESOURCE_LAYOUT data;
+ const VK_SUBRESOURCE_INFO_TYPE type = VK_INFO_TYPE_SUBRESOURCE_LAYOUT;
+ VK_SUBRESOURCE_LAYOUT data;
size_t size = sizeof(data);
- if (!EXPECT(xglGetImageSubresourceInfo(obj(), &subres, type, &size, &data) == XGL_SUCCESS && size == sizeof(data)))
+ if (!EXPECT(vkGetImageSubresourceInfo(obj(), &subres, type, &size, &data) == VK_SUCCESS && size == sizeof(data)))
memset(&data, 0, sizeof(data));
    return data;
}
bool Image::transparent() const
{
- return (create_info_.tiling == XGL_LINEAR_TILING &&
+ return (create_info_.tiling == VK_LINEAR_TILING &&
create_info_.samples == 1 &&
- !(create_info_.usage & (XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
- XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT)));
+ !(create_info_.usage & (VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
+ VK_IMAGE_USAGE_DEPTH_STENCIL_BIT)));
}
-void ImageView::init(const Device &dev, const XGL_IMAGE_VIEW_CREATE_INFO &info)
+void ImageView::init(const Device &dev, const VK_IMAGE_VIEW_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateImageView, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateImageView, dev.obj(), &info);
alloc_memory(dev);
}
-void ColorAttachmentView::init(const Device &dev, const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO &info)
+void ColorAttachmentView::init(const Device &dev, const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateColorAttachmentView, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateColorAttachmentView, dev.obj(), &info);
alloc_memory(dev);
}
-void DepthStencilView::init(const Device &dev, const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO &info)
+void DepthStencilView::init(const Device &dev, const VK_DEPTH_STENCIL_VIEW_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateDepthStencilView, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateDepthStencilView, dev.obj(), &info);
alloc_memory(dev);
}
-void Shader::init(const Device &dev, const XGL_SHADER_CREATE_INFO &info)
+void Shader::init(const Device &dev, const VK_SHADER_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateShader, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateShader, dev.obj(), &info);
}
-XGL_RESULT Shader::init_try(const Device &dev, const XGL_SHADER_CREATE_INFO &info)
+VK_RESULT Shader::init_try(const Device &dev, const VK_SHADER_CREATE_INFO &info)
{
- XGL_SHADER sh;
- XGL_RESULT err = xglCreateShader(dev.obj(), &info, &sh);
- if (err == XGL_SUCCESS)
+ VK_SHADER sh;
+ VK_RESULT err = vkCreateShader(dev.obj(), &info, &sh);
+ if (err == VK_SUCCESS)
Object::init(sh);
return err;
}
-void Pipeline::init(const Device &dev, const XGL_GRAPHICS_PIPELINE_CREATE_INFO &info)
+void Pipeline::init(const Device &dev, const VK_GRAPHICS_PIPELINE_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateGraphicsPipeline, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateGraphicsPipeline, dev.obj(), &info);
alloc_memory(dev);
}
void Pipeline::init(
const Device &dev,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO &info,
- const XGL_PIPELINE basePipeline)
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO &info,
+ const VK_PIPELINE basePipeline)
{
- DERIVED_OBJECT_INIT(xglCreateGraphicsPipelineDerivative, dev.obj(), &info, basePipeline);
+ DERIVED_OBJECT_INIT(vkCreateGraphicsPipelineDerivative, dev.obj(), &info, basePipeline);
alloc_memory(dev);
}
-void Pipeline::init(const Device &dev, const XGL_COMPUTE_PIPELINE_CREATE_INFO &info)
+void Pipeline::init(const Device &dev, const VK_COMPUTE_PIPELINE_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateComputePipeline, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateComputePipeline, dev.obj(), &info);
alloc_memory(dev);
}
void Pipeline::init(const Device&dev, size_t size, const void *data)
{
- DERIVED_OBJECT_INIT(xglLoadPipeline, dev.obj(), size, data);
+ DERIVED_OBJECT_INIT(vkLoadPipeline, dev.obj(), size, data);
alloc_memory(dev);
}
void Pipeline::init(
    const Device&dev,
size_t size,
const void *data,
- const XGL_PIPELINE basePipeline)
+ const VK_PIPELINE basePipeline)
{
- DERIVED_OBJECT_INIT(xglLoadPipelineDerivative, dev.obj(), size, data, basePipeline);
+ DERIVED_OBJECT_INIT(vkLoadPipelineDerivative, dev.obj(), size, data, basePipeline);
alloc_memory(dev);
}
size_t Pipeline::store(size_t size, void *data)
{
- if (!EXPECT(xglStorePipeline(obj(), &size, data) == XGL_SUCCESS))
+ if (!EXPECT(vkStorePipeline(obj(), &size, data) == VK_SUCCESS))
size = 0;
return size;
}
-void Sampler::init(const Device &dev, const XGL_SAMPLER_CREATE_INFO &info)
+void Sampler::init(const Device &dev, const VK_SAMPLER_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateSampler, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateSampler, dev.obj(), &info);
alloc_memory(dev);
}
-void DescriptorSetLayout::init(const Device &dev, const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO &info)
+void DescriptorSetLayout::init(const Device &dev, const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateDescriptorSetLayout, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateDescriptorSetLayout, dev.obj(), &info);
alloc_memory(dev);
}
void DescriptorSetLayoutChain::init(const Device &dev, const std::vector<const DescriptorSetLayout *> &layouts)
{
- const std::vector<XGL_DESCRIPTOR_SET_LAYOUT> layout_objs = make_objects<XGL_DESCRIPTOR_SET_LAYOUT>(layouts);
+ const std::vector<VK_DESCRIPTOR_SET_LAYOUT> layout_objs = make_objects<VK_DESCRIPTOR_SET_LAYOUT>(layouts);
- DERIVED_OBJECT_INIT(xglCreateDescriptorSetLayoutChain, dev.obj(), layout_objs.size(), &layout_objs[0]);
+ DERIVED_OBJECT_INIT(vkCreateDescriptorSetLayoutChain, dev.obj(), layout_objs.size(), &layout_objs[0]);
alloc_memory(dev);
}
-void DescriptorPool::init(const Device &dev, XGL_DESCRIPTOR_POOL_USAGE usage,
- uint32_t max_sets, const XGL_DESCRIPTOR_POOL_CREATE_INFO &info)
+void DescriptorPool::init(const Device &dev, VK_DESCRIPTOR_POOL_USAGE usage,
+ uint32_t max_sets, const VK_DESCRIPTOR_POOL_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateDescriptorPool, dev.obj(), usage, max_sets, &info);
+ DERIVED_OBJECT_INIT(vkCreateDescriptorPool, dev.obj(), usage, max_sets, &info);
alloc_memory(dev);
}
void DescriptorPool::reset()
{
- EXPECT(xglResetDescriptorPool(obj()) == XGL_SUCCESS);
+ EXPECT(vkResetDescriptorPool(obj()) == VK_SUCCESS);
}
-std::vector<DescriptorSet *> DescriptorPool::alloc_sets(XGL_DESCRIPTOR_SET_USAGE usage, const std::vector<const DescriptorSetLayout *> &layouts)
+std::vector<DescriptorSet *> DescriptorPool::alloc_sets(VK_DESCRIPTOR_SET_USAGE usage, const std::vector<const DescriptorSetLayout *> &layouts)
{
- const std::vector<XGL_DESCRIPTOR_SET_LAYOUT> layout_objs = make_objects<XGL_DESCRIPTOR_SET_LAYOUT>(layouts);
+ const std::vector<VK_DESCRIPTOR_SET_LAYOUT> layout_objs = make_objects<VK_DESCRIPTOR_SET_LAYOUT>(layouts);
- std::vector<XGL_DESCRIPTOR_SET> set_objs;
+ std::vector<VK_DESCRIPTOR_SET> set_objs;
set_objs.resize(layout_objs.size());
uint32_t set_count;
- XGL_RESULT err = xglAllocDescriptorSets(obj(), usage, layout_objs.size(), &layout_objs[0], &set_objs[0], &set_count);
- if (err == XGL_SUCCESS)
+ VK_RESULT err = vkAllocDescriptorSets(obj(), usage, layout_objs.size(), &layout_objs[0], &set_objs[0], &set_count);
+ if (err == VK_SUCCESS)
EXPECT(set_count == set_objs.size());
set_objs.resize(set_count);
std::vector<DescriptorSet *> sets;
sets.reserve(set_count);
- for (std::vector<XGL_DESCRIPTOR_SET>::const_iterator it = set_objs.begin(); it != set_objs.end(); it++) {
+ for (std::vector<VK_DESCRIPTOR_SET>::const_iterator it = set_objs.begin(); it != set_objs.end(); it++) {
// do descriptor sets need memories bound?
sets.push_back(new DescriptorSet(*it));
}
return sets;
}
-std::vector<DescriptorSet *> DescriptorPool::alloc_sets(XGL_DESCRIPTOR_SET_USAGE usage, const DescriptorSetLayout &layout, uint32_t count)
+std::vector<DescriptorSet *> DescriptorPool::alloc_sets(VK_DESCRIPTOR_SET_USAGE usage, const DescriptorSetLayout &layout, uint32_t count)
{
return alloc_sets(usage, std::vector<const DescriptorSetLayout *>(count, &layout));
}
-DescriptorSet *DescriptorPool::alloc_sets(XGL_DESCRIPTOR_SET_USAGE usage, const DescriptorSetLayout &layout)
+DescriptorSet *DescriptorPool::alloc_sets(VK_DESCRIPTOR_SET_USAGE usage, const DescriptorSetLayout &layout)
{
std::vector<DescriptorSet *> set = alloc_sets(usage, layout, 1);
return (set.empty()) ? NULL : set[0];
}
void DescriptorPool::clear_sets(const std::vector<DescriptorSet *> &sets)
{
- const std::vector<XGL_DESCRIPTOR_SET> set_objs = make_objects<XGL_DESCRIPTOR_SET>(sets);
- xglClearDescriptorSets(obj(), set_objs.size(), &set_objs[0]);
+ const std::vector<VK_DESCRIPTOR_SET> set_objs = make_objects<VK_DESCRIPTOR_SET>(sets);
+ vkClearDescriptorSets(obj(), set_objs.size(), &set_objs[0]);
}
void DescriptorSet::update(const std::vector<const void *> &update_array)
{
- xglUpdateDescriptors(obj(), update_array.size(), const_cast<const void **>(&update_array[0]));
+ vkUpdateDescriptors(obj(), update_array.size(), const_cast<const void **>(&update_array[0]));
}
-void DynamicVpStateObject::init(const Device &dev, const XGL_DYNAMIC_VP_STATE_CREATE_INFO &info)
+void DynamicVpStateObject::init(const Device &dev, const VK_DYNAMIC_VP_STATE_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateDynamicViewportState, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateDynamicViewportState, dev.obj(), &info);
alloc_memory(dev);
}
-void DynamicRsStateObject::init(const Device &dev, const XGL_DYNAMIC_RS_STATE_CREATE_INFO &info)
+void DynamicRsStateObject::init(const Device &dev, const VK_DYNAMIC_RS_STATE_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateDynamicRasterState, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateDynamicRasterState, dev.obj(), &info);
alloc_memory(dev);
}
-void DynamicCbStateObject::init(const Device &dev, const XGL_DYNAMIC_CB_STATE_CREATE_INFO &info)
+void DynamicCbStateObject::init(const Device &dev, const VK_DYNAMIC_CB_STATE_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateDynamicColorBlendState, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateDynamicColorBlendState, dev.obj(), &info);
alloc_memory(dev);
}
-void DynamicDsStateObject::init(const Device &dev, const XGL_DYNAMIC_DS_STATE_CREATE_INFO &info)
+void DynamicDsStateObject::init(const Device &dev, const VK_DYNAMIC_DS_STATE_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateDynamicDepthStencilState, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateDynamicDepthStencilState, dev.obj(), &info);
alloc_memory(dev);
}
-void CmdBuffer::init(const Device &dev, const XGL_CMD_BUFFER_CREATE_INFO &info)
+void CmdBuffer::init(const Device &dev, const VK_CMD_BUFFER_CREATE_INFO &info)
{
- DERIVED_OBJECT_INIT(xglCreateCommandBuffer, dev.obj(), &info);
+ DERIVED_OBJECT_INIT(vkCreateCommandBuffer, dev.obj(), &info);
}
-void CmdBuffer::begin(const XGL_CMD_BUFFER_BEGIN_INFO *info)
+void CmdBuffer::begin(const VK_CMD_BUFFER_BEGIN_INFO *info)
{
- EXPECT(xglBeginCommandBuffer(obj(), info) == XGL_SUCCESS);
+ EXPECT(vkBeginCommandBuffer(obj(), info) == VK_SUCCESS);
}
-void CmdBuffer::begin(XGL_RENDER_PASS renderpass_obj, XGL_FRAMEBUFFER framebuffer_obj)
+void CmdBuffer::begin(VK_RENDER_PASS renderpass_obj, VK_FRAMEBUFFER framebuffer_obj)
{
- XGL_CMD_BUFFER_BEGIN_INFO info = {};
- XGL_CMD_BUFFER_GRAPHICS_BEGIN_INFO graphics_cmd_buf_info = {};
- graphics_cmd_buf_info.sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO;
+ VK_CMD_BUFFER_BEGIN_INFO info = {};
+ VK_CMD_BUFFER_GRAPHICS_BEGIN_INFO graphics_cmd_buf_info = {};
+ graphics_cmd_buf_info.sType = VK_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO;
graphics_cmd_buf_info.pNext = NULL;
graphics_cmd_buf_info.renderPassContinue.renderPass = renderpass_obj;
graphics_cmd_buf_info.renderPassContinue.framebuffer = framebuffer_obj;
- info.flags = XGL_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
- XGL_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT;
- info.sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO;
+ info.flags = VK_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
+ VK_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT;
+ info.sType = VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO;
info.pNext = &graphics_cmd_buf_info;
begin(&info);
}
void CmdBuffer::begin()
{
- XGL_CMD_BUFFER_BEGIN_INFO info = {};
- info.flags = XGL_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
- XGL_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT;
- info.sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO;
+ VK_CMD_BUFFER_BEGIN_INFO info = {};
+ info.flags = VK_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
+ VK_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT;
+ info.sType = VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO;
begin(&info);
}
void CmdBuffer::end()
{
- EXPECT(xglEndCommandBuffer(obj()) == XGL_SUCCESS);
+ EXPECT(vkEndCommandBuffer(obj()) == VK_SUCCESS);
}
void CmdBuffer::reset()
{
- EXPECT(xglResetCommandBuffer(obj()) == XGL_SUCCESS);
+ EXPECT(vkResetCommandBuffer(obj()) == VK_SUCCESS);
}
-}; // namespace xgl_testing
+}; // namespace vk_testing
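The hunks above (and the header removal that follows) are a mechanical identifier rename: `XGL_*` types and enums become `VK_*`, `xgl*` entry points become `vk*`, and the `xgl_testing` namespace becomes `vk_testing`. As a rough sketch of that pass — not the tooling the project actually used — the rules can be expressed as a couple of ordered regular expressions:

```python
import re

# Ordered rename rules. The XGL_ prefix rule runs first so that types and
# enums (XGL_RESULT, XGL_SUCCESS) are handled before the entry-point rule;
# the lookahead on the entry-point rule keeps it from touching xgl_testing.
RULES = [
    (re.compile(r"\bXGL_"), "VK_"),            # XGL_RESULT      -> VK_RESULT
    (re.compile(r"\bxgl(?=[A-Z])"), "vk"),     # xglCreateSampler -> vkCreateSampler
    (re.compile(r"\bxgl_testing\b"), "vk_testing"),  # test namespace
]

def rename_identifiers(source: str) -> str:
    """Apply the XGL->VK rename rules to one source string."""
    for pattern, replacement in RULES:
        source = pattern.sub(replacement, source)
    return source

print(rename_identifiers("EXPECT(xglEndCommandBuffer(obj()) == XGL_SUCCESS);"))
# -> EXPECT(vkEndCommandBuffer(obj()) == VK_SUCCESS);
```

A purely textual pass like this is why the diff touches every call site and declaration identically; only names change, never signatures or control flow.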
+++ /dev/null
-// XGL tests
-//
-// Copyright (C) 2014 LunarG, Inc.
-//
-// Permission is hereby granted, free of charge, to any person obtaining a
-// copy of this software and associated documentation files (the "Software"),
-// to deal in the Software without restriction, including without limitation
-// the rights to use, copy, modify, merge, publish, distribute, sublicense,
-// and/or sell copies of the Software, and to permit persons to whom the
-// Software is furnished to do so, subject to the following conditions:
-//
-// The above copyright notice and this permission notice shall be included
-// in all copies or substantial portions of the Software.
-//
-// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
-// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
-// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
-// DEALINGS IN THE SOFTWARE.
-
-#ifndef XGLTESTBINDING_H
-#define XGLTESTBINDING_H
-
-#include <vector>
-
-#include "xgl.h"
-
-namespace xgl_testing {
-
-typedef void (*ErrorCallback)(const char *expr, const char *file, unsigned int line, const char *function);
-void set_error_callback(ErrorCallback callback);
-
-class PhysicalGpu;
-class BaseObject;
-class Object;
-class DynamicStateObject;
-class Device;
-class Queue;
-class GpuMemory;
-class Fence;
-class Semaphore;
-class Event;
-class QueryPool;
-class Buffer;
-class BufferView;
-class Image;
-class ImageView;
-class ColorAttachmentView;
-class DepthStencilView;
-class Shader;
-class Pipeline;
-class PipelineDelta;
-class Sampler;
-class DescriptorSetLayout;
-class DescriptorSetLayoutChain;
-class DescriptorSetPool;
-class DescriptorSet;
-class DynamicVpStateObject;
-class DynamicRsStateObject;
-class DynamicMsaaStateObject;
-class DynamicCbStateObject;
-class DynamicDsStateObject;
-class CmdBuffer;
-
-class PhysicalGpu {
-public:
- explicit PhysicalGpu(XGL_PHYSICAL_GPU gpu) : gpu_(gpu) {}
-
- const XGL_PHYSICAL_GPU &obj() const { return gpu_; }
-
- // xglGetGpuInfo()
- XGL_PHYSICAL_GPU_PROPERTIES properties() const;
- XGL_PHYSICAL_GPU_PERFORMANCE performance() const;
- XGL_PHYSICAL_GPU_MEMORY_PROPERTIES memory_properties() const;
- std::vector<XGL_PHYSICAL_GPU_QUEUE_PROPERTIES> queue_properties() const;
-
- // xglGetProcAddr()
- void *get_proc(const char *name) const { return xglGetProcAddr(gpu_, name); }
-
- // xglGetExtensionSupport()
- bool has_extension(const char *ext) const { return (xglGetExtensionSupport(gpu_, ext) == XGL_SUCCESS); }
- std::vector<const char *> extensions() const;
-
- // xglEnumerateLayers()
- std::vector<const char *> layers(std::vector<char> &buf) const;
-
- // xglGetMultiGpuCompatibility()
- XGL_GPU_COMPATIBILITY_INFO compatibility(const PhysicalGpu &other) const;
-
-private:
- XGL_PHYSICAL_GPU gpu_;
-};
-
-class BaseObject {
-public:
- const XGL_BASE_OBJECT &obj() const { return obj_; }
- bool initialized() const { return (obj_ != XGL_NULL_HANDLE); }
-
- // xglGetObjectInfo()
- uint32_t memory_allocation_count() const;
- std::vector<XGL_MEMORY_REQUIREMENTS> memory_requirements() const;
-
-protected:
- explicit BaseObject() : obj_(XGL_NULL_HANDLE), own_obj_(false) {}
- explicit BaseObject(XGL_BASE_OBJECT obj) : obj_(XGL_NULL_HANDLE), own_obj_(false) { init(obj); }
-
- void init(XGL_BASE_OBJECT obj, bool own);
- void init(XGL_BASE_OBJECT obj) { init(obj, true); }
-
- void reinit(XGL_BASE_OBJECT obj, bool own);
- void reinit(XGL_BASE_OBJECT obj) { reinit(obj, true); }
-
- bool own() const { return own_obj_; }
-
-private:
- // base objects are non-copyable
- BaseObject(const BaseObject &);
- BaseObject &operator=(const BaseObject &);
-
- XGL_BASE_OBJECT obj_;
- bool own_obj_;
-};
-
-class Object : public BaseObject {
-public:
- const XGL_OBJECT &obj() const { return reinterpret_cast<const XGL_OBJECT &>(BaseObject::obj()); }
-
- // xglBindObjectMemory()
- void bind_memory(uint32_t alloc_idx, const GpuMemory &mem, XGL_GPU_SIZE mem_offset);
- void unbind_memory(uint32_t alloc_idx);
- void unbind_memory();
-
- // xglBindObjectMemoryRange()
- void bind_memory(uint32_t alloc_idx, XGL_GPU_SIZE offset, XGL_GPU_SIZE size,
- const GpuMemory &mem, XGL_GPU_SIZE mem_offset);
-
- // Unless an object is initialized with init_no_mem(), memories are
- // automatically allocated and bound. These methods can be used to get
- // the memories (for xglQueueAddMemReference), or to map/unmap the primary memory.
- std::vector<XGL_GPU_MEMORY> memories() const;
-
- const void *map(XGL_FLAGS flags) const;
- void *map(XGL_FLAGS flags);
- const void *map() const { return map(0); }
- void *map() { return map(0); }
-
- void unmap() const;
-
-protected:
- explicit Object() : mem_alloc_count_(0), internal_mems_(NULL), primary_mem_(NULL) {}
- explicit Object(XGL_OBJECT obj) : mem_alloc_count_(0), internal_mems_(NULL), primary_mem_(NULL) { init(obj); }
- ~Object() { cleanup(); }
-
- void init(XGL_OBJECT obj, bool own);
- void init(XGL_OBJECT obj) { init(obj, true); }
-
- void reinit(XGL_OBJECT obj, bool own);
- void reinit(XGL_OBJECT obj) { init(obj, true); }
-
- // allocate and bind internal memories
- void alloc_memory(const Device &dev, bool for_linear_img, bool for_img);
- void alloc_memory(const Device &dev) { alloc_memory(dev, false, false); }
- void alloc_memory(const std::vector<XGL_GPU_MEMORY> &mems);
-
-private:
- void cleanup();
-
- uint32_t mem_alloc_count_;
- GpuMemory *internal_mems_;
- GpuMemory *primary_mem_;
-};
-
-class DynamicStateObject : public Object {
-public:
- const XGL_DYNAMIC_STATE_OBJECT &obj() const { return reinterpret_cast<const XGL_DYNAMIC_STATE_OBJECT &>(Object::obj()); }
-
-protected:
- explicit DynamicStateObject() {}
- explicit DynamicStateObject(XGL_DYNAMIC_STATE_OBJECT obj) : Object(obj) {}
-};
-
-template<typename T, class C>
-class DerivedObject : public C {
-public:
- const T &obj() const { return reinterpret_cast<const T &>(C::obj()); }
-
-protected:
- typedef T obj_type;
- typedef C base_type;
-
- explicit DerivedObject() {}
- explicit DerivedObject(T obj) : C(obj) {}
-};
-
-class Device : public DerivedObject<XGL_DEVICE, BaseObject> {
-public:
- explicit Device(XGL_PHYSICAL_GPU gpu) : gpu_(gpu) {}
- ~Device();
-
- // xglCreateDevice()
- void init(const XGL_DEVICE_CREATE_INFO &info);
- void init(bool enable_layers); // all queues, all extensions, etc
- void init() { init(false); };
-
- const PhysicalGpu &gpu() const { return gpu_; }
-
- // xglGetDeviceQueue()
- const std::vector<Queue *> &graphics_queues() { return queues_[GRAPHICS]; }
- const std::vector<Queue *> &compute_queues() { return queues_[COMPUTE]; }
- const std::vector<Queue *> &dma_queues() { return queues_[DMA]; }
- uint32_t graphics_queue_node_index_;
-
- struct Format {
- XGL_FORMAT format;
- XGL_IMAGE_TILING tiling;
- XGL_FLAGS features;
- };
- // xglGetFormatInfo()
- XGL_FORMAT_PROPERTIES format_properties(XGL_FORMAT format);
- const std::vector<Format> &formats() const { return formats_; }
-
- // xglDeviceWaitIdle()
- void wait();
-
- // xglWaitForFences()
- XGL_RESULT wait(const std::vector<const Fence *> &fences, bool wait_all, uint64_t timeout);
- XGL_RESULT wait(const Fence &fence) { return wait(std::vector<const Fence *>(1, &fence), true, (uint64_t) -1); }
-
- // xglBeginDescriptorPoolUpdate()
- // xglEndDescriptorPoolUpdate()
- void begin_descriptor_pool_update(XGL_DESCRIPTOR_UPDATE_MODE mode);
- void end_descriptor_pool_update(CmdBuffer &cmd);
-
-private:
- enum QueueIndex {
- GRAPHICS,
- COMPUTE,
- DMA,
- QUEUE_COUNT,
- };
-
- void init_queues();
- void init_formats();
-
- PhysicalGpu gpu_;
-
- std::vector<Queue *> queues_[QUEUE_COUNT];
- std::vector<Format> formats_;
-};
-
-class Queue : public DerivedObject<XGL_QUEUE, BaseObject> {
-public:
- explicit Queue(XGL_QUEUE queue) : DerivedObject(queue) {}
-
- // xglQueueSubmit()
- void submit(const std::vector<const CmdBuffer *> &cmds, Fence &fence);
- void submit(const CmdBuffer &cmd, Fence &fence);
- void submit(const CmdBuffer &cmd);
-
- // xglQueueAddMemReference()
- // xglQueueRemoveMemReference()
- void add_mem_references(const std::vector<XGL_GPU_MEMORY> &mem_refs);
- void remove_mem_references(const std::vector<XGL_GPU_MEMORY> &mem_refs);
-
- // xglQueueWaitIdle()
- void wait();
-
- // xglQueueSignalSemaphore()
- // xglQueueWaitSemaphore()
- void signal_semaphore(Semaphore &sem);
- void wait_semaphore(Semaphore &sem);
-};
-
-class GpuMemory : public DerivedObject<XGL_GPU_MEMORY, BaseObject> {
-public:
- ~GpuMemory();
-
- // xglAllocMemory()
- void init(const Device &dev, const XGL_MEMORY_ALLOC_INFO &info);
- // xglPinSystemMemory()
- void init(const Device &dev, size_t size, const void *data);
- // xglOpenSharedMemory()
- void init(const Device &dev, const XGL_MEMORY_OPEN_INFO &info);
- // xglOpenPeerMemory()
- void init(const Device &dev, const XGL_PEER_MEMORY_OPEN_INFO &info);
-
- void init(XGL_GPU_MEMORY mem) { BaseObject::init(mem, false); }
-
- // xglSetMemoryPriority()
- void set_priority(XGL_MEMORY_PRIORITY priority);
-
- // xglMapMemory()
- const void *map(XGL_FLAGS flags) const;
- void *map(XGL_FLAGS flags);
- const void *map() const { return map(0); }
- void *map() { return map(0); }
-
- // xglUnmapMemory()
- void unmap() const;
-
- static XGL_MEMORY_ALLOC_INFO alloc_info(const XGL_MEMORY_REQUIREMENTS &reqs,
- const XGL_MEMORY_ALLOC_INFO *next_info);
-};
-
-class Fence : public DerivedObject<XGL_FENCE, Object> {
-public:
- // xglCreateFence()
- void init(const Device &dev, const XGL_FENCE_CREATE_INFO &info);
-
- // xglGetFenceStatus()
- XGL_RESULT status() const { return xglGetFenceStatus(obj()); }
-
- static XGL_FENCE_CREATE_INFO create_info(XGL_FENCE_CREATE_FLAGS flags);
- static XGL_FENCE_CREATE_INFO create_info();
-};
-
-class Semaphore : public DerivedObject<XGL_SEMAPHORE, Object> {
-public:
- // xglCreateSemaphore()
- void init(const Device &dev, const XGL_SEMAPHORE_CREATE_INFO &info);
- // xglOpenSharedSemaphore()
- void init(const Device &dev, const XGL_SEMAPHORE_OPEN_INFO &info);
-
- static XGL_SEMAPHORE_CREATE_INFO create_info(uint32_t init_count, XGL_FLAGS flags);
-};
-
-class Event : public DerivedObject<XGL_EVENT, Object> {
-public:
- // xglCreateEvent()
- void init(const Device &dev, const XGL_EVENT_CREATE_INFO &info);
-
- // xglGetEventStatus()
- // xglSetEvent()
- // xglResetEvent()
- XGL_RESULT status() const { return xglGetEventStatus(obj()); }
- void set();
- void reset();
-
- static XGL_EVENT_CREATE_INFO create_info(XGL_FLAGS flags);
-};
-
-class QueryPool : public DerivedObject<XGL_QUERY_POOL, Object> {
-public:
- // xglCreateQueryPool()
- void init(const Device &dev, const XGL_QUERY_POOL_CREATE_INFO &info);
-
- // xglGetQueryPoolResults()
- XGL_RESULT results(uint32_t start, uint32_t count, size_t size, void *data);
-
- static XGL_QUERY_POOL_CREATE_INFO create_info(XGL_QUERY_TYPE type, uint32_t slot_count);
-};
-
-class Buffer : public DerivedObject<XGL_BUFFER, Object> {
-public:
- explicit Buffer() {}
- explicit Buffer(const Device &dev, const XGL_BUFFER_CREATE_INFO &info) { init(dev, info); }
- explicit Buffer(const Device &dev, XGL_GPU_SIZE size) { init(dev, size); }
-
- // xglCreateBuffer()
- void init(const Device &dev, const XGL_BUFFER_CREATE_INFO &info);
- void init(const Device &dev, XGL_GPU_SIZE size) { init(dev, create_info(size, 0)); }
- void init_no_mem(const Device &dev, const XGL_BUFFER_CREATE_INFO &info);
-
- static XGL_BUFFER_CREATE_INFO create_info(XGL_GPU_SIZE size, XGL_FLAGS usage);
-
- XGL_BUFFER_MEMORY_BARRIER buffer_memory_barrier(XGL_FLAGS output_mask, XGL_FLAGS input_mask,
- XGL_GPU_SIZE offset, XGL_GPU_SIZE size) const
- {
- XGL_BUFFER_MEMORY_BARRIER barrier = {};
- barrier.sType = XGL_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER;
- barrier.buffer = obj();
- barrier.outputMask = output_mask;
- barrier.inputMask = input_mask;
- barrier.offset = offset;
- barrier.size = size;
- return barrier;
- }
-private:
- XGL_BUFFER_CREATE_INFO create_info_;
-};
-
-class BufferView : public DerivedObject<XGL_BUFFER_VIEW, Object> {
-public:
- // xglCreateBufferView()
- void init(const Device &dev, const XGL_BUFFER_VIEW_CREATE_INFO &info);
-};
-
-class Image : public DerivedObject<XGL_IMAGE, Object> {
-public:
- explicit Image() : format_features_(0) {}
- explicit Image(const Device &dev, const XGL_IMAGE_CREATE_INFO &info) : format_features_(0) { init(dev, info); }
-
- // xglCreateImage()
- void init(const Device &dev, const XGL_IMAGE_CREATE_INFO &info);
- void init_no_mem(const Device &dev, const XGL_IMAGE_CREATE_INFO &info);
- // xglOpenPeerImage()
- void init(const Device &dev, const XGL_PEER_IMAGE_OPEN_INFO &info, const XGL_IMAGE_CREATE_INFO &original_info);
-
- // xglBindImageMemoryRange()
- void bind_memory(uint32_t alloc_idx, const XGL_IMAGE_MEMORY_BIND_INFO &info,
- const GpuMemory &mem, XGL_GPU_SIZE mem_offset);
-
- // xglGetImageSubresourceInfo()
- XGL_SUBRESOURCE_LAYOUT subresource_layout(const XGL_IMAGE_SUBRESOURCE &subres) const;
-
- bool transparent() const;
- bool copyable() const { return (format_features_ & XGL_FORMAT_IMAGE_COPY_BIT); }
-
- XGL_IMAGE_SUBRESOURCE_RANGE subresource_range(XGL_IMAGE_ASPECT aspect) const { return subresource_range(create_info_, aspect); }
- XGL_EXTENT3D extent() const { return create_info_.extent; }
- XGL_EXTENT3D extent(uint32_t mip_level) const { return extent(create_info_.extent, mip_level); }
- XGL_FORMAT format() const {return create_info_.format;}
-
- XGL_IMAGE_MEMORY_BARRIER image_memory_barrier(XGL_FLAGS output_mask, XGL_FLAGS input_mask,
- XGL_IMAGE_LAYOUT old_layout,
- XGL_IMAGE_LAYOUT new_layout,
- const XGL_IMAGE_SUBRESOURCE_RANGE &range) const
- {
- XGL_IMAGE_MEMORY_BARRIER barrier = {};
- barrier.sType = XGL_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
- barrier.outputMask = output_mask;
- barrier.inputMask = input_mask;
- barrier.oldLayout = old_layout;
- barrier.newLayout = new_layout;
- barrier.image = obj();
- barrier.subresourceRange = range;
- return barrier;
- }
-
- static XGL_IMAGE_CREATE_INFO create_info();
- static XGL_IMAGE_SUBRESOURCE subresource(XGL_IMAGE_ASPECT aspect, uint32_t mip_level, uint32_t array_slice);
- static XGL_IMAGE_SUBRESOURCE subresource(const XGL_IMAGE_SUBRESOURCE_RANGE &range, uint32_t mip_level, uint32_t array_slice);
- static XGL_IMAGE_SUBRESOURCE_RANGE subresource_range(XGL_IMAGE_ASPECT aspect, uint32_t base_mip_level, uint32_t mip_levels,
- uint32_t base_array_slice, uint32_t array_size);
- static XGL_IMAGE_SUBRESOURCE_RANGE subresource_range(const XGL_IMAGE_CREATE_INFO &info, XGL_IMAGE_ASPECT aspect);
- static XGL_IMAGE_SUBRESOURCE_RANGE subresource_range(const XGL_IMAGE_SUBRESOURCE &subres);
-
- static XGL_EXTENT2D extent(int32_t width, int32_t height);
- static XGL_EXTENT2D extent(const XGL_EXTENT2D &extent, uint32_t mip_level);
- static XGL_EXTENT2D extent(const XGL_EXTENT3D &extent);
-
- static XGL_EXTENT3D extent(int32_t width, int32_t height, int32_t depth);
- static XGL_EXTENT3D extent(const XGL_EXTENT3D &extent, uint32_t mip_level);
-
-private:
- void init_info(const Device &dev, const XGL_IMAGE_CREATE_INFO &info);
-
- XGL_IMAGE_CREATE_INFO create_info_;
- XGL_FLAGS format_features_;
-};
-
-class ImageView : public DerivedObject<XGL_IMAGE_VIEW, Object> {
-public:
- // xglCreateImageView()
- void init(const Device &dev, const XGL_IMAGE_VIEW_CREATE_INFO &info);
-};
-
-class ColorAttachmentView : public DerivedObject<XGL_COLOR_ATTACHMENT_VIEW, Object> {
-public:
- // xglCreateColorAttachmentView()
- void init(const Device &dev, const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO &info);
-};
-
-class DepthStencilView : public DerivedObject<XGL_DEPTH_STENCIL_VIEW, Object> {
-public:
- // xglCreateDepthStencilView()
- void init(const Device &dev, const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO &info);
-};
-
-class Shader : public DerivedObject<XGL_SHADER, Object> {
-public:
- // xglCreateShader()
- void init(const Device &dev, const XGL_SHADER_CREATE_INFO &info);
- XGL_RESULT init_try(const Device &dev, const XGL_SHADER_CREATE_INFO &info);
-
- static XGL_SHADER_CREATE_INFO create_info(size_t code_size, const void *code, XGL_FLAGS flags);
-};
-
-class Pipeline : public DerivedObject<XGL_PIPELINE, Object> {
-public:
- // xglCreateGraphicsPipeline()
- void init(const Device &dev, const XGL_GRAPHICS_PIPELINE_CREATE_INFO &info);
- // xglCreateGraphicsPipelineDerivative()
- void init(const Device &dev, const XGL_GRAPHICS_PIPELINE_CREATE_INFO &info, const XGL_PIPELINE basePipeline);
- // xglCreateComputePipeline()
- void init(const Device &dev, const XGL_COMPUTE_PIPELINE_CREATE_INFO &info);
- // xglLoadPipeline()
- void init(const Device&dev, size_t size, const void *data);
- // xglLoadPipelineDerivative()
- void init(const Device&dev, size_t size, const void *data, XGL_PIPELINE basePipeline);
-
- // xglStorePipeline()
- size_t store(size_t size, void *data);
-};
-
-class Sampler : public DerivedObject<XGL_SAMPLER, Object> {
-public:
- // xglCreateSampler()
- void init(const Device &dev, const XGL_SAMPLER_CREATE_INFO &info);
-};
-
-class DescriptorSetLayout : public DerivedObject<XGL_DESCRIPTOR_SET_LAYOUT, Object> {
-public:
- // xglCreateDescriptorSetLayout()
- void init(const Device &dev, const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO &info);
-};
-
-class DescriptorSetLayoutChain : public DerivedObject<XGL_DESCRIPTOR_SET_LAYOUT_CHAIN, Object> {
-public:
- // xglCreateDescriptorSetLayoutChain()
- void init(const Device &dev, const std::vector<const DescriptorSetLayout *> &layouts);
-};
-
-class DescriptorPool : public DerivedObject<XGL_DESCRIPTOR_POOL, Object> {
-public:
- // xglCreateDescriptorPool()
- void init(const Device &dev, XGL_DESCRIPTOR_POOL_USAGE usage,
- uint32_t max_sets, const XGL_DESCRIPTOR_POOL_CREATE_INFO &info);
-
- // xglResetDescriptorPool()
- void reset();
-
- // xglAllocDescriptorSets()
- std::vector<DescriptorSet *> alloc_sets(XGL_DESCRIPTOR_SET_USAGE usage, const std::vector<const DescriptorSetLayout *> &layouts);
- std::vector<DescriptorSet *> alloc_sets(XGL_DESCRIPTOR_SET_USAGE usage, const DescriptorSetLayout &layout, uint32_t count);
- DescriptorSet *alloc_sets(XGL_DESCRIPTOR_SET_USAGE usage, const DescriptorSetLayout &layout);
-
- // xglClearDescriptorSets()
- void clear_sets(const std::vector<DescriptorSet *> &sets);
- void clear_sets(DescriptorSet &set) { clear_sets(std::vector<DescriptorSet *>(1, &set)); }
-};
-
-class DescriptorSet : public DerivedObject<XGL_DESCRIPTOR_SET, Object> {
-public:
- explicit DescriptorSet(XGL_DESCRIPTOR_SET set) : DerivedObject(set) {}
-
- // xglUpdateDescriptors()
- void update(const std::vector<const void *> &update_array);
-
- static XGL_UPDATE_SAMPLERS update(uint32_t binding, uint32_t index, uint32_t count, const XGL_SAMPLER *samplers);
- static XGL_UPDATE_SAMPLERS update(uint32_t binding, uint32_t index, const std::vector<XGL_SAMPLER> &samplers);
-
- static XGL_UPDATE_SAMPLER_TEXTURES update(uint32_t binding, uint32_t index, uint32_t count, const XGL_SAMPLER_IMAGE_VIEW_INFO *textures);
- static XGL_UPDATE_SAMPLER_TEXTURES update(uint32_t binding, uint32_t index, const std::vector<XGL_SAMPLER_IMAGE_VIEW_INFO> &textures);
-
- static XGL_UPDATE_IMAGES update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count, const XGL_IMAGE_VIEW_ATTACH_INFO *views);
- static XGL_UPDATE_IMAGES update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, const std::vector<XGL_IMAGE_VIEW_ATTACH_INFO> &views);
-
- static XGL_UPDATE_BUFFERS update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count, const XGL_BUFFER_VIEW_ATTACH_INFO *views);
- static XGL_UPDATE_BUFFERS update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, const std::vector<XGL_BUFFER_VIEW_ATTACH_INFO> &views);
-
- static XGL_UPDATE_AS_COPY update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count, const DescriptorSet &set);
-
- static XGL_BUFFER_VIEW_ATTACH_INFO attach_info(const BufferView &view);
- static XGL_IMAGE_VIEW_ATTACH_INFO attach_info(const ImageView &view, XGL_IMAGE_LAYOUT layout);
-};
-
-class DynamicVpStateObject : public DerivedObject<XGL_DYNAMIC_VP_STATE_OBJECT, DynamicStateObject> {
-public:
- // xglCreateDynamicViewportState()
- void init(const Device &dev, const XGL_DYNAMIC_VP_STATE_CREATE_INFO &info);
-};
-
-class DynamicRsStateObject : public DerivedObject<XGL_DYNAMIC_RS_STATE_OBJECT, DynamicStateObject> {
-public:
- // xglCreateDynamicRasterState()
- void init(const Device &dev, const XGL_DYNAMIC_RS_STATE_CREATE_INFO &info);
-};
-
-class DynamicCbStateObject : public DerivedObject<XGL_DYNAMIC_CB_STATE_OBJECT, DynamicStateObject> {
-public:
- // xglCreateDynamicColorBlendState()
- void init(const Device &dev, const XGL_DYNAMIC_CB_STATE_CREATE_INFO &info);
-};
-
-class DynamicDsStateObject : public DerivedObject<XGL_DYNAMIC_DS_STATE_OBJECT, DynamicStateObject> {
-public:
- // xglCreateDynamicDepthStencilState()
- void init(const Device &dev, const XGL_DYNAMIC_DS_STATE_CREATE_INFO &info);
-};
-
-class CmdBuffer : public DerivedObject<XGL_CMD_BUFFER, Object> {
-public:
- explicit CmdBuffer() {}
- explicit CmdBuffer(const Device &dev, const XGL_CMD_BUFFER_CREATE_INFO &info) { init(dev, info); }
-
- // xglCreateCommandBuffer()
- void init(const Device &dev, const XGL_CMD_BUFFER_CREATE_INFO &info);
-
- // xglBeginCommandBuffer()
- void begin(const XGL_CMD_BUFFER_BEGIN_INFO *info);
- void begin(XGL_RENDER_PASS renderpass_obj, XGL_FRAMEBUFFER framebuffer_obj);
- void begin();
-
- // xglEndCommandBuffer()
- // xglResetCommandBuffer()
- void end();
- void reset();
-
- static XGL_CMD_BUFFER_CREATE_INFO create_info(uint32_t queueNodeIndex);
-};
-
-inline const void *Object::map(XGL_FLAGS flags) const
-{
- return (primary_mem_) ? primary_mem_->map(flags) : NULL;
-}
-
-inline void *Object::map(XGL_FLAGS flags)
-{
- return (primary_mem_) ? primary_mem_->map(flags) : NULL;
-}
-
-inline void Object::unmap() const
-{
- if (primary_mem_)
- primary_mem_->unmap();
-}
-
-inline XGL_MEMORY_ALLOC_INFO GpuMemory::alloc_info(const XGL_MEMORY_REQUIREMENTS &reqs,
- const XGL_MEMORY_ALLOC_INFO *next_info)
-{
- XGL_MEMORY_ALLOC_INFO info = {};
- info.sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
- if (next_info != NULL)
- info.pNext = (void *) next_info;
-
- info.allocationSize = reqs.size;
- info.memProps = reqs.memProps;
- info.memType = reqs.memType;
- info.memPriority = XGL_MEMORY_PRIORITY_NORMAL;
- return info;
-}
-
-inline XGL_BUFFER_CREATE_INFO Buffer::create_info(XGL_GPU_SIZE size, XGL_FLAGS usage)
-{
- XGL_BUFFER_CREATE_INFO info = {};
- info.sType = XGL_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
- info.size = size;
- info.usage = usage;
- return info;
-}
-
-inline XGL_FENCE_CREATE_INFO Fence::create_info(XGL_FENCE_CREATE_FLAGS flags)
-{
- XGL_FENCE_CREATE_INFO info = {};
- info.sType = XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO;
- info.flags = flags;
- return info;
-}
-
-inline XGL_FENCE_CREATE_INFO Fence::create_info()
-{
- XGL_FENCE_CREATE_INFO info = {};
- info.sType = XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO;
- return info;
-}
-
-inline XGL_SEMAPHORE_CREATE_INFO Semaphore::create_info(uint32_t init_count, XGL_FLAGS flags)
-{
- XGL_SEMAPHORE_CREATE_INFO info = {};
- info.sType = XGL_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO;
- info.initialCount = init_count;
- info.flags = flags;
- return info;
-}
-
-inline XGL_EVENT_CREATE_INFO Event::create_info(XGL_FLAGS flags)
-{
- XGL_EVENT_CREATE_INFO info = {};
- info.sType = XGL_STRUCTURE_TYPE_EVENT_CREATE_INFO;
- info.flags = flags;
- return info;
-}
-
-inline XGL_QUERY_POOL_CREATE_INFO QueryPool::create_info(XGL_QUERY_TYPE type, uint32_t slot_count)
-{
- XGL_QUERY_POOL_CREATE_INFO info = {};
- info.sType = XGL_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO;
- info.queryType = type;
- info.slots = slot_count;
- return info;
-}
-
-inline XGL_IMAGE_CREATE_INFO Image::create_info()
-{
- XGL_IMAGE_CREATE_INFO info = {};
- info.sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
- info.extent.width = 1;
- info.extent.height = 1;
- info.extent.depth = 1;
- info.mipLevels = 1;
- info.arraySize = 1;
- info.samples = 1;
- return info;
-}
-
-inline XGL_IMAGE_SUBRESOURCE Image::subresource(XGL_IMAGE_ASPECT aspect, uint32_t mip_level, uint32_t array_slice)
-{
- XGL_IMAGE_SUBRESOURCE subres = {};
- subres.aspect = aspect;
- subres.mipLevel = mip_level;
- subres.arraySlice = array_slice;
- return subres;
-}
-
-inline XGL_IMAGE_SUBRESOURCE Image::subresource(const XGL_IMAGE_SUBRESOURCE_RANGE &range, uint32_t mip_level, uint32_t array_slice)
-{
- return subresource(range.aspect, range.baseMipLevel + mip_level, range.baseArraySlice + array_slice);
-}
-
-inline XGL_IMAGE_SUBRESOURCE_RANGE Image::subresource_range(XGL_IMAGE_ASPECT aspect, uint32_t base_mip_level, uint32_t mip_levels,
- uint32_t base_array_slice, uint32_t array_size)
-{
- XGL_IMAGE_SUBRESOURCE_RANGE range = {};
- range.aspect = aspect;
- range.baseMipLevel = base_mip_level;
- range.mipLevels = mip_levels;
- range.baseArraySlice = base_array_slice;
- range.arraySize = array_size;
- return range;
-}
-
-inline XGL_IMAGE_SUBRESOURCE_RANGE Image::subresource_range(const XGL_IMAGE_CREATE_INFO &info, XGL_IMAGE_ASPECT aspect)
-{
- return subresource_range(aspect, 0, info.mipLevels, 0, info.arraySize);
-}
-
-inline XGL_IMAGE_SUBRESOURCE_RANGE Image::subresource_range(const XGL_IMAGE_SUBRESOURCE &subres)
-{
- return subresource_range(subres.aspect, subres.mipLevel, 1, subres.arraySlice, 1);
-}
-
-inline XGL_EXTENT2D Image::extent(int32_t width, int32_t height)
-{
- XGL_EXTENT2D extent = {};
- extent.width = width;
- extent.height = height;
- return extent;
-}
-
-inline XGL_EXTENT2D Image::extent(const XGL_EXTENT2D &extent, uint32_t mip_level)
-{
- const int32_t width = (extent.width >> mip_level) ? extent.width >> mip_level : 1;
- const int32_t height = (extent.height >> mip_level) ? extent.height >> mip_level : 1;
- return Image::extent(width, height);
-}
-
-inline XGL_EXTENT2D Image::extent(const XGL_EXTENT3D &extent)
-{
- return Image::extent(extent.width, extent.height);
-}
-
-inline XGL_EXTENT3D Image::extent(int32_t width, int32_t height, int32_t depth)
-{
- XGL_EXTENT3D extent = {};
- extent.width = width;
- extent.height = height;
- extent.depth = depth;
- return extent;
-}
-
-inline XGL_EXTENT3D Image::extent(const XGL_EXTENT3D &extent, uint32_t mip_level)
-{
- const int32_t width = (extent.width >> mip_level) ? extent.width >> mip_level : 1;
- const int32_t height = (extent.height >> mip_level) ? extent.height >> mip_level : 1;
- const int32_t depth = (extent.depth >> mip_level) ? extent.depth >> mip_level : 1;
- return Image::extent(width, height, depth);
-}
-
-inline XGL_SHADER_CREATE_INFO Shader::create_info(size_t code_size, const void *code, XGL_FLAGS flags)
-{
- XGL_SHADER_CREATE_INFO info = {};
- info.sType = XGL_STRUCTURE_TYPE_SHADER_CREATE_INFO;
- info.codeSize = code_size;
- info.pCode = code;
- info.flags = flags;
- return info;
-}
-
-inline XGL_BUFFER_VIEW_ATTACH_INFO DescriptorSet::attach_info(const BufferView &view)
-{
- XGL_BUFFER_VIEW_ATTACH_INFO info = {};
- info.sType = XGL_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO;
- info.view = view.obj();
- return info;
-}
-
-inline XGL_IMAGE_VIEW_ATTACH_INFO DescriptorSet::attach_info(const ImageView &view, XGL_IMAGE_LAYOUT layout)
-{
- XGL_IMAGE_VIEW_ATTACH_INFO info = {};
- info.sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
- info.view = view.obj();
- info.layout = layout;
- return info;
-}
-
-inline XGL_UPDATE_SAMPLERS DescriptorSet::update(uint32_t binding, uint32_t index, uint32_t count, const XGL_SAMPLER *samplers)
-{
- XGL_UPDATE_SAMPLERS info = {};
- info.sType = XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS;
- info.binding = binding;
- info.arrayIndex = index;
- info.count = count;
- info.pSamplers = samplers;
- return info;
-}
-
-inline XGL_UPDATE_SAMPLERS DescriptorSet::update(uint32_t binding, uint32_t index, const std::vector<XGL_SAMPLER> &samplers)
-{
- return update(binding, index, samplers.size(), &samplers[0]);
-}
-
-inline XGL_UPDATE_SAMPLER_TEXTURES DescriptorSet::update(uint32_t binding, uint32_t index, uint32_t count, const XGL_SAMPLER_IMAGE_VIEW_INFO *textures)
-{
- XGL_UPDATE_SAMPLER_TEXTURES info = {};
- info.sType = XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES;
- info.binding = binding;
- info.arrayIndex = index;
- info.count = count;
- info.pSamplerImageViews = textures;
- return info;
-}
-
-inline XGL_UPDATE_SAMPLER_TEXTURES DescriptorSet::update(uint32_t binding, uint32_t index, const std::vector<XGL_SAMPLER_IMAGE_VIEW_INFO> &textures)
-{
- return update(binding, index, textures.size(), &textures[0]);
-}
-
-inline XGL_UPDATE_IMAGES DescriptorSet::update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count,
- const XGL_IMAGE_VIEW_ATTACH_INFO *views)
-{
- XGL_UPDATE_IMAGES info = {};
- info.sType = XGL_STRUCTURE_TYPE_UPDATE_IMAGES;
- info.descriptorType = type;
- info.binding = binding;
- info.arrayIndex = index;
- info.count = count;
- info.pImageViews = views;
- return info;
-}
-
-inline XGL_UPDATE_IMAGES DescriptorSet::update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index,
- const std::vector<XGL_IMAGE_VIEW_ATTACH_INFO> &views)
-{
- return update(type, binding, index, views.size(), &views[0]);
-}
-
-inline XGL_UPDATE_BUFFERS DescriptorSet::update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count,
- const XGL_BUFFER_VIEW_ATTACH_INFO *views)
-{
- XGL_UPDATE_BUFFERS info = {};
- info.sType = XGL_STRUCTURE_TYPE_UPDATE_BUFFERS;
- info.descriptorType = type;
- info.binding = binding;
- info.arrayIndex = index;
- info.count = count;
- info.pBufferViews = views;
- return info;
-}
-
-inline XGL_UPDATE_BUFFERS DescriptorSet::update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index,
- const std::vector<XGL_BUFFER_VIEW_ATTACH_INFO> &views)
-{
- return update(type, binding, index, views.size(), &views[0]);
-}
-
-inline XGL_UPDATE_AS_COPY DescriptorSet::update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count, const DescriptorSet &set)
-{
- XGL_UPDATE_AS_COPY info = {};
- info.sType = XGL_STRUCTURE_TYPE_UPDATE_AS_COPY;
- info.descriptorType = type;
- info.binding = binding;
- info.arrayElement = index;
- info.count = count;
- info.descriptorSet = set.obj();
- return info;
-}
-
-inline XGL_CMD_BUFFER_CREATE_INFO CmdBuffer::create_info(uint32_t queueNodeIndex)
-{
- XGL_CMD_BUFFER_CREATE_INFO info = {};
- info.sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO;
- info.queueNodeIndex = queueNodeIndex;
- return info;
-}
-
-}; // namespace xgl_testing
-
-#endif // XGLTESTBINDING_H
-// XGL tests
+// VK tests
//
// Copyright (C) 2014 LunarG, Inc.
//
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
-#include "xgltestframework.h"
-#include "xglrenderframework.h"
+#include "vktestframework.h"
+#include "vkrenderframework.h"
#include "GL/freeglut_std.h"
//#include "ShaderLang.h"
#include "GlslangToSpv.h"
// Initialize GLSL to SPV compiler utility
glslang::InitializeProcess();
- xgl_testing::set_error_callback(test_error_callback);
+ vk_testing::set_error_callback(test_error_callback);
}
void TestEnvironment::TearDown()
void XglTestFramework::WritePPM( const char *basename, XglImage *image )
{
string filename;
- XGL_RESULT err;
+ VK_RESULT err;
int x, y;
XglImage displayImage(image->device());
- displayImage.init(image->extent().width, image->extent().height, image->format(), 0, XGL_LINEAR_TILING);
+ displayImage.init(image->extent().width, image->extent().height, image->format(), 0, VK_LINEAR_TILING);
displayImage.CopyImage(*image);
filename.append(basename);
filename.append(".ppm");
- const XGL_IMAGE_SUBRESOURCE sr = {
- XGL_IMAGE_ASPECT_COLOR, 0, 0
+ const VK_IMAGE_SUBRESOURCE sr = {
+ VK_IMAGE_ASPECT_COLOR, 0, 0
};
- XGL_SUBRESOURCE_LAYOUT sr_layout;
+ VK_SUBRESOURCE_LAYOUT sr_layout;
size_t data_size = sizeof(sr_layout);
- err = xglGetImageSubresourceInfo( image->image(), &sr,
- XGL_INFO_TYPE_SUBRESOURCE_LAYOUT,
+ err = vkGetImageSubresourceInfo( image->image(), &sr,
+ VK_INFO_TYPE_SUBRESOURCE_LAYOUT,
&data_size, &sr_layout);
- ASSERT_XGL_SUCCESS( err );
+ ASSERT_VK_SUCCESS( err );
ASSERT_EQ(data_size, sizeof(sr_layout));
char *ptr;
const int *row = (const int *) ptr;
int swapped;
- if (displayImage.format() == XGL_FMT_B8G8R8A8_UNORM)
+ if (displayImage.format() == VK_FMT_B8G8R8A8_UNORM)
{
for (x = 0; x < displayImage.width(); x++) {
swapped = (*row & 0xff00ff00) | (*row & 0x000000ff) << 16 | (*row & 0x00ff0000) >> 16;
row++;
}
}
- else if (displayImage.format() == XGL_FMT_R8G8B8A8_UNORM)
+ else if (displayImage.format() == VK_FMT_R8G8B8A8_UNORM)
{
for (x = 0; x < displayImage.width(); x++) {
file.write((char *) row, 3);
void XglTestFramework::Show(const char *comment, XglImage *image)
{
- XGL_RESULT err;
+ VK_RESULT err;
- const XGL_IMAGE_SUBRESOURCE sr = {
- XGL_IMAGE_ASPECT_COLOR, 0, 0
+ const VK_IMAGE_SUBRESOURCE sr = {
+ VK_IMAGE_ASPECT_COLOR, 0, 0
};
- XGL_SUBRESOURCE_LAYOUT sr_layout;
+ VK_SUBRESOURCE_LAYOUT sr_layout;
size_t data_size = sizeof(sr_layout);
XglTestImageRecord record;
if (!m_show_images) return;
- err = xglGetImageSubresourceInfo( image->image(), &sr, XGL_INFO_TYPE_SUBRESOURCE_LAYOUT,
+ err = vkGetImageSubresourceInfo( image->image(), &sr, VK_INFO_TYPE_SUBRESOURCE_LAYOUT,
&data_size, &sr_layout);
- ASSERT_XGL_SUCCESS( err );
+ ASSERT_VK_SUCCESS( err );
ASSERT_EQ(data_size, sizeof(sr_layout));
char *ptr;
err = image->MapMemory( (void **) &ptr );
- ASSERT_XGL_SUCCESS( err );
+ ASSERT_VK_SUCCESS( err );
ptr += sr_layout.offset;
m_display_image = --m_images.end();
err = image->UnmapMemory();
- ASSERT_XGL_SUCCESS( err );
+ ASSERT_VK_SUCCESS( err );
}
}
}
-static xgl_testing::Environment *environment;
+static vk_testing::Environment *environment;
TestFrameworkXglPresent::TestFrameworkXglPresent() :
m_device(environment->default_device()),
m_queue(*m_device.graphics_queues()[0]),
- m_cmdbuf(m_device, xgl_testing::CmdBuffer::create_info(m_device.graphics_queue_node_index_))
+ m_cmdbuf(m_device, vk_testing::CmdBuffer::create_info(m_device.graphics_queue_node_index_))
{
m_quit = false;
m_pause = false;
void TestFrameworkXglPresent::Display()
{
- XGL_RESULT err;
+ VK_RESULT err;
- XGL_WSI_X11_PRESENT_INFO present = {};
+ VK_WSI_X11_PRESENT_INFO present = {};
present.destWindow = m_window;
present.srcImage = m_display_image->m_presentableImage;
m_display_image->m_title.size(),
m_display_image->m_title.c_str());
- err = xglWsiX11QueuePresent(m_queue.obj(), &present, NULL);
+ err = vkWsiX11QueuePresent(m_queue.obj(), &present, NULL);
assert(!err);
m_queue.wait();
void TestFrameworkXglPresent::CreatePresentableImages()
{
- XGL_RESULT err;
+ VK_RESULT err;
m_display_image = m_images.begin();
for (int x=0; x < m_images.size(); x++)
{
- XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO presentable_image_info = {};
- presentable_image_info.format = XGL_FMT_B8G8R8A8_UNORM;
- presentable_image_info.usage = XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
+ VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO presentable_image_info = {};
+ presentable_image_info.format = VK_FMT_B8G8R8A8_UNORM;
+ presentable_image_info.usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
presentable_image_info.extent.width = m_display_image->m_width;
presentable_image_info.extent.height = m_display_image->m_height;
presentable_image_info.flags = 0;
void *dest_ptr;
- err = xglWsiX11CreatePresentableImage(m_device.obj(), &presentable_image_info,
+ err = vkWsiX11CreatePresentableImage(m_device.obj(), &presentable_image_info,
&m_display_image->m_presentableImage, &m_display_image->m_presentableMemory);
assert(!err);
- xgl_testing::Buffer buf;
- buf.init(m_device, (XGL_GPU_SIZE) m_display_image->m_data_size);
+ vk_testing::Buffer buf;
+ buf.init(m_device, (VK_GPU_SIZE) m_display_image->m_data_size);
dest_ptr = buf.map();
memcpy(dest_ptr,m_display_image->m_data, m_display_image->m_data_size);
buf.unmap();
m_cmdbuf.begin();
- XGL_BUFFER_IMAGE_COPY region = {};
+ VK_BUFFER_IMAGE_COPY region = {};
region.imageExtent.height = m_display_image->m_height;
region.imageExtent.width = m_display_image->m_width;
region.imageExtent.depth = 1;
- xglCmdCopyBufferToImage(m_cmdbuf.obj(),
+ vkCmdCopyBufferToImage(m_cmdbuf.obj(),
buf.obj(),
- m_display_image->m_presentableImage, XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
+ m_display_image->m_presentableImage, VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
                            1, &region);
m_cmdbuf.end();
- xglQueueAddMemReference(m_queue.obj(), m_display_image->m_presentableMemory);
- xglQueueAddMemReference(m_queue.obj(), buf.memories()[0]);
+ vkQueueAddMemReference(m_queue.obj(), m_display_image->m_presentableMemory);
+ vkQueueAddMemReference(m_queue.obj(), buf.memories()[0]);
- XGL_CMD_BUFFER cmdBufs[1];
+ VK_CMD_BUFFER cmdBufs[1];
cmdBufs[0] = m_cmdbuf.obj();
- xglQueueSubmit(m_queue.obj(), 1, cmdBufs, NULL);
+ vkQueueSubmit(m_queue.obj(), 1, cmdBufs, NULL);
m_queue.wait();
- xglQueueRemoveMemReference(m_queue.obj(), m_display_image->m_presentableMemory);
- xglQueueRemoveMemReference(m_queue.obj(), buf.memories()[0]);
+ vkQueueRemoveMemReference(m_queue.obj(), m_display_image->m_presentableMemory);
+ vkQueueRemoveMemReference(m_queue.obj(), buf.memories()[0]);
if (m_display_image->m_width > m_width)
m_width = m_display_image->m_width;
{
std::list<XglTestImageRecord>::const_iterator iterator;
for (iterator = m_images.begin(); iterator != m_images.end(); ++iterator) {
- xglDestroyObject(iterator->m_presentableImage);
+ vkDestroyObject(iterator->m_presentableImage);
}
xcb_destroy_window(environment->m_connection, m_window);
}
{
if (m_images.size() == 0) return;
- environment = new xgl_testing::Environment();
+ environment = new vk_testing::Environment();
::testing::AddGlobalTestEnvironment(environment);
environment->X11SetUp();
{
- TestFrameworkXglPresent xglPresent;
+ TestFrameworkXglPresent vkPresent;
- xglPresent.InitPresentFramework(m_images);
- xglPresent.CreatePresentableImages();
- xglPresent.CreateMyWindow();
- xglPresent.Run();
- xglPresent.TearDown();
+ vkPresent.InitPresentFramework(m_images);
+ vkPresent.CreatePresentableImages();
+ vkPresent.CreateMyWindow();
+ vkPresent.Run();
+ vkPresent.TearDown();
}
environment->TearDown();
}
}
//
-// Convert XGL shader type to compiler's
+// Convert VK shader type to compiler's
//
-EShLanguage XglTestFramework::FindLanguage(const XGL_PIPELINE_SHADER_STAGE shader_type)
+EShLanguage XglTestFramework::FindLanguage(const VK_PIPELINE_SHADER_STAGE shader_type)
{
switch (shader_type) {
- case XGL_SHADER_STAGE_VERTEX:
+ case VK_SHADER_STAGE_VERTEX:
return EShLangVertex;
- case XGL_SHADER_STAGE_TESS_CONTROL:
+ case VK_SHADER_STAGE_TESS_CONTROL:
return EShLangTessControl;
- case XGL_SHADER_STAGE_TESS_EVALUATION:
+ case VK_SHADER_STAGE_TESS_EVALUATION:
return EShLangTessEvaluation;
- case XGL_SHADER_STAGE_GEOMETRY:
+ case VK_SHADER_STAGE_GEOMETRY:
return EShLangGeometry;
- case XGL_SHADER_STAGE_FRAGMENT:
+ case VK_SHADER_STAGE_FRAGMENT:
return EShLangFragment;
- case XGL_SHADER_STAGE_COMPUTE:
+ case VK_SHADER_STAGE_COMPUTE:
return EShLangCompute;
default:
//
-// Compile a given string containing GLSL into SPV for use by XGL
+// Compile a given string containing GLSL into SPV for use by VK
// A return value of false means an error was encountered.
//
-bool XglTestFramework::GLSLtoSPV(const XGL_PIPELINE_SHADER_STAGE shader_type,
+bool XglTestFramework::GLSLtoSPV(const VK_PIPELINE_SHADER_STAGE shader_type,
const char *pshader,
std::vector<unsigned int> &spv)
{
#!/usr/bin/env python3
#
-# XGL
+# VK
#
# Copyright (C) 2014 LunarG, Inc.
#
# code_gen.py overview
# This script generates code based on input headers
-# Initially it's intended to support Mantle and XGL headers and
+# Initially it's intended to support Mantle and VK headers and
# generate wrapper functions that can be used to display
# structs in a human-readable txt format, as well as utility functions
# to print enum values as strings
self.typedef_fwd_dict[base_type] = targ_type.strip(';')
self.typedef_rev_dict[targ_type.strip(';')] = base_type
elif parse_enum:
- #if 'XGL_MAX_ENUM' not in line and '{' not in line:
- if True not in [ens in line for ens in ['{', 'XGL_MAX_ENUM', '_RANGE']]:
+ #if 'VK_MAX_ENUM' not in line and '{' not in line:
+ if True not in [ens in line for ens in ['{', 'VK_MAX_ENUM', '_RANGE']]:
self._add_enum(line, base_type, default_enum_val)
default_enum_val += 1
elif parse_struct:
self.struct_dict = in_struct_dict
self.include_headers = []
self.api = prefix
- self.header_filename = os.path.join(out_dir, self.api+"_struct_wrappers.h")
- self.class_filename = os.path.join(out_dir, self.api+"_struct_wrappers.cpp")
- self.string_helper_filename = os.path.join(out_dir, self.api+"_struct_string_helper.h")
- self.string_helper_no_addr_filename = os.path.join(out_dir, self.api+"_struct_string_helper_no_addr.h")
- self.string_helper_cpp_filename = os.path.join(out_dir, self.api+"_struct_string_helper_cpp.h")
- self.string_helper_no_addr_cpp_filename = os.path.join(out_dir, self.api+"_struct_string_helper_no_addr_cpp.h")
- self.validate_helper_filename = os.path.join(out_dir, self.api+"_struct_validate_helper.h")
+ if prefix == "vulkan":
+ self.api_prefix = "vk"
+ else:
+ self.api_prefix = prefix
+ self.header_filename = os.path.join(out_dir, self.api_prefix+"_struct_wrappers.h")
+ self.class_filename = os.path.join(out_dir, self.api_prefix+"_struct_wrappers.cpp")
+ self.string_helper_filename = os.path.join(out_dir, self.api_prefix+"_struct_string_helper.h")
+ self.string_helper_no_addr_filename = os.path.join(out_dir, self.api_prefix+"_struct_string_helper_no_addr.h")
+ self.string_helper_cpp_filename = os.path.join(out_dir, self.api_prefix+"_struct_string_helper_cpp.h")
+ self.string_helper_no_addr_cpp_filename = os.path.join(out_dir, self.api_prefix+"_struct_string_helper_no_addr_cpp.h")
+ self.validate_helper_filename = os.path.join(out_dir, self.api_prefix+"_struct_validate_helper.h")
self.no_addr = False
self.hfg = CommonFileGen(self.header_filename)
self.cfg = CommonFileGen(self.class_filename)
self.shg = CommonFileGen(self.string_helper_filename)
self.shcppg = CommonFileGen(self.string_helper_cpp_filename)
self.vhg = CommonFileGen(self.validate_helper_filename)
- self.size_helper_filename = os.path.join(out_dir, self.api+"_struct_size_helper.h")
- self.size_helper_c_filename = os.path.join(out_dir, self.api+"_struct_size_helper.c")
+ self.size_helper_filename = os.path.join(out_dir, self.api_prefix+"_struct_size_helper.h")
+ self.size_helper_c_filename = os.path.join(out_dir, self.api_prefix+"_struct_size_helper.c")
self.size_helper_gen = CommonFileGen(self.size_helper_filename)
self.size_helper_c_gen = CommonFileGen(self.size_helper_c_filename)
#print(self.header_filename)
def _generateCppHeader(self):
header = []
header.append("//#includes, #defines, globals and such...\n")
- header.append("#include <stdio.h>\n#include <%s>\n#include <%s_enum_string_helper.h>\n" % (os.path.basename(self.header_filename), self.api))
+ header.append("#include <stdio.h>\n#include <%s>\n#include <%s_enum_string_helper.h>\n" % (os.path.basename(self.header_filename), self.api_prefix))
return "".join(header)
def _generateClassDefinition(self):
class_def = []
- if 'xgl' == self.api: # Mantle doesn't have pNext to worry about
+ if 'vk' == self.api: # Mantle doesn't have pNext to worry about
class_def.append(self._generateDynamicPrintFunctions())
for s in sorted(self.struct_dict):
class_def.append("\n// %s class definition" % self.get_class_name(s))
def _generateDynamicPrintFunctions(self):
dp_funcs = []
dp_funcs.append("\nvoid dynamic_display_full_txt(const void* pStruct, uint32_t indent)\n{\n // Cast to APP_INFO ptr initially just to pull sType off struct")
- dp_funcs.append(" XGL_STRUCTURE_TYPE sType = ((XGL_APPLICATION_INFO*)pStruct)->sType;\n")
+ dp_funcs.append(" VK_STRUCTURE_TYPE sType = ((VK_APPLICATION_INFO*)pStruct)->sType;\n")
dp_funcs.append(" switch (sType)\n {")
for e in enum_type_dict:
class_num = 0
return "\n".join(dp_funcs)
def _get_func_name(self, struct, mid_str):
- return "%s_%s_%s" % (self.api, mid_str, struct.lower().strip("_"))
+ return "%s_%s_%s" % (self.api_prefix, mid_str, struct.lower().strip("_"))
def _get_sh_func_name(self, struct):
return self._get_func_name(struct, 'print')
sh_funcs.append(" if (pStruct == NULL) {")
sh_funcs.append(" return NULL;")
sh_funcs.append(" }")
- sh_funcs.append(" XGL_STRUCTURE_TYPE sType = ((XGL_APPLICATION_INFO*)pStruct)->sType;")
+ sh_funcs.append(" VK_STRUCTURE_TYPE sType = ((VK_APPLICATION_INFO*)pStruct)->sType;")
sh_funcs.append(' char indent[100];\n strcpy(indent, " ");\n strcat(indent, prefix);')
sh_funcs.append(" switch (sType)\n {")
for e in enum_type_dict:
sh_funcs.append(" if (pStruct == NULL) {\n")
sh_funcs.append(" return NULL;")
sh_funcs.append(" }\n")
- sh_funcs.append(" XGL_STRUCTURE_TYPE sType = ((XGL_APPLICATION_INFO*)pStruct)->sType;")
+ sh_funcs.append(" VK_STRUCTURE_TYPE sType = ((VK_APPLICATION_INFO*)pStruct)->sType;")
sh_funcs.append(' string indent = " ";')
sh_funcs.append(' indent += prefix;')
sh_funcs.append(" switch (sType)\n {")
header = []
header.append("//#includes, #defines, globals and such...\n")
for f in self.include_headers:
- if 'xgl_enum_string_helper' not in f:
+ if 'vk_enum_string_helper' not in f:
header.append("#include <%s>\n" % f)
- header.append('#include "xgl_enum_string_helper.h"\n\n// Function Prototypes\n')
+ header.append('#include "vk_enum_string_helper.h"\n\n// Function Prototypes\n')
header.append("char* dynamic_display(const void* pStruct, const char* prefix);\n")
return "".join(header)
header = []
header.append("//#includes, #defines, globals and such...\n")
for f in self.include_headers:
- if 'xgl_enum_string_helper' not in f:
+ if 'vk_enum_string_helper' not in f:
header.append("#include <%s>\n" % f)
- header.append('#include "xgl_enum_string_helper.h"\n')
+ header.append('#include "vk_enum_string_helper.h"\n')
header.append('using namespace std;\n\n// Function Prototypes\n')
header.append("string dynamic_display(const void* pStruct, const string prefix);\n")
return "".join(header)
for s in sorted(self.struct_dict):
sh_funcs.append('uint32_t %s(const %s* pStruct)\n{' % (self._get_vh_func_name(s), typedef_fwd_dict[s]))
for m in sorted(self.struct_dict[s]):
- # TODO : Need to handle arrays of enums like in XGL_RENDER_PASS_CREATE_INFO struct
+ # TODO : Need to handle arrays of enums like in VK_RENDER_PASS_CREATE_INFO struct
if is_type(self.struct_dict[s][m]['type'], 'enum') and not self.struct_dict[s][m]['ptr']:
sh_funcs.append(' if (!validate_%s(pStruct->%s))\n return 0;' % (self.struct_dict[s][m]['type'], self.struct_dict[s][m]['name']))
# TODO : Need a little refinement to this code to make sure type of struct matches expected input (ptr, const...)
header = []
header.append("//#includes, #defines, globals and such...\n")
for f in self.include_headers:
- if 'xgl_enum_validate_helper' not in f:
+ if 'vk_enum_validate_helper' not in f:
header.append("#include <%s>\n" % f)
- header.append('#include "xgl_enum_validate_helper.h"\n\n// Function Prototypes\n')
+ header.append('#include "vk_enum_validate_helper.h"\n\n// Function Prototypes\n')
#header.append("char* dynamic_display(const void* pStruct, const char* prefix);\n")
return "".join(header)
if not is_type(self.struct_dict[s][m]['type'], 'struct') and not 'char' in self.struct_dict[s][m]['type'].lower():
if 'ppMemBarriers' == self.struct_dict[s][m]['name']:
# TODO : For now be conservative and consider all memBarrier ptrs as largest possible struct
- sh_funcs.append('%sstructSize += pStruct->%s*(sizeof(%s*) + sizeof(XGL_IMAGE_MEMORY_BARRIER));' % (indent, self.struct_dict[s][m]['array_size'], self.struct_dict[s][m]['type']))
+ sh_funcs.append('%sstructSize += pStruct->%s*(sizeof(%s*) + sizeof(VK_IMAGE_MEMORY_BARRIER));' % (indent, self.struct_dict[s][m]['array_size'], self.struct_dict[s][m]['type']))
else:
sh_funcs.append('%sstructSize += pStruct->%s*(sizeof(%s*) + sizeof(%s));' % (indent, self.struct_dict[s][m]['array_size'], self.struct_dict[s][m]['type'], self.struct_dict[s][m]['type']))
else: # This is an array of char* or array of struct ptrs
else:
sh_funcs.append('size_t get_dynamic_struct_size(const void* pStruct)\n{')
indent = ' '
- sh_funcs.append('%s// Just use XGL_APPLICATION_INFO as struct until actual type is resolved' % (indent))
- sh_funcs.append('%sXGL_APPLICATION_INFO* pNext = (XGL_APPLICATION_INFO*)pStruct;' % (indent))
+ sh_funcs.append('%s// Just use VK_APPLICATION_INFO as struct until actual type is resolved' % (indent))
+ sh_funcs.append('%sVK_APPLICATION_INFO* pNext = (VK_APPLICATION_INFO*)pStruct;' % (indent))
sh_funcs.append('%ssize_t structSize = 0;' % (indent))
if follow_chain:
sh_funcs.append('%swhile (pNext) {' % (indent))
indent = indent[:-4]
sh_funcs.append('%s}' % (indent))
if follow_chain:
- sh_funcs.append('%spNext = (XGL_APPLICATION_INFO*)pNext->pNext;' % (indent))
+ sh_funcs.append('%spNext = (VK_APPLICATION_INFO*)pNext->pNext;' % (indent))
indent = indent[:-4]
sh_funcs.append('%s}' % (indent))
sh_funcs.append('%sreturn structSize;\n}' % indent)
def __init__(self, struct_dict, prefix, out_dir):
self.struct_dict = struct_dict
self.api = prefix
- self.out_file = os.path.join(out_dir, self.api+"_struct_graphviz_helper.h")
+ if prefix == "vulkan":
+ self.api_prefix = "vk"
+ else:
+ self.api_prefix = prefix
+ self.out_file = os.path.join(out_dir, self.api_prefix+"_struct_graphviz_helper.h")
self.gvg = CommonFileGen(self.out_file)
def generate(self):
header = []
header.append("//#includes, #defines, globals and such...\n")
for f in self.include_headers:
- if 'xgl_enum_string_helper' not in f:
+ if 'vk_enum_string_helper' not in f:
header.append("#include <%s>\n" % f)
- #header.append('#include "xgl_enum_string_helper.h"\n\n// Function Prototypes\n')
+ #header.append('#include "vk_enum_string_helper.h"\n\n// Function Prototypes\n')
header.append("\nchar* dynamic_gv_display(const void* pStruct, const char* prefix);\n")
return "".join(header)
def _get_gv_func_name(self, struct):
- return "%s_gv_print_%s" % (self.api, struct.lower().strip("_"))
+ return "%s_gv_print_%s" % (self.api_prefix, struct.lower().strip("_"))
# Return elements to create formatted string for given struct member
def _get_struct_gv_print_formatted(self, struct_member, pre_var_name="", postfix = "\\n", struct_var_name="pStruct", struct_ptr=True, print_array=False, port_label=""):
def _generateBody(self):
gv_funcs = []
array_func_list = [] # structs for which we'll generate an array version of their print function
- array_func_list.append('xgl_buffer_view_attach_info')
- array_func_list.append('xgl_image_view_attach_info')
- array_func_list.append('xgl_sampler_image_view_info')
- array_func_list.append('xgl_descriptor_type_count')
+ array_func_list.append('vk_buffer_view_attach_info')
+ array_func_list.append('vk_image_view_attach_info')
+ array_func_list.append('vk_sampler_image_view_info')
+ array_func_list.append('vk_descriptor_type_count')
# For first pass, generate prototype
for s in sorted(self.struct_dict):
gv_funcs.append('char* %s(const %s* pStruct, const char* myNodeName);\n' % (self._get_gv_func_name(s), typedef_fwd_dict[s]))
if s.lower().strip("_") in array_func_list:
- if s.lower().strip("_") in ['xgl_buffer_view_attach_info', 'xgl_image_view_attach_info']:
+ if s.lower().strip("_") in ['vk_buffer_view_attach_info', 'vk_image_view_attach_info']:
gv_funcs.append('char* %s_array(uint32_t count, const %s* const* pStruct, const char* myNodeName);\n' % (self._get_gv_func_name(s), typedef_fwd_dict[s]))
else:
gv_funcs.append('char* %s_array(uint32_t count, const %s* pStruct, const char* myNodeName);\n' % (self._get_gv_func_name(s), typedef_fwd_dict[s]))
gv_funcs.append(" return str;\n}\n")
if s.lower().strip("_") in array_func_list:
ptr_array = False
- if s.lower().strip("_") in ['xgl_buffer_view_attach_info', 'xgl_image_view_attach_info']:
+ if s.lower().strip("_") in ['vk_buffer_view_attach_info', 'vk_image_view_attach_info']:
ptr_array = True
gv_funcs.append('char* %s_array(uint32_t count, const %s* const* pStruct, const char* myNodeName)\n{\n char* str;\n char tmpStr[1024];\n' % (self._get_gv_func_name(s), typedef_fwd_dict[s]))
else:
# Add function to dynamically print out unknown struct
gv_funcs.append("char* dynamic_gv_display(const void* pStruct, const char* nodeName)\n{\n")
gv_funcs.append(" // Cast to APP_INFO ptr initially just to pull sType off struct\n")
- gv_funcs.append(" XGL_STRUCTURE_TYPE sType = ((XGL_APPLICATION_INFO*)pStruct)->sType;\n")
+ gv_funcs.append(" VK_STRUCTURE_TYPE sType = ((VK_APPLICATION_INFO*)pStruct)->sType;\n")
gv_funcs.append(" switch (sType)\n {\n")
for e in enum_type_dict:
if "_STRUCTURE_TYPE" in e:
struct_name = v.replace("_STRUCTURE_TYPE", "")
print_func_name = self._get_gv_func_name(struct_name)
# TODO : Hand-coded fixes for some exceptions
- #if 'XGL_PIPELINE_CB_STATE_CREATE_INFO' in struct_name:
- # struct_name = 'XGL_PIPELINE_CB_STATE'
- if 'XGL_SEMAPHORE_CREATE_INFO' in struct_name:
- struct_name = 'XGL_SEMAPHORE_CREATE_INFO'
+ #if 'VK_PIPELINE_CB_STATE_CREATE_INFO' in struct_name:
+ # struct_name = 'VK_PIPELINE_CB_STATE'
+ if 'VK_SEMAPHORE_CREATE_INFO' in struct_name:
+ struct_name = 'VK_SEMAPHORE_CREATE_INFO'
print_func_name = self._get_gv_func_name(struct_name)
- elif 'XGL_SEMAPHORE_OPEN_INFO' in struct_name:
- struct_name = 'XGL_SEMAPHORE_OPEN_INFO'
+ elif 'VK_SEMAPHORE_OPEN_INFO' in struct_name:
+ struct_name = 'VK_SEMAPHORE_OPEN_INFO'
print_func_name = self._get_gv_func_name(struct_name)
gv_funcs.append(' case %s:\n' % (v))
gv_funcs.append(' return %s((%s*)pStruct, nodeName);\n' % (print_func_name, struct_name))
#print(enum_val_dict)
#print(typedef_dict)
#print(struct_dict)
+ prefix = os.path.splitext(os.path.basename(opts.input_file))[0]
+ if prefix == "vulkan":
+ prefix = "vk"
if (opts.abs_out_dir is not None):
- enum_sh_filename = os.path.join(opts.abs_out_dir, os.path.basename(opts.input_file).strip(".h")+"_enum_string_helper.h")
+ enum_sh_filename = os.path.join(opts.abs_out_dir, prefix+"_enum_string_helper.h")
else:
- enum_sh_filename = os.path.join(os.getcwd(), opts.rel_out_dir, os.path.basename(opts.input_file).strip(".h")+"_enum_string_helper.h")
+ enum_sh_filename = os.path.join(os.getcwd(), opts.rel_out_dir, prefix+"_enum_string_helper.h")
enum_sh_filename = os.path.abspath(enum_sh_filename)
if not os.path.exists(os.path.dirname(enum_sh_filename)):
print("Creating output dir %s" % os.path.dirname(enum_sh_filename))
os.mkdir(os.path.dirname(enum_sh_filename))
if opts.gen_enum_string_helper:
print("Generating enum string helper to %s" % enum_sh_filename)
- enum_vh_filename = os.path.join(os.path.dirname(enum_sh_filename), os.path.basename(opts.input_file).strip(".h")+"_enum_validate_helper.h")
+ enum_vh_filename = os.path.join(os.path.dirname(enum_sh_filename), prefix+"_enum_validate_helper.h")
print("Generating enum validate helper to %s" % enum_vh_filename)
eg = EnumCodeGen(enum_type_dict, enum_val_dict, typedef_fwd_dict, os.path.basename(opts.input_file), enum_sh_filename, enum_vh_filename)
eg.generateStringHelper()
#!/usr/bin/env python3
#
-# XGL
+# VK
#
# Copyright (C) 2014 LunarG, Inc.
#
import xgl
def generate_get_proc_addr_check(name):
- return " if (!%s || %s[0] != 'x' || %s[1] != 'g' || %s[2] != 'l')\n" \
- " return NULL;" % ((name,) * 4)
+ return " if (!%s || %s[0] != 'v' || %s[1] != 'k')\n" \
+ " return NULL;" % ((name,) * 3)
class Subcommand(object):
def __init__(self, argv):
return """/* THIS FILE IS GENERATED. DO NOT EDIT. */
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
def _generate_object_setup(self, proto):
method = "loader_init_data"
- cond = "res == XGL_SUCCESS"
+ cond = "res == VK_SUCCESS"
if "Get" in proto.name:
method = "loader_set_data"
for proto in self.protos:
if not self._is_dispatchable(proto):
continue
-
func = []
obj_setup = self._generate_object_setup(proto)
- func.append(qual + proto.c_func(prefix="xgl", attr="XGLAPI"))
+ func.append(qual + proto.c_func(prefix="vk", attr="VKAPI"))
func.append("{")
# declare local variables
- func.append(" const XGL_LAYER_DISPATCH_TABLE *disp;")
+ func.append(" const VK_LAYER_DISPATCH_TABLE *disp;")
if proto.ret != 'void' and obj_setup:
- func.append(" XGL_RESULT res;")
+ func.append(" VK_RESULT res;")
func.append("")
# active layers before dispatching CreateDevice
# get dispatch table and unwrap GPUs
for param in proto.params:
stmt = ""
- if param.ty == "XGL_PHYSICAL_GPU":
+ if param.ty == "VK_PHYSICAL_GPU":
stmt = "loader_unwrap_gpu(&%s);" % param.name
if param == proto.params[0]:
stmt = "disp = " + stmt
super().run()
def generate_header(self):
- return "\n".join(["#include <xgl.h>",
- "#include <xglLayer.h>",
+ return "\n".join(["#include <vulkan.h>",
+ "#include <vkLayer.h>",
"#include <string.h>",
"#include \"loader_platform.h\""])
stmts.append("table->%s = gpa; /* direct assignment */" %
proto.name)
else:
- stmts.append("table->%s = (xgl%sType) gpa(gpu, \"xgl%s\");" %
+ stmts.append("table->%s = (vk%sType) gpa(gpu, \"vk%s\");" %
(proto.name, proto.name, proto.name))
stmts.append("#endif")
func = []
- func.append("static inline void %s_initialize_dispatch_table(XGL_LAYER_DISPATCH_TABLE *table,"
+ func.append("static inline void %s_initialize_dispatch_table(VK_LAYER_DISPATCH_TABLE *table,"
% self.prefix)
- func.append("%s xglGetProcAddrType gpa,"
+ func.append("%s vkGetProcAddrType gpa,"
% (" " * len(self.prefix)))
- func.append("%s XGL_PHYSICAL_GPU gpu)"
+ func.append("%s VK_PHYSICAL_GPU gpu)"
% (" " * len(self.prefix)))
func.append("{")
func.append(" %s" % "\n ".join(stmts))
lookups.append("#endif")
func = []
- func.append("static inline void *%s_lookup_dispatch_table(const XGL_LAYER_DISPATCH_TABLE *table,"
+ func.append("static inline void *%s_lookup_dispatch_table(const VK_LAYER_DISPATCH_TABLE *table,"
% self.prefix)
func.append("%s const char *name)"
% (" " * len(self.prefix)))
func.append("{")
func.append(generate_get_proc_addr_check("name"))
func.append("")
- func.append(" name += 3;")
+ func.append(" name += 2;")
func.append(" %s" % "\n ".join(lookups))
func.append("")
func.append(" return NULL;")
self.prefix = self.argv[0]
self.qual = "static"
else:
- self.prefix = "xgl"
+ self.prefix = "vk"
self.qual = "ICD_EXPORT"
super().run()
return "#include \"icd.h\""
def _generate_stub_decl(self, proto):
- return proto.c_pretty_decl(self.prefix + proto.name, attr="XGLAPI")
+ return proto.c_pretty_decl(self.prefix + proto.name, attr="VKAPI")
def _generate_stubs(self):
stubs = []
for proto in self.protos:
decl = self._generate_stub_decl(proto)
if proto.ret != "void":
- stmt = " return XGL_ERROR_UNKNOWN;\n"
+ stmt = " return VK_ERROR_UNKNOWN;\n"
else:
stmt = ""
body.append("{")
body.append(generate_get_proc_addr_check(gpa_pname))
body.append("")
- body.append(" %s += 3;" % gpa_pname)
+ body.append(" %s += 2;" % gpa_pname)
body.append(" %s" % "\n ".join(lookups))
body.append("")
body.append(" return NULL;")
class LayerInterceptProcSubcommand(Subcommand):
def run(self):
- self.prefix = "xgl"
+ self.prefix = "vk"
# we could get the list from argv if wanted
self.intercepted = [proto.name for proto in self.protos
super().run()
def generate_header(self):
- return "\n".join(["#include <string.h>", "#include \"xglLayer.h\""])
+ return "\n".join(["#include <string.h>", "#include \"vkLayer.h\""])
def generate_body(self):
lookups = []
body.append("{")
body.append(generate_get_proc_addr_check("name"))
body.append("")
- body.append(" name += 3;")
+ body.append(" name += 2;")
body.append(" %s" % "\n ".join(lookups))
body.append("")
body.append(" return NULL;")
return """; THIS FILE IS GENERATED. DO NOT EDIT.
;;;; Begin Copyright Notice ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
-; XGL
+; Vulkan
;
; Copyright (C) 2015 LunarG, Inc.
;
for proto in self.protos:
if self.exports and proto.name not in self.exports:
continue
- body.append(" xgl" + proto.name)
+ body.append(" vk" + proto.name)
return "\n".join(body)
#!/usr/bin/env python3
#
-# XGL
+# VK
#
# Copyright (C) 2014 LunarG, Inc.
#
import os
import xgl
-import xgl_helper
+import vk_helper
def generate_get_proc_addr_check(name):
- return " if (!%s || %s[0] != 'x' || %s[1] != 'g' || %s[2] != 'l')\n" \
- " return NULL;" % ((name,) * 4)
+ return " if (!%s || %s[0] != 'v' || %s[1] != 'k')\n" \
+ " return NULL;" % ((name,) * 3)
class Subcommand(object):
def __init__(self, argv):
return """/* THIS FILE IS GENERATED. DO NOT EDIT. */
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
pass
# Return set of printf '%' qualifier and input to that qualifier
- def _get_printf_params(self, xgl_type, name, output_param, cpp=False):
+ def _get_printf_params(self, vk_type, name, output_param, cpp=False):
# TODO : Need ENUM and STRUCT checks here
- if xgl_helper.is_type(xgl_type, 'enum'):#"_TYPE" in xgl_type: # TODO : This should be generic ENUM check
- return ("%s", "string_%s(%s)" % (xgl_type.strip('const ').strip('*'), name))
- if "char*" == xgl_type:
+ if vk_helper.is_type(vk_type, 'enum'): # "_TYPE" in vk_type: # TODO : This should be generic ENUM check
+ return ("%s", "string_%s(%s)" % (vk_type.strip('const ').strip('*'), name))
+ if "char*" == vk_type:
return ("%s", name)
- if "uint64" in xgl_type:
- if '*' in xgl_type:
+ if "uint64" in vk_type:
+ if '*' in vk_type:
return ("%lu", "*%s" % name)
return ("%lu", name)
- if "size" in xgl_type:
- if '*' in xgl_type:
+ if "size" in vk_type:
+ if '*' in vk_type:
return ("%zu", "*%s" % name)
return ("%zu", name)
- if "float" in xgl_type:
- if '[' in xgl_type: # handle array, current hard-coded to 4 (TODO: Make this dynamic)
+ if "float" in vk_type:
+ if '[' in vk_type: # handle array, currently hard-coded to 4 (TODO: Make this dynamic)
if cpp:
return ("[%i, %i, %i, %i]", '"[" << %s[0] << "," << %s[1] << "," << %s[2] << "," << %s[3] << "]"' % (name, name, name, name))
return ("[%f, %f, %f, %f]", "%s[0], %s[1], %s[2], %s[3]" % (name, name, name, name))
return ("%f", name)
- if "bool" in xgl_type or 'xcb_randr_crtc_t' in xgl_type:
+ if "bool" in vk_type or 'xcb_randr_crtc_t' in vk_type:
return ("%u", name)
- if True in [t in xgl_type for t in ["int", "FLAGS", "MASK", "xcb_window_t"]]:
- if '[' in xgl_type: # handle array, current hard-coded to 4 (TODO: Make this dynamic)
+ if True in [t in vk_type for t in ["int", "FLAGS", "MASK", "xcb_window_t"]]:
+ if '[' in vk_type: # handle array, currently hard-coded to 4 (TODO: Make this dynamic)
if cpp:
return ("[%i, %i, %i, %i]", "%s[0] << %s[1] << %s[2] << %s[3]" % (name, name, name, name))
return ("[%i, %i, %i, %i]", "%s[0], %s[1], %s[2], %s[3]" % (name, name, name, name))
- if '*' in xgl_type:
+ if '*' in vk_type:
if 'pUserData' == name:
return ("%i", "((pUserData == 0) ? 0 : *(pUserData))")
return ("%i", "*(%s)" % name)
return ("%i", name)
# TODO : This is special-cased as there's only one "format" param currently and it's nice to expand it
- if "XGL_FORMAT" == xgl_type:
+ if "VK_FORMAT" == vk_type:
if cpp:
return ("%p", "&%s" % name)
- return ("{%s.channelFormat = %%s, %s.numericFormat = %%s}" % (name, name), "string_XGL_CHANNEL_FORMAT(%s.channelFormat), string_XGL_NUM_FORMAT(%s.numericFormat)" % (name, name))
+ return ("{%s.channelFormat = %%s, %s.numericFormat = %%s}" % (name, name), "string_VK_CHANNEL_FORMAT(%s.channelFormat), string_VK_NUM_FORMAT(%s.numericFormat)" % (name, name))
if output_param:
return ("%p", "(void*)*%s" % name)
- if xgl_helper.is_type(xgl_type, 'struct') and '*' not in xgl_type:
+ if vk_helper.is_type(vk_type, 'struct') and '*' not in vk_type:
return ("%p", "(void*)(&%s)" % name)
return ("%p", "(void*)(%s)" % name)
def _gen_layer_dbg_callback_register(self):
r_body = []
- r_body.append('XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)')
+ r_body.append('VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)')
r_body.append('{')
r_body.append(' // This layer intercepts callbacks')
- r_body.append(' XGL_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (XGL_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(XGL_LAYER_DBG_FUNCTION_NODE));')
+ r_body.append(' VK_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (VK_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(VK_LAYER_DBG_FUNCTION_NODE));')
r_body.append(' if (!pNewDbgFuncNode)')
- r_body.append(' return XGL_ERROR_OUT_OF_MEMORY;')
+ r_body.append(' return VK_ERROR_OUT_OF_MEMORY;')
r_body.append(' pNewDbgFuncNode->pfnMsgCallback = pfnMsgCallback;')
r_body.append(' pNewDbgFuncNode->pUserData = pUserData;')
r_body.append(' pNewDbgFuncNode->pNext = g_pDbgFunctionHead;')
r_body.append(' g_pDbgFunctionHead = pNewDbgFuncNode;')
r_body.append(' // force callbacks if DebugAction hasn\'t been set already other than initial value')
r_body.append(' if (g_actionIsDefault) {')
- r_body.append(' g_debugAction = XGL_DBG_LAYER_ACTION_CALLBACK;')
+ r_body.append(' g_debugAction = VK_DBG_LAYER_ACTION_CALLBACK;')
r_body.append(' }')
- r_body.append(' XGL_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);')
+ r_body.append(' VK_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);')
r_body.append(' return result;')
r_body.append('}')
return "\n".join(r_body)
def _gen_layer_dbg_callback_unregister(self):
ur_body = []
- ur_body.append('XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)')
+ ur_body.append('VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)')
ur_body.append('{')
- ur_body.append(' XGL_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;')
- ur_body.append(' XGL_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;')
+ ur_body.append(' VK_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;')
+ ur_body.append(' VK_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;')
ur_body.append(' while (pTrav) {')
ur_body.append(' if (pTrav->pfnMsgCallback == pfnMsgCallback) {')
ur_body.append(' pPrev->pNext = pTrav->pNext;')
ur_body.append(' if (g_pDbgFunctionHead == NULL)')
ur_body.append(' {')
ur_body.append(' if (g_actionIsDefault)')
- ur_body.append(' g_debugAction = XGL_DBG_LAYER_ACTION_LOG_MSG;')
+ ur_body.append(' g_debugAction = VK_DBG_LAYER_ACTION_LOG_MSG;')
ur_body.append(' else')
- ur_body.append(' g_debugAction &= ~XGL_DBG_LAYER_ACTION_CALLBACK;')
+ ur_body.append(' g_debugAction &= ~VK_DBG_LAYER_ACTION_CALLBACK;')
ur_body.append(' }')
- ur_body.append(' XGL_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);')
+ ur_body.append(' VK_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);')
ur_body.append(' return result;')
ur_body.append('}')
return "\n".join(ur_body)
def _gen_layer_get_extension_support(self, layer="Generic"):
ges_body = []
- ges_body.append('XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char* pExtName)')
+ ges_body.append('VK_LAYER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char* pExtName)')
ges_body.append('{')
- ges_body.append(' XGL_RESULT result;')
- ges_body.append(' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;')
+ ges_body.append(' VK_RESULT result;')
+ ges_body.append(' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;')
ges_body.append('')
ges_body.append(' /* This entrypoint is NOT going to init its own dispatch table since loader calls here early */')
ges_body.append(' if (!strncmp(pExtName, "%s", strlen("%s")))' % (layer, layer))
ges_body.append(' {')
- ges_body.append(' result = XGL_SUCCESS;')
+ ges_body.append(' result = VK_SUCCESS;')
ges_body.append(' } else if (nextTable.GetExtensionSupport != NULL)')
ges_body.append(' {')
- ges_body.append(' result = nextTable.GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);')
+ ges_body.append(' result = nextTable.GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);')
ges_body.append(' } else')
ges_body.append(' {')
- ges_body.append(' result = XGL_ERROR_INVALID_EXTENSION;')
+ ges_body.append(' result = VK_ERROR_INVALID_EXTENSION;')
ges_body.append(' }')
ges_body.append(' return result;')
ges_body.append('}')
funcs.append(intercept)
intercepted.append(proto)
- prefix="xgl"
+ prefix="vk"
lookups = []
for proto in intercepted:
if 'WsiX11' in proto.name:
body.append("{")
body.append(generate_get_proc_addr_check("name"))
body.append("")
- body.append(" name += 3;")
+ body.append(" name += 2;")
body.append(" %s" % "\n ".join(lookups))
body.append("")
body.append(" return NULL;")
def _generate_extensions(self):
exts = []
- exts.append('uint64_t objTrackGetObjectCount(XGL_OBJECT_TYPE type)')
+ exts.append('uint64_t objTrackGetObjectCount(VK_OBJECT_TYPE type)')
exts.append('{')
- exts.append(' return (type == XGL_OBJECT_TYPE_ANY) ? numTotalObjs : numObjs[type];')
+ exts.append(' return (type == VK_OBJECT_TYPE_ANY) ? numTotalObjs : numObjs[type];')
exts.append('}')
exts.append('')
- exts.append('XGL_RESULT objTrackGetObjects(XGL_OBJECT_TYPE type, uint64_t objCount, OBJTRACK_NODE* pObjNodeArray)')
+ exts.append('VK_RESULT objTrackGetObjects(VK_OBJECT_TYPE type, uint64_t objCount, OBJTRACK_NODE* pObjNodeArray)')
exts.append('{')
exts.append(" // This bool flags if we're pulling all objs or just a single class of objs")
- exts.append(' bool32_t bAllObjs = (type == XGL_OBJECT_TYPE_ANY);')
+ exts.append(' bool32_t bAllObjs = (type == VK_OBJECT_TYPE_ANY);')
exts.append(' // Check the count first thing')
exts.append(' uint64_t maxObjCount = (bAllObjs) ? numTotalObjs : numObjs[type];')
exts.append(' if (objCount > maxObjCount) {')
exts.append(' char str[1024];')
- exts.append(' sprintf(str, "OBJ ERROR : Received objTrackGetObjects() request for %lu objs, but there are only %lu objs of type %s", objCount, maxObjCount, string_XGL_OBJECT_TYPE(type));')
- exts.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, 0, 0, OBJTRACK_OBJCOUNT_MAX_EXCEEDED, "OBJTRACK", str);')
- exts.append(' return XGL_ERROR_INVALID_VALUE;')
+ exts.append(' sprintf(str, "OBJ ERROR : Received objTrackGetObjects() request for %lu objs, but there are only %lu objs of type %s", objCount, maxObjCount, string_VK_OBJECT_TYPE(type));')
+ exts.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, 0, 0, OBJTRACK_OBJCOUNT_MAX_EXCEEDED, "OBJTRACK", str);')
+ exts.append(' return VK_ERROR_INVALID_VALUE;')
exts.append(' }')
exts.append(' objNode* pTrav = (bAllObjs) ? pGlobalHead : pObjectHead[type];')
exts.append(' for (uint64_t i = 0; i < objCount; i++) {')
exts.append(' if (!pTrav) {')
exts.append(' char str[1024];')
- exts.append(' sprintf(str, "OBJ INTERNAL ERROR : Ran out of %s objs! Should have %lu, but only copied %lu and not the requested %lu.", string_XGL_OBJECT_TYPE(type), maxObjCount, i, objCount);')
- exts.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, 0, 0, OBJTRACK_INTERNAL_ERROR, "OBJTRACK", str);')
- exts.append(' return XGL_ERROR_UNKNOWN;')
+ exts.append(' sprintf(str, "OBJ INTERNAL ERROR : Ran out of %s objs! Should have %lu, but only copied %lu and not the requested %lu.", string_VK_OBJECT_TYPE(type), maxObjCount, i, objCount);')
+ exts.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, 0, 0, OBJTRACK_INTERNAL_ERROR, "OBJTRACK", str);')
+ exts.append(' return VK_ERROR_UNKNOWN;')
exts.append(' }')
exts.append(' memcpy(&pObjNodeArray[i], pTrav, sizeof(OBJTRACK_NODE));')
exts.append(' pTrav = (bAllObjs) ? pTrav->pNextGlobal : pTrav->pNextObj;')
exts.append(' }')
- exts.append(' return XGL_SUCCESS;')
+ exts.append(' return VK_SUCCESS;')
exts.append('}')
return "\n".join(exts)
def _generate_layer_gpa_function(self, extensions=[]):
func_body = []
- func_body.append("XGL_LAYER_EXPORT void* XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char* funcName)\n"
+ func_body.append("VK_LAYER_EXPORT void* VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char* funcName)\n"
"{\n"
- " XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;\n"
+ " VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;\n"
" void* addr;\n"
" if (gpu == NULL)\n"
" return NULL;\n"
func_body.append(" else {\n"
" if (gpuw->pGPA == NULL)\n"
" return NULL;\n"
- " return gpuw->pGPA((XGL_PHYSICAL_GPU)gpuw->nextObject, funcName);\n"
+ " return gpuw->pGPA((VK_PHYSICAL_GPU)gpuw->nextObject, funcName);\n"
" }\n"
"}\n")
return "\n".join(func_body)
- def _generate_layer_initialization(self, init_opts=False, prefix='xgl', lockname=None):
- func_body = ["#include \"xgl_dispatch_table_helper.h\""]
+ def _generate_layer_initialization(self, init_opts=False, prefix='vk', lockname=None):
+ func_body = ["#include \"vk_dispatch_table_helper.h\""]
func_body.append('static void init%s(void)\n'
'{\n' % self.layer_name)
if init_opts:
func_body.append(' getLayerOptionEnum("%sReportLevel", (uint32_t *) &g_reportingLevel);' % self.layer_name)
func_body.append(' g_actionIsDefault = getLayerOptionEnum("%sDebugAction", (uint32_t *) &g_debugAction);' % self.layer_name)
func_body.append('')
- func_body.append(' if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)')
+ func_body.append(' if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)')
func_body.append(' {')
func_body.append(' strOpt = getLayerOption("%sLogFilename");' % self.layer_name)
func_body.append(' if (strOpt)')
func_body.append(' g_logFile = stdout;')
func_body.append(' }')
func_body.append('')
- func_body.append(' xglGetProcAddrType fpNextGPA;\n'
+ func_body.append(' vkGetProcAddrType fpNextGPA;\n'
' fpNextGPA = pCurObj->pGPA;\n'
' assert(fpNextGPA);\n')
- func_body.append(" layer_initialize_dispatch_table(&nextTable, fpNextGPA, (XGL_PHYSICAL_GPU) pCurObj->nextObject);")
+ func_body.append(" layer_initialize_dispatch_table(&nextTable, fpNextGPA, (VK_PHYSICAL_GPU) pCurObj->nextObject);")
if lockname is not None:
func_body.append(" if (!%sLockInitialized)" % lockname)
func_body.append(" {")
func_body.append("}\n")
return "\n".join(func_body)
- def _generate_layer_initialization_with_lock(self, prefix='xgl'):
- func_body = ["#include \"xgl_dispatch_table_helper.h\""]
+ def _generate_layer_initialization_with_lock(self, prefix='vk'):
+ func_body = ["#include \"vk_dispatch_table_helper.h\""]
func_body.append('static void init%s(void)\n'
'{\n'
- ' xglGetProcAddrType fpNextGPA;\n'
+ ' vkGetProcAddrType fpNextGPA;\n'
' fpNextGPA = pCurObj->pGPA;\n'
' assert(fpNextGPA);\n' % self.layer_name);
- func_body.append(" layer_initialize_dispatch_table(&nextTable, fpNextGPA, (XGL_PHYSICAL_GPU) pCurObj->nextObject);\n")
+ func_body.append(" layer_initialize_dispatch_table(&nextTable, fpNextGPA, (VK_PHYSICAL_GPU) pCurObj->nextObject);\n")
func_body.append(" if (!printLockInitialized)")
func_body.append(" {")
func_body.append(" // TODO/TBD: Need to delete this mutex sometime. How???")
class LayerFuncsSubcommand(Subcommand):
def generate_header(self):
- return '#include <xglLayer.h>\n#include "loader.h"'
+ return '#include <vkLayer.h>\n#include "loader.h"'
def generate_body(self):
return self._generate_dispatch_entrypoints("static")
class GenericLayerSubcommand(Subcommand):
def generate_header(self):
- return '#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include "loader_platform.h"\n#include "xglLayer.h"\n//The following is #included again to catch certain OS-specific functions being used:\n#include "loader_platform.h"\n\n#include "layers_config.h"\n#include "layers_msg.h"\n\nstatic XGL_LAYER_DISPATCH_TABLE nextTable;\nstatic XGL_BASE_LAYER_OBJECT *pCurObj;\n\nstatic LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);'
+ return '#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include "loader_platform.h"\n#include "vkLayer.h"\n//The following is #included again to catch certain OS-specific functions being used:\n#include "loader_platform.h"\n\n#include "layers_config.h"\n#include "layers_msg.h"\n\nstatic VK_LAYER_DISPATCH_TABLE nextTable;\nstatic VK_BASE_LAYER_OBJECT *pCurObj;\n\nstatic LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);'
def generate_intercept(self, proto, qual):
if proto.name in [ 'DbgRegisterMsgCallback', 'DbgUnregisterMsgCallback' , 'GetExtensionSupport']:
# use default version
return None
- decl = proto.c_func(prefix="xgl", attr="XGLAPI")
+ decl = proto.c_func(prefix="vk", attr="VKAPI")
param0_name = proto.params[0].name
ret_val = ''
stmt = ''
funcs = []
if proto.ret != "void":
- ret_val = "XGL_RESULT result = "
+ ret_val = "VK_RESULT result = "
stmt = " return result;\n"
if 'WsiX11AssociateConnection' == proto.name:
funcs.append("#if defined(__linux__) || defined(XCB_NVIDIA)")
if proto.name == "EnumerateLayers":
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
' char str[1024];\n'
' if (gpu != NULL) {\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
' sprintf(str, "At start of layered %s\\n");\n'
- ' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, gpu, 0, 0, (char *) "GENERIC", (char *) str);\n'
+ ' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, gpu, 0, 0, (char *) "GENERIC", (char *) str);\n'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
' %snextTable.%s;\n'
' sprintf(str, "Completed layered %s\\n");\n'
- ' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, gpu, 0, 0, (char *) "GENERIC", (char *) str);\n'
+ ' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, gpu, 0, 0, (char *) "GENERIC", (char *) str);\n'
' fflush(stdout);\n'
' %s'
' } else {\n'
' if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)\n'
- ' return XGL_ERROR_INVALID_POINTER;\n'
+ ' return VK_ERROR_INVALID_POINTER;\n'
' // This layer compatible with all GPUs\n'
' *pOutLayerCount = 1;\n'
' strncpy((char *) pOutLayers[0], "%s", maxStringSize);\n'
- ' return XGL_SUCCESS;\n'
+ ' return VK_SUCCESS;\n'
' }\n'
'}' % (qual, decl, proto.params[0].name, proto.name, self.layer_name, ret_val, c_call, proto.name, stmt, self.layer_name))
- elif proto.params[0].ty != "XGL_PHYSICAL_GPU":
+ elif proto.params[0].ty != "VK_PHYSICAL_GPU":
funcs.append('%s%s\n'
'{\n'
' %snextTable.%s;\n'
'%s'
'}' % (qual, decl, ret_val, proto.c_call(), stmt))
else:
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
' char str[1024];'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
' sprintf(str, "At start of layered %s\\n");\n'
- ' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, gpuw, 0, 0, (char *) "GENERIC", (char *) str);\n'
+ ' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, gpuw, 0, 0, (char *) "GENERIC", (char *) str);\n'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
' %snextTable.%s;\n'
' sprintf(str, "Completed layered %s\\n");\n'
- ' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, gpuw, 0, 0, (char *) "GENERIC", (char *) str);\n'
+ ' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, gpuw, 0, 0, (char *) "GENERIC", (char *) str);\n'
' fflush(stdout);\n'
'%s'
'}' % (qual, decl, proto.params[0].name, proto.name, self.layer_name, ret_val, c_call, proto.name, stmt))
def generate_body(self):
self.layer_name = "Generic"
body = [self._generate_layer_initialization(True),
- self._generate_dispatch_entrypoints("XGL_LAYER_EXPORT"),
+ self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_layer_gpa_function()]
return "\n\n".join(body)
header_txt = []
header_txt.append('#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('#include "xglLayer.h"\n#include "xgl_struct_string_helper.h"\n')
+ header_txt.append('#include "vkLayer.h"\n#include "vk_struct_string_helper.h"\n')
header_txt.append('// The following is #included again to catch certain OS-specific functions being used:')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('static XGL_LAYER_DISPATCH_TABLE nextTable;')
- header_txt.append('static XGL_BASE_LAYER_OBJECT *pCurObj;\n')
+ header_txt.append('static VK_LAYER_DISPATCH_TABLE nextTable;')
+ header_txt.append('static VK_BASE_LAYER_OBJECT *pCurObj;\n')
header_txt.append('static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);')
header_txt.append('static int printLockInitialized = 0;')
header_txt.append('static loader_platform_thread_mutex printLock;\n')
return "\n".join(header_txt)
def generate_intercept(self, proto, qual):
- decl = proto.c_func(prefix="xgl", attr="XGLAPI")
+ decl = proto.c_func(prefix="vk", attr="VKAPI")
param0_name = proto.params[0].name
ret_val = ''
stmt = ''
elif 'Create' in proto.name or 'Alloc' in proto.name or 'MapMemory' in proto.name:
create_params = -1
if proto.ret != "void":
- ret_val = "XGL_RESULT result = "
+ ret_val = "VK_RESULT result = "
stmt = " return result;\n"
f_open = ''
f_close = ''
if 'CreateDevice' in proto.name:
file_mode = "w"
f_open = 'loader_platform_thread_lock_mutex(&printLock);\n pOutFile = fopen(outFileName, "%s");\n ' % (file_mode)
- log_func = 'fprintf(pOutFile, "t{%%u} xgl%s(' % proto.name
+ log_func = 'fprintf(pOutFile, "t{%%u} vk%s(' % proto.name
f_close = '\n fclose(pOutFile);\n loader_platform_thread_unlock_mutex(&printLock);'
else:
f_open = 'loader_platform_thread_lock_mutex(&printLock);\n '
- log_func = 'printf("t{%%u} xgl%s(' % proto.name
+ log_func = 'printf("t{%%u} vk%s(' % proto.name
f_close = '\n loader_platform_thread_unlock_mutex(&printLock);'
print_vals = ', getTIDIndex()'
pindex = 0
sp_param_dict[pindex] = prev_count_name
elif 'pDescriptorSets' == p.name and proto.params[-1].name == 'pCount':
sp_param_dict[pindex] = '*pCount'
- elif 'Wsi' not in proto.name and xgl_helper.is_type(p.ty.strip('*').strip('const '), 'struct'):
+ elif 'Wsi' not in proto.name and vk_helper.is_type(p.ty.strip('*').strip('const '), 'struct'):
sp_param_dict[pindex] = 'index'
pindex += 1
if p.name.endswith('Count'):
log_func = log_func.strip(', ')
if proto.ret != "void":
log_func += ') = %s\\n"'
- print_vals += ', string_XGL_RESULT(result)'
+ print_vals += ', string_VK_RESULT(result)'
else:
log_func += ')\\n"'
log_func = '%s%s);' % (log_func, print_vals)
for sp_index in sorted(sp_param_dict):
# TODO : Clean this if/else block up, too much duplicated code
if 'index' == sp_param_dict[sp_index]:
- cis_print_func = 'xgl_print_%s' % (proto.params[sp_index].ty.strip('const ').strip('*').lower())
+ cis_print_func = 'vk_print_%s' % (proto.params[sp_index].ty.strip('const ').strip('*').lower())
var_name = proto.params[sp_index].name
if proto.params[sp_index].name != 'color':
log_func += '\n if (%s) {' % (proto.params[sp_index].name)
if proto.params[sp_index].name != 'color':
log_func += '\n }'
else: # should have a count value stored to iterate over array
- if xgl_helper.is_type(proto.params[sp_index].ty.strip('*').strip('const '), 'struct'):
- cis_print_func = 'pTmpStr = xgl_print_%s(&%s[i], " ");' % (proto.params[sp_index].ty.strip('const ').strip('*').lower(), proto.params[sp_index].name)
+ if vk_helper.is_type(proto.params[sp_index].ty.strip('*').strip('const '), 'struct'):
+ cis_print_func = 'pTmpStr = vk_print_%s(&%s[i], " ");' % (proto.params[sp_index].ty.strip('const ').strip('*').lower(), proto.params[sp_index].name)
else:
cis_print_func = 'pTmpStr = (char*)malloc(32);\n sprintf(pTmpStr, " %%p", %s[i]);' % proto.params[sp_index].name
if not i_decl:
if 'WsiX11AssociateConnection' == proto.name:
funcs.append("#if defined(__linux__) || defined(XCB_NVIDIA)")
if proto.name == "EnumerateLayers":
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
' if (gpu != NULL) {\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
' %snextTable.%s;\n'
' %s'
' } else {\n'
' if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)\n'
- ' return XGL_ERROR_INVALID_POINTER;\n'
+ ' return VK_ERROR_INVALID_POINTER;\n'
' // This layer compatible with all GPUs\n'
' *pOutLayerCount = 1;\n'
' strncpy((char *) pOutLayers[0], "%s", maxStringSize);\n'
- ' return XGL_SUCCESS;\n'
+ ' return VK_SUCCESS;\n'
' }\n'
'}' % (qual, decl, proto.params[0].name, self.layer_name, ret_val, c_call,f_open, log_func, f_close, stmt, self.layer_name))
elif 'GetExtensionSupport' == proto.name:
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
- ' XGL_RESULT result;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_RESULT result;\n'
' /* This entrypoint is NOT going to init its own dispatch table since loader calls here early */\n'
' if (!strncmp(pExtName, "%s", strlen("%s")))\n'
' {\n'
- ' result = XGL_SUCCESS;\n'
+ ' result = VK_SUCCESS;\n'
' } else if (nextTable.GetExtensionSupport != NULL)\n'
' {\n'
' result = nextTable.%s;\n'
' %s %s %s\n'
' } else\n'
' {\n'
- ' result = XGL_ERROR_INVALID_EXTENSION;\n'
+ ' result = VK_ERROR_INVALID_EXTENSION;\n'
' }\n'
'%s'
'}' % (qual, decl, proto.params[0].name, self.layer_name, self.layer_name, c_call, f_open, log_func, f_close, stmt))
- elif proto.params[0].ty != "XGL_PHYSICAL_GPU":
+ elif proto.params[0].ty != "VK_PHYSICAL_GPU":
funcs.append('%s%s\n'
'{\n'
' %snextTable.%s;\n'
'%s'
'}' % (qual, decl, ret_val, proto.c_call(), f_open, log_func, f_close, stmt))
else:
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
' %snextTable.%s;\n'
def generate_body(self):
self.layer_name = "APIDump"
body = [self._generate_layer_initialization_with_lock(),
- self._generate_dispatch_entrypoints("XGL_LAYER_EXPORT"),
+ self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_layer_gpa_function()]
return "\n\n".join(body)
header_txt = []
header_txt.append('#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('#include "xglLayer.h"\n#include "xgl_struct_string_helper_cpp.h"\n')
+ header_txt.append('#include "vkLayer.h"\n#include "vk_struct_string_helper_cpp.h"\n')
header_txt.append('// The following is #included again to catch certain OS-specific functions being used:')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('static XGL_LAYER_DISPATCH_TABLE nextTable;')
- header_txt.append('static XGL_BASE_LAYER_OBJECT *pCurObj;\n')
+ header_txt.append('static VK_LAYER_DISPATCH_TABLE nextTable;')
+ header_txt.append('static VK_BASE_LAYER_OBJECT *pCurObj;\n')
header_txt.append('static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);')
header_txt.append('static int printLockInitialized = 0;')
header_txt.append('static loader_platform_thread_mutex printLock;\n')
return "\n".join(header_txt)
def generate_intercept(self, proto, qual):
- decl = proto.c_func(prefix="xgl", attr="XGLAPI")
+ decl = proto.c_func(prefix="vk", attr="VKAPI")
param0_name = proto.params[0].name
ret_val = ''
stmt = ''
elif 'Create' in proto.name or 'Alloc' in proto.name or 'MapMemory' in proto.name:
create_params = -1
if proto.ret != "void":
- ret_val = "XGL_RESULT result = "
+ ret_val = "VK_RESULT result = "
stmt = " return result;\n"
f_open = ''
f_close = ''
if 'CreateDevice' in proto.name:
file_mode = "w"
f_open = 'loader_platform_thread_lock_mutex(&printLock);\n pOutFile = fopen(outFileName, "%s");\n ' % (file_mode)
- log_func = 'fprintf(pOutFile, "t{%%u} xgl%s(' % proto.name
+ log_func = 'fprintf(pOutFile, "t{%%u} vk%s(' % proto.name
f_close = '\n fclose(pOutFile);\n loader_platform_thread_unlock_mutex(&printLock);'
else:
f_open = 'loader_platform_thread_lock_mutex(&printLock);\n '
- log_func = 'cout << "t{" << getTIDIndex() << "} xgl%s(' % proto.name
+ log_func = 'cout << "t{" << getTIDIndex() << "} vk%s(' % proto.name
f_close = '\n loader_platform_thread_unlock_mutex(&printLock);'
pindex = 0
prev_count_name = ''
sp_param_dict[pindex] = prev_count_name
elif 'pDescriptorSets' == p.name and proto.params[-1].name == 'pCount':
sp_param_dict[pindex] = '*pCount'
- elif 'Wsi' not in proto.name and xgl_helper.is_type(p.ty.strip('*').strip('const '), 'struct'):
+ elif 'Wsi' not in proto.name and vk_helper.is_type(p.ty.strip('*').strip('const '), 'struct'):
sp_param_dict[pindex] = 'index'
pindex += 1
if p.name.endswith('Count'):
prev_count_name = ''
log_func = log_func.strip(', ')
if proto.ret != "void":
- log_func += ') = " << string_XGL_RESULT((XGL_RESULT)result) << endl'
- #print_vals += ', string_XGL_RESULT_CODE(result)'
+ log_func += ') = " << string_VK_RESULT((VK_RESULT)result) << endl'
+ #print_vals += ', string_VK_RESULT_CODE(result)'
else:
log_func += ')\\n"'
log_func += ';'
log_func += '\n string tmp_str;'
for sp_index in sp_param_dict:
if 'index' == sp_param_dict[sp_index]:
- cis_print_func = 'xgl_print_%s' % (proto.params[sp_index].ty.strip('const ').strip('*').lower())
+ cis_print_func = 'vk_print_%s' % (proto.params[sp_index].ty.strip('const ').strip('*').lower())
var_name = proto.params[sp_index].name
if proto.params[sp_index].name != 'color':
log_func += '\n if (%s) {' % (proto.params[sp_index].name)
else: # We have a count value stored to iterate over an array
print_cast = ''
print_func = ''
- if xgl_helper.is_type(proto.params[sp_index].ty.strip('*').strip('const '), 'struct'):
+ if vk_helper.is_type(proto.params[sp_index].ty.strip('*').strip('const '), 'struct'):
print_cast = '&'
- print_func = 'xgl_print_%s' % proto.params[sp_index].ty.strip('const ').strip('*').lower()
- #cis_print_func = 'tmp_str = xgl_print_%s(&%s[i], " ");' % (proto.params[sp_index].ty.strip('const ').strip('*').lower(), proto.params[sp_index].name)
+ print_func = 'vk_print_%s' % proto.params[sp_index].ty.strip('const ').strip('*').lower()
+ #cis_print_func = 'tmp_str = vk_print_%s(&%s[i], " ");' % (proto.params[sp_index].ty.strip('const ').strip('*').lower(), proto.params[sp_index].name)
# TODO : Need to display this address as a string
else:
print_cast = '(void*)'
if 'WsiX11AssociateConnection' == proto.name:
funcs.append("#if defined(__linux__) || defined(XCB_NVIDIA)")
if proto.name == "EnumerateLayers":
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
' if (gpu != NULL) {\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
' %snextTable.%s;\n'
' %s'
' } else {\n'
' if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)\n'
- ' return XGL_ERROR_INVALID_POINTER;\n'
+ ' return VK_ERROR_INVALID_POINTER;\n'
' // This layer compatible with all GPUs\n'
' *pOutLayerCount = 1;\n'
' strncpy((char *) pOutLayers[0], "%s", maxStringSize);\n'
- ' return XGL_SUCCESS;\n'
+ ' return VK_SUCCESS;\n'
' }\n'
'}' % (qual, decl, proto.params[0].name, self.layer_name, ret_val, c_call,f_open, log_func, f_close, stmt, self.layer_name))
elif 'GetExtensionSupport' == proto.name:
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
- ' XGL_RESULT result;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_RESULT result;\n'
' /* This entrypoint is NOT going to init its own dispatch table since loader calls here early */\n'
' if (!strncmp(pExtName, "%s", strlen("%s")))\n'
' {\n'
- ' result = XGL_SUCCESS;\n'
+ ' result = VK_SUCCESS;\n'
' } else if (nextTable.GetExtensionSupport != NULL)\n'
' {\n'
' result = nextTable.%s;\n'
' %s %s %s\n'
' } else\n'
' {\n'
- ' result = XGL_ERROR_INVALID_EXTENSION;\n'
+ ' result = VK_ERROR_INVALID_EXTENSION;\n'
' }\n'
'%s'
'}' % (qual, decl, proto.params[0].name, self.layer_name, self.layer_name, c_call, f_open, log_func, f_close, stmt))
- elif proto.params[0].ty != "XGL_PHYSICAL_GPU":
+ elif proto.params[0].ty != "VK_PHYSICAL_GPU":
funcs.append('%s%s\n'
'{\n'
' %snextTable.%s;\n'
'%s'
'}' % (qual, decl, ret_val, proto.c_call(), f_open, log_func, f_close, stmt))
else:
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
' %snextTable.%s;\n'
def generate_body(self):
self.layer_name = "APIDumpCpp"
body = [self._generate_layer_initialization_with_lock(),
- self._generate_dispatch_entrypoints("XGL_LAYER_EXPORT"),
+ self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_layer_gpa_function()]
return "\n\n".join(body)
header_txt = []
header_txt.append('#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('#include "xglLayer.h"\n#include "xgl_struct_string_helper.h"\n')
+ header_txt.append('#include "vkLayer.h"\n#include "vk_struct_string_helper.h"\n')
header_txt.append('// The following is #included again to catch certain OS-specific functions being used:')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('static XGL_LAYER_DISPATCH_TABLE nextTable;')
- header_txt.append('static XGL_BASE_LAYER_OBJECT *pCurObj;\n')
+ header_txt.append('static VK_LAYER_DISPATCH_TABLE nextTable;')
+ header_txt.append('static VK_BASE_LAYER_OBJECT *pCurObj;\n')
header_txt.append('static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);')
header_txt.append('static int printLockInitialized = 0;')
header_txt.append('static loader_platform_thread_mutex printLock;\n')
header_txt.append(' assert(maxTID < MAX_TID);')
header_txt.append(' return retVal;')
header_txt.append('}\n')
- header_txt.append('static FILE* pOutFile;\nstatic char* outFileName = "xgl_apidump.txt";')
+ header_txt.append('static FILE* pOutFile;\nstatic const char* outFileName = "vk_apidump.txt";')
return "\n".join(header_txt)
def generate_body(self):
self.layer_name = "APIDumpFile"
body = [self._generate_layer_initialization_with_lock(),
- self._generate_dispatch_entrypoints("XGL_LAYER_EXPORT"),
+ self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_layer_gpa_function()]
return "\n\n".join(body)
header_txt = []
header_txt.append('#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('#include "xglLayer.h"\n#include "xgl_struct_string_helper_no_addr.h"\n')
+ header_txt.append('#include "vkLayer.h"\n#include "vk_struct_string_helper_no_addr.h"\n')
header_txt.append('// The following is #included again to catch certain OS-specific functions being used:')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('static XGL_LAYER_DISPATCH_TABLE nextTable;')
- header_txt.append('static XGL_BASE_LAYER_OBJECT *pCurObj;\n')
+ header_txt.append('static VK_LAYER_DISPATCH_TABLE nextTable;')
+ header_txt.append('static VK_BASE_LAYER_OBJECT *pCurObj;\n')
header_txt.append('static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);')
header_txt.append('static int printLockInitialized = 0;')
header_txt.append('static loader_platform_thread_mutex printLock;\n')
self.layer_name = "APIDumpNoAddr"
self.no_addr = True
body = [self._generate_layer_initialization_with_lock(),
- self._generate_dispatch_entrypoints("XGL_LAYER_EXPORT"),
+ self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_layer_gpa_function()]
return "\n\n".join(body)
header_txt = []
header_txt.append('#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('#include "xglLayer.h"\n#include "xgl_struct_string_helper_no_addr_cpp.h"\n')
+ header_txt.append('#include "vkLayer.h"\n#include "vk_struct_string_helper_no_addr_cpp.h"\n')
header_txt.append('// The following is #included again to catch certain OS-specific functions being used:')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('static XGL_LAYER_DISPATCH_TABLE nextTable;')
- header_txt.append('static XGL_BASE_LAYER_OBJECT *pCurObj;\n')
+ header_txt.append('static VK_LAYER_DISPATCH_TABLE nextTable;')
+ header_txt.append('static VK_BASE_LAYER_OBJECT *pCurObj;\n')
header_txt.append('static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);')
header_txt.append('static int printLockInitialized = 0;')
header_txt.append('static loader_platform_thread_mutex printLock;\n')
self.layer_name = "APIDumpNoAddrCpp"
self.no_addr = True
body = [self._generate_layer_initialization_with_lock(),
- self._generate_dispatch_entrypoints("XGL_LAYER_EXPORT"),
+ self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_layer_gpa_function()]
return "\n\n".join(body)
def generate_header(self):
header_txt = []
header_txt.append('#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include "loader_platform.h"')
- header_txt.append('#include "object_track.h"\n\nstatic XGL_LAYER_DISPATCH_TABLE nextTable;\nstatic XGL_BASE_LAYER_OBJECT *pCurObj;')
+ header_txt.append('#include "object_track.h"\n\nstatic VK_LAYER_DISPATCH_TABLE nextTable;\nstatic VK_BASE_LAYER_OBJECT *pCurObj;')
header_txt.append('// The following is #included again to catch certain OS-specific functions being used:')
header_txt.append('#include "loader_platform.h"')
header_txt.append('#include "layers_config.h"')
header_txt.append(' struct _objNode *pNextObj;')
header_txt.append(' struct _objNode *pNextGlobal;')
header_txt.append('} objNode;')
- header_txt.append('static objNode *pObjectHead[XGL_NUM_OBJECT_TYPE] = {0};')
+ header_txt.append('static objNode *pObjectHead[VK_NUM_OBJECT_TYPE] = {0};')
header_txt.append('static objNode *pGlobalHead = NULL;')
- header_txt.append('static uint64_t numObjs[XGL_NUM_OBJECT_TYPE] = {0};')
+ header_txt.append('static uint64_t numObjs[VK_NUM_OBJECT_TYPE] = {0};')
header_txt.append('static uint64_t numTotalObjs = 0;')
header_txt.append('static uint32_t maxMemReferences = 0;')
header_txt.append('// Debug function to print global list and each individual object list')
header_txt.append(' objNode* pTrav = pGlobalHead;')
header_txt.append(' printf("=====GLOBAL OBJECT LIST (%lu total objs):\\n", numTotalObjs);')
header_txt.append(' while (pTrav) {')
- header_txt.append(' printf(" ObjNode (%p) w/ %s obj %p has pNextGlobal %p\\n", (void*)pTrav, string_XGL_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, (void*)pTrav->pNextGlobal);')
+ header_txt.append(' printf(" ObjNode (%p) w/ %s obj %p has pNextGlobal %p\\n", (void*)pTrav, string_VK_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, (void*)pTrav->pNextGlobal);')
header_txt.append(' pTrav = pTrav->pNextGlobal;')
header_txt.append(' }')
- header_txt.append(' for (uint32_t i = 0; i < XGL_NUM_OBJECT_TYPE; i++) {')
+ header_txt.append(' for (uint32_t i = 0; i < VK_NUM_OBJECT_TYPE; i++) {')
header_txt.append(' pTrav = pObjectHead[i];')
header_txt.append(' if (pTrav) {')
- header_txt.append(' printf("=====%s OBJECT LIST (%lu objs):\\n", string_XGL_OBJECT_TYPE(pTrav->obj.objType), numObjs[i]);')
+ header_txt.append(' printf("=====%s OBJECT LIST (%lu objs):\\n", string_VK_OBJECT_TYPE(pTrav->obj.objType), numObjs[i]);')
header_txt.append(' while (pTrav) {')
- header_txt.append(' printf(" ObjNode (%p) w/ %s obj %p has pNextObj %p\\n", (void*)pTrav, string_XGL_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, (void*)pTrav->pNextObj);')
+ header_txt.append(' printf(" ObjNode (%p) w/ %s obj %p has pNextObj %p\\n", (void*)pTrav, string_VK_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, (void*)pTrav->pNextObj);')
header_txt.append(' pTrav = pTrav->pNextObj;')
header_txt.append(' }')
header_txt.append(' }')
header_txt.append(' }')
header_txt.append('}')
- header_txt.append('static void ll_insert_obj(void* pObj, XGL_OBJECT_TYPE objType) {')
+ header_txt.append('static void ll_insert_obj(void* pObj, VK_OBJECT_TYPE objType) {')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "OBJ[%llu] : CREATE %s object %p", object_track_index++, string_XGL_OBJECT_TYPE(objType), (void*)pObj);')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "OBJ[%llu] : CREATE %s object %p", object_track_index++, string_VK_OBJECT_TYPE(objType), (void*)pObj);')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
header_txt.append(' objNode* pNewObjNode = (objNode*)malloc(sizeof(objNode));')
header_txt.append(' pNewObjNode->obj.pObj = pObj;')
header_txt.append(' pNewObjNode->obj.objType = objType;')
header_txt.append(' // increment obj counts')
header_txt.append(' numObjs[objType]++;')
header_txt.append(' numTotalObjs++;')
- header_txt.append(' //sprintf(str, "OBJ_STAT : %lu total objs & %lu %s objs.", numTotalObjs, numObjs[objType], string_XGL_OBJECT_TYPE(objType));')
+ header_txt.append(' //sprintf(str, "OBJ_STAT : %lu total objs & %lu %s objs.", numTotalObjs, numObjs[objType], string_VK_OBJECT_TYPE(objType));')
header_txt.append(' if (0) ll_print_lists();')
header_txt.append('}')
header_txt.append('// Traverse global list and return type for given object')
- header_txt.append('static XGL_OBJECT_TYPE ll_get_obj_type(XGL_OBJECT object) {')
+ header_txt.append('static VK_OBJECT_TYPE ll_get_obj_type(VK_OBJECT object) {')
header_txt.append(' objNode *pTrav = pGlobalHead;')
header_txt.append(' while (pTrav) {')
header_txt.append(' if (pTrav->obj.pObj == object)')
header_txt.append(' }')
header_txt.append(' char str[1024];')
header_txt.append(' sprintf(str, "Attempting look-up on obj %p but it is NOT in the global list!", (void*)object);')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, OBJTRACK_MISSING_OBJECT, "OBJTRACK", str);')
- header_txt.append(' return XGL_OBJECT_TYPE_UNKNOWN;')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, OBJTRACK_MISSING_OBJECT, "OBJTRACK", str);')
+ header_txt.append(' return VK_OBJECT_TYPE_UNKNOWN;')
header_txt.append('}')
header_txt.append('#if 0')
- header_txt.append('static uint64_t ll_get_obj_uses(void* pObj, XGL_OBJECT_TYPE objType) {')
+ header_txt.append('static uint64_t ll_get_obj_uses(void* pObj, VK_OBJECT_TYPE objType) {')
header_txt.append(' objNode *pTrav = pObjectHead[objType];')
header_txt.append(' while (pTrav) {')
header_txt.append(' if (pTrav->obj.pObj == pObj) {')
header_txt.append(' return 0;')
header_txt.append('}')
header_txt.append('#endif')
- header_txt.append('static void ll_increment_use_count(void* pObj, XGL_OBJECT_TYPE objType) {')
+ header_txt.append('static void ll_increment_use_count(void* pObj, VK_OBJECT_TYPE objType) {')
header_txt.append(' objNode *pTrav = pObjectHead[objType];')
header_txt.append(' while (pTrav) {')
header_txt.append(' if (pTrav->obj.pObj == pObj) {')
header_txt.append(' pTrav->obj.numUses++;')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "OBJ[%llu] : USING %s object %p (%lu total uses)", object_track_index++, string_XGL_OBJECT_TYPE(objType), (void*)pObj, pTrav->obj.numUses);')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "OBJ[%llu] : USING %s object %p (%lu total uses)", object_track_index++, string_VK_OBJECT_TYPE(objType), (void*)pObj, pTrav->obj.numUses);')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
header_txt.append(' return;')
header_txt.append(' }')
header_txt.append(' pTrav = pTrav->pNextObj;')
header_txt.append(' }')
header_txt.append(' // If we do not find obj, insert it and then increment count')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "Unable to increment count for obj %p, will add to list as %s type and increment count", pObj, string_XGL_OBJECT_TYPE(objType));')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "Unable to increment count for obj %p, will add to list as %s type and increment count", pObj, string_VK_OBJECT_TYPE(objType));')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
header_txt.append('')
header_txt.append(' ll_insert_obj(pObj, objType);')
header_txt.append(' ll_increment_use_count(pObj, objType);')
header_txt.append('// We usually do not know Obj type when we destroy it so have to fetch')
header_txt.append('// Type from global list w/ ll_destroy_obj()')
header_txt.append('// and then do the full removal from both lists w/ ll_remove_obj_type()')
- header_txt.append('static void ll_remove_obj_type(void* pObj, XGL_OBJECT_TYPE objType) {')
+ header_txt.append('static void ll_remove_obj_type(void* pObj, VK_OBJECT_TYPE objType) {')
header_txt.append(' objNode *pTrav = pObjectHead[objType];')
header_txt.append(' objNode *pPrev = pObjectHead[objType];')
header_txt.append(' while (pTrav) {')
header_txt.append(' assert(numObjs[objType] > 0);')
header_txt.append(' numObjs[objType]--;')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "OBJ[%llu] : DESTROY %s object %p", object_track_index++, string_XGL_OBJECT_TYPE(objType), (void*)pObj);')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "OBJ[%llu] : DESTROY %s object %p", object_track_index++, string_VK_OBJECT_TYPE(objType), (void*)pObj);')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
header_txt.append(' return;')
header_txt.append(' }')
header_txt.append(' pPrev = pTrav;')
header_txt.append(' pTrav = pTrav->pNextObj;')
header_txt.append(' }')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "OBJ INTERNAL ERROR : Obj %p was in global list but not in %s list", pObj, string_XGL_OBJECT_TYPE(objType));')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_INTERNAL_ERROR, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "OBJ INTERNAL ERROR : Obj %p was in global list but not in %s list", pObj, string_VK_OBJECT_TYPE(objType));')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_INTERNAL_ERROR, "OBJTRACK", str);')
header_txt.append('}')
header_txt.append('// Parse global list to find obj type, then remove obj from obj type list, finally')
header_txt.append('// remove obj from global list')
header_txt.append(' assert(numTotalObjs > 0);')
header_txt.append(' numTotalObjs--;')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "OBJ_STAT Removed %s obj %p that was used %lu times (%lu total objs remain & %lu %s objs).", string_XGL_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, pTrav->obj.numUses, numTotalObjs, numObjs[pTrav->obj.objType], string_XGL_OBJECT_TYPE(pTrav->obj.objType));')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "OBJ_STAT Removed %s obj %p that was used %lu times (%lu total objs remain & %lu %s objs).", string_VK_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, pTrav->obj.numUses, numTotalObjs, numObjs[pTrav->obj.objType], string_VK_OBJECT_TYPE(pTrav->obj.objType));')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
header_txt.append(' free(pTrav);')
header_txt.append(' return;')
header_txt.append(' }')
header_txt.append(' }')
header_txt.append(' char str[1024];')
header_txt.append(' sprintf(str, "Unable to remove obj %p. Was it created? Has it already been destroyed?", pObj);')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_DESTROY_OBJECT_FAILED, "OBJTRACK", str);')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_DESTROY_OBJECT_FAILED, "OBJTRACK", str);')
header_txt.append('}')
header_txt.append('// Set selected flag state for an object node')
- header_txt.append('static void set_status(void* pObj, XGL_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {')
+ header_txt.append('static void set_status(void* pObj, VK_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {')
header_txt.append(' if (pObj != NULL) {')
header_txt.append(' objNode *pTrav = pObjectHead[objType];')
header_txt.append(' while (pTrav) {')
header_txt.append(' }')
header_txt.append(' // If we do not find it print an error')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "Unable to set status for non-existent object %p of %s type", pObj, string_XGL_OBJECT_TYPE(objType));')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "Unable to set status for non-existent object %p of %s type", pObj, string_VK_OBJECT_TYPE(objType));')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
header_txt.append(' }');
header_txt.append('}')
header_txt.append('')
header_txt.append('// Track selected state for an object node')
- header_txt.append('static void track_object_status(void* pObj, XGL_STATE_BIND_POINT stateBindPoint) {')
- header_txt.append(' objNode *pTrav = pObjectHead[XGL_OBJECT_TYPE_CMD_BUFFER];')
+ header_txt.append('static void track_object_status(void* pObj, VK_STATE_BIND_POINT stateBindPoint) {')
+ header_txt.append(' objNode *pTrav = pObjectHead[VK_OBJECT_TYPE_CMD_BUFFER];')
header_txt.append('')
header_txt.append(' while (pTrav) {')
header_txt.append(' if (pTrav->obj.pObj == pObj) {')
- header_txt.append(' if (stateBindPoint == XGL_STATE_BIND_VIEWPORT) {')
+ header_txt.append(' if (stateBindPoint == VK_STATE_BIND_VIEWPORT) {')
header_txt.append(' pTrav->obj.status |= OBJSTATUS_VIEWPORT_BOUND;')
- header_txt.append(' } else if (stateBindPoint == XGL_STATE_BIND_RASTER) {')
+ header_txt.append(' } else if (stateBindPoint == VK_STATE_BIND_RASTER) {')
header_txt.append(' pTrav->obj.status |= OBJSTATUS_RASTER_BOUND;')
- header_txt.append(' } else if (stateBindPoint == XGL_STATE_BIND_COLOR_BLEND) {')
+ header_txt.append(' } else if (stateBindPoint == VK_STATE_BIND_COLOR_BLEND) {')
header_txt.append(' pTrav->obj.status |= OBJSTATUS_COLOR_BLEND_BOUND;')
- header_txt.append(' } else if (stateBindPoint == XGL_STATE_BIND_DEPTH_STENCIL) {')
+ header_txt.append(' } else if (stateBindPoint == VK_STATE_BIND_DEPTH_STENCIL) {')
header_txt.append(' pTrav->obj.status |= OBJSTATUS_DEPTH_STENCIL_BOUND;')
header_txt.append(' }')
header_txt.append(' return;')
header_txt.append(' // If we do not find it print an error')
header_txt.append(' char str[1024];')
header_txt.append(' sprintf(str, "Unable to track status for non-existent Command Buffer object %p", pObj);')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
header_txt.append('}')
header_txt.append('')
header_txt.append('// Reset selected flag state for an object node')
- header_txt.append('static void reset_status(void* pObj, XGL_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {')
+ header_txt.append('static void reset_status(void* pObj, VK_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {')
header_txt.append(' objNode *pTrav = pObjectHead[objType];')
header_txt.append(' while (pTrav) {')
header_txt.append(' if (pTrav->obj.pObj == pObj) {')
header_txt.append(' }')
header_txt.append(' // If we do not find it print an error')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "Unable to reset status for non-existent object %p of %s type", pObj, string_XGL_OBJECT_TYPE(objType));')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "Unable to reset status for non-existent object %p of %s type", pObj, string_VK_OBJECT_TYPE(objType));')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
header_txt.append('}')
header_txt.append('')
header_txt.append('// Check object status for selected flag state')
- header_txt.append('static bool32_t validate_status(void* pObj, XGL_OBJECT_TYPE objType, OBJECT_STATUS status_mask, OBJECT_STATUS status_flag, XGL_DBG_MSG_TYPE error_level, OBJECT_TRACK_ERROR error_code, char* fail_msg) {')
+ header_txt.append('static bool32_t validate_status(void* pObj, VK_OBJECT_TYPE objType, OBJECT_STATUS status_mask, OBJECT_STATUS status_flag, VK_DBG_MSG_TYPE error_level, OBJECT_TRACK_ERROR error_code, char* fail_msg) {')
header_txt.append(' objNode *pTrav = pObjectHead[objType];')
header_txt.append(' while (pTrav) {')
header_txt.append(' if (pTrav->obj.pObj == pObj) {')
header_txt.append(' if ((pTrav->obj.status & status_mask) != status_flag) {')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "OBJECT VALIDATION WARNING: %s object %p: %s", string_XGL_OBJECT_TYPE(objType), (void*)pObj, fail_msg);')
- header_txt.append(' layerCbMsg(error_level, XGL_VALIDATION_LEVEL_0, pObj, 0, error_code, "OBJTRACK", str);')
- header_txt.append(' return XGL_FALSE;')
+ header_txt.append(' sprintf(str, "OBJECT VALIDATION WARNING: %s object %p: %s", string_VK_OBJECT_TYPE(objType), (void*)pObj, fail_msg);')
+ header_txt.append(' layerCbMsg(error_level, VK_VALIDATION_LEVEL_0, pObj, 0, error_code, "OBJTRACK", str);')
+ header_txt.append(' return VK_FALSE;')
header_txt.append(' }')
- header_txt.append(' return XGL_TRUE;')
+ header_txt.append(' return VK_TRUE;')
header_txt.append(' }')
header_txt.append(' pTrav = pTrav->pNextObj;')
header_txt.append(' }')
- header_txt.append(' if (objType != XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY) {')
+ header_txt.append(' if (objType != VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY) {')
header_txt.append(' // If we do not find it print an error')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "Unable to obtain status for non-existent object %p of %s type", pObj, string_XGL_OBJECT_TYPE(objType));')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "Unable to obtain status for non-existent object %p of %s type", pObj, string_VK_OBJECT_TYPE(objType));')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
header_txt.append(' }')
- header_txt.append(' return XGL_FALSE;')
+ header_txt.append(' return VK_FALSE;')
header_txt.append('}')
header_txt.append('')
header_txt.append('static void validate_draw_state_flags(void* pObj) {')
- header_txt.append(' validate_status((void*)pObj, XGL_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_VIEWPORT_BOUND, OBJSTATUS_VIEWPORT_BOUND, XGL_DBG_MSG_ERROR, OBJTRACK_VIEWPORT_NOT_BOUND, "Viewport object not bound to this command buffer");')
- header_txt.append(' validate_status((void*)pObj, XGL_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_RASTER_BOUND, OBJSTATUS_RASTER_BOUND, XGL_DBG_MSG_ERROR, OBJTRACK_RASTER_NOT_BOUND, "Raster object not bound to this command buffer");')
- header_txt.append(' validate_status((void*)pObj, XGL_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_COLOR_BLEND_BOUND, OBJSTATUS_COLOR_BLEND_BOUND, XGL_DBG_MSG_UNKNOWN, OBJTRACK_COLOR_BLEND_NOT_BOUND, "Color-blend object not bound to this command buffer");')
- header_txt.append(' validate_status((void*)pObj, XGL_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_DEPTH_STENCIL_BOUND, OBJSTATUS_DEPTH_STENCIL_BOUND, XGL_DBG_MSG_UNKNOWN, OBJTRACK_DEPTH_STENCIL_NOT_BOUND, "Depth-stencil object not bound to this command buffer");')
+ header_txt.append(' validate_status((void*)pObj, VK_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_VIEWPORT_BOUND, OBJSTATUS_VIEWPORT_BOUND, VK_DBG_MSG_ERROR, OBJTRACK_VIEWPORT_NOT_BOUND, "Viewport object not bound to this command buffer");')
+ header_txt.append(' validate_status((void*)pObj, VK_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_RASTER_BOUND, OBJSTATUS_RASTER_BOUND, VK_DBG_MSG_ERROR, OBJTRACK_RASTER_NOT_BOUND, "Raster object not bound to this command buffer");')
+ header_txt.append(' validate_status((void*)pObj, VK_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_COLOR_BLEND_BOUND, OBJSTATUS_COLOR_BLEND_BOUND, VK_DBG_MSG_UNKNOWN, OBJTRACK_COLOR_BLEND_NOT_BOUND, "Color-blend object not bound to this command buffer");')
+ header_txt.append(' validate_status((void*)pObj, VK_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_DEPTH_STENCIL_BOUND, OBJSTATUS_DEPTH_STENCIL_BOUND, VK_DBG_MSG_UNKNOWN, OBJTRACK_DEPTH_STENCIL_NOT_BOUND, "Depth-stencil object not bound to this command buffer");')
header_txt.append('}')
header_txt.append('')
- header_txt.append('static void validate_memory_mapping_status(const XGL_GPU_MEMORY* pMemRefs, uint32_t numRefs) {')
+ header_txt.append('static void validate_memory_mapping_status(const VK_GPU_MEMORY* pMemRefs, uint32_t numRefs) {')
header_txt.append(' uint32_t i;')
header_txt.append(' for (i = 0; i < numRefs; i++) {')
header_txt.append(' if (pMemRefs[i]) {')
header_txt.append(' // If mem reference is in a presentable image memory list, skip the check of the GPU_MEMORY list')
- header_txt.append(' if (!validate_status((void *)pMemRefs[i], XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY, OBJSTATUS_NONE, OBJSTATUS_NONE, XGL_DBG_MSG_UNKNOWN, OBJTRACK_NONE, NULL) == XGL_TRUE)')
+ header_txt.append('        if (validate_status((void *)pMemRefs[i], VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY, OBJSTATUS_NONE, OBJSTATUS_NONE, VK_DBG_MSG_UNKNOWN, OBJTRACK_NONE, NULL) == VK_FALSE)')
header_txt.append(' {')
- header_txt.append(' validate_status((void *)pMemRefs[i], XGL_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED, OBJSTATUS_NONE, XGL_DBG_MSG_ERROR, OBJTRACK_GPU_MEM_MAPPED, "A Mapped Memory Object was referenced in a command buffer");')
+ header_txt.append(' validate_status((void *)pMemRefs[i], VK_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED, OBJSTATUS_NONE, VK_DBG_MSG_ERROR, OBJTRACK_GPU_MEM_MAPPED, "A Mapped Memory Object was referenced in a command buffer");')
header_txt.append(' }')
header_txt.append(' }')
header_txt.append(' }')
header_txt.append('static void validate_mem_ref_count(uint32_t numRefs) {')
header_txt.append(' if (maxMemReferences == 0) {')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "xglQueueSubmit called before calling xglGetGpuInfo");')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, NULL, 0, OBJTRACK_GETGPUINFO_NOT_CALLED, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "vkQueueSubmit called before calling vkGetGpuInfo");')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, NULL, 0, OBJTRACK_GETGPUINFO_NOT_CALLED, "OBJTRACK", str);')
header_txt.append(' } else {')
header_txt.append(' if (numRefs > maxMemReferences) {')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "xglQueueSubmit Memory reference count (%d) exceeds allowable GPU limit (%d)", numRefs, maxMemReferences);')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, OBJTRACK_MEMREFCOUNT_MAX_EXCEEDED, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "vkQueueSubmit Memory reference count (%d) exceeds allowable GPU limit (%d)", numRefs, maxMemReferences);')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, OBJTRACK_MEMREFCOUNT_MAX_EXCEEDED, "OBJTRACK", str);')
header_txt.append(' }')
header_txt.append(' }')
header_txt.append('}')
header_txt.append('')
header_txt.append('static void setGpuQueueInfoState(void *pData) {')
- header_txt.append(' maxMemReferences = ((XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *)pData)->maxMemReferences;')
+ header_txt.append(' maxMemReferences = ((VK_PHYSICAL_GPU_QUEUE_PROPERTIES *)pData)->maxMemReferences;')
header_txt.append('}')
return "\n".join(header_txt)
if proto.name in [ 'DbgRegisterMsgCallback', 'DbgUnregisterMsgCallback' ]:
# use default version
return None
- obj_type_mapping = {base_t : base_t.replace("XGL_", "XGL_OBJECT_TYPE_") for base_t in xgl.object_type_list}
+ obj_type_mapping = {base_t : base_t.replace("VK_", "VK_OBJECT_TYPE_") for base_t in xgl.object_type_list}
# For the various "super-types" we have to use function to distinguish sub type
- for obj_type in ["XGL_BASE_OBJECT", "XGL_OBJECT", "XGL_DYNAMIC_STATE_OBJECT"]:
+ for obj_type in ["VK_BASE_OBJECT", "VK_OBJECT", "VK_DYNAMIC_STATE_OBJECT"]:
obj_type_mapping[obj_type] = "ll_get_obj_type(object)"
- decl = proto.c_func(prefix="xgl", attr="XGLAPI")
+ decl = proto.c_func(prefix="vk", attr="VKAPI")
param0_name = proto.params[0].name
p0_type = proto.params[0].ty.strip('*').strip('const ')
create_line = ''
using_line += ' ll_increment_use_count((void*)%s, %s);\n' % (param0_name, obj_type_mapping[p0_type])
using_line += ' loader_platform_thread_unlock_mutex(&objLock);\n'
if 'QueueSubmit' in proto.name:
- using_line += ' set_status((void*)fence, XGL_OBJECT_TYPE_FENCE, OBJSTATUS_FENCE_IS_SUBMITTED);\n'
+ using_line += ' set_status((void*)fence, VK_OBJECT_TYPE_FENCE, OBJSTATUS_FENCE_IS_SUBMITTED);\n'
using_line += ' // TODO: Fix for updated memory reference mechanism\n'
using_line += ' // validate_memory_mapping_status(pMemRefs, memRefCount);\n'
using_line += ' // validate_mem_ref_count(memRefCount);\n'
elif 'GetFenceStatus' in proto.name:
using_line += ' // Warn if submitted_flag is not set\n'
- using_line += ' validate_status((void*)fence, XGL_OBJECT_TYPE_FENCE, OBJSTATUS_FENCE_IS_SUBMITTED, OBJSTATUS_FENCE_IS_SUBMITTED, XGL_DBG_MSG_ERROR, OBJTRACK_INVALID_FENCE, "Status Requested for Unsubmitted Fence");\n'
+ using_line += ' validate_status((void*)fence, VK_OBJECT_TYPE_FENCE, OBJSTATUS_FENCE_IS_SUBMITTED, OBJSTATUS_FENCE_IS_SUBMITTED, VK_DBG_MSG_ERROR, OBJTRACK_INVALID_FENCE, "Status Requested for Unsubmitted Fence");\n'
elif 'EndCommandBuffer' in proto.name:
- using_line += ' reset_status((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER, (OBJSTATUS_VIEWPORT_BOUND |\n'
+ using_line += ' reset_status((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER, (OBJSTATUS_VIEWPORT_BOUND |\n'
using_line += ' OBJSTATUS_RASTER_BOUND |\n'
using_line += ' OBJSTATUS_COLOR_BLEND_BOUND |\n'
using_line += ' OBJSTATUS_DEPTH_STENCIL_BOUND));\n'
elif 'CmdDraw' in proto.name:
using_line += ' validate_draw_state_flags((void *)cmdBuffer);\n'
elif 'MapMemory' in proto.name:
- using_line += ' set_status((void*)mem, XGL_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);\n'
+ using_line += ' set_status((void*)mem, VK_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);\n'
elif 'UnmapMemory' in proto.name:
- using_line += ' reset_status((void*)mem, XGL_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);\n'
+ using_line += ' reset_status((void*)mem, VK_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);\n'
if 'AllocDescriptor' in proto.name: # Allocates array of DSs
create_line = ' for (uint32_t i = 0; i < *pCount; i++) {\n'
create_line += ' loader_platform_thread_lock_mutex(&objLock);\n'
- create_line += ' ll_insert_obj((void*)pDescriptorSets[i], XGL_OBJECT_TYPE_DESCRIPTOR_SET);\n'
+ create_line += ' ll_insert_obj((void*)pDescriptorSets[i], VK_OBJECT_TYPE_DESCRIPTOR_SET);\n'
create_line += ' loader_platform_thread_unlock_mutex(&objLock);\n'
create_line += ' }\n'
elif 'CreatePresentableImage' in proto.name:
create_line = ' loader_platform_thread_lock_mutex(&objLock);\n'
create_line += ' ll_insert_obj((void*)*%s, %s);\n' % (proto.params[-2].name, obj_type_mapping[proto.params[-2].ty.strip('*').strip('const ')])
- create_line += ' ll_insert_obj((void*)*pMem, XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY);\n'
- # create_line += ' ll_insert_obj((void*)*%s, XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY);\n' % (obj_type_mapping[proto.params[-1].ty.strip('*').strip('const ')])
+ create_line += ' ll_insert_obj((void*)*pMem, VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY);\n'
+ # create_line += ' ll_insert_obj((void*)*%s, VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY);\n' % (obj_type_mapping[proto.params[-1].ty.strip('*').strip('const ')])
create_line += ' loader_platform_thread_unlock_mutex(&objLock);\n'
elif 'Create' in proto.name or 'Alloc' in proto.name:
create_line = ' loader_platform_thread_lock_mutex(&objLock);\n'
using_line = ''
if 'DestroyDevice' in proto.name:
destroy_line += ' // Report any remaining objects in LL\n objNode *pTrav = pGlobalHead;\n while (pTrav) {\n'
- destroy_line += ' if (pTrav->obj.objType == XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY) {\n'
+ destroy_line += ' if (pTrav->obj.objType == VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY) {\n'
destroy_line += ' objNode *pDel = pTrav;\n'
destroy_line += ' pTrav = pTrav->pNextGlobal;\n'
destroy_line += ' ll_destroy_obj((void*)(pDel->obj.pObj));\n'
destroy_line += ' } else {\n'
destroy_line += ' char str[1024];\n'
- destroy_line += ' sprintf(str, "OBJ ERROR : %s object %p has not been destroyed (was used %lu times).", string_XGL_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, pTrav->obj.numUses);\n'
- destroy_line += ' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, device, 0, OBJTRACK_OBJECT_LEAK, "OBJTRACK", str);\n'
+ destroy_line += ' sprintf(str, "OBJ ERROR : %s object %p has not been destroyed (was used %lu times).", string_VK_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, pTrav->obj.numUses);\n'
+ destroy_line += ' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, device, 0, OBJTRACK_OBJECT_LEAK, "OBJTRACK", str);\n'
destroy_line += ' pTrav = pTrav->pNextGlobal;\n'
destroy_line += ' }\n'
destroy_line += ' }\n'
ret_val = ''
stmt = ''
if proto.ret != "void":
- ret_val = "XGL_RESULT result = "
+ ret_val = "VK_RESULT result = "
stmt = " return result;\n"
if 'WsiX11AssociateConnection' == proto.name:
funcs.append("#if defined(__linux__) || defined(XCB_NVIDIA)")
if proto.name == "EnumerateLayers":
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
' if (gpu != NULL) {\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
' %s'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
' %s'
' } else {\n'
' if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)\n'
- ' return XGL_ERROR_INVALID_POINTER;\n'
+ ' return VK_ERROR_INVALID_POINTER;\n'
                     '        // This layer is compatible with all GPUs\n'
' *pOutLayerCount = 1;\n'
' strncpy((char *) pOutLayers[0], "%s", maxStringSize);\n'
- ' return XGL_SUCCESS;\n'
+ ' return VK_SUCCESS;\n'
' }\n'
'}' % (qual, decl, proto.params[0].name, using_line, self.layer_name, ret_val, c_call, create_line, destroy_line, stmt, self.layer_name))
elif 'GetExtensionSupport' == proto.name:
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
- ' XGL_RESULT result;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_RESULT result;\n'
' /* This entrypoint is NOT going to init its own dispatch table since loader calls this early */\n'
' if (!strncmp(pExtName, "%s", strlen("%s")) ||\n'
' !strncmp(pExtName, "objTrackGetObjectCount", strlen("objTrackGetObjectCount")) ||\n'
' !strncmp(pExtName, "objTrackGetObjects", strlen("objTrackGetObjects")))\n'
' {\n'
- ' result = XGL_SUCCESS;\n'
+ ' result = VK_SUCCESS;\n'
' } else if (nextTable.GetExtensionSupport != NULL)\n'
' {\n'
' %s'
' result = nextTable.%s;\n'
' } else\n'
' {\n'
- ' result = XGL_ERROR_INVALID_EXTENSION;\n'
+ ' result = VK_ERROR_INVALID_EXTENSION;\n'
' }\n'
'%s'
'}' % (qual, decl, proto.params[0].name, self.layer_name, self.layer_name, using_line, c_call, stmt))
- elif proto.params[0].ty != "XGL_PHYSICAL_GPU":
+ elif proto.params[0].ty != "VK_PHYSICAL_GPU":
funcs.append('%s%s\n'
'{\n'
'%s'
'%s'
'}' % (qual, decl, using_line, ret_val, proto.c_call(), create_line, destroy_line, stmt))
else:
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
gpu_state = ''
if 'GetGpuInfo' in proto.name:
- gpu_state = ' if (infoType == XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES) {\n'
+ gpu_state = ' if (infoType == VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES) {\n'
gpu_state += ' if (pData != NULL) {\n'
gpu_state += ' setGpuQueueInfoState(pData);\n'
gpu_state += ' }\n'
gpu_state += ' }\n'
funcs.append('%s%s\n'
'{\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
'%s'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
def generate_body(self):
self.layer_name = "ObjectTracker"
body = [self._generate_layer_initialization(True, lockname='obj'),
- self._generate_dispatch_entrypoints("XGL_LAYER_EXPORT"),
+ self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_extensions(),
self._generate_layer_gpa_function(extensions=['objTrackGetObjectCount', 'objTrackGetObjects'])]
print("Available subcommands are: %s" % " ".join(subcommands))
exit(1)
- hfp = xgl_helper.HeaderFileParser(sys.argv[2])
+ hfp = vk_helper.HeaderFileParser(sys.argv[2])
hfp.parse()
- xgl_helper.enum_val_dict = hfp.get_enum_val_dict()
- xgl_helper.enum_type_dict = hfp.get_enum_type_dict()
- xgl_helper.struct_dict = hfp.get_struct_dict()
- xgl_helper.typedef_fwd_dict = hfp.get_typedef_fwd_dict()
- xgl_helper.typedef_rev_dict = hfp.get_typedef_rev_dict()
- xgl_helper.types_dict = hfp.get_types_dict()
+ vk_helper.enum_val_dict = hfp.get_enum_val_dict()
+ vk_helper.enum_type_dict = hfp.get_enum_type_dict()
+ vk_helper.struct_dict = hfp.get_struct_dict()
+ vk_helper.typedef_fwd_dict = hfp.get_typedef_fwd_dict()
+ vk_helper.typedef_rev_dict = hfp.get_typedef_rev_dict()
+ vk_helper.types_dict = hfp.get_types_dict()
subcmd = subcommands[sys.argv[1]](sys.argv[2:])
subcmd.run()
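The driver above selects a generator class from the first CLI argument and forwards the rest; a minimal sketch of that subcommand dispatch pattern (class and command names here are placeholders, not the script's real generators):

```python
import sys

# Hypothetical subcommand: the real script maps names to Subcommand subclasses
# whose run() emits generated source.
class EchoSubcommand:
    def __init__(self, argv):
        self.argv = argv
    def run(self):
        return "ran with: " + " ".join(self.argv)

subcommands = {"object-tracker": EchoSubcommand}

def dispatch(argv):
    # argv[0] picks the subcommand; remaining args are forwarded to it.
    if not argv or argv[0] not in subcommands:
        print("Available subcommands are: %s" % " ".join(subcommands))
        sys.exit(1)
    return subcommands[argv[0]](argv[1:]).run()
```

Keeping the table as a dict of constructors means adding a generator is a one-line change and unknown names fail with a usage message instead of a traceback.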
-"""XGL API description"""
+"""VK API description"""
# Copyright (C) 2014 LunarG, Inc.
#
return "%s %s%s(%s)" % format_vals
def c_pretty_decl(self, name, attr=""):
- """Return a named declaration in C, with xgl.h formatting."""
+ """Return a named declaration in C, with vulkan.h formatting."""
plist = []
for param in self.params:
idx = param.ty.find("[")
return "%s(%s)" % (self.name, self.c_params(need_type=False))
def object_in_params(self):
- """Return the params that are simple XGL objects and are inputs."""
+ """Return the params that are simple VK objects and are inputs."""
return [param for param in self.params if param.ty in objects]
def object_out_params(self):
- """Return the params that are simple XGL objects and are outputs."""
+ """Return the params that are simple VK objects and are outputs."""
return [param for param in self.params
if param.dereferenced_type() in objects]
return "\n".join(lines)
-# XGL core API
+# VK core API
core = Extension(
- name="XGL_CORE",
- headers=["xgl.h", "xglDbg.h"],
+ name="VK_CORE",
+ headers=["vulkan.h", "xglDbg.h"],
objects=[
- "XGL_INSTANCE",
- "XGL_PHYSICAL_GPU",
- "XGL_BASE_OBJECT",
- "XGL_DEVICE",
- "XGL_QUEUE",
- "XGL_GPU_MEMORY",
- "XGL_OBJECT",
- "XGL_BUFFER",
- "XGL_BUFFER_VIEW",
- "XGL_IMAGE",
- "XGL_IMAGE_VIEW",
- "XGL_COLOR_ATTACHMENT_VIEW",
- "XGL_DEPTH_STENCIL_VIEW",
- "XGL_SHADER",
- "XGL_PIPELINE",
- "XGL_SAMPLER",
- "XGL_DESCRIPTOR_SET",
- "XGL_DESCRIPTOR_SET_LAYOUT",
- "XGL_DESCRIPTOR_SET_LAYOUT_CHAIN",
- "XGL_DESCRIPTOR_POOL",
- "XGL_DYNAMIC_STATE_OBJECT",
- "XGL_DYNAMIC_VP_STATE_OBJECT",
- "XGL_DYNAMIC_RS_STATE_OBJECT",
- "XGL_DYNAMIC_CB_STATE_OBJECT",
- "XGL_DYNAMIC_DS_STATE_OBJECT",
- "XGL_CMD_BUFFER",
- "XGL_FENCE",
- "XGL_SEMAPHORE",
- "XGL_EVENT",
- "XGL_QUERY_POOL",
- "XGL_FRAMEBUFFER",
- "XGL_RENDER_PASS",
+ "VK_INSTANCE",
+ "VK_PHYSICAL_GPU",
+ "VK_BASE_OBJECT",
+ "VK_DEVICE",
+ "VK_QUEUE",
+ "VK_GPU_MEMORY",
+ "VK_OBJECT",
+ "VK_BUFFER",
+ "VK_BUFFER_VIEW",
+ "VK_IMAGE",
+ "VK_IMAGE_VIEW",
+ "VK_COLOR_ATTACHMENT_VIEW",
+ "VK_DEPTH_STENCIL_VIEW",
+ "VK_SHADER",
+ "VK_PIPELINE",
+ "VK_SAMPLER",
+ "VK_DESCRIPTOR_SET",
+ "VK_DESCRIPTOR_SET_LAYOUT",
+ "VK_DESCRIPTOR_SET_LAYOUT_CHAIN",
+ "VK_DESCRIPTOR_POOL",
+ "VK_DYNAMIC_STATE_OBJECT",
+ "VK_DYNAMIC_VP_STATE_OBJECT",
+ "VK_DYNAMIC_RS_STATE_OBJECT",
+ "VK_DYNAMIC_CB_STATE_OBJECT",
+ "VK_DYNAMIC_DS_STATE_OBJECT",
+ "VK_CMD_BUFFER",
+ "VK_FENCE",
+ "VK_SEMAPHORE",
+ "VK_EVENT",
+ "VK_QUERY_POOL",
+ "VK_FRAMEBUFFER",
+ "VK_RENDER_PASS",
],
protos=[
- Proto("XGL_RESULT", "CreateInstance",
- [Param("const XGL_INSTANCE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_INSTANCE*", "pInstance")]),
+ Proto("VK_RESULT", "CreateInstance",
+ [Param("const VK_INSTANCE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_INSTANCE*", "pInstance")]),
- Proto("XGL_RESULT", "DestroyInstance",
- [Param("XGL_INSTANCE", "instance")]),
+ Proto("VK_RESULT", "DestroyInstance",
+ [Param("VK_INSTANCE", "instance")]),
- Proto("XGL_RESULT", "EnumerateGpus",
- [Param("XGL_INSTANCE", "instance"),
+ Proto("VK_RESULT", "EnumerateGpus",
+ [Param("VK_INSTANCE", "instance"),
Param("uint32_t", "maxGpus"),
Param("uint32_t*", "pGpuCount"),
- Param("XGL_PHYSICAL_GPU*", "pGpus")]),
+ Param("VK_PHYSICAL_GPU*", "pGpus")]),
- Proto("XGL_RESULT", "GetGpuInfo",
- [Param("XGL_PHYSICAL_GPU", "gpu"),
- Param("XGL_PHYSICAL_GPU_INFO_TYPE", "infoType"),
+ Proto("VK_RESULT", "GetGpuInfo",
+ [Param("VK_PHYSICAL_GPU", "gpu"),
+ Param("VK_PHYSICAL_GPU_INFO_TYPE", "infoType"),
Param("size_t*", "pDataSize"),
Param("void*", "pData")]),
Proto("void*", "GetProcAddr",
- [Param("XGL_PHYSICAL_GPU", "gpu"),
+ [Param("VK_PHYSICAL_GPU", "gpu"),
Param("const char*", "pName")]),
- Proto("XGL_RESULT", "CreateDevice",
- [Param("XGL_PHYSICAL_GPU", "gpu"),
- Param("const XGL_DEVICE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DEVICE*", "pDevice")]),
+ Proto("VK_RESULT", "CreateDevice",
+ [Param("VK_PHYSICAL_GPU", "gpu"),
+ Param("const VK_DEVICE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DEVICE*", "pDevice")]),
- Proto("XGL_RESULT", "DestroyDevice",
- [Param("XGL_DEVICE", "device")]),
+ Proto("VK_RESULT", "DestroyDevice",
+ [Param("VK_DEVICE", "device")]),
- Proto("XGL_RESULT", "GetExtensionSupport",
- [Param("XGL_PHYSICAL_GPU", "gpu"),
+ Proto("VK_RESULT", "GetExtensionSupport",
+ [Param("VK_PHYSICAL_GPU", "gpu"),
Param("const char*", "pExtName")]),
- Proto("XGL_RESULT", "EnumerateLayers",
- [Param("XGL_PHYSICAL_GPU", "gpu"),
+ Proto("VK_RESULT", "EnumerateLayers",
+ [Param("VK_PHYSICAL_GPU", "gpu"),
Param("size_t", "maxLayerCount"),
Param("size_t", "maxStringSize"),
Param("size_t*", "pOutLayerCount"),
Param("char* const*", "pOutLayers"),
Param("void*", "pReserved")]),
- Proto("XGL_RESULT", "GetDeviceQueue",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "GetDeviceQueue",
+ [Param("VK_DEVICE", "device"),
Param("uint32_t", "queueNodeIndex"),
Param("uint32_t", "queueIndex"),
- Param("XGL_QUEUE*", "pQueue")]),
+ Param("VK_QUEUE*", "pQueue")]),
- Proto("XGL_RESULT", "QueueSubmit",
- [Param("XGL_QUEUE", "queue"),
+ Proto("VK_RESULT", "QueueSubmit",
+ [Param("VK_QUEUE", "queue"),
Param("uint32_t", "cmdBufferCount"),
- Param("const XGL_CMD_BUFFER*", "pCmdBuffers"),
- Param("XGL_FENCE", "fence")]),
+ Param("const VK_CMD_BUFFER*", "pCmdBuffers"),
+ Param("VK_FENCE", "fence")]),
- Proto("XGL_RESULT", "QueueAddMemReference",
- [Param("XGL_QUEUE", "queue"),
- Param("XGL_GPU_MEMORY", "mem")]),
+ Proto("VK_RESULT", "QueueAddMemReference",
+ [Param("VK_QUEUE", "queue"),
+ Param("VK_GPU_MEMORY", "mem")]),
- Proto("XGL_RESULT", "QueueRemoveMemReference",
- [Param("XGL_QUEUE", "queue"),
- Param("XGL_GPU_MEMORY", "mem")]),
+ Proto("VK_RESULT", "QueueRemoveMemReference",
+ [Param("VK_QUEUE", "queue"),
+ Param("VK_GPU_MEMORY", "mem")]),
- Proto("XGL_RESULT", "QueueWaitIdle",
- [Param("XGL_QUEUE", "queue")]),
+ Proto("VK_RESULT", "QueueWaitIdle",
+ [Param("VK_QUEUE", "queue")]),
- Proto("XGL_RESULT", "DeviceWaitIdle",
- [Param("XGL_DEVICE", "device")]),
+ Proto("VK_RESULT", "DeviceWaitIdle",
+ [Param("VK_DEVICE", "device")]),
- Proto("XGL_RESULT", "AllocMemory",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_MEMORY_ALLOC_INFO*", "pAllocInfo"),
- Param("XGL_GPU_MEMORY*", "pMem")]),
+ Proto("VK_RESULT", "AllocMemory",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_MEMORY_ALLOC_INFO*", "pAllocInfo"),
+ Param("VK_GPU_MEMORY*", "pMem")]),
- Proto("XGL_RESULT", "FreeMemory",
- [Param("XGL_GPU_MEMORY", "mem")]),
+ Proto("VK_RESULT", "FreeMemory",
+ [Param("VK_GPU_MEMORY", "mem")]),
- Proto("XGL_RESULT", "SetMemoryPriority",
- [Param("XGL_GPU_MEMORY", "mem"),
- Param("XGL_MEMORY_PRIORITY", "priority")]),
+ Proto("VK_RESULT", "SetMemoryPriority",
+ [Param("VK_GPU_MEMORY", "mem"),
+ Param("VK_MEMORY_PRIORITY", "priority")]),
- Proto("XGL_RESULT", "MapMemory",
- [Param("XGL_GPU_MEMORY", "mem"),
- Param("XGL_FLAGS", "flags"),
+ Proto("VK_RESULT", "MapMemory",
+ [Param("VK_GPU_MEMORY", "mem"),
+ Param("VK_FLAGS", "flags"),
Param("void**", "ppData")]),
- Proto("XGL_RESULT", "UnmapMemory",
- [Param("XGL_GPU_MEMORY", "mem")]),
+ Proto("VK_RESULT", "UnmapMemory",
+ [Param("VK_GPU_MEMORY", "mem")]),
- Proto("XGL_RESULT", "PinSystemMemory",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "PinSystemMemory",
+ [Param("VK_DEVICE", "device"),
Param("const void*", "pSysMem"),
Param("size_t", "memSize"),
- Param("XGL_GPU_MEMORY*", "pMem")]),
-
- Proto("XGL_RESULT", "GetMultiGpuCompatibility",
- [Param("XGL_PHYSICAL_GPU", "gpu0"),
- Param("XGL_PHYSICAL_GPU", "gpu1"),
- Param("XGL_GPU_COMPATIBILITY_INFO*", "pInfo")]),
-
- Proto("XGL_RESULT", "OpenSharedMemory",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_MEMORY_OPEN_INFO*", "pOpenInfo"),
- Param("XGL_GPU_MEMORY*", "pMem")]),
-
- Proto("XGL_RESULT", "OpenSharedSemaphore",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_SEMAPHORE_OPEN_INFO*", "pOpenInfo"),
- Param("XGL_SEMAPHORE*", "pSemaphore")]),
-
- Proto("XGL_RESULT", "OpenPeerMemory",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_PEER_MEMORY_OPEN_INFO*", "pOpenInfo"),
- Param("XGL_GPU_MEMORY*", "pMem")]),
-
- Proto("XGL_RESULT", "OpenPeerImage",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_PEER_IMAGE_OPEN_INFO*", "pOpenInfo"),
- Param("XGL_IMAGE*", "pImage"),
- Param("XGL_GPU_MEMORY*", "pMem")]),
-
- Proto("XGL_RESULT", "DestroyObject",
- [Param("XGL_OBJECT", "object")]),
-
- Proto("XGL_RESULT", "GetObjectInfo",
- [Param("XGL_BASE_OBJECT", "object"),
- Param("XGL_OBJECT_INFO_TYPE", "infoType"),
+ Param("VK_GPU_MEMORY*", "pMem")]),
+
+ Proto("VK_RESULT", "GetMultiGpuCompatibility",
+ [Param("VK_PHYSICAL_GPU", "gpu0"),
+ Param("VK_PHYSICAL_GPU", "gpu1"),
+ Param("VK_GPU_COMPATIBILITY_INFO*", "pInfo")]),
+
+ Proto("VK_RESULT", "OpenSharedMemory",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_MEMORY_OPEN_INFO*", "pOpenInfo"),
+ Param("VK_GPU_MEMORY*", "pMem")]),
+
+ Proto("VK_RESULT", "OpenSharedSemaphore",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_SEMAPHORE_OPEN_INFO*", "pOpenInfo"),
+ Param("VK_SEMAPHORE*", "pSemaphore")]),
+
+ Proto("VK_RESULT", "OpenPeerMemory",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_PEER_MEMORY_OPEN_INFO*", "pOpenInfo"),
+ Param("VK_GPU_MEMORY*", "pMem")]),
+
+ Proto("VK_RESULT", "OpenPeerImage",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_PEER_IMAGE_OPEN_INFO*", "pOpenInfo"),
+ Param("VK_IMAGE*", "pImage"),
+ Param("VK_GPU_MEMORY*", "pMem")]),
+
+ Proto("VK_RESULT", "DestroyObject",
+ [Param("VK_OBJECT", "object")]),
+
+ Proto("VK_RESULT", "GetObjectInfo",
+ [Param("VK_BASE_OBJECT", "object"),
+ Param("VK_OBJECT_INFO_TYPE", "infoType"),
Param("size_t*", "pDataSize"),
Param("void*", "pData")]),
- Proto("XGL_RESULT", "BindObjectMemory",
- [Param("XGL_OBJECT", "object"),
+ Proto("VK_RESULT", "BindObjectMemory",
+ [Param("VK_OBJECT", "object"),
Param("uint32_t", "allocationIdx"),
- Param("XGL_GPU_MEMORY", "mem"),
- Param("XGL_GPU_SIZE", "offset")]),
+ Param("VK_GPU_MEMORY", "mem"),
+ Param("VK_GPU_SIZE", "offset")]),
- Proto("XGL_RESULT", "BindObjectMemoryRange",
- [Param("XGL_OBJECT", "object"),
+ Proto("VK_RESULT", "BindObjectMemoryRange",
+ [Param("VK_OBJECT", "object"),
Param("uint32_t", "allocationIdx"),
- Param("XGL_GPU_SIZE", "rangeOffset"),
- Param("XGL_GPU_SIZE", "rangeSize"),
- Param("XGL_GPU_MEMORY", "mem"),
- Param("XGL_GPU_SIZE", "memOffset")]),
+ Param("VK_GPU_SIZE", "rangeOffset"),
+ Param("VK_GPU_SIZE", "rangeSize"),
+ Param("VK_GPU_MEMORY", "mem"),
+ Param("VK_GPU_SIZE", "memOffset")]),
- Proto("XGL_RESULT", "BindImageMemoryRange",
- [Param("XGL_IMAGE", "image"),
+ Proto("VK_RESULT", "BindImageMemoryRange",
+ [Param("VK_IMAGE", "image"),
Param("uint32_t", "allocationIdx"),
- Param("const XGL_IMAGE_MEMORY_BIND_INFO*", "bindInfo"),
- Param("XGL_GPU_MEMORY", "mem"),
- Param("XGL_GPU_SIZE", "memOffset")]),
+ Param("const VK_IMAGE_MEMORY_BIND_INFO*", "bindInfo"),
+ Param("VK_GPU_MEMORY", "mem"),
+ Param("VK_GPU_SIZE", "memOffset")]),
- Proto("XGL_RESULT", "CreateFence",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_FENCE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_FENCE*", "pFence")]),
+ Proto("VK_RESULT", "CreateFence",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_FENCE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_FENCE*", "pFence")]),
- Proto("XGL_RESULT", "ResetFences",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "ResetFences",
+ [Param("VK_DEVICE", "device"),
Param("uint32_t", "fenceCount"),
- Param("XGL_FENCE*", "pFences")]),
+ Param("VK_FENCE*", "pFences")]),
- Proto("XGL_RESULT", "GetFenceStatus",
- [Param("XGL_FENCE", "fence")]),
+ Proto("VK_RESULT", "GetFenceStatus",
+ [Param("VK_FENCE", "fence")]),
- Proto("XGL_RESULT", "WaitForFences",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "WaitForFences",
+ [Param("VK_DEVICE", "device"),
Param("uint32_t", "fenceCount"),
- Param("const XGL_FENCE*", "pFences"),
+ Param("const VK_FENCE*", "pFences"),
Param("bool32_t", "waitAll"),
Param("uint64_t", "timeout")]),
- Proto("XGL_RESULT", "CreateSemaphore",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_SEMAPHORE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_SEMAPHORE*", "pSemaphore")]),
+ Proto("VK_RESULT", "CreateSemaphore",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_SEMAPHORE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_SEMAPHORE*", "pSemaphore")]),
- Proto("XGL_RESULT", "QueueSignalSemaphore",
- [Param("XGL_QUEUE", "queue"),
- Param("XGL_SEMAPHORE", "semaphore")]),
+ Proto("VK_RESULT", "QueueSignalSemaphore",
+ [Param("VK_QUEUE", "queue"),
+ Param("VK_SEMAPHORE", "semaphore")]),
- Proto("XGL_RESULT", "QueueWaitSemaphore",
- [Param("XGL_QUEUE", "queue"),
- Param("XGL_SEMAPHORE", "semaphore")]),
+ Proto("VK_RESULT", "QueueWaitSemaphore",
+ [Param("VK_QUEUE", "queue"),
+ Param("VK_SEMAPHORE", "semaphore")]),
- Proto("XGL_RESULT", "CreateEvent",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_EVENT_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_EVENT*", "pEvent")]),
+ Proto("VK_RESULT", "CreateEvent",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_EVENT_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_EVENT*", "pEvent")]),
- Proto("XGL_RESULT", "GetEventStatus",
- [Param("XGL_EVENT", "event")]),
+ Proto("VK_RESULT", "GetEventStatus",
+ [Param("VK_EVENT", "event")]),
- Proto("XGL_RESULT", "SetEvent",
- [Param("XGL_EVENT", "event")]),
+ Proto("VK_RESULT", "SetEvent",
+ [Param("VK_EVENT", "event")]),
- Proto("XGL_RESULT", "ResetEvent",
- [Param("XGL_EVENT", "event")]),
+ Proto("VK_RESULT", "ResetEvent",
+ [Param("VK_EVENT", "event")]),
- Proto("XGL_RESULT", "CreateQueryPool",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_QUERY_POOL_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_QUERY_POOL*", "pQueryPool")]),
+ Proto("VK_RESULT", "CreateQueryPool",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_QUERY_POOL_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_QUERY_POOL*", "pQueryPool")]),
- Proto("XGL_RESULT", "GetQueryPoolResults",
- [Param("XGL_QUERY_POOL", "queryPool"),
+ Proto("VK_RESULT", "GetQueryPoolResults",
+ [Param("VK_QUERY_POOL", "queryPool"),
Param("uint32_t", "startQuery"),
Param("uint32_t", "queryCount"),
Param("size_t*", "pDataSize"),
Param("void*", "pData")]),
- Proto("XGL_RESULT", "GetFormatInfo",
- [Param("XGL_DEVICE", "device"),
- Param("XGL_FORMAT", "format"),
- Param("XGL_FORMAT_INFO_TYPE", "infoType"),
+ Proto("VK_RESULT", "GetFormatInfo",
+ [Param("VK_DEVICE", "device"),
+ Param("VK_FORMAT", "format"),
+ Param("VK_FORMAT_INFO_TYPE", "infoType"),
Param("size_t*", "pDataSize"),
Param("void*", "pData")]),
- Proto("XGL_RESULT", "CreateBuffer",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_BUFFER_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_BUFFER*", "pBuffer")]),
-
- Proto("XGL_RESULT", "CreateBufferView",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_BUFFER_VIEW_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_BUFFER_VIEW*", "pView")]),
-
- Proto("XGL_RESULT", "CreateImage",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_IMAGE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_IMAGE*", "pImage")]),
-
- Proto("XGL_RESULT", "GetImageSubresourceInfo",
- [Param("XGL_IMAGE", "image"),
- Param("const XGL_IMAGE_SUBRESOURCE*", "pSubresource"),
- Param("XGL_SUBRESOURCE_INFO_TYPE", "infoType"),
+ Proto("VK_RESULT", "CreateBuffer",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_BUFFER_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_BUFFER*", "pBuffer")]),
+
+ Proto("VK_RESULT", "CreateBufferView",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_BUFFER_VIEW_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_BUFFER_VIEW*", "pView")]),
+
+ Proto("VK_RESULT", "CreateImage",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_IMAGE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_IMAGE*", "pImage")]),
+
+ Proto("VK_RESULT", "GetImageSubresourceInfo",
+ [Param("VK_IMAGE", "image"),
+ Param("const VK_IMAGE_SUBRESOURCE*", "pSubresource"),
+ Param("VK_SUBRESOURCE_INFO_TYPE", "infoType"),
Param("size_t*", "pDataSize"),
Param("void*", "pData")]),
- Proto("XGL_RESULT", "CreateImageView",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_IMAGE_VIEW_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_IMAGE_VIEW*", "pView")]),
-
- Proto("XGL_RESULT", "CreateColorAttachmentView",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_COLOR_ATTACHMENT_VIEW*", "pView")]),
-
- Proto("XGL_RESULT", "CreateDepthStencilView",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DEPTH_STENCIL_VIEW*", "pView")]),
-
- Proto("XGL_RESULT", "CreateShader",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_SHADER_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_SHADER*", "pShader")]),
-
- Proto("XGL_RESULT", "CreateGraphicsPipeline",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_GRAPHICS_PIPELINE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_PIPELINE*", "pPipeline")]),
-
- Proto("XGL_RESULT", "CreateGraphicsPipelineDerivative",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_GRAPHICS_PIPELINE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_PIPELINE", "basePipeline"),
- Param("XGL_PIPELINE*", "pPipeline")]),
-
- Proto("XGL_RESULT", "CreateComputePipeline",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_COMPUTE_PIPELINE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_PIPELINE*", "pPipeline")]),
-
- Proto("XGL_RESULT", "StorePipeline",
- [Param("XGL_PIPELINE", "pipeline"),
+ Proto("VK_RESULT", "CreateImageView",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_IMAGE_VIEW_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_IMAGE_VIEW*", "pView")]),
+
+ Proto("VK_RESULT", "CreateColorAttachmentView",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_COLOR_ATTACHMENT_VIEW*", "pView")]),
+
+ Proto("VK_RESULT", "CreateDepthStencilView",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_DEPTH_STENCIL_VIEW_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DEPTH_STENCIL_VIEW*", "pView")]),
+
+ Proto("VK_RESULT", "CreateShader",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_SHADER_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_SHADER*", "pShader")]),
+
+ Proto("VK_RESULT", "CreateGraphicsPipeline",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_GRAPHICS_PIPELINE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_PIPELINE*", "pPipeline")]),
+
+ Proto("VK_RESULT", "CreateGraphicsPipelineDerivative",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_GRAPHICS_PIPELINE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_PIPELINE", "basePipeline"),
+ Param("VK_PIPELINE*", "pPipeline")]),
+
+ Proto("VK_RESULT", "CreateComputePipeline",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_COMPUTE_PIPELINE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_PIPELINE*", "pPipeline")]),
+
+ Proto("VK_RESULT", "StorePipeline",
+ [Param("VK_PIPELINE", "pipeline"),
Param("size_t*", "pDataSize"),
Param("void*", "pData")]),
- Proto("XGL_RESULT", "LoadPipeline",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "LoadPipeline",
+ [Param("VK_DEVICE", "device"),
Param("size_t", "dataSize"),
Param("const void*", "pData"),
- Param("XGL_PIPELINE*", "pPipeline")]),
+ Param("VK_PIPELINE*", "pPipeline")]),
- Proto("XGL_RESULT", "LoadPipelineDerivative",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "LoadPipelineDerivative",
+ [Param("VK_DEVICE", "device"),
Param("size_t", "dataSize"),
Param("const void*", "pData"),
- Param("XGL_PIPELINE", "basePipeline"),
- Param("XGL_PIPELINE*", "pPipeline")]),
+ Param("VK_PIPELINE", "basePipeline"),
+ Param("VK_PIPELINE*", "pPipeline")]),
- Proto("XGL_RESULT", "CreateSampler",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_SAMPLER_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_SAMPLER*", "pSampler")]),
+ Proto("VK_RESULT", "CreateSampler",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_SAMPLER_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_SAMPLER*", "pSampler")]),
- Proto("XGL_RESULT", "CreateDescriptorSetLayout",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DESCRIPTOR_SET_LAYOUT*", "pSetLayout")]),
+ Proto("VK_RESULT", "CreateDescriptorSetLayout",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DESCRIPTOR_SET_LAYOUT*", "pSetLayout")]),
- Proto("XGL_RESULT", "CreateDescriptorSetLayoutChain",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "CreateDescriptorSetLayoutChain",
+ [Param("VK_DEVICE", "device"),
Param("uint32_t", "setLayoutArrayCount"),
- Param("const XGL_DESCRIPTOR_SET_LAYOUT*", "pSetLayoutArray"),
- Param("XGL_DESCRIPTOR_SET_LAYOUT_CHAIN*", "pLayoutChain")]),
+ Param("const VK_DESCRIPTOR_SET_LAYOUT*", "pSetLayoutArray"),
+ Param("VK_DESCRIPTOR_SET_LAYOUT_CHAIN*", "pLayoutChain")]),
- Proto("XGL_RESULT", "BeginDescriptorPoolUpdate",
- [Param("XGL_DEVICE", "device"),
- Param("XGL_DESCRIPTOR_UPDATE_MODE", "updateMode")]),
+ Proto("VK_RESULT", "BeginDescriptorPoolUpdate",
+ [Param("VK_DEVICE", "device"),
+ Param("VK_DESCRIPTOR_UPDATE_MODE", "updateMode")]),
- Proto("XGL_RESULT", "EndDescriptorPoolUpdate",
- [Param("XGL_DEVICE", "device"),
- Param("XGL_CMD_BUFFER", "cmd")]),
+ Proto("VK_RESULT", "EndDescriptorPoolUpdate",
+ [Param("VK_DEVICE", "device"),
+ Param("VK_CMD_BUFFER", "cmd")]),
- Proto("XGL_RESULT", "CreateDescriptorPool",
- [Param("XGL_DEVICE", "device"),
- Param("XGL_DESCRIPTOR_POOL_USAGE", "poolUsage"),
+ Proto("VK_RESULT", "CreateDescriptorPool",
+ [Param("VK_DEVICE", "device"),
+ Param("VK_DESCRIPTOR_POOL_USAGE", "poolUsage"),
Param("uint32_t", "maxSets"),
- Param("const XGL_DESCRIPTOR_POOL_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DESCRIPTOR_POOL*", "pDescriptorPool")]),
+ Param("const VK_DESCRIPTOR_POOL_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DESCRIPTOR_POOL*", "pDescriptorPool")]),
- Proto("XGL_RESULT", "ResetDescriptorPool",
- [Param("XGL_DESCRIPTOR_POOL", "descriptorPool")]),
+ Proto("VK_RESULT", "ResetDescriptorPool",
+ [Param("VK_DESCRIPTOR_POOL", "descriptorPool")]),
- Proto("XGL_RESULT", "AllocDescriptorSets",
- [Param("XGL_DESCRIPTOR_POOL", "descriptorPool"),
- Param("XGL_DESCRIPTOR_SET_USAGE", "setUsage"),
+ Proto("VK_RESULT", "AllocDescriptorSets",
+ [Param("VK_DESCRIPTOR_POOL", "descriptorPool"),
+ Param("VK_DESCRIPTOR_SET_USAGE", "setUsage"),
Param("uint32_t", "count"),
- Param("const XGL_DESCRIPTOR_SET_LAYOUT*", "pSetLayouts"),
- Param("XGL_DESCRIPTOR_SET*", "pDescriptorSets"),
+ Param("const VK_DESCRIPTOR_SET_LAYOUT*", "pSetLayouts"),
+ Param("VK_DESCRIPTOR_SET*", "pDescriptorSets"),
Param("uint32_t*", "pCount")]),
Proto("void", "ClearDescriptorSets",
- [Param("XGL_DESCRIPTOR_POOL", "descriptorPool"),
+ [Param("VK_DESCRIPTOR_POOL", "descriptorPool"),
Param("uint32_t", "count"),
- Param("const XGL_DESCRIPTOR_SET*", "pDescriptorSets")]),
+ Param("const VK_DESCRIPTOR_SET*", "pDescriptorSets")]),
Proto("void", "UpdateDescriptors",
- [Param("XGL_DESCRIPTOR_SET", "descriptorSet"),
+ [Param("VK_DESCRIPTOR_SET", "descriptorSet"),
Param("uint32_t", "updateCount"),
Param("const void**", "ppUpdateArray")]),
- Proto("XGL_RESULT", "CreateDynamicViewportState",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_DYNAMIC_VP_STATE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DYNAMIC_VP_STATE_OBJECT*", "pState")]),
+ Proto("VK_RESULT", "CreateDynamicViewportState",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_DYNAMIC_VP_STATE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DYNAMIC_VP_STATE_OBJECT*", "pState")]),
- Proto("XGL_RESULT", "CreateDynamicRasterState",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_DYNAMIC_RS_STATE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DYNAMIC_RS_STATE_OBJECT*", "pState")]),
+ Proto("VK_RESULT", "CreateDynamicRasterState",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_DYNAMIC_RS_STATE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DYNAMIC_RS_STATE_OBJECT*", "pState")]),
- Proto("XGL_RESULT", "CreateDynamicColorBlendState",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_DYNAMIC_CB_STATE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DYNAMIC_CB_STATE_OBJECT*", "pState")]),
+ Proto("VK_RESULT", "CreateDynamicColorBlendState",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_DYNAMIC_CB_STATE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DYNAMIC_CB_STATE_OBJECT*", "pState")]),
- Proto("XGL_RESULT", "CreateDynamicDepthStencilState",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_DYNAMIC_DS_STATE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DYNAMIC_DS_STATE_OBJECT*", "pState")]),
+ Proto("VK_RESULT", "CreateDynamicDepthStencilState",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_DYNAMIC_DS_STATE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DYNAMIC_DS_STATE_OBJECT*", "pState")]),
- Proto("XGL_RESULT", "CreateCommandBuffer",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_CMD_BUFFER_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_CMD_BUFFER*", "pCmdBuffer")]),
+ Proto("VK_RESULT", "CreateCommandBuffer",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_CMD_BUFFER_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_CMD_BUFFER*", "pCmdBuffer")]),
- Proto("XGL_RESULT", "BeginCommandBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("const XGL_CMD_BUFFER_BEGIN_INFO*", "pBeginInfo")]),
+ Proto("VK_RESULT", "BeginCommandBuffer",
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("const VK_CMD_BUFFER_BEGIN_INFO*", "pBeginInfo")]),
- Proto("XGL_RESULT", "EndCommandBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer")]),
+ Proto("VK_RESULT", "EndCommandBuffer",
+ [Param("VK_CMD_BUFFER", "cmdBuffer")]),
- Proto("XGL_RESULT", "ResetCommandBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer")]),
+ Proto("VK_RESULT", "ResetCommandBuffer",
+ [Param("VK_CMD_BUFFER", "cmdBuffer")]),
Proto("void", "CmdBindPipeline",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_PIPELINE_BIND_POINT", "pipelineBindPoint"),
- Param("XGL_PIPELINE", "pipeline")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_PIPELINE_BIND_POINT", "pipelineBindPoint"),
+ Param("VK_PIPELINE", "pipeline")]),
Proto("void", "CmdBindDynamicStateObject",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_STATE_BIND_POINT", "stateBindPoint"),
- Param("XGL_DYNAMIC_STATE_OBJECT", "state")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_STATE_BIND_POINT", "stateBindPoint"),
+ Param("VK_DYNAMIC_STATE_OBJECT", "state")]),
Proto("void", "CmdBindDescriptorSets",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_PIPELINE_BIND_POINT", "pipelineBindPoint"),
- Param("XGL_DESCRIPTOR_SET_LAYOUT_CHAIN", "layoutChain"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_PIPELINE_BIND_POINT", "pipelineBindPoint"),
+ Param("VK_DESCRIPTOR_SET_LAYOUT_CHAIN", "layoutChain"),
Param("uint32_t", "layoutChainSlot"),
Param("uint32_t", "count"),
- Param("const XGL_DESCRIPTOR_SET*", "pDescriptorSets"),
+ Param("const VK_DESCRIPTOR_SET*", "pDescriptorSets"),
Param("const uint32_t*", "pUserData")]),
Proto("void", "CmdBindVertexBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "buffer"),
- Param("XGL_GPU_SIZE", "offset"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "buffer"),
+ Param("VK_GPU_SIZE", "offset"),
Param("uint32_t", "binding")]),
Proto("void", "CmdBindIndexBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "buffer"),
- Param("XGL_GPU_SIZE", "offset"),
- Param("XGL_INDEX_TYPE", "indexType")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "buffer"),
+ Param("VK_GPU_SIZE", "offset"),
+ Param("VK_INDEX_TYPE", "indexType")]),
Proto("void", "CmdDraw",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
Param("uint32_t", "firstVertex"),
Param("uint32_t", "vertexCount"),
Param("uint32_t", "firstInstance"),
Param("uint32_t", "instanceCount")]),
Proto("void", "CmdDrawIndexed",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
Param("uint32_t", "firstIndex"),
Param("uint32_t", "indexCount"),
Param("int32_t", "vertexOffset"),
Param("uint32_t", "instanceCount")]),
Proto("void", "CmdDrawIndirect",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "buffer"),
- Param("XGL_GPU_SIZE", "offset"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "buffer"),
+ Param("VK_GPU_SIZE", "offset"),
Param("uint32_t", "count"),
Param("uint32_t", "stride")]),
Proto("void", "CmdDrawIndexedIndirect",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "buffer"),
- Param("XGL_GPU_SIZE", "offset"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "buffer"),
+ Param("VK_GPU_SIZE", "offset"),
Param("uint32_t", "count"),
Param("uint32_t", "stride")]),
Proto("void", "CmdDispatch",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
Param("uint32_t", "x"),
Param("uint32_t", "y"),
Param("uint32_t", "z")]),
Proto("void", "CmdDispatchIndirect",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "buffer"),
- Param("XGL_GPU_SIZE", "offset")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "buffer"),
+ Param("VK_GPU_SIZE", "offset")]),
Proto("void", "CmdCopyBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "srcBuffer"),
- Param("XGL_BUFFER", "destBuffer"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "srcBuffer"),
+ Param("VK_BUFFER", "destBuffer"),
Param("uint32_t", "regionCount"),
- Param("const XGL_BUFFER_COPY*", "pRegions")]),
+ Param("const VK_BUFFER_COPY*", "pRegions")]),
Proto("void", "CmdCopyImage",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_IMAGE", "srcImage"),
- Param("XGL_IMAGE_LAYOUT", "srcImageLayout"),
- Param("XGL_IMAGE", "destImage"),
- Param("XGL_IMAGE_LAYOUT", "destImageLayout"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_IMAGE", "srcImage"),
+ Param("VK_IMAGE_LAYOUT", "srcImageLayout"),
+ Param("VK_IMAGE", "destImage"),
+ Param("VK_IMAGE_LAYOUT", "destImageLayout"),
Param("uint32_t", "regionCount"),
- Param("const XGL_IMAGE_COPY*", "pRegions")]),
+ Param("const VK_IMAGE_COPY*", "pRegions")]),
Proto("void", "CmdBlitImage",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_IMAGE", "srcImage"),
- Param("XGL_IMAGE_LAYOUT", "srcImageLayout"),
- Param("XGL_IMAGE", "destImage"),
- Param("XGL_IMAGE_LAYOUT", "destImageLayout"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_IMAGE", "srcImage"),
+ Param("VK_IMAGE_LAYOUT", "srcImageLayout"),
+ Param("VK_IMAGE", "destImage"),
+ Param("VK_IMAGE_LAYOUT", "destImageLayout"),
Param("uint32_t", "regionCount"),
- Param("const XGL_IMAGE_BLIT*", "pRegions")]),
+ Param("const VK_IMAGE_BLIT*", "pRegions")]),
Proto("void", "CmdCopyBufferToImage",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "srcBuffer"),
- Param("XGL_IMAGE", "destImage"),
- Param("XGL_IMAGE_LAYOUT", "destImageLayout"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "srcBuffer"),
+ Param("VK_IMAGE", "destImage"),
+ Param("VK_IMAGE_LAYOUT", "destImageLayout"),
Param("uint32_t", "regionCount"),
- Param("const XGL_BUFFER_IMAGE_COPY*", "pRegions")]),
+ Param("const VK_BUFFER_IMAGE_COPY*", "pRegions")]),
Proto("void", "CmdCopyImageToBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_IMAGE", "srcImage"),
- Param("XGL_IMAGE_LAYOUT", "srcImageLayout"),
- Param("XGL_BUFFER", "destBuffer"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_IMAGE", "srcImage"),
+ Param("VK_IMAGE_LAYOUT", "srcImageLayout"),
+ Param("VK_BUFFER", "destBuffer"),
Param("uint32_t", "regionCount"),
- Param("const XGL_BUFFER_IMAGE_COPY*", "pRegions")]),
+ Param("const VK_BUFFER_IMAGE_COPY*", "pRegions")]),
Proto("void", "CmdCloneImageData",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_IMAGE", "srcImage"),
- Param("XGL_IMAGE_LAYOUT", "srcImageLayout"),
- Param("XGL_IMAGE", "destImage"),
- Param("XGL_IMAGE_LAYOUT", "destImageLayout")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_IMAGE", "srcImage"),
+ Param("VK_IMAGE_LAYOUT", "srcImageLayout"),
+ Param("VK_IMAGE", "destImage"),
+ Param("VK_IMAGE_LAYOUT", "destImageLayout")]),
Proto("void", "CmdUpdateBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "destBuffer"),
- Param("XGL_GPU_SIZE", "destOffset"),
- Param("XGL_GPU_SIZE", "dataSize"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "destBuffer"),
+ Param("VK_GPU_SIZE", "destOffset"),
+ Param("VK_GPU_SIZE", "dataSize"),
Param("const uint32_t*", "pData")]),
Proto("void", "CmdFillBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "destBuffer"),
- Param("XGL_GPU_SIZE", "destOffset"),
- Param("XGL_GPU_SIZE", "fillSize"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "destBuffer"),
+ Param("VK_GPU_SIZE", "destOffset"),
+ Param("VK_GPU_SIZE", "fillSize"),
Param("uint32_t", "data")]),
Proto("void", "CmdClearColorImage",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_IMAGE", "image"),
- Param("XGL_IMAGE_LAYOUT", "imageLayout"),
- Param("XGL_CLEAR_COLOR", "color"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_IMAGE", "image"),
+ Param("VK_IMAGE_LAYOUT", "imageLayout"),
+ Param("VK_CLEAR_COLOR", "color"),
Param("uint32_t", "rangeCount"),
- Param("const XGL_IMAGE_SUBRESOURCE_RANGE*", "pRanges")]),
+ Param("const VK_IMAGE_SUBRESOURCE_RANGE*", "pRanges")]),
Proto("void", "CmdClearDepthStencil",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_IMAGE", "image"),
- Param("XGL_IMAGE_LAYOUT", "imageLayout"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_IMAGE", "image"),
+ Param("VK_IMAGE_LAYOUT", "imageLayout"),
Param("float", "depth"),
Param("uint32_t", "stencil"),
Param("uint32_t", "rangeCount"),
- Param("const XGL_IMAGE_SUBRESOURCE_RANGE*", "pRanges")]),
+ Param("const VK_IMAGE_SUBRESOURCE_RANGE*", "pRanges")]),
Proto("void", "CmdResolveImage",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_IMAGE", "srcImage"),
- Param("XGL_IMAGE_LAYOUT", "srcImageLayout"),
- Param("XGL_IMAGE", "destImage"),
- Param("XGL_IMAGE_LAYOUT", "destImageLayout"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_IMAGE", "srcImage"),
+ Param("VK_IMAGE_LAYOUT", "srcImageLayout"),
+ Param("VK_IMAGE", "destImage"),
+ Param("VK_IMAGE_LAYOUT", "destImageLayout"),
Param("uint32_t", "rectCount"),
- Param("const XGL_IMAGE_RESOLVE*", "pRects")]),
+ Param("const VK_IMAGE_RESOLVE*", "pRects")]),
Proto("void", "CmdSetEvent",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_EVENT", "event"),
- Param("XGL_PIPE_EVENT", "pipeEvent")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_EVENT", "event"),
+ Param("VK_PIPE_EVENT", "pipeEvent")]),
Proto("void", "CmdResetEvent",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_EVENT", "event"),
- Param("XGL_PIPE_EVENT", "pipeEvent")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_EVENT", "event"),
+ Param("VK_PIPE_EVENT", "pipeEvent")]),
Proto("void", "CmdWaitEvents",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("const XGL_EVENT_WAIT_INFO*", "pWaitInfo")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("const VK_EVENT_WAIT_INFO*", "pWaitInfo")]),
Proto("void", "CmdPipelineBarrier",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("const XGL_PIPELINE_BARRIER*", "pBarrier")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("const VK_PIPELINE_BARRIER*", "pBarrier")]),
Proto("void", "CmdBeginQuery",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_QUERY_POOL", "queryPool"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_QUERY_POOL", "queryPool"),
Param("uint32_t", "slot"),
- Param("XGL_FLAGS", "flags")]),
+ Param("VK_FLAGS", "flags")]),
Proto("void", "CmdEndQuery",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_QUERY_POOL", "queryPool"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_QUERY_POOL", "queryPool"),
Param("uint32_t", "slot")]),
Proto("void", "CmdResetQueryPool",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_QUERY_POOL", "queryPool"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_QUERY_POOL", "queryPool"),
Param("uint32_t", "startQuery"),
Param("uint32_t", "queryCount")]),
Proto("void", "CmdWriteTimestamp",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_TIMESTAMP_TYPE", "timestampType"),
- Param("XGL_BUFFER", "destBuffer"),
- Param("XGL_GPU_SIZE", "destOffset")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_TIMESTAMP_TYPE", "timestampType"),
+ Param("VK_BUFFER", "destBuffer"),
+ Param("VK_GPU_SIZE", "destOffset")]),
Proto("void", "CmdInitAtomicCounters",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_PIPELINE_BIND_POINT", "pipelineBindPoint"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_PIPELINE_BIND_POINT", "pipelineBindPoint"),
Param("uint32_t", "startCounter"),
Param("uint32_t", "counterCount"),
Param("const uint32_t*", "pData")]),
Proto("void", "CmdLoadAtomicCounters",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_PIPELINE_BIND_POINT", "pipelineBindPoint"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_PIPELINE_BIND_POINT", "pipelineBindPoint"),
Param("uint32_t", "startCounter"),
Param("uint32_t", "counterCount"),
- Param("XGL_BUFFER", "srcBuffer"),
- Param("XGL_GPU_SIZE", "srcOffset")]),
+ Param("VK_BUFFER", "srcBuffer"),
+ Param("VK_GPU_SIZE", "srcOffset")]),
Proto("void", "CmdSaveAtomicCounters",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_PIPELINE_BIND_POINT", "pipelineBindPoint"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_PIPELINE_BIND_POINT", "pipelineBindPoint"),
Param("uint32_t", "startCounter"),
Param("uint32_t", "counterCount"),
- Param("XGL_BUFFER", "destBuffer"),
- Param("XGL_GPU_SIZE", "destOffset")]),
+ Param("VK_BUFFER", "destBuffer"),
+ Param("VK_GPU_SIZE", "destOffset")]),
- Proto("XGL_RESULT", "CreateFramebuffer",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_FRAMEBUFFER_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_FRAMEBUFFER*", "pFramebuffer")]),
+ Proto("VK_RESULT", "CreateFramebuffer",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_FRAMEBUFFER_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_FRAMEBUFFER*", "pFramebuffer")]),
- Proto("XGL_RESULT", "CreateRenderPass",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_RENDER_PASS_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_RENDER_PASS*", "pRenderPass")]),
+ Proto("VK_RESULT", "CreateRenderPass",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_RENDER_PASS_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_RENDER_PASS*", "pRenderPass")]),
Proto("void", "CmdBeginRenderPass",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("const XGL_RENDER_PASS_BEGIN*", "pRenderPassBegin")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("const VK_RENDER_PASS_BEGIN*", "pRenderPassBegin")]),
Proto("void", "CmdEndRenderPass",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_RENDER_PASS", "renderPass")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_RENDER_PASS", "renderPass")]),
- Proto("XGL_RESULT", "DbgSetValidationLevel",
- [Param("XGL_DEVICE", "device"),
- Param("XGL_VALIDATION_LEVEL", "validationLevel")]),
+ Proto("VK_RESULT", "DbgSetValidationLevel",
+ [Param("VK_DEVICE", "device"),
+ Param("VK_VALIDATION_LEVEL", "validationLevel")]),
- Proto("XGL_RESULT", "DbgRegisterMsgCallback",
- [Param("XGL_INSTANCE", "instance"),
- Param("XGL_DBG_MSG_CALLBACK_FUNCTION", "pfnMsgCallback"),
+ Proto("VK_RESULT", "DbgRegisterMsgCallback",
+ [Param("VK_INSTANCE", "instance"),
+ Param("VK_DBG_MSG_CALLBACK_FUNCTION", "pfnMsgCallback"),
Param("void*", "pUserData")]),
- Proto("XGL_RESULT", "DbgUnregisterMsgCallback",
- [Param("XGL_INSTANCE", "instance"),
- Param("XGL_DBG_MSG_CALLBACK_FUNCTION", "pfnMsgCallback")]),
+ Proto("VK_RESULT", "DbgUnregisterMsgCallback",
+ [Param("VK_INSTANCE", "instance"),
+ Param("VK_DBG_MSG_CALLBACK_FUNCTION", "pfnMsgCallback")]),
- Proto("XGL_RESULT", "DbgSetMessageFilter",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "DbgSetMessageFilter",
+ [Param("VK_DEVICE", "device"),
Param("int32_t", "msgCode"),
- Param("XGL_DBG_MSG_FILTER", "filter")]),
+ Param("VK_DBG_MSG_FILTER", "filter")]),
- Proto("XGL_RESULT", "DbgSetObjectTag",
- [Param("XGL_BASE_OBJECT", "object"),
+ Proto("VK_RESULT", "DbgSetObjectTag",
+ [Param("VK_BASE_OBJECT", "object"),
Param("size_t", "tagSize"),
Param("const void*", "pTag")]),
- Proto("XGL_RESULT", "DbgSetGlobalOption",
- [Param("XGL_INSTANCE", "instance"),
- Param("XGL_DBG_GLOBAL_OPTION", "dbgOption"),
+ Proto("VK_RESULT", "DbgSetGlobalOption",
+ [Param("VK_INSTANCE", "instance"),
+ Param("VK_DBG_GLOBAL_OPTION", "dbgOption"),
Param("size_t", "dataSize"),
Param("const void*", "pData")]),
- Proto("XGL_RESULT", "DbgSetDeviceOption",
- [Param("XGL_DEVICE", "device"),
- Param("XGL_DBG_DEVICE_OPTION", "dbgOption"),
+ Proto("VK_RESULT", "DbgSetDeviceOption",
+ [Param("VK_DEVICE", "device"),
+ Param("VK_DBG_DEVICE_OPTION", "dbgOption"),
Param("size_t", "dataSize"),
Param("const void*", "pData")]),
Proto("void", "CmdDbgMarkerBegin",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
Param("const char*", "pMarker")]),
Proto("void", "CmdDbgMarkerEnd",
- [Param("XGL_CMD_BUFFER", "cmdBuffer")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer")]),
],
)
wsi_x11 = Extension(
- name="XGL_WSI_X11",
+ name="VK_WSI_X11",
headers=["xglWsiX11Ext.h"],
objects=[],
protos=[
- Proto("XGL_RESULT", "WsiX11AssociateConnection",
- [Param("XGL_PHYSICAL_GPU", "gpu"),
- Param("const XGL_WSI_X11_CONNECTION_INFO*", "pConnectionInfo")]),
+ Proto("VK_RESULT", "WsiX11AssociateConnection",
+ [Param("VK_PHYSICAL_GPU", "gpu"),
+ Param("const VK_WSI_X11_CONNECTION_INFO*", "pConnectionInfo")]),
- Proto("XGL_RESULT", "WsiX11GetMSC",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "WsiX11GetMSC",
+ [Param("VK_DEVICE", "device"),
Param("xcb_window_t", "window"),
Param("xcb_randr_crtc_t", "crtc"),
Param("uint64_t*", "pMsc")]),
- Proto("XGL_RESULT", "WsiX11CreatePresentableImage",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_IMAGE*", "pImage"),
- Param("XGL_GPU_MEMORY*", "pMem")]),
+ Proto("VK_RESULT", "WsiX11CreatePresentableImage",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_IMAGE*", "pImage"),
+ Param("VK_GPU_MEMORY*", "pMem")]),
- Proto("XGL_RESULT", "WsiX11QueuePresent",
- [Param("XGL_QUEUE", "queue"),
- Param("const XGL_WSI_X11_PRESENT_INFO*", "pPresentInfo"),
- Param("XGL_FENCE", "fence")]),
+ Proto("VK_RESULT", "WsiX11QueuePresent",
+ [Param("VK_QUEUE", "queue"),
+ Param("const VK_WSI_X11_PRESENT_INFO*", "pPresentInfo"),
+ Param("VK_FENCE", "fence")]),
],
)
extensions = [core, wsi_x11]
object_root_list = [
- "XGL_INSTANCE",
- "XGL_PHYSICAL_GPU",
- "XGL_BASE_OBJECT"
+ "VK_INSTANCE",
+ "VK_PHYSICAL_GPU",
+ "VK_BASE_OBJECT"
]
object_base_list = [
- "XGL_DEVICE",
- "XGL_QUEUE",
- "XGL_GPU_MEMORY",
- "XGL_OBJECT"
+ "VK_DEVICE",
+ "VK_QUEUE",
+ "VK_GPU_MEMORY",
+ "VK_OBJECT"
]
object_list = [
- "XGL_BUFFER",
- "XGL_BUFFER_VIEW",
- "XGL_IMAGE",
- "XGL_IMAGE_VIEW",
- "XGL_COLOR_ATTACHMENT_VIEW",
- "XGL_DEPTH_STENCIL_VIEW",
- "XGL_SHADER",
- "XGL_PIPELINE",
- "XGL_SAMPLER",
- "XGL_DESCRIPTOR_SET",
- "XGL_DESCRIPTOR_SET_LAYOUT",
- "XGL_DESCRIPTOR_SET_LAYOUT_CHAIN",
- "XGL_DESCRIPTOR_POOL",
- "XGL_DYNAMIC_STATE_OBJECT",
- "XGL_CMD_BUFFER",
- "XGL_FENCE",
- "XGL_SEMAPHORE",
- "XGL_EVENT",
- "XGL_QUERY_POOL",
- "XGL_FRAMEBUFFER",
- "XGL_RENDER_PASS"
+ "VK_BUFFER",
+ "VK_BUFFER_VIEW",
+ "VK_IMAGE",
+ "VK_IMAGE_VIEW",
+ "VK_COLOR_ATTACHMENT_VIEW",
+ "VK_DEPTH_STENCIL_VIEW",
+ "VK_SHADER",
+ "VK_PIPELINE",
+ "VK_SAMPLER",
+ "VK_DESCRIPTOR_SET",
+ "VK_DESCRIPTOR_SET_LAYOUT",
+ "VK_DESCRIPTOR_SET_LAYOUT_CHAIN",
+ "VK_DESCRIPTOR_POOL",
+ "VK_DYNAMIC_STATE_OBJECT",
+ "VK_CMD_BUFFER",
+ "VK_FENCE",
+ "VK_SEMAPHORE",
+ "VK_EVENT",
+ "VK_QUERY_POOL",
+ "VK_FRAMEBUFFER",
+ "VK_RENDER_PASS"
]
object_dynamic_state_list = [
- "XGL_DYNAMIC_VP_STATE_OBJECT",
- "XGL_DYNAMIC_RS_STATE_OBJECT",
- "XGL_DYNAMIC_CB_STATE_OBJECT",
- "XGL_DYNAMIC_DS_STATE_OBJECT"
+ "VK_DYNAMIC_VP_STATE_OBJECT",
+ "VK_DYNAMIC_RS_STATE_OBJECT",
+ "VK_DYNAMIC_CB_STATE_OBJECT",
+ "VK_DYNAMIC_DS_STATE_OBJECT"
]
object_type_list = object_root_list + object_base_list + object_list + object_dynamic_state_list
-object_parent_list = ["XGL_BASE_OBJECT", "XGL_OBJECT", "XGL_DYNAMIC_STATE_OBJECT"]
+object_parent_list = ["VK_BASE_OBJECT", "VK_OBJECT", "VK_DYNAMIC_STATE_OBJECT"]
headers = []
objects = []
with open(filename, "r") as fp:
for line in fp:
line = line.strip()
- if line.startswith("XGL_DEFINE"):
+ if line.startswith("VK_DEFINE"):
begin = line.find("(") + 1
end = line.find(",")
# extract the object type
# parse proto_lines to protos
protos = []
for line in proto_lines:
- first, rest = line.split(" (XGLAPI *xgl")
+ first, rest = line.split(" (VKAPI *vk")
second, third = rest.split("Type)(")
# get the return type, no space before "*"
protos.append(Proto(proto_ret, proto_name, params))
# make them an extension and print
- ext = Extension("XGL_CORE",
- headers=["xgl.h", "xglDbg.h"],
+ ext = Extension("VK_CORE",
+ headers=["vulkan.h", "xglDbg.h"],
objects=object_lines,
protos=protos)
print("core =", str(ext))
print("")
- print("typedef struct _XGL_LAYER_DISPATCH_TABLE")
+ print("typedef struct _VK_LAYER_DISPATCH_TABLE")
print("{")
for proto in ext.protos:
- print(" xgl%sType %s;" % (proto.name, proto.name))
- print("} XGL_LAYER_DISPATCH_TABLE;")
+ print(" vk%sType %s;" % (proto.name, proto.name))
+ print("} VK_LAYER_DISPATCH_TABLE;")
if __name__ == "__main__":
- parse_xgl_h("include/xgl.h")
+ parse_xgl_h("include/vulkan.h")