Fix (likely) typo in glGetTextureSubImage height calculation
author    Ian Romanick <ian.d.romanick@intel.com>
Thu, 14 Dec 2017 23:56:59 +0000 (15:56 -0800)
committer Alexander Galazin <Alexander.Galazin@arm.com>
Fri, 15 Dec 2017 20:03:16 +0000 (15:03 -0500)
The old code used "test_textures[i].id == GL_TEXTURE_1D", but
test_textures::id contains the texture ID, not the texture target.  As
a result, the GL_TEXTURE_1D test always got the error that it expected,
but it got it for the wrong reason.

Components: OpenGL

VK-GL-CTS issue: 912

Affects:
KHR-GL46.get_texture_sub_image.errors_test

Change-Id: I4d3de1ccbd3b0cc554af23ceae16d590c82e81d4

external/openglcts/modules/gl/gl4cGetTextureSubImageTests.cpp

index 47a9fc6..57c3487 100644
@@ -520,7 +520,7 @@ bool gl4cts::GetTextureSubImage::Errors::testTwoDimmensionalTextureErrors()
        for (glw::GLuint i = 0; i < test_textures_size; ++i)
        {
                m_gl_GetTextureSubImage(test_textures[i].id, 0, 0, 0, 1, s_texture_data_width,
-                                                               (test_textures[i].id == GL_TEXTURE_1D) ? 1 : s_texture_data_height, 2, GL_RGBA,
+                                                               (test_textures[i].id == m_texture_1D) ? 1 : s_texture_data_height, 2, GL_RGBA,
                                                                GL_UNSIGNED_BYTE, s_destination_buffer_size, m_destination_buffer);
 
                glw::GLint error_value   = gl.getError();