Draw a Circle Open Gl
The graphics pipeline
By learning OpenGL, you've decided that you want to do all of the hard work yourself. That inevitably means that you'll be thrown in at the deep end, but once you understand the essentials, you'll see that doing things the hard way doesn't have to be so difficult after all. To top it all off, the exercises at the end of this chapter will show you the sheer amount of control you have over the rendering process by doing things the modern way!
The graphics pipeline covers all of the steps that follow each other up on processing the input data to get to the final output image. I'll explain these steps with the help of the following illustration.
It all begins with the vertices, the points from which shapes like triangles will later be constructed. Each of these points is stored with certain attributes and it's up to you to decide what kind of attributes you want to store. Commonly used attributes are 3D position in the world and texture coordinates.
The vertex shader is a small program running on your graphics card that processes every one of these input vertices individually. This is where the perspective transformation takes place, which projects vertices with a 3D world position onto your 2D screen! It also passes important attributes like color and texture coordinates further down the pipeline.
After the input vertices have been transformed, the graphics card will form triangles, lines or points out of them. These shapes are called primitives because they form the basis of more complex shapes. There are some additional drawing modes to choose from, like triangle strips and line strips. These reduce the number of vertices you need to pass if you want to create objects where each next primitive is connected to the last one, like a continuous line consisting of several segments.
The next step, the geometry shader, is completely optional and was only recently introduced. Unlike the vertex shader, the geometry shader can output more data than comes in. It takes the primitives from the shape assembly stage as input and can either pass a primitive through down to the rest of the pipeline, modify it first, completely discard it or even replace it with other primitive(s). Since the communication between the GPU and the rest of the PC is relatively slow, this stage can help you reduce the amount of data that needs to be transferred. With a voxel game for example, you could pass vertices as point vertices, along with an attribute for their world position, color and material, and the actual cubes can be produced in the geometry shader with a point as input!
After the final list of shapes is composed and converted to screen coordinates, the rasterizer turns the visible parts of the shapes into pixel-sized fragments. The vertex attributes coming from the vertex shader or geometry shader are interpolated and passed as input to the fragment shader for each fragment. As you can see in the image, the colors are smoothly interpolated over the fragments that make up the triangle, even though only three points were specified.
The fragment shader processes each individual fragment along with its interpolated attributes and should output the final color. This is usually done by sampling from a texture using the interpolated texture coordinate vertex attributes or simply outputting a color. In more advanced scenarios, there could also be calculations related to lighting and shadowing and special effects in this program. The shader also has the ability to discard certain fragments, which means that a shape will be see-through there.
Finally, the end result is composed from all these shape fragments by blending them together and performing depth and stencil testing. All you need to know about these last two right now is that they allow you to apply additional rules to throw away certain fragments and let others pass. For example, if one triangle is obscured by another triangle, the fragment of the closer triangle should end up on the screen.
Now that you know how your graphics card turns an array of vertices into an image on the screen, let's get to work!
Vertex input
The first thing you have to decide on is what data the graphics card is going to need to draw your scene correctly. As mentioned above, this data comes in the form of vertex attributes. You're free to come up with any kind of attribute you want, but it all inevitably begins with the world position. Whether you're doing 2D graphics or 3D graphics, this is the attribute that will determine where the objects and shapes end up on your screen in the end.
Device coordinates
When your vertices have been processed by the pipeline outlined above, their coordinates will have been transformed into device coordinates. Device X and Y coordinates are mapped to the screen between -1 and 1.

Just like a graph, the center has coordinates (0,0) and the y axis is positive above the center. This seems unnatural because graphics applications usually have (0,0) in the top-left corner and (width,height) in the bottom-right corner, but it's an excellent way to simplify 3D calculations and to stay resolution independent.
The triangle above consists of three vertices positioned at (0,0.5), (0.5,-0.5) and (-0.5,-0.5) in clockwise order. It is clear that the only variation between the vertices here is the position, so that's the only attribute we need. Since we're passing the device coordinates directly, an X and Y coordinate suffices for the position.
OpenGL expects you to send all of your vertices in a single array, which may be confusing at first. To understand the format of this array, let's see what it would look like for our triangle.
float vertices[] = {
     0.0f,  0.5f, // Vertex 1 (X, Y)
     0.5f, -0.5f, // Vertex 2 (X, Y)
    -0.5f, -0.5f  // Vertex 3 (X, Y)
};

As you can see, this array should simply be a list of all vertices with their attributes packed together. The order in which the attributes appear doesn't matter, as long as it's the same for each vertex. The order of the vertices doesn't have to be sequential (i.e. the order in which shapes are formed), but this requires us to provide extra data in the form of an element buffer. This will be discussed at the end of this chapter as it would only complicate things for now.
The next step is to upload this vertex data to the graphics card. This is important because the memory on your graphics card is much faster and you won't have to send the data again every time your scene needs to be rendered (about 60 times per second).
This is done by creating a Vertex Buffer Object (VBO):
GLuint vbo;
glGenBuffers(1, &vbo); // Generate 1 buffer

The memory is managed by OpenGL, so instead of a pointer you get a positive number as a reference to it. GLuint is simply a cross-platform substitute for unsigned int, just like GLint is one for int. You will need this number to make the VBO active and to destroy it when you're done with it.
To upload the actual data to it you first have to make it the active object by calling glBindBuffer:
glBindBuffer(GL_ARRAY_BUFFER, vbo);

As hinted by the GL_ARRAY_BUFFER enum value there are other types of buffers, but they are not important right now. This statement makes the VBO we just created the active array buffer. Now that it's active we can copy the vertex data to it.
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

Notice that this function doesn't refer to the id of our VBO, but instead to the active array buffer. The second parameter specifies the size in bytes. The final parameter is very important and its value depends on the usage of the vertex data. I'll outline the ones related to drawing here:
- GL_STATIC_DRAW: The vertex data will be uploaded once and drawn many times (e.g. the world).
- GL_DYNAMIC_DRAW: The vertex data will be created once, changed from time to time, but drawn many times more than that.
- GL_STREAM_DRAW: The vertex data will be uploaded once and drawn once.
This usage value will determine in what kind of memory the data is stored on your graphics card for the highest efficiency. For example, VBOs with GL_STREAM_DRAW as type may store their data in memory that allows faster writing in favour of slightly slower drawing.
The vertices with their attributes have been copied to the graphics card now, but they're not quite ready to be used yet. Remember that we can make up any kind of attribute we want and in any order, so now comes the moment where you have to explain to the graphics card how to handle these attributes. This is where you'll see how flexible modern OpenGL really is.
Shaders
As discussed before, there are three shader stages your vertex data will pass through. Each shader stage has a strictly defined purpose and in older versions of OpenGL, you could only slightly tweak what happened and how it happened. With modern OpenGL, it's up to us to instruct the graphics card what to do with the data. This is why it's possible to decide per application what attributes each vertex should have. You'll have to implement both the vertex and fragment shader to get something on the screen; the geometry shader is optional and is discussed later.
Shaders are written in a C-style language called GLSL (OpenGL Shading Language). OpenGL will compile your program from source at runtime and copy it to the graphics card. Each version of OpenGL has its own version of the shader language with availability of a certain feature set and we will be using GLSL 1.50. This version number may seem a bit off when we're using OpenGL 3.2, but that's because shaders were only introduced in OpenGL 2.0 as GLSL 1.10. Starting from OpenGL 3.3, this problem was solved and the GLSL version is the same as the OpenGL version.
Vertex shader
The vertex shader is a program on the graphics card that processes each vertex and its attributes as they appear in the vertex array. Its duty is to output the final vertex position in device coordinates and to output any data the fragment shader requires. That's why the 3D transformation should take place here. The fragment shader depends on attributes like the color and texture coordinates, which will usually be passed from input to output without any calculations.
Remember that our vertex position is already specified as device coordinates and no other attributes exist, so the vertex shader will be fairly bare bones.
#version 150 core

in vec2 position;

void main()
{
    gl_Position = vec4(position, 0.0, 1.0);
}

The #version preprocessor directive is used to indicate that the code that follows is GLSL 1.50 code using OpenGL's core profile. Next, we specify that there is only one attribute, the position. Apart from the regular C types, GLSL has built-in vector and matrix types identified by vec* and mat* identifiers. The type of the values within these constructs is always a float. The number after vec specifies the number of components (x, y, z, w) and the number after mat specifies the number of rows/columns. Since the position attribute consists of only an X and Y coordinate, vec2 is perfect.
You can be quite creative when working with these vertex types. In the example above a shortcut was used to set the first two components of the vec4 to those of the vec2. These two lines are equal:

gl_Position = vec4(position, 0.0, 1.0);
gl_Position = vec4(position.x, position.y, 0.0, 1.0);

When you're working with colors, you can also access the individual components with r, g, b and a instead of x, y, z and w. This makes no difference and can help with clarity.
The final position of the vertex is assigned to the special gl_Position variable, because the position is needed for primitive assembly and many other built-in processes. For these to function correctly, the last value w needs to have a value of 1.0f. Other than that, you're free to do anything you want with the attributes and we'll see how to output those when we add color to the triangle later in this chapter.
Fragment shader
The output from the vertex shader is interpolated over all the pixels on the screen covered by a primitive. These pixels are called fragments and this is what the fragment shader operates on. Just like the vertex shader it has one mandatory output, the final color of a fragment. It's up to you to write the code for computing this color from vertex colors, texture coordinates and any other data coming from the vertex shader.
Our triangle only consists of white pixels, so the fragment shader simply outputs that color every time:
#version 150 core

out vec4 outColor;

void main()
{
    outColor = vec4(1.0, 1.0, 1.0, 1.0);
}

You'll immediately notice that we're not using some built-in variable for outputting the color, say gl_FragColor. This is because a fragment shader can in fact output multiple colors and we'll see how to handle this when actually loading these shaders. The outColor variable uses the type vec4, because each color consists of a red, green, blue and alpha component. Colors in OpenGL are generally represented as floating point numbers between 0.0 and 1.0 instead of the common 0 and 255.
Compiling shaders
Compiling shaders is easy once you have loaded the source code (either from file or as a hard-coded string). You can easily include your shader source in the C++ code through C++11 raw string literals:
const char* vertexSource = R"glsl(
    #version 150 core

    in vec2 position;

    void main()
    {
        gl_Position = vec4(position, 0.0, 1.0);
    }
)glsl";

Just like vertex buffers, creating a shader itself starts with creating a shader object and loading data into it.
GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShader, 1, &vertexSource, NULL);

Unlike VBOs, you can simply pass a reference to shader functions instead of making it active or anything like that. The glShaderSource function can take multiple source strings in an array, but you'll usually have your source code in one char array. The last parameter can contain an array of source code string lengths; passing NULL simply makes it stop at the null terminator.
All that's left is compiling the shader into code that can be executed by the graphics card now:
glCompileShader(vertexShader);

Be aware that if the shader fails to compile, e.g. because of a syntax error, glGetError will not report an error! See the block below for info on how to debug shaders.
Checking if a shader compiled successfully
GLint status;
glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &status);

If status is equal to GL_TRUE, then your shader was compiled successfully.

Retrieving the compile log
char buffer[512];
glGetShaderInfoLog(vertexShader, 512, NULL, buffer);

This will store the first 511 bytes + null terminator of the compile log in the specified buffer. The log may also report useful warnings even when compiling was successful, so it's useful to check it out from time to time when you develop your shaders.
The fragment shader is compiled in exactly the same way:
GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentShader, 1, &fragmentSource, NULL);
glCompileShader(fragmentShader);

Again, be sure to check if your shader was compiled successfully, because it will save you from a headache later on.
Combining shaders into a program
Up until now the vertex and fragment shaders have been two separate objects. While they've been programmed to work together, they aren't actually connected yet. This connection is made by creating a program out of these two shaders.
GLuint shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);

Since a fragment shader is allowed to write to multiple framebuffers, you need to explicitly specify which output is written to which framebuffer. This needs to happen before linking the program. However, since this is 0 by default and there's only one output right now, the following line of code is not necessary:
glBindFragDataLocation(shaderProgram, 0, "outColor");

Use glDrawBuffers when rendering to multiple framebuffers, because only the first output will be enabled by default.
After attaching both the fragment and vertex shaders, the connection is made by linking the program. It is allowed to make changes to the shaders after they've been added to a program (or multiple programs!), but the actual result will not change until a program has been linked again. It is also possible to attach multiple shaders for the same stage (e.g. fragment) if they're parts forming the whole shader together. A shader object can be deleted with glDeleteShader, but it will not actually be removed before it has been detached from all programs with glDetachShader.
glLinkProgram(shaderProgram);

To actually start using the shaders in the program, you just have to call:
glUseProgram(shaderProgram);

Just like a vertex buffer, only one program can be active at a time.
Making the link between vertex data and attributes
Although we have our vertex data and shaders now, OpenGL still doesn't know how the attributes are formatted and ordered. You first need to retrieve a reference to the position input in the vertex shader:
GLint posAttrib = glGetAttribLocation(shaderProgram, "position");

The location is a number depending on the order of the input definitions. The first and only input position in this example will always have location 0.
With the reference to the input, you can specify how the data for that input is retrieved from the array:
glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 0, 0);

The first parameter references the input. The second parameter specifies the number of values for that input, which is the same as the number of components of the vec. The third parameter specifies the type of each component and the fourth parameter specifies whether the input values should be normalized between -1.0 and 1.0 (or 0.0 and 1.0 depending on the format) if they aren't floating point numbers.
The last two parameters are arguably the most important here as they specify how the attribute is laid out in the vertex array. The first number specifies the stride, or how many bytes are between each position attribute in the array. The value 0 means that there is no data in between. This is currently the case as the position of each vertex is immediately followed by the position of the next vertex. The final parameter specifies the offset, or how many bytes from the start of the array the attribute occurs. Since there are no other attributes, this is 0 as well.
It is important to know that this function will store not only the stride and the offset, but also the VBO that is currently bound to GL_ARRAY_BUFFER. That means that you don't have to explicitly bind the correct VBO when the actual drawing functions are called. This also implies that you can use a different VBO for each attribute.
Don't worry if you don't fully understand this yet, as we'll see how to change this to add more attributes soon enough.
glEnableVertexAttribArray(posAttrib);

Last, but not least, the vertex attribute array needs to be enabled.
Vertex Array Objects
You can imagine that real graphics programs use many different shaders and vertex layouts to take care of a wide variety of needs and special effects. Changing the active shader program is easy enough with a call to glUseProgram, but it would be quite inconvenient if you had to set up all of the attributes again every time.
Luckily, OpenGL solves that problem with Vertex Array Objects (VAO). VAOs store all of the links between the attributes and your VBOs with raw vertex data.
A VAO is created in the same way as a VBO:
GLuint vao;
glGenVertexArrays(1, &vao);

To start using it, simply bind it:
glBindVertexArray(vao);

As soon as you've bound a certain VAO, every time you call glVertexAttribPointer, that information will be stored in that VAO. This makes switching between different vertex data and vertex formats as easy as binding a different VAO! Just remember that a VAO doesn't store any vertex data by itself, it just references the VBOs you've created and how to retrieve the attribute values from them.
Since only calls after binding a VAO stick to it, make sure that you've created and bound the VAO at the start of your program. Any vertex buffers and element buffers bound before it will be ignored.
Drawing
Now that you've loaded the vertex data, created the shader programs and linked the data to the attributes, you're ready to draw the triangle. The VAO that was used to store the attribute data is already bound, so you don't have to worry about that. All that's left is to simply call glDrawArrays in your main loop:
glDrawArrays(GL_TRIANGLES, 0, 3);

The first parameter specifies the kind of primitive (commonly point, line or triangle), the second parameter specifies how many vertices to skip at the beginning and the last parameter specifies the number of vertices (not primitives!) to process.
When you run your program now, you should see the following:
If you don't see anything, make sure that the shaders have compiled correctly, that the program has linked correctly, that the attribute array has been enabled, that the VAO has been bound before specifying the attributes, that your vertex data is correct and that glGetError returns 0. If you can't find the problem, try comparing your code to this sample.
Uniforms
Right now the white color of the triangle has been hard-coded into the shader code, but what if you wanted to change it after compiling the shader? As it turns out, vertex attributes are not the only way to pass data to shader programs. There is another way to pass data to the shaders called uniforms. These are essentially global variables, having the same value for all vertices and/or fragments. To demonstrate how to use these, let's make it possible to change the color of the triangle from the program itself.
By making the color in the fragment shader a uniform, it will end up looking like this:
#version 150 core

uniform vec3 triangleColor;

out vec4 outColor;

void main()
{
    outColor = vec4(triangleColor, 1.0);
}

The last component of the output color is the transparency, which is not very interesting right now. If you run your program now you'll see that the triangle is black, because the value of triangleColor hasn't been set yet.
Changing the value of a uniform is just like setting vertex attributes, you first have to grab the location:
GLint uniColor = glGetUniformLocation(shaderProgram, "triangleColor");

The values of uniforms are changed with any of the glUniformXY functions, where X is the number of components and Y is the type. Common types are f (float), d (double) and i (integer).
glUniform3f(uniColor, 1.0f, 0.0f, 0.0f);

If you run your program now, you'll see that the triangle is red. To make things a little more exciting, try varying the color with the time by doing something like this in your main loop:
auto t_start = std::chrono::high_resolution_clock::now();

...

auto t_now = std::chrono::high_resolution_clock::now();
float time = std::chrono::duration_cast<std::chrono::duration<float>>(t_now - t_start).count();

glUniform3f(uniColor, (sin(time * 4.0f) + 1.0f) / 2.0f, 0.0f, 0.0f);

Although this example may not be very exciting, it does demonstrate that uniforms are essential for controlling the behaviour of shaders at runtime. Vertex attributes on the other hand are ideal for describing a single vertex.
See the code if you have any trouble getting this to work.
Adding some more colors
Although uniforms have their place, color is something we'd rather like to specify per corner of the triangle! Let's add a color attribute to the vertices to accomplish this.
We'll first have to add the extra attributes to the vertex data. Transparency isn't really relevant, so we'll only add the red, green and blue components:
float vertices[] = {
     0.0f,  0.5f, 1.0f, 0.0f, 0.0f, // Vertex 1: Red
     0.5f, -0.5f, 0.0f, 1.0f, 0.0f, // Vertex 2: Green
    -0.5f, -0.5f, 0.0f, 0.0f, 1.0f  // Vertex 3: Blue
};

Then we have to change the vertex shader to accept it as input and pass it to the fragment shader:
#version 150 core

in vec2 position;
in vec3 color;

out vec3 Color;

void main()
{
    Color = color;
    gl_Position = vec4(position, 0.0, 1.0);
}

And Color is added as input to the fragment shader:
#version 150 core

in vec3 Color;

out vec4 outColor;

void main()
{
    outColor = vec4(Color, 1.0);
}

Make sure that the output of the vertex shader and the input of the fragment shader have the same name, or the shaders will not be linked properly.
Now, we just need to alter the attribute pointer code a bit to accommodate the new X, Y, R, G, B attribute order.
GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 5*sizeof(float), 0);

GLint colAttrib = glGetAttribLocation(shaderProgram, "color");
glEnableVertexAttribArray(colAttrib);
glVertexAttribPointer(colAttrib, 3, GL_FLOAT, GL_FALSE, 5*sizeof(float), (void*)(2*sizeof(float)));

The fifth parameter is set to 5*sizeof(float) now, because each vertex consists of 5 floating point attribute values. The offset of 2*sizeof(float) for the color attribute is there because each vertex starts with 2 floating point values for the position that it has to skip over.
And we're done!
You should now have a reasonable understanding of vertex attributes and shaders. If you ran into problems, ask in the comments or have a look at the altered source code.
Element buffers
Right now, the vertices are specified in the order in which they are drawn. If you wanted to add another triangle, you would have to add three additional vertices to the vertex array. There is a way to control the order, which also enables you to reuse existing vertices. This can save you a lot of memory when working with real 3D models later on, because each point is usually occupied by a corner of three triangles!
An element array is filled with unsigned integers referring to vertices bound to GL_ARRAY_BUFFER. If we just want to draw them in the order they are in now, it'll look like this:
GLuint elements[] = {
    0, 1, 2
};

They are loaded into video memory through a VBO just like the vertex data:
GLuint ebo;
glGenBuffers(1, &ebo);

...

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(elements), elements, GL_STATIC_DRAW);

The only thing that differs is the target, which is GL_ELEMENT_ARRAY_BUFFER this time.
To actually make use of this buffer, you'll have to change the draw command:
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);

The first parameter is the same as with glDrawArrays, but the other ones all refer to the element buffer. The second parameter specifies the number of indices to draw, the third parameter specifies the type of the element data and the last parameter specifies the offset. The only real difference is that you're talking about indices instead of vertices now.
To see how an element buffer can be beneficial, let's try drawing a rectangle using two triangles. We'll start by doing it without an element buffer.
float vertices[] = {
    -0.5f,  0.5f, 1.0f, 0.0f, 0.0f, // Top-left
     0.5f,  0.5f, 0.0f, 1.0f, 0.0f, // Top-right
     0.5f, -0.5f, 0.0f, 0.0f, 1.0f, // Bottom-right

     0.5f, -0.5f, 0.0f, 0.0f, 1.0f, // Bottom-right
    -0.5f, -0.5f, 1.0f, 1.0f, 1.0f, // Bottom-left
    -0.5f,  0.5f, 1.0f, 0.0f, 0.0f  // Top-left
};

By calling glDrawArrays instead of glDrawElements like before, the element buffer will simply be ignored:
glDrawArrays(GL_TRIANGLES, 0, 6);

The rectangle is rendered as it should, but the repetition of vertex data is a waste of memory. Using an element buffer allows you to reuse data:
float vertices[] = {
    -0.5f,  0.5f, 1.0f, 0.0f, 0.0f, // Top-left
     0.5f,  0.5f, 0.0f, 1.0f, 0.0f, // Top-right
     0.5f, -0.5f, 0.0f, 0.0f, 1.0f, // Bottom-right
    -0.5f, -0.5f, 1.0f, 1.0f, 1.0f  // Bottom-left
};

...

GLuint elements[] = {
    0, 1, 2,
    2, 3, 0
};

...

glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

The element buffer still specifies 6 vertices to form 2 triangles like before, but now we're able to reuse vertices! This may not seem like much of a big deal at this point, but when your graphics application loads many models into the relatively small graphics memory, element buffers will be an important area of optimization.
If you run into trouble, have a look at the full source code.
This chapter has covered all of the core principles of drawing things with OpenGL and it's absolutely essential that you have a good understanding of them before continuing. Therefore I advise you to do the exercises below before diving into textures.
Exercises
- Alter the vertex shader so that the triangle is upside down. (Solution)
- Invert the colors of the triangle by altering the fragment shader. (Solution)
- Change the program so that each vertex has only one color value, determining the shade of gray. (Solution)
Source: https://open.gl/drawing