Computer Science | Graphics » Prof Emmanuel Agu - Computer Graphics

Basic information

Year, page count: 2017, 844 pages
Language: English
Uploaded: August 16, 2017
Size: 17 MB
Institution: Worcester Polytechnic Institute

Content extract

Computer Graphics (CS 4731)
Lecture 1: Introduction to Computer Graphics
Prof Emmanuel Agu, Computer Science Dept., Worcester Polytechnic Institute (WPI)

What is Computer Graphics (CG)?
• Computer graphics: the algorithms, mathematics, and data structures a computer uses to generate pretty pictures
• Techniques (e.g., drawing a line or a polygon) evolved over the years
• Built into programmable libraries
• (Figure: a computer-generated image — not a photograph!)

Photorealistic vs. Real-Time Graphics
• Photorealistic (not this class): e.g., ray tracing; slow — a frame may take days to render
• Real-time graphics (this class): milliseconds to render (30 FPS), but lower image quality

Uses of Computer Graphics
• Entertainment: games and movies (courtesy: Super Mario Galaxy 2; Spiderman)
• Image processing: alter images, remove noise, superimpose images (e.g., original image vs. Sobel filter)
• Monitoring large systems or plants; simulators (courtesy: Dataviews.de; Evans and Sutherland)
• Computer-aided design; displaying math functions, e.g., Matlab (courtesy: cadalog.com)
• Scientific analysis and visualization: molecular biology, weather, Matlab, the Mandelbrot set (courtesy: Human Brain Project, Denmark)

2D vs. 3D
• 2-Dimensional (2D): flat objects, no notion of distance from the viewer; only (x, y) color values on screen
• 3-Dimensional (3D): objects have distances from the viewer; (x, y, z) values on screen
• This class covers both 2D and 3D, plus interaction: clicking, dragging

About This Course
• Computer graphics has many aspects:
  • Computer scientists create/program graphics tools (e.g., Maya, Photoshop)
  • Artists use CG tools/packages to create pretty pictures
• Most hobbyists follow the artist path — not much math!
• This course: computer graphics for computer scientists! It teaches concepts and uses OpenGL as a concrete example
• The course is NOT just about programming OpenGL, NOT a comprehensive course in OpenGL (only parts of OpenGL are covered), and NOT about using packages like Maya or Photoshop
• The class is concerned with: how to build/program graphics tools, and the underlying mathematics, data structures, and algorithms
• This course is a lot of work. It requires: lots of coding in C/C++, shader programming, and lots of math (linear algebra, matrices)
• We shall combine the programmer's view (programming the OpenGL API) with what happens under the hood (OpenGL internals: graphics algorithms, math, implementation)

Course Text
• Interactive Computer Graphics: A Top-Down Approach with Shader-Based OpenGL by Angel and Shreiner (6th edition), 2012
• Buy the 6th edition — NOT the 7th edition!

Syllabus Summary
• 2 exams (50%), 4 projects (50%)
• Projects: develop OpenGL/GLSL code on any platform, but it must port to the Zoolab machines; you may discuss projects but must turn in individual work
• Class website: http://web.cs.wpi.edu/~emmanuel/courses/cs4731/A14/
• Cheating: immediate 'F' in the course
• Advice: come to class, read the text, understand concepts before coding

Elements of 2D Graphics
• Polylines, text, filled regions, raster images (pictures)

Polylines
• Polyline: a connected sequence of straight lines
• Straight lines connect vertices (corners)

Polyline Attributes
• Color, thickness, stippling of the edges (dash pattern)

Text
• Devices have a text mode and a graphics mode
• Graphics mode: text is drawn
• Text mode: text is not drawn; a character generator is used
• Text attributes: font, color, size, spacing, and orientation (e.g., big, little, shadowed, rotated, outlined, small-caps text)

Filled Regions
• Filled region: a shape filled with some color or pattern
• Example: polygons

Raster Images
• A raster image (picture) consists of a 2D matrix of small cells (pixels, for "picture elements") in different colors or gray levels
• (Figure: the middle image is magnified to show the individual pixels as squares)

Computer Graphics Tools
• Hardware tools: output devices (video monitors, printers), input devices (mouse/trackball, pen/drawing tablet, keyboard), graphics cards/accelerators (GPUs)
• Software tools (low level): operating system, editor, compiler, debugger, graphics library (OpenGL)

Graphics Processing Unit (GPU)
• OpenGL implemented in hardware => FAST!
• Programmable, via shaders
• Located either on the PC motherboard (Intel) or on a separate graphics card (Nvidia or ATI), e.g., a PCI Express card

Computer Graphics Libraries
• Functions to draw a line, circle, image, etc.
• Previously device-dependent: a different OS meant a different graphics library — tedious, error-prone, and difficult to port (e.g., moving a program from Windows to Linux)
• Now device-independent libraries: APIs such as OpenGL and DirectX; a working OpenGL program needs minimal changes to move from Windows to Linux, etc.

OpenGL Basics
• OpenGL's function is rendering (drawing)
• Rendering: converting geometric/mathematical object descriptions into images
• OpenGL can render: 2D and 3D geometric primitives (lines, dots, etc.) and bitmap images (.bmp, .jpg, etc.)

GL Utility Toolkit (GLUT)
• OpenGL does NOT manage the drawing window
• OpenGL is window-system independent: concerned only with drawing (2D, 3D, images, etc.); no window management (create, resize, etc.); very portable
• GLUT: minimal window management; interfaces with different windowing systems; easy porting between windowing systems; fast prototyping
• GLUT has no bells and whistles: no sliders, no dialog boxes, no elaborate menus, etc.
• To add bells and whistles, use your system's API or GLUI: the X Window System; Apple: AGL; Microsoft: WGL; etc.

OpenGL Basics
• Low-level graphics rendering API

• Maximal portability: display-device independent (monitor type, etc.), operating-system independent (Unix, Windows, etc.), window-system independent (Windows, X, etc.)
• OpenGL programs behave the same on different devices and operating systems

Simplified OpenGL Pipeline
• Vertices go in; after a sequence of steps (vertex processor, clipper, rasterizer, fragment processor) an image is rendered
• This class: learn the algorithms and the order of these steps
• (Figure: the vertex shader converts 3D to 2D; the fragment (pixel) shader colors pixels)

OpenGL Programming Interface
• The programmer's view of OpenGL: an Application Programmer Interface (API)
• You write OpenGL application programs, e.g.:

    glDrawArrays(GL_LINE_LOOP, 0, N);
    glFlush();

Framebuffer
• A dedicated memory location: drawing into the framebuffer shows up on screen
• Located either on the CPU (software) or on the GPU (hardware)
• (Figure: a scan controller converts each pixel value at logical address [x, y] in the frame buffer into a colored spot at geometric position (x, y) on the display surface; e.g., a 640 x 480 surface spans (0, 0) to (639, 479))

References
• Angel and Shreiner, Interactive Computer Graphics (6th edition), Chapter 1
• Hill and Kelley, Computer Graphics using OpenGL (3rd edition), Chapter 1

Computer Graphics (CS 4731)
Lecture 2: Introduction to OpenGL/GLUT (Part 1)
Prof Emmanuel Agu, Computer Science Dept., Worcester Polytechnic Institute (WPI)

Recall: OpenGL/GLUT Basics
• OpenGL's function: rendering (2D or 3D drawings, or images)
• OpenGL does not manage the drawing window
• GLUT: minimal window management

OpenGL/GLUT Installation
• OpenGL: a specific version (e.g., 4.3) is already on your graphics card; just check your graphics card and its OpenGL version
• GLUT: software that needs to be installed (already installed on the Zoolab machines)

glInfo: Finding Out About Your Graphics Card
• glInfo: a software tool that reports the OpenGL version and extensions your graphics card supports
• This class: you need a graphics card that supports OpenGL 4.3 or later

OpenGL Extension Wrangler Library (GLEW)
• OpenGL extensions allow individual card manufacturers to implement new features
• Example: if a card manufacturer implements cool new features after OpenGL 4.5 is released, it makes them available as extensions to OpenGL 4.5
• GLEW: easy access to the OpenGL extensions available on a particular graphics card
• We install GLEW as well, for access to the extensions on the Zoolab cards

Windows Installation of GLUT, GLEW
• Install Visual Studio (e.g., 2010)
• Check your graphics card
• Download 32-bit freeglut (a GLUT implementation): http://freeglut.sourceforge.net/
• Download 32-bit GLEW: http://glew.sourceforge.net/
• Install GLUT and GLEW: unzip to get .lib, .h, and .dll files; e.g., downloading freeglut 2.8.1 gives the files freeglut.dll, glut.h, freeglut.lib
• Install the files:
  • Put the .dll files (for GLUT and GLEW) in C:\windows\system
  • Put the .h files in the Visual Studio include directory
  • Put the .lib files in the Visual Studio lib directory
• Note: if you have multiple versions of Visual Studio, use the include and lib directories of the highest version (e.g., with both Visual Studio 2008 and 2010 installed, use the Visual Studio 2010 directories)

OpenGL Program?
• Usually has 3 files:
  • A main .cpp file containing your main function: does initialization, generates/loads the geometry to be drawn
  • 2 shader files:
    • Vertex shader: functions to manipulate (e.g., move) vertices
    • Fragment shader: functions to manipulate pixels/fragments (e.g., change color)

Getting Started: Writing the .cpp in Visual Studio
1. Create an empty project
2. Create a blank console application (a C program)
3. Include glew.h and glut.h at the top of your program:

    #include <glew.h>
    #include <GL/glut.h>

• Note: GL/ is a sub-directory of the compiler's include/ directory
• The OpenGL drawing functions are in gl.h
• glut.h contains the GLUT functions and also includes gl.h

Getting Started: More #includes
• Most OpenGL applications use the standard C library (e.g., printf), so:

    #include <glew.h>
    #include <GL/glut.h>
    #include <stdlib.h>
    #include <stdio.h>

OpenGL/GLUT Program Structure
• Open a window (GLUT): configure the display mode and the window position/size
• Register input callback functions (GLUT): render, resize, input (keyboard, mouse, etc.)
• My initialization: set the background (clear) color, generate the points to be drawn, initialize shader stuff
• Initialize GLEW
• glutMainLoop(): waits here infinitely until an event occurs

GLUT: Opening a Window
• GLUT is used to create and open a window:
  • glutInit(&argc, argv); — initializes GLUT
  • glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB); — sets the display mode (e.g., a single framebuffer with RGB colors)
  • glutInitWindowSize(640, 480); — sets the window size (width x height) in pixels
  • glutInitWindowPosition(100, 150); — sets the location of the upper-left corner of the window
  • glutCreateWindow("my first attempt"); — opens a window with the title "my first attempt"
• Then also initialize GLEW: glewInit();

OpenGL Skeleton

    void main(int argc, char* argv[]) {
        // First initialize toolkit, set display mode and create window
        glutInit(&argc, argv);                        // initialize toolkit
        glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
        glutInitWindowSize(640, 480);
        glutInitWindowPosition(100, 150);
        glutCreateWindow("my first attempt");
        glewInit();
        // then register callback functions,
        // do my initialization
        // ... wait in glutMainLoop for events
    }

(The window appears with its upper-left corner at (100, 150), sized 640 x 480.)

Sequential vs. Event-Driven
• OpenGL programs are event-driven
• Sequential program: start at main(), perform actions 1, 2, 3, ... N, end
• Event-driven program: start at main(), initialize, then wait in an infinite loop until a defined event occurs; when an event occurs, take the defined actions
• What is the world's most famous event-driven program?

OpenGL: Event-Driven
• The program only responds to events; it does nothing until an event occurs
• Example events: mouse clicks, keyboard strokes, window resize
• The programmer defines: which events the program should respond to, and the actions to be taken when each event occurs
• The system (e.g., Windows) receives the events and maintains the event queue, then the programmer-defined actions are taken (e.g., on a left mouse click or the keyboard 'h' key)
• How is this done in OpenGL? The programmer registers callback functions (event handlers); a callback function is called when its event occurs
• Example: declare a function myMouse to be called on a mouse click, and register it with glutMouseFunc(myMouse); when the OS receives a mouse click, it calls the callback function myMouse

GLUT Callback Functions
• Register callbacks for all events your program will react to
• No registered callback = no action; e.g., with no registered keyboard callback function, hitting keyboard keys generates NO RESPONSE!
• GLUT callback functions in the skeleton:
  • glutDisplayFunc(myDisplay): the image to be drawn initially
  • glutReshapeFunc(myReshape): called when the window is reshaped
  • glutMouseFunc(myMouse): called when a mouse button is pressed
  • glutKeyboardFunc(myKeyboard): called on keyboard input
  • glutMainLoop(): draws the initial picture (by calling the myDisplay function once), then enters an infinite loop until an event occurs

OpenGL Skeleton

    void main(int argc, char* argv[]) {
        // First initialize toolkit, set display mode and create window
        glutInit(&argc, argv);                        // initialize toolkit
        glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
        glutInitWindowSize(640, 480);
        glutInitWindowPosition(100, 150);
        glutCreateWindow("my first attempt");
        glewInit();
        // now register callback functions
        glutDisplayFunc(myDisplay);   // next: how to draw in myDisplay
        glutReshapeFunc(myReshape);
        glutMouseFunc(myMouse);
        glutKeyboardFunc(myKeyboard);
        myInit();
        glutMainLoop();
    }

Example: Draw in Function myDisplay
• Task: draw a red triangle on a white background
• Rendering steps (retained-mode graphics):
  1. Generate the triangle corners (3 vertices)
  2. Store the 3 vertices in an array
  3. Create a GPU buffer for the vertices
  4. Move the array of 3 vertices from the CPU to the GPU buffer
  5. Draw the 3 points from the array on the GPU using glDrawArrays
• Simplified execution model: steps 1-2 run in the application program on the CPU, steps 3-4 move the data to the GPU, and step 5 renders the vertices on the GPU

1. Generate the Triangle Corners; 2. Store the 3 Vertices in an Array

    point2 points[3];

    // generate 3 triangle vertices + store in array
    void generateGeometry(void) {
        points[0] = point2(-0.5, -0.5);
        points[1] = point2( 0.0,  0.5);
        points[2] = point2( 0.5, -0.5);
    }

Declare Some Types for Points and Vectors
• It is useful to declare types: point2 for (x, y) locations, vec3 for (x, y, z) vector coordinates
• Put the declarations in a header file vec.h and #include "vec.h"
• E.g., vec3 vector1; declares the (x, y, z) coordinates of a vector
• Can also use typedefs, e.g., typedef vec2 point2; for the (x, y) coordinates of a point
• Note: you will be given the file Angel.h, which includes vec.h

OpenGL Skeleton: Where Are We?

    void main(int argc, char* argv[]) {
        glutInit(&argc, argv);                        // initialize toolkit
        glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
        glutInitWindowSize(640, 480);
        glutInitWindowPosition(100, 150);
        glutCreateWindow("my first attempt");
        glewInit();
        // now register callback functions
        glutDisplayFunc(myDisplay);
        glutReshapeFunc(myReshape);
        glutMouseFunc(myMouse);
        glutKeyboardFunc(myKeyboard);
        generateGeometry();
        glutMainLoop();
    }

3. Create a GPU Buffer for the Vertices
• Rendering from GPU memory is significantly faster, so move the data there
• Fast GPU (off-screen) memory for data is called a Vertex Buffer Object (VBO)
• An array of VBOs, called a Vertex Array Object (VAO), is usually created
• Example use: vertex positions in VBO 1, color info in VBO 2, etc.
• So, first create the vertex array object (VAO):

    GLuint vao;
    glGenVertexArrays(1, &vao);   // create VAO
    glBindVertexArray(vao);       // make VAO active

• Next, create a buffer object in two steps:
  1. Create a VBO and give it a name (a unique ID number); the 1 is the number of buffer objects to return:

    GLuint buffer;
    glGenBuffers(1, &buffer);     // create one buffer object

  2. Make the created VBO the currently active one (GL_ARRAY_BUFFER means the data is an array of values):

    glBindBuffer(GL_ARRAY_BUFFER, buffer);

4. Move the Points to GPU Memory
• Move the points generated earlier (the data to be transferred to GPU memory) into the VBO:

    glBufferData(GL_ARRAY_BUFFER, sizeof(points), points, GL_STATIC_DRAW);

• GL_STATIC_DRAW: the buffer object's data will not be changed — specified once by the application and used many times to draw
• GL_DYNAMIC_DRAW: the buffer object's data will be changed — specified repeatedly and used many times to draw

Put It Together: 3. Create a GPU Buffer for the Vertices; 4. Move the Points to GPU Memory

    void initGPUBuffers(void) {
        // Create a vertex array object
        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

        // Create and initialize a buffer object
        GLuint buffer;
        glGenBuffers(1, &buffer);
        glBindBuffer(GL_ARRAY_BUFFER, buffer);
        glBufferData(GL_ARRAY_BUFFER, sizeof(points), points, GL_STATIC_DRAW);
    }

OpenGL Skeleton: Where Are We?
• main() now also calls initGPUBuffers() after generateGeometry(), before glutMainLoop()

5. Draw the Points (from the VBO)
• glDrawArrays(GL_POINTS, 0, N); renders the buffered data as points: starting index 0, N points to be rendered
• Display function using glDrawArrays:

    void myDisplay(void) {
        glClear(GL_COLOR_BUFFER_BIT);        // clear screen
        glDrawArrays(GL_LINE_LOOP, 0, 3);    // draw the points
        glFlush();                           // force rendering to show
    }

References
• Angel and Shreiner, Interactive Computer Graphics, 6th edition, Chapter 2
• Hill and Kelley, Computer Graphics using OpenGL, 3rd edition, Chapter 2

Computer Graphics (CS 4731)
Lecture 3: Introduction to OpenGL/GLUT (Part 2)
Prof Emmanuel Agu, Computer Science Dept., Worcester Polytechnic Institute (WPI)

Recall: OpenGL/GLUT Basics
• OpenGL: a specific version (e.g., 4.3) is already on your graphics card — just check your graphics card and its OpenGL version
• GLUT: software that needs to be installed (already installed on the Zoolab machines)

Recall: OpenGL Skeleton
• main() initializes the toolkit (glutInit, glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)), sets the window size (640 x 480) and position (100, 150), creates the window "my first attempt", and calls glewInit(); it then registers the callback functions, does my initialization, and waits in glutMainLoop for events

Recall: Drawing 3 Dots
• Rendering steps: (1) generate the triangle corners (3 vertices), (2) store the 3 vertices in an array, (3) create a GPU buffer for the vertices, (4) move the array of 3 vertices from the CPU to the GPU buffer, (5) draw the 3 points from the array on the GPU using glDrawArrays
• generateGeometry() stores (-0.5, -0.5), (0.0, 0.5), and (0.5, -0.5) in points[]; initGPUBuffers() creates the VAO and VBO and moves the data with glBufferData

Recall: OpenGL Program?
• An OpenGL program has 3 files: the main .cpp file generates the picture (e.g., 3 dots), which must pass through 2 shader files:
  • Vertex shader: functions to manipulate vertices
  • Fragment shader: functions to manipulate pixels/fragments (e.g., change color)
• How do we pass the 3 dots from the main program to the vertex shader?

OpenGL Program: Shader Setup
• OpenGL programs now have 3 parts: the main OpenGL program (.cpp file), a vertex shader (e.g., vshader1.glsl), and a fragment shader (e.g., fshader1.glsl), all in the same Windows directory
• In the main program, we need to link the names of the vertex and fragment shaders
• InitShader() is a homegrown shader-initialization function that connects the main program to the shader files (more on this later!):

    GLuint program = InitShader("vshader1.glsl", "fshader1.glsl");
    glUseProgram(program);

Vertex Attributes
• We want to make the 3 dots (vertices) accessible as the variable vPosition in the vertex shader
• First declare vPosition in the vertex shader: in vec4 vPosition;
• The compiler puts all variables declared in a shader into a table; we need to find the location of vPosition in that table of variables:

    GLuint loc = glGetAttribLocation(program, "vPosition");

• Then enable and specify the vertex array attribute at the location of vPosition:

    glEnableVertexAttribArray(loc);
    glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));

glVertexAttribPointer
• The data is now in the VBO on the GPU, but we need to specify its format (using glVertexAttribPointer)
• Vertices are packed as an array of values; e.g., the 3 dots are stored in the VBO as x y x y x y: (-0.5, -0.5) for dot 1, (0.0, 0.5) for dot 2, (0.5, -0.5) for dot 3
• Arguments: loc is the location of vPosition in the table of variables; 2 means two (x, y) floats per vertex; GL_FALSE means the data is not normalized to the 0-1 range; 0 is the padding between consecutive vertices; BUFFER_OFFSET(0) means the data starts at offset 0 from the start of the array

Put It Together: Shader Setup

    void shaderSetup(void) {
        // Load shaders and use the resulting shader program
        program = InitShader("vshader1.glsl", "fshader1.glsl");
        glUseProgram(program);

        // Initialize vertex position attribute from vertex shader
        GLuint loc = glGetAttribLocation(program, "vPosition");
        glEnableVertexAttribArray(loc);
        glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));

        // sets white as color used to clear screen
        glClearColor(1.0, 1.0, 1.0, 1.0);
    }

OpenGL Skeleton: Where Are We?
• main() now calls generateGeometry(), initGPUBuffers(), and shaderSetup() before glutMainLoop()

Vertex Shader
• We write a simple "pass-through" shader (it does nothing): it simply sets the output vertex position = the input position
• gl_Position is a built-in (already declared) variable

    in vec4 vPosition;   // input vertex position

    void main() {
        gl_Position = vPosition;   // output position = input position
    }

Execution Model (Vertex Shader)
1. The vertex data is moved to the GPU (glBufferData)
2. glDrawArrays is called
3. The vertex shader is invoked on each vertex on the GPU; the (non-programmable) graphics hardware then figures out which pixels on screen are colored to draw the dots

Fragment Shader
• We write a simple fragment shader that sets the color to red
• gl_FragColor is a built-in (already declared) variable

    void main() {
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);   // set each drawn fragment's color to red
    }

Execution Model (Fragment Shader)
1. The (non-programmable) graphics hardware figures out the fragments corresponding to the rendered vertices
2. The fragment shader is invoked on each fragment on the GPU
3. The rendered fragment colors go to the frame buffer

Recall: Draw the Points (from the VBO)
• glDrawArrays(GL_POINTS, 0, N) renders the buffered data as points (starting index 0, N points to be rendered)

    void myDisplay(void) {
        glClear(GL_COLOR_BUFFER_BIT);        // clear screen
        glDrawArrays(GL_LINE_LOOP, 0, 3);    // draw the points
        glFlush();                           // force rendering to show
    }

glDrawArrays() Parameters
• Other possible arguments to glDrawArrays instead of GL_LINE_LOOP:
  • GL_POINTS: draws dots
  • GL_LINES: connects vertex pairs to draw lines
  • GL_LINE_STRIP: polylines
  • GL_LINE_LOOP: a closed loop of polylines (like GL_LINE_STRIP but closed)
  • GL_POLYGON: a convex filled polygon
• Triangles (connect 3 vertices): GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN
• Quads (connect 4 vertices): GL_QUADS, GL_QUAD_STRIP

Triangulation
• Generally, OpenGL breaks polygons down into triangles, which are then rendered (example: a polygon a-b-c-d drawn with glDrawArrays(GL_POLYGON, ...) is split into triangles)

Previously: Generated 3 Points to Be Drawn
• We stored the points (-0.5, -0.5), (0.0, 0.5), (0.5, -0.5) in the array points[], moved them to the GPU, and drew them using glDrawArrays
• Once the drawing steps are set up, we can generate more complex sequences of points algorithmically — the drawing steps don't change
• Next: an example algorithm that generates a more complex point sequence

of points algorithmically, drawing steps don’t change Next: example of more algorithm to generate more complex point sequences Source: http://www.doksinet Sierpinski Gasket Program   Any sequence of points put into array points[ ] will be drawn Can generate interesting sequence of points   Put in array points[ ], draw!! Sierpinski Gasket: Popular fractal Source: http://www.doksinet Sierpinski Gasket Start with initial triangle with corners (x1, y1, 0), (x2, y2, 0) and (x3, y3, 0) 1. 2. 3. 4. 5. 6. Pick initial point p = (x, y, 0) at random inside a triangle Select on of 3 vertices at random Find q, halfway between p and randomly selected vertex Draw dot at q Replace p with q Return to step 2 Source: http://www.doksinet Actual Sierpinski Code #include “vec.h” // include point types and operations #include <stdlib.h> // includes random number generator void Sierpinksi( ) { const int NumPoints = 5000; vec2 points[NumPoints]; // Specifiy the vertices

for a triangle vec2 vertices[3] = { vec2( -1.0, -10 ), vec2( 00, 10 ), vec2( 10, -10 ) }; Source: http://www.doksinet Actual Sierpinski Code // An arbitrary initial point inside the triangle points[0] = point2(0.25, 050); // compute and store N-1 new points for ( int i = 1; i < NumPoints; ++i ) { int j = rand() % 3; // pick a vertex at random // Compute the point halfway between the selected vertex // and the previous point points[i] = ( points[i - 1] + vertices[j] ) / 2.0; } Source: http://www.doksinet References   Angel and Shreiner, Interactive Computer Graphics, 6th edition, Chapter 2 Hill and Kelley, Computer Graphics using OpenGL, 3rd edition, Chapter 2 Source: http://www.doksinet Computer Graphics (4731) Lecture 4: 2D Graphics Systems (Drawing Polylines, tiling, & Aspect Ratio) Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Source: http://www.doksinet Screen Coordinate System •Screen: 2D coordinate system (WxH) •2D

regular Cartesian grid
  Origin (0,0): lower left corner (OpenGL convention)
  Horizontal axis – x; vertical axis – y
  Pixel positions: grid intersections, e.g. (0,0), (2,2)

Screen Coordinate System
  (0,0) is the lower left corner of the OpenGL window,
  NOT the lower left corner of the entire desktop

Defining a Viewport
  Can draw to any rectangle (sub-area of screen)
  Viewport: area of screen we want to draw to
  To define the viewport:
    glViewport(left, bottom, width, height)
  or
    glViewport(V.L, V.B, V.R - V.L, V.T - V.B)
  e.g. with V.L = 180, V.R = 410, V.B = 260, V.T = 480:
    glViewport(180, 260, (410 - 180), (480 - 260))

Recall: OpenGL Skeleton

void main(int argc, char** argv){
    // First initialize toolkit, set display mode and create window
    glutInit(&argc, argv);        // initialize toolkit
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutInitWindowPosition(100, 150);
    glutCreateWindow("my first attempt");
    glewInit( );
    // now register callback functions
    glutDisplayFunc(myDisplay);
    glutReshapeFunc(myReshape);
    glutMouseFunc(myMouse);
    glutKeyboardFunc(myKeyboard);
    myInit( );
    glutMainLoop( );
}

void mydisplay(void){
    glClear(GL_COLOR_BUFFER_BIT);
    glDrawArrays(GL_LINE_LOOP, 0, 3);
    glFlush( );
}

Note: the default viewport is the entire created window

Example: Changing Viewport
  How to change the viewport to:
    Bottom left corner at (100, 80)
    Width changes to 700, height changes to 300??

void main(int argc, char** argv){
    // same skeleton as above: initialize toolkit, set display mode,
    // create window, register callback functions
}

void mydisplay(void){
    glClear(GL_COLOR_BUFFER_BIT);
    glViewport(100, 80, 700, 300);
    glDrawArrays(GL_LINE_LOOP, 0, 3);
    glFlush( );
}

Note: set the desired viewport, then draw

Tiling: Changing Viewport in a Loop
  Problem: want to tile a triangle file on screen
  Solution: change viewport in a loop, draw tiles
  (One world triangle -> multiple tiled viewports)

Tiling Triangle Code Snippet
  Set viewport, draw into tile in a loop
  Code snippet:

float w, h;
w = width / 6;
h = height / 6;
for (int k = 0; k < 6; k++) {
    for (int m = 0; m < 6; m++) {
        glViewport(k * w, m * h, w, h);
        glDrawArrays(GL_LINE_LOOP, 0, NumPoints);
    }
}

Example: Tiling, Changing Viewport

void main(int argc, char** argv){
    // same skeleton as above: initialize toolkit, set display mode,
    // create window, register callback functions
}

void mydisplay(void){
    glClear(GL_COLOR_BUFFER_BIT);
    float w, h;
    w = width / 6;
    h = height / 6;
    for (int k = 0; k < 6; k++) {
        for (int m = 0; m < 6; m++) {
            glViewport(k * w, m * h, w, h);
            glDrawArrays(GL_LINE_LOOP, 0, NumPoints);
        }
    }
    glFlush( );
}

World Coordinate System
  Problems with drawing in screen coordinates:
    (x,y) dimensions in pixels: one fixed mapping, inflexible
    Not application specific, difficult to use
  World coordinates: application-specific
    E.g. same screen area, change the input drawing (x,y) range by changing
    the world window (mapping): in one mapping 100 pixels = 30 miles,
    in another 100 pixels = 0.25 miles

Using World Coordinates
  Would like to:
    Specify source boundaries (extents) of the original drawing in world coordinates (miles, meters, etc)
    Display target region in screen coordinates (pixels)
  Programming steps:
    1. Define world window (original drawing extents)
    2. Define viewport (drawing extents on screen)
    3. Map drawings within window to viewport
  This mapping is called window-to-viewport mapping!

World Coordinate System
  World window: region of the source drawing to be rendered
  The rectangle specified by the world window is drawn to the screen
  Defined by (left, right, bottom, top) or (W.L, W.R, W.B, W.T)

Defining World Window

mat4 ortho = Ortho2D(left, right, bottom, top)
or
mat4 ortho = Ortho2D(W.L, W.R, W.B, W.T)

  Ortho2D generates a 4x4 matrix that scales the input drawing
  Note: Ortho2D is in the header file mat.h
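The mapping that Ortho2D and glViewport set up between these two spaces is the standard linear window-to-viewport map x' = A·x + C, y' = B·y + D. A minimal sketch of that math (the struct and helper names here are illustrative, not part of the OpenGL API):

```cpp
#include <cassert>

// Illustrative window-to-viewport mapping:
// world window (W.L, W.R, W.B, W.T) -> viewport (V.L, V.R, V.B, V.T).
struct Mapping { double A, B, C, D; };

Mapping makeMap(double WL, double WR, double WB, double WT,
                double VL, double VR, double VB, double VT) {
    Mapping m;
    m.A = (VR - VL) / (WR - WL);   // x scale
    m.B = (VT - VB) / (WT - WB);   // y scale
    m.C = VL - m.A * WL;           // x offset
    m.D = VB - m.B * WB;           // y offset
    return m;
}

double mapX(const Mapping& m, double x) { return m.A * x + m.C; }
double mapY(const Mapping& m, double y) { return m.B * y + m.D; }
```

For example (numbers are ours, chosen to be exact in floating point): mapping the world window [0, 512] x [0, 256] to a 64 x 32 viewport sends world point (512, 256) to pixel (64, 32) and (128, 128) to (16, 16).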

Drawing
  After setting the world window (using Ortho2D) and viewport (using glViewport),
  draw as usual with glDrawArrays

Apply ortho( ) Matrix in Vertex Shader
  One more detail: need to pass the ortho matrix to the shader
  Multiply each vertex by the ortho matrix to scale the input drawing
  Need to connect the ortho matrix to the Proj variable in the shader

// Call Ortho2D in main .cpp file
mat4 ortho = Ortho2D( W.L, W.R, W.B, W.T );

// In vertex shader, multiply each vertex by the proj matrix
uniform mat4 Proj;
in vec4 vPosition;

void main( ){
    gl_Position = Proj * vPosition;
}

Apply ortho( ) Matrix in Vertex Shader
  1. Include mat.h from the book website (Ortho2D is declared in mat.h)
       #include "mat.h"
  2. Connect the ortho matrix to the Proj variable in the shader:

// Call Ortho2D in main .cpp file
mat4 ortho = Ortho2D( W.L, W.R, W.B, W.T );
ProjLoc = glGetUniformLocation( program, "Proj" );
glUniformMatrix4fv( ProjLoc, 1, GL_TRUE, ortho );

// In shader, multiply each vertex by the proj matrix
uniform mat4 Proj;
in vec4 vPosition;

void main( ){
    gl_Position = Proj * vPosition;
}

Drawing Polyline Files
  May read in a list of vertices defining a drawing
  Problem: want to draw a single dino.dat on screen
  Note: size of the input drawing may vary (e.g. 640 x 440)

Drawing Polyline Files
  Problem: want to draw a single dino.dat on screen
  Code snippet:

// set world window (left, right, bottom, top)
ortho = Ortho2D(0, 640.0, 0, 440.0);

// now set viewport (left, bottom, width, height)
glViewport(0, 0, 64, 44);

// Draw polyline file
drawPolylineFile(dino.dat);

  Question: what if I wanted to draw only the bottom quadrant of the polyline?

Tiling using W-to-V Mapping
  Problem: want to tile a polyline file on screen
  Solution: W-to-V mapping in a loop, adjacent tiled viewports
  (One world window -> multiple tiled viewports)

Tiling Polyline Files
  Problem: want to tile dino.dat 5x5 across the screen
  Code snippet:

// set world window
ortho = Ortho2D(0, 640.0, 0, 440.0);

for(int i = 0; i < 5; i++) {
    for(int j = 0; j < 5; j++) {
        // ... now set viewport in a loop
        glViewport(i * 64, j * 44, 64, 44);
        drawPolylineFile(dino.dat);
    }
}

Maintaining Aspect Ratios
  Aspect ratio R = width/height
  What if window and viewport have different aspect ratios?
  Two possible cases:
    Case a: viewport too wide
    Case b: viewport too tall

What if Window and Viewport have Different Aspect Ratios?
  R = window aspect ratio, W x H = viewport dimensions
  Case A (R > W/H): map window to a shorter viewport of width W and height W/R

ortho = Ortho2D(left, right, bottom, top);
R = (right - left)/(top - bottom);
if(R > W/H)
    glViewport(0, 0, W, W/R);

What if Window and Viewport have Different Aspect Ratios?
  Case B (R < W/H): map window to a narrower viewport of width H*R and height H

ortho = Ortho2D(left, right, bottom, top);
R = (right - left)/(top - bottom);
if(R < W/H)
    glViewport(0, 0, H*R, H);

reshape( ) Function that Maintains Aspect Ratio

// Ortho2D(left, right, bottom, top) is done previously,
// probably in your draw function.
// This function assumes variables left, right, top and bottom
// are declared and updated globally.
void myReshape(double W, double H){
    R = (right - left)/(top - bottom);
    if(R > W/H)
        glViewport(0, 0, W, W/R);
    else if(R < W/H)
        glViewport(0, 0, H*R, H);
    else
        glViewport(0, 0, W, H);   // equal aspect ratios
}

References
  Angel and Shreiner, Interactive Computer Graphics, 6th edition, Chapter 9
  Hill and Kelley, Computer Graphics using OpenGL, 3rd edition,

Appendix 4

Computer Graphics 4731 Lecture 5: Fractals
Prof Emmanuel Agu
Computer Science Dept. Worcester Polytechnic Institute (WPI)

What are Fractals?
  Mathematical expressions that generate pretty pictures
  Evaluate math functions to create drawings
    As iterations approach infinity, the drawings converge to an image
  Utilize recursion on computers
  Popularized by Benoit Mandelbrot (Yale University)
  Dimension:
    Line is 1-dimensional
    Plane is 2-dimensional
  Defined in terms of self-similarity

Fractals: Self-similarity
  See similar sub-images within the image as we zoom in
  Example: surface roughness or profile is the same as we zoom in
  Types:
    Exactly self-similar
    Statistically self-similar

Examples of Fractals
  Clouds
  Grass
  Fire
  Modeling mountains (terrain)
  Coastline
  Branches of a tree
  Surface of a sponge
  Cracks in the pavement
  Designing antennae (www.fractenna.com)

Example: Mandelbrot Set
Example: Fractal Terrain (Courtesy: Mountain 3D Fractal Terrain software)
Example: Fractal Art (Courtesy: Internet Fractal Art Contest)

Recall: Sierpinski Gasket Program
  Popular fractal

Koch Curves
  Discovered in 1904 by Helge von Koch
  Start with a straight line of length 1
  Recursively:
    Divide the line into 3 equal parts
    Replace the middle section with a triangular bump with sides of length 1/3
    New length = 4/3

Koch Curves
  Can form a Koch snowflake by joining three Koch curves S3, S4, S5, ...

Koch Snowflakes
  Pseudocode, to draw Kn:

If (n equals 0)
    draw straight line
Else {
    Draw Kn-1
    Turn left 60°
    Draw Kn-1
    Turn right 120°
    Draw Kn-1
    Turn left 60°
    Draw Kn-1
}

L-Systems: Lindenmayer Systems
  Express complex curves as a simple set of string-production rules
  Example rules:
    'F': go forward a distance 1 in the current direction
    '+': turn right through angle A degrees
    '-': turn left through angle A degrees
  Using these rules, can express the Koch curve as: "F-F++F-F"
  Angle A = 60 degrees

L-Systems: Koch Curves
  Rule for Koch curves is F -> F-F++F-F
  Means each iteration replaces every 'F' occurrence with "F-F++F-F"
  So, if the initial string (called the atom) is 'F', then
    S1 = "F-F++F-F"
    S2 = "F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F"
    S3 = ...
  Gets very large quickly
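The production rule above is easy to implement as plain string rewriting; a minimal sketch (our own helper, not from the lecture code):

```cpp
#include <cassert>
#include <string>

// Expand an L-system string n times using the Koch rule F -> F-F++F-F.
// '+' and '-' are copied through unchanged.
std::string expandKoch(std::string atom, int n) {
    const std::string rule = "F-F++F-F";
    for (int i = 0; i < n; ++i) {
        std::string next;
        for (char ch : atom)
            next += (ch == 'F') ? rule : std::string(1, ch);
        atom = next;
    }
    return atom;
}
```

expandKoch("F", 1) gives the S1 string above; each generation has 4x as many 'F' segments, so S2 contains 16 'F's in 36 characters.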

Hilbert Curve
  Discovered by the German mathematician David Hilbert in 1891
  Space-filling curve
  Drawn by connecting the centers of the 4 sub-squares that make up a larger square
  Iteration 0: 3 segments connect the 4 centers in an upside-down U

Hilbert Curve: Iteration 1
  Each of the 4 squares is divided into 4 more squares
  The U shape is shrunk to half its original size and copied into the 4 sectors
  In the top left it is simply copied; in the top right it is flipped vertically
  In the bottom left it is rotated 90 degrees clockwise;
  in the bottom right it is rotated 90 degrees counter-clockwise
  The 4 pieces are connected with 3 segments, each of which is the same size
  as the shrunken pieces of the U shape (shown in red on the slide)

Hilbert Curve: Iteration 2
  Each of the 16 squares from iteration 1 is divided into 4 squares
  The shape from iteration 1 is shrunk and copied
  3 connecting segments (shown in red) are added to complete the curve
  Implementation? Recursion is your friend!!

Gingerbread Man
  Each new point q is formed from the previous point p using an equation
  (the equation itself was a figure on the slide and was lost in extraction;
  the classic gingerbread-man map has the form q.x = 1 - p.y + |p.x|,
  q.y = p.x, scaled for the display)
  For a 640 x 480 display area, use M = 40, L = 3
  A good starting point is (115, 121)

Iterated Function Systems (IFS)
  Recursively call a function
  Does the result converge to an image? What image?
  IFSs converge to an image
  Examples:
    The Fern
    The Mandelbrot set

The Fern
  Use either f1, f2, f3 or f4, with probabilities .01, .07, .07, .85,
  to generate the next point from the previous point
  Start at initial point (0,0); draw a dot at (0,0)
  {Ref: Peitgen: Science of Fractals, p.221 ff}
  {Barnsley & Sloan, "A Better Way to Compress Images", BYTE, Jan 1988, p.215}

The Fern
  Each new point

(new.x, new.y) is formed from the prior point (old.x, old.y) using the rule:

new.x := a[index] * old.x + c[index] * old.y + tx[index];
new.y := b[index] * old.x + d[index] * old.y + ty[index];

a[1] :=  0.0;  b[1] :=  0.0;  c[1] :=  0.0;  d[1] := 0.16; tx[1] := 0.0; ty[1] := 0.0;   (values for function f1)
a[2] :=  0.2;  b[2] :=  0.23; c[2] := -0.26; d[2] := 0.22; tx[2] := 0.0; ty[2] := 1.6;   (values for function f2)
a[3] := -0.15; b[3] :=  0.26; c[3] :=  0.28; d[3] := 0.24; tx[3] := 0.0; ty[3] := 0.44;  (values for function f3)
a[4] :=  0.85; b[4] := -0.04; c[4] :=  0.04; d[4] := 0.85; tx[4] := 0.0; ty[4] := 1.6;   (values for function f4)

Mandelbrot Set
  Based on iteration theory
  Function of interest: f(z) = z^2 + c
  Sequence of values (or orbit), starting from s:
    d1 = s^2 + c
    d2 = (s^2 + c)^2 + c
    d3 = ((s^2 + c)^2 + c)^2 + c
    d4 = (((s^2 + c)^2 + c)^2 + c)^2 + c
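The fern rule and coefficient table above translate directly into code. A self-contained sketch (plain C++ with a fixed seed, not the lecture's OpenGL version):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

struct Pt { double x, y; };

// Barnsley fern IFS: apply f1..f4 (chosen with probabilities
// .01/.07/.07/.85) to the previous point, using the table above.
std::vector<Pt> fernPoints(int n) {
    // rows: {a, b, c, d, tx, ty} for f1..f4
    const double k[4][6] = {
        { 0.0,   0.0,   0.0,  0.16, 0.0, 0.0  },
        { 0.2,   0.23, -0.26, 0.22, 0.0, 1.6  },
        {-0.15,  0.26,  0.28, 0.24, 0.0, 0.44 },
        { 0.85, -0.04,  0.04, 0.85, 0.0, 1.6  }
    };
    std::srand(7);                       // fixed seed for repeatability
    std::vector<Pt> pts{ {0.0, 0.0} };   // start at (0,0)
    for (int i = 1; i < n; ++i) {
        double r = std::rand() / (double)RAND_MAX;
        int f = (r < 0.01) ? 0 : (r < 0.08) ? 1 : (r < 0.15) ? 2 : 3;
        Pt p = pts.back();
        pts.push_back({ k[f][0]*p.x + k[f][2]*p.y + k[f][4],    // new.x
                        k[f][1]*p.x + k[f][3]*p.y + k[f][5] }); // new.y
    }
    return pts;
}
```

Because every map contracts toward the fern, all generated points stay inside the fern's bounding region, roughly x in [-3, 3] and y in [0, 10].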

Mandelbrot Set
  The orbit depends on s and c
  Basic question:
    For a given s and c, does the function stay finite? (within Mandelbrot set)
    or explode to infinity? (outside Mandelbrot set)
  Definition: if |d| remains finite, the orbit is finite, else infinite
  Example orbits:
    s = 0, c = -1: orbit = 0, -1, 0, -1, 0, -1, 0, -1, ... finite
    s = 0, c = 1:  orbit = 0, 1, 2, 5, 26, 677, ... explodes

Mandelbrot Set
  Mandelbrot set: use complex numbers for c and s
  Always set s = 0
  Choose c as a complex number
    For example: s = 0, c = 0.2 + 0.5i
  Hence, orbit: 0, c, c^2 + c, (c^2 + c)^2 + c, ...
  Definition: the Mandelbrot set includes all c with finite orbit

Mandelbrot Set
  Some complex number math (Argand diagram: Re and Im axes):
    i * i = -1
    Example: 2i * 3i = -6
  Modulus of a complex number z = ai + b:
    |z| = sqrt(a^2 + b^2)
  Squaring a complex number:
    (x + yi)^2 = (x^2 - y^2) + (2xy)i
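These rules can be checked directly with std::complex. A small sketch computing the first orbit terms d(k+1) = d(k)^2 + c:

```cpp
#include <cassert>
#include <complex>
#include <vector>

// First n orbit terms after the starting value s:
// d1 = s^2 + c, d2 = d1^2 + c, ...
std::vector<std::complex<double>> orbit(std::complex<double> s,
                                        std::complex<double> c, int n) {
    std::vector<std::complex<double>> d;
    std::complex<double> z = s;
    for (int i = 0; i < n; ++i) {
        z = z * z + c;   // squaring follows (x+yi)^2 = (x^2 - y^2) + (2xy)i
        d.push_back(z);
    }
    return d;
}
```

With s = 2, c = -1 the first terms are 3, 8, 63; with s = 0, c = -2 + i they are -2+i, 1-3i, -10-5i.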

x 2  y 2 )  (2 xy )i Source: http://www.doksinet Mandelbrot Set  Examples: Calculate first 3 terms  with s=2, c=‐1, terms are 22  1  3 32  1  8 82  1  63  with s = 0, c = ‐2+i 0  (2  i )  2  i (2  i ) 2  (2  i )  1  3i 1  3i 2  (2  i)  10  5i ( x  yi ) 2  ( x 2  y 2 )  (2 xy )i Source: http://www.doksinet Mandelbrot Set   Fixed points: Some complex numbers converge to certain values after x iterations. Example:    s = 0, c = ‐0.2 + 05i converges to –0249227 + 0.333677i after 80 iterations Experiment: square –0.249227 + 0333677i and add ‐0.2 + 05i Mandelbrot set depends on the fact the convergence of certain complex numbers Source: http://www.doksinet Mandelbrot Set Routine      Math theory says calculate terms to infinity Cannot iterate forever: our program will hang! Instead iterate 100 times Math theorem: 

if no term has exceeded 2 after 100 iterations, never will! Routine returns:  100, if modulus doesn’t exceed 2 after 100 iterations  Number of times iterated before modulus exceeds 2, or s, c Mandelbrot function Number < 100 ( first term > 2) Number = 100 (did not explode) Source: http://www.doksinet Mandelbrot dwell( ) function ( x  yi ) 2  ( x 2  y 2 )  (2 xy )i ( x  yi ) 2  (c X  cY i )  [( x 2  y 2 )  c X ]  (2 xy  cY )i int dwell(double cx, double cy) { // return true dwell or Num, whichever is smaller #define Num 100 // increase this for better pics double tmp, dx = cx, dy = cy, fsq = cx*cx + cycy; for(int count = 0;count <= Num && fsq <= 4; count++) { tmp = dx; // save old real part [( x 2  y 2 )  c X dx = dx*dx – dydy + cx; // new real part dy = 2.0 * tmp dy + cy; // new imag. Part (2 xy  cY )i fsq = dx*dx + dydy; } return count; // number of iterations used } ] Source: http://www.doksinet
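A self-contained version of the dwell idea, with quick sanity checks (slightly restructured from the routine above; the default cap and return convention here are ours):

```cpp
#include <cassert>

// Iterate z = z^2 + c from z = c; return how many iterations ran before
// |z|^2 exceeded 4, capped at maxIter (points in the set hit the cap).
int dwell(double cx, double cy, int maxIter = 100) {
    double dx = cx, dy = cy, fsq = cx*cx + cy*cy;
    int count = 0;
    while (count < maxIter && fsq <= 4.0) {
        double tmp = dx;
        dx = dx*dx - dy*dy + cx;    // new real part
        dy = 2.0 * tmp * dy + cy;   // new imaginary part
        fsq = dx*dx + dy*dy;
        ++count;
    }
    return count;
}
```

dwell(0, 0) reaches the cap (0 is in the set), while a point far outside, such as c = 2 + 2i, escapes immediately.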

Mandelbrot Set    Map real part to x‐axis Map imaginary part to y‐axis Decide range of complex numbers to investigate. Eg:  X in range [‐2.25: 075], Y in range [‐15: 15] Range of complex Numbers ( c ) Representation of -1.5 + i (-1.5, 1) X in range [-2.25: 075], Y in range [-1.5: 15] Call ortho2D to set range of values to explore Source: http://www.doksinet Mandelbrot Set  Set world window (ortho2D) (range of complex numbers to investigate)   X in range [‐2.25: 075], Y in range [‐15: 15] Set viewport (glviewport). Eg:  ortho2D Viewport = [V.L, VR, VB, VT]= [60,380,80,240] glViewport Source: http://www.doksinet Mandelbrot Set    So, for each pixel:  For each point ( c ) in world window call your dwell( ) function  Assign color <Red,Green,Blue> based on dwell( ) return value Choice of color determines how pretty Color assignment:  Basic: In set (i.e dwell( ) = 100), color = black, else color = white 

Discrete: Ranges of return values map to same color  E.g 0 – 20 iterations = color 1  20 – 40 iterations = color 2, etc.  Continuous: Use a function Source: http://www.doksinet FREE SOFTWARE  Free fractal generating software  Fractint  FracZoom  Astro Fractals  Fractal Studio  3DFract Source: http://www.doksinet References   Angel and Shreiner, Interactive Computer Graphics, 6th edition, Chapter 9 Hill and Kelley, Computer Graphics using OpenGL, 3rd edition, Appendix 4 Source: http://www.doksinet Adding Interaction    So far, OpenGL programs just render images Can add user interaction Examples:   User hits ‘h’ on keyboard ‐> Program draws house User clicks mouse left button ‐> Program draws table Source: http://www.doksinet Types of Input Devices  String: produces string of characters e.g keyboard  Locator: User points to position on display. Eg mouse Source: http://www.doksinet Types of

Input Devices  Valuator: generates number between 0 and 1.0 (proportional to how much it is turned)  Pick: User selects location on screen (e.g touch screen in restaurant, ATM) Source: http://www.doksinet GLUT: How keyboard Interaction Works  Example: User hits ‘h’ on keyboard ‐> Program draws house 1. User hits ‘h’ key Keyboard handler Function ‘h’ key OS Programmer needs to write keyboard handler function Source: http://www.doksinet Using Keyboard Callback for Interaction void main(int argc, char* argv){ // First initialize toolkit, set display mode and create window glutInit(&argc, argv); // initialize toolkit glutInitDisplayMode(GLUT SINGLE | GLUT RGB); glutInitWindowSize(640, 480); glutInitWindowPosition(100, 150); x,y location ASCII character glutCreateWindow(“my first attempt”); of mouse of pressed key glewInit( ); 2. Implement keyboard function // now register callback functions void myKeyboard(char key, int x, int y ) { // put

keyboard stuff here glutDisplayFunc(myDisplay); . switch(key){ // check which key glutReshapeFunc(myReshape); case ‘f’: glutMouseFunc(myMouse); // do stuff break; glutKeyboardFunc(myKeyboard); case ‘k’: // do other stuff break; myInit( ); glutMainLoop( ); } 1. Register keyboard Function } } Note: Backspace, delete, escape keys checked using their ASCII codes Source: http://www.doksinet Special Keys: Function, Arrow, etc glutSpecialFunc (specialKeyFcn); Void specialKeyFcn (Glint specialKey, GLint, xMouse, Glint yMouse)  Example: if (specialKey == GLUT KEY F1)// F1 key pressed     GLUT KEY F1, GLUT KEY F12, . for function keys GLUT KEY UP, GLUT KEY RIGHT, . for arrow keys keys GLUT KEY PAGE DOWN, GLUT KEY HOME, . for page up, home keys Complete list of special keys designated in glut.h Source: http://www.doksinet GLUT: How Mouse Interaction Works  Example: User clicks on (x,y) location in drawing window ‐> Program draws a line 1. User clicks

on (x,y) location
  2. The OS passes the click location to the mouse handler function
  Programmer needs to write the mouse handler function

Using Mouse Callback for Interaction

1. Register mouse function:

void main(int argc, char** argv){
    // First initialize toolkit, set display mode and create window
    glutInit(&argc, argv);          // initialize toolkit
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutInitWindowPosition(100, 150);
    glutCreateWindow("my first attempt");
    glewInit( );
    // now register callback functions
    glutDisplayFunc(myDisplay);
    glutReshapeFunc(myReshape);
    glutMouseFunc(myMouse);
    glutKeyboardFunc(myKeyboard);
    myInit( );
    glutMainLoop( );
}

2. Implement mouse function:

void myMouse(int button, int state, int x, int y)
{
    // put mouse stuff here
}

Mouse Interaction
  Declare prototypes:
    myMouse(int button, int state, int x, int y)
    myMovedMouse
  Register callbacks:
    glutMouseFunc(myMouse): mouse button pressed
    glutMotionFunc(myMovedMouse): mouse moves with button pressed
    glutPassiveMotionFunc(myMovedMouse): mouse moves with no buttons pressed
  Button returned values:
    GLUT_LEFT_BUTTON, GLUT_MIDDLE_BUTTON, GLUT_RIGHT_BUTTON
  State returned values:
    GLUT_UP, GLUT_DOWN
  X, Y returned values:
    x,y coordinates of mouse location
    GLUT's (0,0) is the top left of the window; OpenGL's (0,0) is the
    bottom left. Convert GLUT (0,0) to OpenGL (0,0)? How?

Mouse Interaction Example
  Example: draw (or select) a rectangle on screen
  Each mouse click generates a separate event
  Store click points in a global or static variable in the mouse function

void myMouse(int button, int state, int x, int y)
{
    static GLintPoint corner[2];
    static int numCorners = 0;   // initial value is 0

    if(button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
    {
        corner[numCorners].x = x;
        corner[numCorners].y = screenHeight - y;   // flip y coord
        numCorners++;
        // screenHeight is the height of the drawing window

Mouse Interaction Example (continued)

        if(numCorners == 2)
        {   // draw rectangle or do whatever you planned to do
            Point3 points[4] = { corner[0].x, corner[0].y,     // 1
                                 corner[1].x, corner[0].y,     // 2
                                 corner[1].x, corner[1].y,     // 3
                                 corner[0].x, corner[1].y };   // 4
            glDrawArrays(GL_QUADS, 0, 4);
            numCorners = 0;
        }
    }
    else if(button == GLUT_RIGHT_BUTTON && state == GLUT_DOWN)
        glClear(GL_COLOR_BUFFER_BIT);   // clear the window
    glFlush( );
}

Menus
  Adding a menu that pops up on a mouse click:
  1. Create menu using glutCreateMenu(myMenu);
  2. Use glutAddMenuEntry to add entries to the menu
  3. Attach menu to a mouse button (left, right, middle) using glutAttachMenu

Menus
  Example:

glutCreateMenu(myMenu);
glutAddMenuEntry("Clear Screen", 1);   // text shows on menu
glutAddMenuEntry("Exit", 2);           // value checked in myMenu
glutAttachMenu(GLUT_RIGHT_BUTTON);
...
void myMenu(int value){
    if(value == 1){
        glClear(GL_COLOR_BUFFER_BIT);
        glFlush( );
    }
    if (value == 2)
        exit(0);
}

GLUT Interaction Using Other Input Devices
  Tablet functions (mouse cursor must be in display window):

glutTabletButtonFunc(tabletFcn);
...
void tabletFcn(GLint tabletButton, GLint action, GLint xTablet, GLint yTablet)

  Spaceball functions
  Dial functions
  Picking functions: use your finger
  Menu functions: minimal pop-up windows within your drawing window
  Reference: Hearn and Baker, 3rd edition (section 20-6)

Computer Graphics (CS 4731) Lecture 6: Shader Setup & GLSL Introduction
Prof Emmanuel Agu
Computer Science Dept. Worcester Polytechnic Institute (WPI)

OpenGL Function Format
  glUniform3f(x, y, z)
    gl: belongs to GL library
    3f: number and type of arguments (x, y, z are floats)
  glUniform3fv(p)
    v: argument is an array of values; p is a pointer to the array

Lack of

Object Orientation   OpenGL is not object oriented Multiple versions for each command    glUniform3f glUniform2i glUniform3dv Source: http://www.doksinet OpenGL Data Types C++ OpenGL Signed char GLByte Short GLShort Int GLInt Float GLFloat Double GLDouble Unsigned char GLubyte Unsigned short GLushort Unsigned int GLuint Example: Integer is 32‐bits on 32‐bit machine but 64‐bits on a 64‐bit machine Source: http://www.doksinet Recall: Single Buffering   If display mode set to single framebuffers Any drawing into framebuffer is seen by user. How?   glutInitDisplayMode(GLUT SINGLE | GLUT RGB);  Single buffering with RGB colors Drawing may not be drawn to screen until call to glFlush( ) void mydisplay(void){ glClear(GL COLOR BUFFER BIT); // clear screen glDrawArrays(GL POINTS, 0, N); glFlush( ); Drawing sent to screen } Single Frame buffer Source: http://www.doksinet Double Buffering  Set display mode to double

buffering (create front and back framebuffers)    glutInitDisplayMode(GLUT DOUBLE | GLUT RGB);  Double buffering with RGB colors Front buffer displayed on screen, back buffers not displayed Drawing into back buffers (not displayed) until swapped in using glutSwapBuffers( ) void mydisplay(void){ glClear(GL COLOR BUFFER BIT); // clear screen glDrawArrays(GL POINTS, 0, N); Back buffer drawing swapped glutSwapBuffers( ); in, becomes visible here } Back Front Double Frame buffer Source: http://www.doksinet Recall: OpenGL Skeleton void main(int argc, char* argv){ glutInit(&argc, argv); // initialize toolkit glutInitDisplayMode(GLUT SINGLE | GLUT RGB); glutInitWindowSize(640, 480); glutInitWindowPosition(100, 150); glutCreateWindow(“my first attempt”); glewInit( ); // now register callback functions glutDisplayFunc(myDisplay); void shaderSetup( void ) glutReshapeFunc(myReshape); { // Load shaders and use the resulting shader program glutMouseFunc(myMouse);

program = InitShader( "vshader1.glsl", "fshader1glsl" glutKeyboardFunc(myKeyboard); glUseProgram( program ); // Initialize vertex position attribute from vertex shader GLuint loc = glGetAttribLocation( program, "vPosition" ); glEnableVertexAttribArray( loc ); glVertexAttribPointer( loc, 2, GL FLOAT, GL FALSE, 0, BUFFER OFFSET(0) ); glewInit( ); generateGeometry( ); initGPUBuffers( ); void shaderSetup( ); glutMainLoop( ); } ); // sets white as color used to clear screen glClearColor( 1.0, 10, 10, 10 ); } Source: http://www.doksinet Recall: OpenGL Program: Shader Setup  initShader( ): our homegrown shader initialization   Used in main program, connects and link vertex, fragment shaders Shader sources read in, compiled and linked Gluint = program; GLuint program = InitShader( "vshader1.glsl", "fshader1glsl" ); glUseProgram(program); example.cpp What’s inside initShader?? Next! Main Program Vertex shader vshader1.glsl

Fragment Shader fshader1.glsl Source: http://www.doksinet Coupling Shaders to Application (initShader function) Create a program object Read shaders Add + Compile shaders Link program (everything together) Link variables in application with variables in shaders 1. 2. 3. 4. 5.   Vertex attributes Uniform variables Source: http://www.doksinet Step 1. Create Program Object  Container for shaders  Can contain multiple shaders, other GLSL functions GLuint myProgObj; myProgObj = glCreateProgram(); Main Program Create container called Program Object Source: http://www.doksinet Step 2: Read a Shader  Shaders compiled and added to program object example.cpp Passed in as string Main Program Vertex shader vshader1.glsl   Passed in as string Fragment Shader Fshader1.glsl Shader file code passed in as null‐terminated string using the function glShaderSource Shaders in files (vshader.glsl, fshaderglsl), write function readShaderSource to convert

shader file to string Shader file name (e.g vshaderglsl) readShaderSource String of entire shader code Source: http://www.doksinet Shader Reader Code? #include <stdio.h> static char* readShaderSource(const char shaderFile) { FILE* fp = fopen(shaderFile, "r"); if ( fp == NULL ) { return NULL; } fseek(fp, 0L, SEEK END); long size = ftell(fp); fseek(fp, 0L, SEEK SET); char* buf = new char[size + 1]; fread(buf, 1, size, fp); buf[size] = ; fclose(fp); return buf; } Shader file name (e.g vshaderglsl) readShaderSource String of entire shader code Source: http://www.doksinet Step 3: Adding + Compiling Shaders Declare shader object (container for shader) GLuint myVertexObj; Gluint myFragmentObj; GLchar* vSource = readShaderSource(“vshader1.glsl”); GLchar* fSource = readShaderSource(“fshader1.glsl”); myVertexObj = glCreateShader(GL VERTEX SHADER); myFragmentObj = glCreateShader(GL FRAGMENT SHADER); example.cpp Main Program Vertex shader vshader1.glsl

Fragment Shader fshader1.glsl Read shader files, Convert code to string Create empty Shader objects Source: http://www.doksinet Step 3: Adding + Compiling Shaders Step 4: Link Program Read shader code strings into shader objects glShaderSource(myVertexObj, 1, vSource, NULL); glShaderSource(myFragmentObj, 1, fSource, NULL); glCompileShader(myVertexObj); glCompileShader(myFragmentObj); Compile shader objects glAttachShader(myProgObj, myVertexObj); glAttachShader(myProgObj, myFragmentObj); glLinkProgram(myProgObj); Attach shader objects to program object Link Program example.cpp Attach shader objects to program object Main Program Vertex shader vshader1.glsl Fragment Shader fshader1.glsl Source: http://www.doksinet Uniform Variables     Variables that are constant for an entire primitive Can be changed in application and sent to shaders Cannot be changed in shader Used to pass information to shader  Example: bounding box of a primitive Bounding Box

Uniform Variables
  Sometimes want to connect a uniform variable in the OpenGL application
  to a uniform variable in the shader
  Example?
    Track an "elapsed time" variable (etime) in the OpenGL application
    Use the elapsed time variable (time) in the shader for calculations
    (etime in the OpenGL application <-> time in the shader)

Uniform Variables
  First declare the etime variable in the OpenGL application, and get the time:

float etime;   // elapsed time since program started
etime = 0.001 * glutGet(GLUT_ELAPSED_TIME);

  Use the corresponding variable time in the shader:

uniform float time;
attribute vec4 vPosition;

main( ){
    vPosition.x += (1 + sin(time));
    gl_Position = vPosition;
}

  Need to connect etime in the application and time in the shader!!

Connecting etime and time
  The linker forms a table of shader variables, each with an index
  The application can get an index from the table and tie it to an
  application variable
  In the application, find the location of the shader variable time
  in the linker table:

GLint timeLoc;
timeLoc = glGetUniformLocation(program, "time");

  Connect the location of shader variable time to etime:

glUniform1f(timeLoc, etime);

References
  Angel and Shreiner, Interactive Computer Graphics, 6th edition, Chapter 2
  Hill and Kelley, Computer Graphics using OpenGL, 3rd edition, Chapter 2

GL Shading Language (GLSL)
  GLSL: high-level C-like language
  Main program (e.g. example1.cpp) written in C/C++
  Vertex and fragment shaders written in GLSL
  From OpenGL 3.1, applications must use shaders
  What does the keyword out mean? Example code of a vertex shader:

const vec4 red = vec4(1.0, 0.0, 0.0, 1.0);
out vec3 color_out;

void main(void){
    gl_Position = vPosition;   // gl_Position not declared:
                               // built-in type (already declared, just use)
    color_out = red;
}

Passing Values
- A variable declared out in the vertex shader can be declared as in in the fragment shader and used there
- Why? To pass the result of a vertex shader calculation to the fragment shader

Vertex shader:

const vec4 red = vec4(1.0, 0.0, 0.0, 1.0);
out vec4 color_out;                // out: to fragment shader

void main(void)
{
    gl_Position = vPosition;       // vPosition comes in from the main program
    color_out = red;
}

Fragment shader:

in vec4 color_out;                 // in: from vertex shader

void main(void)
{
    // can use color_out here.
}

(Flow: main program -> in -> vertex shader -> out -> in -> fragment shader -> out -> framebuffer)

Data Types
- C types: int, float, bool
- GLSL vector types:
  - vec2: vector of 2 floats, e.g. (x,y)
  - vec3: vector of 3 floats, e.g. (x,y,z) or (R,G,B)
  - vec4: vector of 4 floats, e.g. (x,y,z,w)

Example vertex shader:

const vec4 red = vec4(1.0, 0.0, 0.0, 1.0);
out vec4 color_out;

void main(void)
{
    gl_Position = vPosition;
    color_out = red;
}

- Also: int vectors (ivec2, ivec3, ivec4) and boolean

(bvec2, bvec3,bvec4) C++ style constructors Vertex shader Source: http://www.doksinet Data Types  Matrices: mat2, mat3, mat4       Stored by columns Standard referencing m[row][column] Matrices and vectors are basic types  can be passed in and out from GLSL functions E.g mat3 func(mat3 a) No pointers in GLSL Can use C structs that are copied back from functions Source: http://www.doksinet Qualifiers    GLSL has many C/C++ qualifiers such as const Supports additional ones Variables can change      Once per vertex Once per fragment Once per primitive (e.g triangle) At any time in the application Primitive Vertex Example: variable vPosition may be assigned once per vertex const vec4 red = vec4(1.0, 00, 00, 10); out vec3 color out; void main(void){ gl Position = vPosition; color out = red; } Source: http://www.doksinet Operators and Functions  Standard C functions     Trigonometric: cos, sin, tan, etc

- Arithmetic: log, min, max, abs, etc.
- Geometric: normalize, reflect, length
- Overloading of vector and matrix types:

mat4 a;
vec4 b, c, d;
c = b*a;   // b treated as a row vector
d = a*b;   // b treated as a column vector

Swizzling and Selection
- Can refer to array elements by element using [ ] or the selection (.) operator with:
  - x, y, z, w
  - r, g, b, a
  - s, t, p, q
- Given vec4 a; then a[2], a.b, a.z and a.p are all the same element
- The swizzling operator lets us manipulate components:

vec4 a;
a.yz = vec2(1.0, 2.0);

Computer Graphics (CS 4731)
Lecture 7: Building 3D Models
Prof Emmanuel Agu
Computer Science Dept.
Worcester Polytechnic Institute (WPI)

3D Applications
- 2D points: (x,y) coordinates
- 3D points: have (x,y,z) coordinates

(x,y,z) coordinates. Store as vec3 NOT vec2 2. Draw 3D object 3. Set up Hidden surface removal: Correctly determine order in which primitives (triangles, faces) are rendered (e.g Blocked faces NOT drawn) Source: http://www.doksinet 3D Coordinate Systems   Vertex (x,y,z) positions specified on coordinate system OpenGL uses right hand coordinate system Y Y x +z Right hand coordinate system Tip: sweep fingers x‐y: thumb is z +z x Left hand coordinate system •Not used in OpenGL Source: http://www.doksinet Generating 3D Models: GLUT Models   Make GLUT 3D calls in OpenGL program to generate vertices describing different shapes (Restrictive?) Two types of GLUT models:   Wireframe Models Solid Models Solid models Wireframe models Source: http://www.doksinet 3D Modeling: GLUT Models  Basic Shapes     Cone: glutWireCone( ), glutSolidCone( ) Sphere: glutWireSphere( ), glutSolidSphere( ) Cube: glutWireCube( ), glutSolidCube( ) More

advanced shapes:   Newell Teapot: (symbolic) Dodecahedron, Torus Newell Teapot Sphere Cone Torus Source: http://www.doksinet 3D Modeling: GLUT Models   Glut functions under the hood  generate sequence of points that define a shape  Generated vertices and faces passed to OpenGL for rendering Example: glutWireCone generates sequence of vertices, and faces defining cone and connectivity vertices, and faces defining cone glutWireCone OpenGL program (renders cone) Source: http://www.doksinet Polygonal Meshes    Modeling with GLUT shapes (cube, sphere, etc) too restrictive Difficult to approach realism. Eg model a horse Preferred way is using polygonal meshes:  Collection of polygons, or faces, that form “skin” of object  More flexible, represents complex surfaces better  Examples:  Human face  Animal structures  Furniture, etc Each face of mesh is a polygon Source: http://www.doksinet Polygonal Mesh Example Smoothed Out

with Shading (later) Mesh (wireframe) Source: http://www.doksinet Polygonal Meshes      Meshes now standard in graphics OpenGL Good at drawing polygons, triangles Mesh = sequence of polygons forming thin skin around object Simple meshes exact. (eg barn) Complex meshes approximate (e.g human face) Source: http://www.doksinet Different Resolutions of Same Mesh Original: 424,000 triangles 60,000 triangles (14%). 1000 triangles (0.2%) (courtesy of Michael Garland and Data courtesy of Iris Development.) Source: http://www.doksinet Representing a Mesh v6 e2 v5 e3  Consider a mesh  There are 8 vertices and 12 edges  5 interior polygons  6 interior (shared) edges (shown in orange) Each vertex has a location vi = (xi yi zi)  e8 v e9 v4 8 e1 e11 e10 v7 e4 e 7 v1 e12 e6 e5 v3 v2 Source: http://www.doksinet Simple Representation   Define each polygon by (x,y,z) locations of its vertices OpenGL code vertex[i] = vec3(x1, y1, z1);

vertex[i+1] = vec3(x6, y6, z6); vertex[i+2] = vec3(x7, y7, z7); i+=3; Source: http://www.doksinet Issues with Simple Representation  Declaring face f1 vertex[i] vertex[i+1] vertex[i+2] vertex[i+3]  v5 = = = = vec3(x1, vec3(x7, vec3(x8, vec3(x6, v6 y1, y7, y8, y6, z1); z7); z8); z6); Declaring face f2 vertex[i] = vec3(x1, y1, z1); vertex[i+1] = vec3(x2, y2, z2); vertex[i+2] = vec3(x7, y7, z7);  v8 f1 v1 v4 v7 f2 v3 v2 Inefficient and unstructured    Repeats: vertices v1 and v7 repeated while declaring f1 and f2 Shared vertices shared declared multiple times Delete vertex? Move vertex? Search for all occurences of vertex Source: http://www.doksinet Geometry vs Topology  Better data structures separate geometry from topology    Geometry: (x,y,z) locations of the vertices Topology: How vertices and edges are connected Example:    A polygon is ordered list of vertices An edge connects successive pairs of vertices Topology holds

even if geometry changes (vertex moves) v6 v5 v8 f1 Example: even if we move (x,y,z) location of v1, v1 still connected to v6, v7 and v2 v1 v4 v7 f2 v3 v1 v2 Source: http://www.doksinet Polygon Traversal Convention   Use the right‐hand rule = counter‐clockwise encirclement of outward‐pointing normal Focus on direction of traversal    Orders {v1, v0, v3} and {v3, v2, v1} are same (ccw) Order {v1, v2, v3} is different (clockwise) What is outward‐pointing normal? 4 3 5 2 6 1 Source: http://www.doksinet Normal Vector    Normal vector: Direction each polygon is facing Each mesh polygon has a normal vector Normal vector used in shading Source: http://www.doksinet Vertex Lists    Vertex list: (x,y,z) of vertices (its geometry) are put in array Use pointers from vertices into vertex list Polygon list: vertices connected to each polygon (face) Topology example: Polygon P1 of mesh is connected to vertices (v1,v7,v6) P1 P2 P3 P4

P5 (polygon list: each Pi stores indices into the vertex list, e.g. P1 -> v1, v7, v6)
Vertex list: (x1 y1 z1) (x2 y2 z2) (x3 y3 z3) (x4 y4 z4) (x5 y5 z5) (x6 y6 z6) (x7 y7 z7) (x8 y8 z8)

Topology example: polygon P1 of the mesh is connected to vertices (v1,v7,v6).
Geometry example: vertex v7's coordinates are (x7,y7,z7). Note: if v7 moves, it is changed once in the vertex list.

Vertex List Issue: Shared Edges
- Vertex lists draw filled polygons correctly
- If each polygon is drawn by its edges, shared edges are drawn twice
- Alternatively: can store the mesh as an edge list

Edge List
Simply draw each edge once, e.g. e1 connects v1 and v6.
Edges e1 ... e12 store pairs of indices into the same vertex list (x1 y1 z1) ... (x8 y8 z8).
Note: polygons are not represented.

Modeling a Cube
- In 3D, declare vertices as (x,y,z) using point3 v[3]
- Define global arrays for vertices (x y z) and colors (r g b):

typedef vec3 point3;
point3 vertices[] = { point3(-1.0,-1.0,-1.0), point3( 1.0,-1.0,-1.0),
                      point3( 1.0, 1.0,-1.0), point3(-1.0, 1.0,-1.0),
                      point3(-1.0,-1.0, 1.0), point3( 1.0,-1.0, 1.0),
                      point3( 1.0, 1.0, 1.0), point3(-1.0, 1.0, 1.0) };

typedef vec3 color3;
color3 colors[] = { color3(0.0,0.0,0.0), color3(1.0,0.0,0.0),
                    color3(1.0,1.0,0.0), color3(0.0,1.0,0.0),
                    color3(0.0,0.0,1.0), color3(1.0,0.0,1.0),
                    color3(1.0,1.0,1.0), color3(0.0,1.0,1.0) };

Drawing a Triangle from a List of Indices
Draw a triangle from a list of indices into the array vertices, and assign a color to each index:

void triangle(int a, int b, int c, int d)
{
    vcolors[i]   = colors[d];   position[i]   = vertices[a];
    vcolors[i+1] = colors[d];   position[i+1] = vertices[b];
    vcolors[i+2] = colors[d];   position[i+2] = vertices[c];
    i += 3;
}

Variables a, b, c are indices into the vertex array; variable d is an index into the color array.
Note: same face, so all three vertices have the same color.

Draw Cube from Faces

void colorcube( )
{
    quad(0,3,2,1);
    quad(2,3,7,6);
    quad(0,4,7,3);
    quad(1,2,6,5);
    quad(4,5,6,7);

    quad(0,1,5,4);
}

(Cube with vertices numbered 0 to 7; normal vectors face outward.)
Note: vertices are ordered counterclockwise so that we obtain correct outward-facing normals.

References
- Angel and Shreiner, Interactive Computer Graphics, 6th edition, Chapter 3
- Hill and Kelley, Computer Graphics using OpenGL, 3rd edition

New Way: Vertex Representation and Storage
- We have declared vertex lists, edge lists and arrays
- But vertex data is usually passed to OpenGL in an array with a specific structure
- We now study that structure

Vertex Attributes
Example vertices: (18, 34, 6), (20, 12, 18), (12, 6, 15)
Vertices can have attributes:
- Position (e.g. 20, 12, 18)
- Color (e.g. red)
- Normal (x,y,z)
- Texture coordinates

Vertex Arrays
- Previously: OpenGL provided a facility called vertex arrays for storing rendering data
- Six types of arrays were supported initially: vertices, colors, color indices, normals, texture coordinates, edge flags
- Now vertex arrays can be used for any attributes

Vertex Attributes
Store vertex attributes in a single array (array of structures). Per vertex: x y z (position), r g b (color), s t (tex0), s t (tex1); then the same fields for vertex 2, and so on.

Declaring Array of Vertex Attributes
Consider this array of vertex attributes; we can define attribute positions (per vertex):

#define VERTEX_POS_INDEX        0
#define VERTEX_COLOR_INDEX      1
#define VERTEX_TEXCOORD0_INDX   2
#define VERTEX_TEXCOORD1_INDX   3

Declaring Array of Vertex Attributes
Per vertex: position x y z (3 floats), color r g b (3 floats), tex0 s t (2 floats), tex1 s t (2 floats).
Also define the number of floats (storage) for each vertex attribute:

#define VERTEX_POS_SIZE        3   // x, y and z
#define VERTEX_COLOR_SIZE      3   // r, g and b
#define VERTEX_TEXCOORD0_SIZE  2   // s and t
#define VERTEX_TEXCOORD1_SIZE  2   // s and t

#define VERTEX_ATTRIB_SIZE     VERTEX_POS_SIZE + \
                               VERTEX_COLOR_SIZE + \
                               VERTEX_TEXCOORD0_SIZE + \
                               VERTEX_TEXCOORD1_SIZE

Declaring Array of Vertex Attributes
Offsets (in floats) from the start of a vertex: position 0, color 3, tex0 6, tex1 8.
Define the offsets (number of floats) of each vertex attribute from the beginning:

#define VERTEX_POS_OFFSET        0
#define VERTEX_COLOR_OFFSET      3
#define VERTEX_TEXCOORD0_OFFSET  6
#define VERTEX_TEXCOORD1_OFFSET  8

Allocating Array of Vertex Attributes
Allocate memory for the entire array of vertex attributes (recall VERTEX_ATTRIB_SIZE = VERTEX_POS_SIZE + VERTEX_COLOR_SIZE + VERTEX_TEXCOORD0_SIZE + VERTEX_TEXCOORD1_SIZE):

float *p = malloc(numVertices * VERTEX_ATTRIB_SIZE * sizeof(float));

This allocates memory for all vertices.

Specifying Array of Vertex Attributes
glVertexAttribPointer is used to specify vertex attributes. Example, for the vertex position attribute: attribute index 0; 3 values (x, y, z); data is floats and should not be normalized; the stride is the distance in bytes between consecutive vertices; p is the pointer to the data:

glVertexAttribPointer(VERTEX_POS_INDX, VERTEX_POS_SIZE,
                      GL_FLOAT, GL_FALSE,
                      VERTEX_ATTRIB_SIZE * sizeof(float), p);
glEnableVertexAttribArray(0);

Do the same for normal, tex0 and tex1.

Computer Graphics (CS 4731)
Lecture 8: Building 3D Models & Introduction to Transformations
Prof Emmanuel Agu
Computer Science Dept.
Worcester Polytechnic Institute (WPI)

Full Example: Rotating Cube in 3D
Desired program behaviour:
- Draw a colored cube
- Continuous rotation about the X, Y or Z axis
  - The idle function is called repeatedly when there is nothing to do
  - Increment the angle of rotation in the idle function
- Use a 3-button mouse to change the direction of rotation
  - Click the left button -> rotate the cube around the X axis
  - Click the middle button -> rotate the cube around the Y axis
  - Click the right button -> rotate the cube around the Z axis
- Use the default camera
  - If we don't set a camera, we get a default camera
  - Located at origin and points in the

negative z direction.

Cube Vertices
Declare an array of (x,y,z,w) vertex positions for a unit cube centered at the origin (sides aligned with the axes), and an array of vertex colors (the set of RGBA colors a vertex can have):

point4 vertices[8] = {
    point4( -0.5, -0.5,  0.5, 1.0 ),   // 0
    point4( -0.5,  0.5,  0.5, 1.0 ),   // 1
    point4(  0.5,  0.5,  0.5, 1.0 ),   // 2
    point4(  0.5, -0.5,  0.5, 1.0 ),   // 3
    point4( -0.5, -0.5, -0.5, 1.0 ),   // 4
    point4( -0.5,  0.5, -0.5, 1.0 ),   // 5
    point4(  0.5,  0.5, -0.5, 1.0 ),   // 6
    point4(  0.5, -0.5, -0.5, 1.0 )    // 7
};

color4 vertex_colors[8] = {
    color4( 0.0, 0.0, 0.0, 1.0 ),   // black
    color4( 1.0, 0.0, 0.0, 1.0 ),   // red
    color4( 1.0, 1.0, 0.0, 1.0 ),   // yellow
    color4( 0.0, 1.0, 0.0, 1.0 ),   // green
    color4( 0.0, 0.0, 1.0, 1.0 ),   // blue
    color4( 1.0, 0.0, 1.0, 1.0 ),   // magenta
    color4( 1.0, 1.0, 1.0, 1.0 ),   // white
    color4( 0.0, 1.0, 1.0, 1.0 )    // cyan
};

Color Cube
Generate the 6 quads (sides of the cube) from the vertices[8] array above; the function quad is passed vertex indices:

// generate 6 quads, sides of cube
void colorcube()
{
    quad( 1, 0, 3, 2 );
    quad( 2, 3, 7, 6 );
    quad( 3, 0, 4, 7 );
    quad( 6, 5, 1, 2 );
    quad( 4, 5, 6, 7 );
    quad( 5, 4, 0, 1 );
}

Quad Function

// quad generates two triangles (a,b,c) and (a,c,d) for each face
// and assigns colors to the vertices
int Index = 0;   // Index goes 0 to 5, one for each vertex of a face

void quad( int a, int b, int c, int d )
{
    colors[Index] = vertex_colors[a];  points[Index] = vertices[a];  Index++;
    colors[Index] = vertex_colors[b];  points[Index] = vertices[b];  Index++;
    colors[Index] = vertex_colors[c];  points[Index] = vertices[c];  Index++;
    colors[Index] = vertex_colors[a];  points[Index] = vertices[a];  Index++;
    colors[Index] = vertex_colors[c];  points[Index] = vertices[c];  Index++;
    colors[Index] = vertex_colors[d];  points[Index] = vertices[d];  Index++;
}

quad 0 = points[0 - 5], quad 1 = points[6 - 11], quad 2 = points[12 - 17], etc. The points[ ] array is sent to the GPU; positions are read from the appropriate index of the unique positions declared.

Initialization I

void init()
{
    colorcube();   // generates cube data in application using quads

    // Create a vertex array object
    GLuint vao;
    glGenVertexArrays( 1, &vao );
    glBindVertexArray( vao );

    // Create a buffer object and move data to GPU
    GLuint buffer;
    glGenBuffers( 1, &buffer );
    glBindBuffer( GL_ARRAY_BUFFER, buffer );
    glBufferData( GL_ARRAY_BUFFER, sizeof(points) + sizeof(colors),
                  NULL, GL_STATIC_DRAW );

points[ ]: array of vertex positions sent to the GPU; colors[ ]: array of vertex colors sent to the GPU.

Initialization II
Send points[ ] and colors[ ] data to the GPU separately

using glBufferSubData:

glBufferSubData( GL_ARRAY_BUFFER, 0, sizeof(points), points );
glBufferSubData( GL_ARRAY_BUFFER, sizeof(points), sizeof(colors), colors );

// Load vertex and fragment shaders and use the resulting shader program
GLuint program = InitShader( "vshader36.glsl", "fshader36.glsl" );
glUseProgram( program );

Initialization III
Specify the vertex and color data:

// set up vertex arrays
GLuint vPosition = glGetAttribLocation( program, "vPosition" );
glEnableVertexAttribArray( vPosition );
glVertexAttribPointer( vPosition, 4, GL_FLOAT, GL_FALSE, 0,
                       BUFFER_OFFSET(0) );

GLuint vColor = glGetAttribLocation( program, "vColor" );
glEnableVertexAttribArray( vColor );
glVertexAttribPointer( vColor, 4, GL_FLOAT, GL_FALSE, 0,
                       BUFFER_OFFSET(sizeof(points)) );

theta = glGetUniformLocation( program, "theta" );

We want to connect the rotation variable theta in the program to the corresponding

variable in the shader.

Display Callback
Draw the series of triangles forming the cube:

void display( void )
{
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    glUniform3fv( theta, 1, theta );
    glDrawArrays( GL_TRIANGLES, 0, NumVertices );
    glutSwapBuffers();
}

Mouse Callback
Select the axis (x, y or z) to rotate around using a mouse click:

enum { Xaxis = 0, Yaxis = 1, Zaxis = 2, NumAxes = 3 };

void mouse( int button, int state, int x, int y )
{
    if ( state == GLUT_DOWN ) {
        switch( button ) {
            case GLUT_LEFT_BUTTON:    axis = Xaxis;  break;
            case GLUT_MIDDLE_BUTTON:  axis = Yaxis;  break;
            case GLUT_RIGHT_BUTTON:   axis = Zaxis;  break;
        }
    }
}

Idle Callback
The idle( ) function is called whenever there is nothing to do. Use it to increment the rotation angle in steps of 0.01 around the currently selected axis:

void idle( void )
{
    theta[axis] += 0.01;
    if ( theta[axis] > 360.0 ) {
        theta[axis] -= 360.0;
    }
    glutPostRedisplay();
}

void main( void )
{
    ...
    glutIdleFunc( idle );
    ...
}

Note: we still need to apply the rotation by theta in the shader.

Hidden-Surface Removal
- We want to see only surfaces in front of other surfaces
- OpenGL uses a hidden-surface technique called the z-buffer algorithm
- The z-buffer uses distance from the viewer (depth) to determine closer objects
- Objects are rendered so that only front objects appear in the image
(If faces overlap: draw face A, the front face; do not draw faces B and C.)

Using OpenGL's z-buffer algorithm
- The z-buffer algorithm uses an extra buffer (the z-buffer) to store depth information as geometry travels down the pipeline
- 3 steps to set up the z-buffer:
  1. In the main( ) function:
     glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH)
  2. Enable depth testing in the init( ) function:
     glEnable(GL_DEPTH_TEST)
  3. Clear the depth buffer whenever we clear the screen:
     glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

3D Mesh

file formats
- 3D meshes are usually stored in a 3D file format
- The format defines how vertices, edges, and faces are declared
- Over 400 different file formats exist
- The Polygon File Format (PLY) is used a lot in graphics
- Originally PLY was used to store 3D files from 3D scanners
- We can get PLY models from the web to work with
- We will use PLY files in this class

Sample PLY File

ply
format ascii 1.0
comment this is a simple file
obj_info any data, in one line of free form text
element vertex 3
property float x
property float y
property float z
element face 1
property list uchar int vertex_indices
end_header
-1 0 0
0 1 0
1 0 0
3 0 1 2

Georgia Tech Large Models Archive

Stanford 3D Scanning Repository
Lucy: 28 million faces; Happy Buddha: 9 million faces

Introduction to Transformations
We may also want to transform objects by changing their:
- Position

(translation), size (scaling), orientation (rotation), shape (shear)

Translation
Move each vertex by the same distance d = (dx, dy, dz).
Object translation: every point is displaced by the same vector.

Scaling
Expand or contract along each axis (fixed point of origin):
x' = sx * x
y' = sy * y
z' = sz * z
p' = Sp where S = S(sx, sy, sz)

Recall: Introduction to Transformations
We may also want to transform objects by changing their:
- Position (translation)
- Size (scaling)
- Orientation (rotation)
- Shape (shear)

Recall: Translation
Move each vertex by the same distance d = (dx, dy, dz).
Object translation: every point is displaced by the same vector.

Recall: Scaling
Expand or contract along each axis (fixed point of origin):
x' = sx * x
y' = sy * y
z' = sz * z
p' = Sp where S = S(sx, sy, sz)

Introduction to Transformations
We can transform (translate, scale, rotate, shear, etc.) an object by applying matrix multiplications to the object's vertices:

  [Px']   [m11 m12 m13 m14] [Px]
  [Py'] = [m21 m22 m23 m24] [Py]
  [Pz']   [m31 m32 m33 m34] [Pz]
  [ 1 ]   [ 0   0   0   1 ] [ 1]

(transformed vertex = transform matrix * original vertex)
Note: a point (x,y,z) needs to be represented as (x,y,z,1), also called homogeneous coordinates.

Why Matrices?
- Multiple transform matrices can be pre-multiplied
- One final resulting matrix is applied (efficient!)
- For example, transform 1 and transform 2 can be pre-multiplied into a single matrix, which is then applied to the original point to give the transformed point (Qx, Qy, Qz, 1)

3D Translation Example
Translation of an object. Example: if we translate a point (2,2,2) by displacement (2,4,6), the new location of the point is (4,6,8):
Translated x: 2 + 2 = 4
Translated y: 2 + 4 = 6
Translated z: 2 + 6 = 8

  [4]   [1 0 0 2] [2]
  [6] = [0 1 0 4] [2]
  [8]   [0 0 1 6] [2]
  [1]   [0 0 0 1] [1]

(translated point = translation matrix * original point)

3D Translation
Translate object = move each vertex by the same distance d = (dx, dy, dz), where:
x' = x + dx
y' = y + dy
z' = z + dz

  [x']   [1 0 0 dx] [x]
  [y'] = [0 1 0 dy] [y]
  [z']   [0 0 1 dz] [z]
  [1 ]   [0 0 0 1 ] [1]

Scaling
Scale object = scale each object vertex by the scale factors S = (Sx, Sy, Sz).
Expand or contract along each axis (relative to the origin):
x' = Sx * x
y' = Sy * y
z' = Sz * z

  [x']   [Sx 0  0  0] [x]
  [y'] = [0  Sy 0  0] [y]
  [z']   [0  0  Sz 0] [z]
  [1 ]   [0  0  0  1] [1]

Scaling Example
If we scale a point (2,4,6) by scaling factor (0.5, 0.5, 0.5), the scaled point position = (1, 2, 3):
Scaled x: 2 x 0.5 = 1
Scaled y: 4 x 0.5 = 2
Scaled z: 6 x 0.5 = 3

  [1]   [0.5 0   0   0] [2]
  [2] = [0   0.5 0   0] [4]
  [3]   [0   0   0.5 0] [6]
  [1]   [0   0   0   1] [1]

(scale matrix for Scale(0.5, 0.5, 0.5))

Shearing
(x,y) -> (x + y*h, y)
- Y coordinates are unaffected, but x coordinates are translated linearly with y
- That is: y' = y, x' = x + y*h

  [x']   [1 h 0] [x]
  [y'] = [0 1 0] [y]
  [1 ]   [0 0 1] [1]

- h is the fraction of y to be added to x

3D Shear

Reflection
Corresponds to negative scale factors (relative to the original):
sx = -1, sy = 1; sx = -1, sy = -1; sx = 1, sy = -1

Computer Graphics (CS 4731)
Lecture 9: Implementing Transformations
Prof Emmanuel Agu
Computer Science Dept.
Worcester Polytechnic Institute (WPI)

Objectives
- Learn how to implement transformations in OpenGL:
  - Rotation
  - Translation
  - Scaling

- Introduce mat.h and vec.h transformations:
  - Model-view
  - Projection

Affine Transformations
- Translate, scale, rotate and shear are affine transforms
- Rigid body transformations: rotation, translation, scaling, shear
- Line preserving: important in graphics, since we can
  1. Transform the endpoints of a line segment
  2. Draw the line segment between the transformed endpoints
(A straight line through vertices u and v maps to a straight line through the transformed vertices u' and v'.)

Previously: Transformations in OpenGL
- Pre-3.0 OpenGL had a set of transformation functions:
  - glTranslate( )
  - glRotate( )
  - glScale( )
- Previously, OpenGL would:
  - Receive transform commands (Translate, Rotate, Scale)
  - Multiply the transform matrices together and maintain a transform matrix stack known as the modelview matrix

Previously: How Was the Modelview Matrix Formed?
Specify transforms in the OpenGL program:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScale(1,2,3);
glTranslate(3,6,4);

The OpenGL implementations of glScale, glTranslate, etc. (in hardware on the graphics card) multiply the transforms together to form the modelview matrix, then apply the final matrix to the vertices of objects:

  identity   glScale matrix  glTranslate matrix   modelview matrix
  [1 0 0 0]   [1 0 0 0]       [1 0 0 3]            [1 0 0 3 ]
  [0 1 0 0] x [0 2 0 0]     x [0 1 0 6]          = [0 2 0 12]
  [0 0 1 0]   [0 0 3 0]       [0 0 1 4]            [0 0 3 12]
  [0 0 0 1]   [0 0 0 1]       [0 0 0 1]            [0 0 0 1 ]

Previously: OpenGL Matrices
OpenGL maintained 4 matrix stacks as part of the OpenGL state:
- Model-view (GL_MODELVIEW)
- Projection (GL_PROJECTION)
- Texture (GL_TEXTURE)
- Color (GL_COLOR)

Now: Transformations in OpenGL
- From OpenGL 3.0: no transform commands (scale, rotate, etc.) and no matrices maintained by OpenGL!!
- glTranslate, glScale, glRotate and the OpenGL modelview matrix are all deprecated!!
- If the programmer needs transforms or matrices, the programmer implements them!
- Optional: the programmer may now choose to maintain transform matrices or NOT!

Current Transformation Matrix (CTM)
- Conceptually, the user can implement a 4 x 4 homogeneous-coordinate matrix, the current transformation matrix (CTM)
- The CTM is defined and updated in the user program:
  - In a header file: implement the transforms (scale, rotate, etc.)
  - In the main .cpp file: build the rotate and scale matrices and put the result in the CTM matrix C
- The CTM lives in user space; on the graphics card, the vertex shader applies it to each vertex: p' = Cp (vertices p in, transformed vertices p' out)

CTM in OpenGL Matrices
CTM = modelview + projection:
- Model-view (GL_MODELVIEW): translate, scale, rotate go here
- Projection (GL_PROJECTION): projection goes here (more later)
- Texture (GL_TEXTURE)
- Color (GL_COLOR)

CTM Functionality
1. We need to implement our own transforms:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScale(1,2,3);
glTranslate(3,6,4);

2. Multiply our transforms together to form the CTM matrix (identity x scale x translate = modelview, as above).
3. Apply the final matrix to the vertices of objects.

Implementing Transforms and CTM
Where to implement the transforms and the CTM? We implement the CTM in 3 parts:
1. mat.h (header file): implementations of translate( ), scale( ), etc.
2. Application code (.cpp file): multiply

together translate( ) , scale( ) = final CTM matrix GLSL functions (vertex and fragment shader) 3.  Apply final CTM matrix to vertices Source: http://www.doksinet Implementing Transforms and CTM    We just have to include mat.h (#include “math”), use it Uniformity: mat.h syntax resembles GLSL language in shaders Matrix Types: mat4 (4x4 matrix), mat3 (3x3 matrix). class mat4 { vec4 m[4]; . }  Can declare CTM as mat4 type mat4 ctm = Translate(3,6,4);  CTM 1  0 0  0  0 0 3  1 0 6 0 1 4  0 0 1  Translation Matrix mat.h also has transform functions: Translate, Scale, Rotate, etc mat4 Translate(const GLfloat x, const GLfloat y, const GLfloat z ) mat4 Scale( const GLfloat x, const GLfloat y, const GLfloat z ) Source: http://www.doksinet CTM operations  The CTM can be altered either by loading a new CTM or by postmutiplication Load identity matrix: C  I Load arbitrary matrix: C  M Load a translation

matrix: C  T Load a rotation matrix: C  R Load a scaling matrix: C  S Postmultiply by an arbitrary matrix: C  CM Postmultiply by a translation matrix: C  CT Postmultiply by a rotation matrix: C  C R Postmultiply by a scaling matrix: C  C S Source: http://www.doksinet Example: Rotation, Translation, Scaling Create an identity matrix: mat4 m = Identity(); Form Translate and Scale matrices, multiply together mat4 s = Scale( sx, sy, sz) mat4 t = Transalate(dx, dy, dz); m = m*st; Source: http://www.doksinet Example: Rotation about a Fixed Point   We want C = T R T–1 Be careful with order. Do operations in following order CI C  CT C  CR C  CT -1   Each operation corresponds to one function call in the program. Note: last operation specified is first executed Source: http://www.doksinet Transformation matrices Formed?    Converts all transforms (translate, scale, rotate) to 4x4 matrix We put 4x4 transform matrix into CTM

Example CTM Matrix mat4 m = Identity(); mat4 type stores 4x4 matrix Defined in mat.h 1  0 0  0  0 0 0  1 0 0 0 1 0  0 0 1  Source: http://www.doksinet Transformation matrices Formed? mat4 m = Identity(); mat4 t = Translate(3,6,4); m = m*t; Identity Matrix 1  0 0  0  Translation Matrix 0 0 0 1   1 0 0 0   0 1 0  0 0 0 1   0 0 0 3  1 0 6  0 1 4  0 0 1  CTM Matrix 1  0 0  0  0 0 3  1 0 6 0 1 4  0 0 1  Source: http://www.doksinet Transformation matrices Formed?  Consider following code snipet mat4 m = Identity(); mat4 s = Scale(1,2,3); m = m*s; Identity Matrix 1  0 0  0  Scaling Matrix 0 0 0 1 0   1 0 0  0 2 0 1 0 0 0   0 0 1   0 0 CTM Matrix 0 0  0 0 3 0  0 1   1  0 0  0  0 0

    [ 0 2 0 0 ]
    [ 0 0 3 0 ]
    [ 0 0 0 1 ]

Transformation Matrices Formed?
• What of translate, then scale, then ...? Just multiply them together. Evaluated in reverse order!!
• E.g.:

    mat4 m = Identity();
    mat4 s = Scale(1,2,3);
    mat4 t = Translate(3,6,4);
    m = m*s*t;

  Identity Matrix   Scale Matrix     Translate Matrix     Final CTM Matrix
  [ 1 0 0 0 ]       [ 1 0 0 0 ]      [ 1 0 0 3 ]          [ 1 0 0  3 ]
  [ 0 1 0 0 ]   *   [ 0 2 0 0 ]  *   [ 0 1 0 6 ]    =     [ 0 2 0 12 ]
  [ 0 0 1 0 ]       [ 0 0 3 0 ]      [ 0 0 1 4 ]          [ 0 0 3 12 ]
  [ 0 0 0 1 ]       [ 0 0 0 1 ]      [ 0 0 0 1 ]          [ 0 0 0  1 ]

How are Transform Matrices Applied?

    mat4 m = Identity();
    mat4 s = Scale(1,2,3);
    mat4 t = Translate(3,6,4);
    m = m*s*t;
    colorcube( );

1. In application: load object vertices into the points[ ] array -> VBO; call glDrawArrays
   Application code: object

vertices → CTM → vertex shader

3. In vertex shader: each vertex of the object (cube) is multiplied by the CTM to get the transformed vertex position

    gl_Position = model_view * vPosition;

  CTM Matrix        Original vertex    Transformed vertex
  [ 1 0 0  3 ]      [ 1 ]              [  4 ]
  [ 0 2 0 12 ]  *   [ 1 ]      =       [ 14 ]
  [ 0 0 3 12 ]      [ 1 ]              [ 15 ]
  [ 0 0 0  1 ]      [ 1 ]              [  1 ]

2. CTM built in application, passed to the vertex shader

Passing CTM to Vertex Shader
• Build CTM (modelview) matrix in the application program
• Pass the matrix to the shader

    void display( ) {
        ...
        mat4 m = Identity();        // build CTM in application
        mat4 s = Scale(1,2,3);
        mat4 t = Translate(3,6,4);
        m = m*s*t;
        // CTM matrix m in application is the same as model_view in shader
        // find location of matrix variable "model_view" in shader,
        // then pass matrix to shader
        matrix_loc =

glGetUniformLocation(program, "model_view");
        glUniformMatrix4fv(matrix_loc, 1, GL_TRUE, m);
        ...
    }

Implementation: Vertex Shader
• On glDrawArrays( ), the vertex shader is invoked with a different vPosition per shader
• E.g. if colorcube( ) generates 8 vertices, each vertex shader receives one vertex stored in vPosition
• The shader calculates the modified vertex position p' = Cp, stored in gl_Position

    in vec4 vPosition;          // original vertex position p
    uniform mat4 model_view;    // contains CTM

    void main( )
    {
        gl_Position = model_view * vPosition;   // transformed position p'
    }

What Really Happens to Vertex Position Attributes?
Image credit: Arcsynthesis tutorials

What About Multiple Vertex Attributes?
Image credit: Arcsynthesis tutorials

Transformation Matrices Formed?
• Example: vertex (1, 1, 1) is one of the 8 vertices of the cube. In the vertex

shader, each vertex is multiplied by the CTM.

  In application:

    mat4 m = Identity();
    mat4 s = Scale(1,2,3);
    m = m*s;
    colorcube( );

  In vertex shader (p' = CTM (m) * p):

  [ 1 0 0 0 ]       [ 1 ]              [ 1 ]
  [ 0 2 0 0 ]   *   [ 1 ]      =       [ 2 ]
  [ 0 0 3 0 ]       [ 1 ]              [ 3 ]
  [ 0 0 0 1 ]       [ 1 ]              [ 1 ]
   CTM (m)       Original vertex   Transformed vertex

• Each vertex of the cube is multiplied by the modelview matrix to get the scaled vertex position

Transformation Matrices Formed?
• Another example: vertex (1, 1, 1) is one of the 8 vertices of the cube

  In application:

    mat4 m = Identity();
    mat4 s = Scale(1,2,3);
    mat4 t = Translate(3,6,4);
    m = m*s*t;
    colorcube( );

  In vertex shader:

  [ 1 0 0  3 ]      [ 1 ]              [  4 ]
  [ 0 2 0 12 ]  *   [ 1 ]      =       [ 14 ]
  [ 0 0 3 12 ]      [ 1 ]              [ 15 ]
  [ 0 0 0  1 ]      [ 1 ]              [  1 ]
   CTM Matrix    Original vertex   Transformed vertex

• Each vertex of the cube is multiplied

by the modelview matrix to get the transformed vertex position

References
• Angel and Shreiner, Chapter 3
• Hill and Kelley, appendix 4

Recall: Function Calls to Create Transform Matrices
• Previously made function calls to generate 4x4 matrices for identity, translate, scale, rotate transforms
• Put the transform matrix into the CTM
• Example: mat4 m = Identity();

  CTM Matrix:
    [ 1 0 0 0 ]
    [ 0 1 0 0 ]
    [ 0 0 1 0 ]
    [ 0 0 0 1 ]

Arbitrary Matrices
• Can multiply matrices from transformation commands (Translate, Rotate, Scale) into the CTM
• Can also load arbitrary 4x4 matrices into the CTM, e.g.:

    [  1  0 15  3 ]
    [  0  2  0 12 ]
    [ 34  0  3 12 ]
    [  0 24  0  1 ]

Matrix Stacks
• The CTM is actually not just 1 matrix but a matrix STACK
• Multiple matrices in the stack, with the "current"

matrix at the top
• Can save transformation matrices for use later (push, pop)
• E.g.: traversing hierarchical data structures (Ch. 8)
• Pre-3.1 OpenGL also maintained matrix stacks
• Right now just implement a 1-level CTM; the matrix stack comes later for hierarchical transforms

Reading Back State
• Can also access OpenGL variables (and other parts of the state) by query functions:

    glGetIntegerv
    glGetFloatv
    glGetBooleanv
    glGetDoublev
    glIsEnabled

• Example: to find out the maximum number of texture units:

    glGetIntegerv(GL_MAX_TEXTURE_UNITS, &MaxTextureUnits);

Using Transformations
• Example: use the idle function to rotate a cube and the mouse function to change the direction of rotation
• Start with a program that draws a cube as before
  • Centered at origin
  • Sides aligned with axes

Recall: main.c

    void main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);

        glutInitWindowSize(500, 500);
        glutCreateWindow("colorcube");
        glutReshapeFunc(myReshape);
        glutDisplayFunc(display);
        glutIdleFunc(spinCube);   // calls spinCube continuously whenever the OpenGL program is idle
        glutMouseFunc(mouse);
        glEnable(GL_DEPTH_TEST);
        glutMainLoop();
    }

Recall: Idle and Mouse Callbacks

    void spinCube()
    {
        theta[axis] += 2.0;
        if( theta[axis] > 360.0 ) theta[axis] -= 360.0;
        glutPostRedisplay();
    }

    void mouse(int button, int state, int x, int y)
    {
        if(button==GLUT_LEFT_BUTTON && state == GLUT_DOWN) axis = 0;
        if(button==GLUT_MIDDLE_BUTTON && state == GLUT_DOWN) axis = 1;
        if(button==GLUT_RIGHT_BUTTON && state == GLUT_DOWN) axis = 2;
    }

Display Callback

    void display()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        ctm = RotateX(theta[0])*RotateY(theta[1])*RotateZ(theta[2]);
        glUniformMatrix4fv(matrix_loc, 1, GL_TRUE, ctm);
        glDrawArrays(GL_TRIANGLES, 0, N);
        glutSwapBuffers();
    }

•

Alternatively, we could
  • send rotation angle + axis to the vertex shader, and
  • let the shader form the CTM, then do the rotation
  • Inefficient: if the mesh has 10,000 vertices, each one forms the CTM — redundant!

Using the Model-view Matrix
• In OpenGL the model-view matrix is used to
  • Transform 3D models (translate, scale, rotate)
  • Position the camera (using the LookAt function) (next)
• The projection matrix is used to define the view volume and select a camera lens (later)
• Although these matrices are no longer part of OpenGL, it is good to create them in our applications (as the CTM)

3D? Interfaces
• Major interactive graphics problem: how to use 2D devices (e.g. mouse) to control 3D objects
• Some alternatives:
  • Virtual trackball
  • 3D input devices such as the spaceball
  • Use areas of the screen
    • Distance from center controls angle, position, scale depending on mouse button depressed

Computer Graphics (CS 4731)

Lecture 10: Rotations and Matrix Concatenation

Prof Emmanuel Agu
Computer Science Dept.
Worcester Polytechnic Institute (WPI)

Recall: 3D Translate Example
• Translation of an object
• Example: if we translate a point (2,2,2) by displacement (2,4,6), the new location of the point is (4,6,8)

  Translate(2,4,6):
    Translated x: 2 + 2 = 4
    Translated y: 2 + 4 = 6
    Translated z: 2 + 6 = 8

  Translated point     Translation Matrix     Original point
    [ 4 ]              [ 1 0 0 2 ]            [ 2 ]
    [ 6 ]       =      [ 0 1 0 4 ]      *     [ 2 ]
    [ 8 ]              [ 0 0 1 6 ]            [ 2 ]
    [ 1 ]              [ 0 0 0 1 ]            [ 1 ]

Recall: 3D Scale Example
• If we scale a point (2,4,6) by scaling factor (0.5, 0.5, 0.5), the scaled point position = (1, 2, 3)

    Scaled x: 2 x 0.5 = 1
    Scaled y: 4 x 0.5 = 2
    Scaled z: 6 x 0.5 = 3

  In matrix form, scaled point = Scale(0.5, 0.5, 0.5) matrix * original point:

 0 0 1  0 Scaled point 0  2    0  4    6 0    1  1 Scale Matrix for Scale(0.5, 05, 05) Original point Source: http://www.doksinet Nate Robbins Translate, Scale Rotate Demo Source: http://www.doksinet Rotating in 3D    Many degrees of freedom. Rotate about what axis? 3D rotation: about a defined axis Different transform matrix for:  Rotation about x‐axis  Rotation about y‐axis y  Rotation about z‐axis + z x Source: http://www.doksinet Rotating in 3D  New terminology     X‐roll: rotation about x‐axis Y‐roll: rotation about y‐axis Z‐roll: rotation about z‐axis Which way is +ve rotation   Look in –ve direction (into +ve arrow) y CCW is +ve rotation + z x Source: http://www.doksinet Rotating in 3D y y x z z y z x y x z x Source: http://www.doksinet Rotating in 3D   For a rotation angle,  about an axis Define:

c = cos(θ),  s = sin(θ)

  x-roll (or RotateX):
    Rx(θ) = [ 1  0  0  0 ]
            [ 0  c -s  0 ]
            [ 0  s  c  0 ]
            [ 0  0  0  1 ]

  y-roll (or RotateY):
    Ry(θ) = [  c  0  s  0 ]
            [  0  1  0  0 ]
            [ -s  0  c  0 ]
            [  0  0  0  1 ]

  z-roll (or RotateZ):
    Rz(θ) = [ c -s  0  0 ]
            [ s  c  0  0 ]
            [ 0  0  1  0 ]
            [ 0  0  0  1 ]

  Rules:
  • Write 1 in the rotation axis's row and column
  • Write 0 in the other rows/columns
  • Write c, s in a rectangular pattern

Example: Rotating in 3D
Question: Using the y-roll equation, rotate P = (3,1,4) by 30 degrees.
Answer: c = cos(30°) = 0.866, s = sin(30°) = 0.5, and

  Line 1: 3·c + 1·0 + 4·s + 1·0 = 3 x 0.866 + 4 x 0.5 = 4.6

3D Rotation
• Rotate(angle, ux,

uy, uz): rotate by angle β about an arbitrary axis (a vector) passing through the origin and (ux, uy, uz)
• Note: angular position of u specified as azimuth/longitude (Θ) and latitude (φ)

Approach 1: 3D Rotation About Arbitrary Axis
• Can compose an arbitrary rotation as a combination of:
  • X-roll (by an angle β1)
  • Y-roll (by an angle β2)
  • Z-roll (by an angle β3)

    M = Rz(β3) Ry(β2) Rx(β1)     (read in reverse order)

Approach 1: 3D Rotation using Euler Theorem
• Classic: use Euler's theorem
• Euler's theorem: any sequence of rotations = one rotation about some axis
• Want to rotate by β about an arbitrary axis u through the origin
• Our approach:
  1. Use two rotations to align u and the x-axis
  2. Do an x-roll through angle β
  3. Negate the two previous rotations to de-align u and the x-axis

Approach 1: 3D

Rotation using Euler Theorem
• Note: angular position of u specified as azimuth (Θ) and latitude (φ)
• First try to align u with the x axis

• Step 1: Do a y-roll to line up the rotation axis with the x-y plane

• Step 2: Do a z-roll (through -φ) to line up the rotation axis with the x axis

• Remember: our goal is to do a rotation by β around u
• But axis u is now lined up with the x axis. So,
• Step 3: Do an x-roll by β around axis u:  Rx(β) Rz(-φ) Ry(θ)

• The next 2 steps return vector u to its original position
• Step 4: Do a z-roll (through φ) in the x-y plane:  Rz(φ) Rx(β) Rz(-φ) Ry(θ)
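The alignment steps above can be checked numerically. The sketch below uses one consistent sign convention (the lecture figures may label the angles with opposite signs — treat the exact signs here as an assumption): a y-roll by atan2(uz, ux) brings a unit axis u into the x-y plane, and a z-roll by the negative latitude then lines it up with the x axis.

```cpp
#include <cassert>
#include <cmath>

// Tiny vector type for checking the Euler alignment steps.
struct V3 { double x, y, z; };

// Rotate p about the y axis by angle a (radians)
V3 yRoll(V3 p, double a) {
    return { p.x * cos(a) + p.z * sin(a),
             p.y,
            -p.x * sin(a) + p.z * cos(a) };
}

// Rotate p about the z axis by angle a (radians)
V3 zRoll(V3 p, double a) {
    return { p.x * cos(a) - p.y * sin(a),
             p.x * sin(a) + p.y * cos(a),
             p.z };
}

// Steps 1 and 2 of the derivation: align a unit axis u with the x axis
V3 alignWithX(V3 u) {
    double theta = atan2(u.z, u.x);       // azimuth of u
    V3 inXY = yRoll(u, theta);            // step 1: now inXY.z == 0
    double phi = atan2(inXY.y, inXY.x);   // latitude of u
    return zRoll(inXY, -phi);             // step 2: now on the x axis
}
```

After `alignWithX`, any unit axis lands on (1,0,0), which is exactly what lets step 3 use a plain x-roll.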

Approach 1: 3D Rotation using Euler Theorem
• Step 5: Do a y-roll to return u to its original position. The complete rotation is:

    Ru(β) = Ry(θ) Rz(φ) Rx(β) Rz(-φ) Ry(-θ)

Approach 2: Rotation using Quaternions
• Extension of imaginary numbers from 2 to 3 dimensions
• Requires 1 real and 3 imaginary components i, j, k:

    q = q0 + q1·i + q2·j + q3·k

• Quaternions can express rotations on a sphere smoothly and efficiently

Approach 2: Rotation using Quaternions
• Derivation skipped! Check the answer
• The solution has lots of symmetry. With c = cos(β), s = sin(β) and arbitrary unit axis u = (ux, uy, uz):

  R(β) =
    [ c + (1-c)ux²         (1-c)uy·ux - s·uz    (1-c)uz·ux + s·uy    0 ]
    [ (1-c)ux·uy + s·uz    c + (1-c)uy²         (1-c)uz·uy - s·ux    0 ]
    [ (1-c)ux·uz - s·uy    (1-c)uy·uz + s·ux    c + (1-c)uz²         0 ]
    [ 0                    0                    0                    1 ]
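Two quick sanity checks on the arbitrary-axis matrix R(β) above: rotating 90° about the z axis must match the z-roll matrix, and the axis u itself must stay fixed under the rotation. A minimal sketch (the function name and types are hypothetical, not course code):

```cpp
#include <cassert>
#include <cmath>

struct V3q { double x, y, z; };

const double PI = acos(-1.0);

// Apply the 3x3 part of R(beta) for unit axis u to point p
V3q rotateAxis(V3q p, V3q u, double b) {
    double c = cos(b), s = sin(b), t = 1.0 - c;
    return {
        (c + t*u.x*u.x)     * p.x + (t*u.x*u.y - s*u.z) * p.y + (t*u.x*u.z + s*u.y) * p.z,
        (t*u.y*u.x + s*u.z) * p.x + (c + t*u.y*u.y)     * p.y + (t*u.y*u.z - s*u.x) * p.z,
        (t*u.z*u.x - s*u.y) * p.x + (t*u.z*u.y + s*u.x) * p.y + (c + t*u.z*u.z)     * p.z
    };
}
```

With u = (0,0,1) and β = 90°, the point (1,0,0) goes to (0,1,0), exactly as Rz(90°) would map it; and the rotation axis is unchanged for any β.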

Inverse Matrices
• Can compute inverse matrices by general formulas
• But some easy inverse transform observations:
  • Translation: T⁻¹(dx, dy, dz) = T(-dx, -dy, -dz)
  • Scaling: S⁻¹(sx, sy, sz) = S(1/sx, 1/sy, 1/sz)
  • Rotation: R⁻¹(θ) = R(-θ)
    • Holds for any rotation matrix

Instancing
• During modeling, often start with a simple object centered at the origin, aligned with the axes, and of unit size
• Can declare one copy of each shape in the scene
• E.g. declare 1 mesh for a soldier, 500 instances to create an army
• Then apply an instance transformation to its vertices to Scale, Orient, Locate

References
• Angel and Shreiner, Chapter 3
• Hill and Kelley, Computer Graphics Using OpenGL, 3rd edition

Rotation About Arbitrary Point other than the Origin
• The default rotation matrix is about the origin
• How to rotate about any arbitrary point

pf (not the origin)?
  • Move the fixed point to the origin: T(-pf)
  • Rotate: R(θ)
  • Move the fixed point back: T(pf)
• So, M = T(pf) R(θ) T(-pf)

Scale about Arbitrary Center
• Similarly, default scaling is about the origin
• To scale about an arbitrary point P = (Px, Py, Pz) by (Sx, Sy, Sz):
  1. Translate the object by T(-Px, -Py, -Pz) so P coincides with the origin
  2. Scale the object by (Sx, Sy, Sz)
  3. Translate the object back: T(Px, Py, Pz)
• In matrix form: T(Px,Py,Pz) S(Sx,Sy,Sz) T(-Px,-Py,-Pz) * P

    [ x' ]   [ 1 0 0 Px ] [ Sx 0  0  0 ] [ 1 0 0 -Px ] [ x ]
    [ y' ] = [ 0 1 0 Py ] [ 0  Sy 0  0 ] [ 0 1 0 -Py ] [ y ]
    [ z' ]   [ 0 0 1 Pz ] [ 0  0  Sz 0 ] [ 0 0 1 -Pz ] [ z ]
    [ 1  ]   [ 0 0 0 1  ] [ 0  0  0  1 ] [ 0 0 0  1  ] [ 1 ]

Example

• Rotation about the z axis by 30 degrees about a fixed point (1.0, 2.0, 3.0):

    mat4 m = Identity();
    m = Translate(1.0, 2.0, 3.0) *
        Rotate(30.0, 0.0, 0.0, 1.0) *
        Translate(-1.0, -2.0, -3.0);

• Remember: the last matrix specified in the program (i.e. the translate matrix in this example) is applied first

Computer Graphics (CS 4731)
Lecture 11: Hierarchical 3D Models

Prof Emmanuel Agu
Computer Science Dept.
Worcester Polytechnic Institute (WPI)

Instance Transformation
• Start with a unique object (a symbol)
• Each appearance of the object in the model is an instance
  • Must scale, orient, position
  • Defines the instance transformation

Symbol-Instance Table
• Can store instances + instance transformations

Problems with Symbol-Instance Table
• The symbol-instance table does not show relationships between parts of the model
• Consider a model of a car:
  • Chassis (body) + 4 identical

wheels
  • Two symbols
• Relationships:
  • Wheels connected to chassis
  • Chassis motion determined by rotational speed of wheels

Structure Program Using Function Calls?

    car(speed)
    {
        chassis();
        wheel(right_front);
        wheel(left_front);
        wheel(right_rear);
        wheel(left_rear);
    }

• Fails to show relationships between parts
• Look into a graph representation

Graphs
• Set of nodes + edges (links)
• An edge connects a pair of nodes
  • Directed or undirected
• Cycle: a directed path that is a loop

Tree
• Graph in which each node (except the root) has exactly one parent node
  • A parent may have multiple children
  • Leaf node: no children

Tree Model of Car
• Chassis as the root node; the four wheels as its children

Hierarchical Transforms
• Robot arm: many small connected parts
• Attributes (position,

orientation, etc.) depend on each other
• A Robot Hammer: hammer, upper arm, lower arm, base

Hierarchical Transforms
• Object dependency description using a tree structure:
  Base (root node) → Lower arm → Upper arm → Hammer (leaf node)
• An object's position and orientation can be affected by its parent, grand-parent, grand-grand-parent nodes
• This hierarchical representation is known as a Scene Graph

Transformations
• Two ways to specify transformations:
• (1) Absolute transformation: each part transformed independently (relative to the origin)

    Translate the base by (5,0,0);
    Translate the lower arm by (5,0,0);
    Translate the upper arm by (5,0,0);

Relative Transformation
A better (and easier) way:
(2) Relative transformation: specify the transformation for each object relative to its parent

• Step 1: Translate the base and its descendants by (5,0,0)

Relative Transformation
• Step 2:

Rotate the lower arm and all its descendants relative to the base's local y axis by -90 degrees

Relative Transformation
• Relative transformation using the scene graph:
  • Base: Translate(5,0,0) — apply all the way down
  • Lower arm: Rotate(-90) about its local y — apply all the way down
  • Upper arm, Hammer: inherit their ancestors' transforms

Hierarchical Transforms Using OpenGL
• Translate the base and all its descendants by (5,0,0)
• Rotate the lower arm and its descendants by -90 degrees about local y

    ctm = LoadIdentity();
    // setup your camera
    ctm = ctm * Translatef(5,0,0);
    Draw_base();
    ctm = ctm * Rotatef(-90, 0, 1, 0);
    Draw_lower_arm();
    Draw_upper_arm();
    Draw_hammer();

Hierarchical Modeling
• For large objects with many parts (torso, upper/lower arms, upper/lower legs), need to transform groups of objects
• Need better tools

Hierarchical

Modeling
• The previous CTM had 1 level
• Hierarchical modeling: extend the CTM to a stack with multiple levels using a linked list
• Manipulate stack levels using 2 operations:
  • pushMatrix
  • popMatrix

PushMatrix
• PushMatrix( ): save the current modelview matrix (CTM) on the stack
• Positions 1 & 2 in the linked list are the same after PushMatrix

  Before PushMatrix:            After PushMatrix:
    [ 1 0 0 0 ]                   [ 1 0 0 0 ]   <- current top of CTM stack
    [ 0 2 0 0 ]                   [ 0 2 0 0 ]
    [ 0 0 3 0 ]                   [ 0 0 3 0 ]
    [ 0 0 0 1 ]                   [ 0 0 0 1 ]
    (current top                  [ 1 0 0 0 ]   <- copy of matrix at top of CTM
     of CTM stack)                [ 0 2 0 0 ]
                                  [ 0 0 3 0 ]
                                  [ 0 0 0 1 ]
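The push/pop mechanics above can be sketched in a few lines. To keep the stack behavior visible, this illustration stores a single scalar in place of a full 4x4 matrix (a real CTM stack stores mat4 values); the class and method names are stand-ins, not course code:

```cpp
#include <cassert>
#include <vector>

// Scalar stand-in for the CTM matrix stack: push copies the top,
// transforms multiply only the top, pop restores the saved value.
struct MatrixStack {
    std::vector<double> s { 1.0 };                 // bottom = identity (here: 1)

    void pushMatrix()      { s.push_back(s.back()); }  // save a copy of the top
    void popMatrix()       { s.pop_back(); }           // discard top, restore saved
    void multiply(double m){ s.back() *= m; }          // transform affects top only
    double top() const     { return s.back(); }
};
```

The usage pattern mirrors the slides: multiply a transform in, push before descending to a child node, pop to return to the parent's matrix.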

PushMatrix
• Further Rotate, Scale, Translate calls affect only the top matrix
• E.g. ctm = ctm * Translate(3,8,6) multiplies the translation matrix

    [ 1 0 0 3 ]
    [ 0 1 0 8 ]
    [ 0 0 1 6 ]
    [ 0 0 0 1 ]

  into the current top of the CTM stack only; the matrix in the second position is saved and unaffected by Translate(3,8,6)

PopMatrix
• PopMatrix( ): delete the position 1 matrix; the position 2 matrix becomes the top
  • Before PopMatrix: the current top of the CTM stack is the working matrix
  • After PopMatrix: that matrix is deleted and the saved matrix below becomes the current top

PopMatrix and PushMatrix Illustration
• Note: the diagram uses old glTranslate, glScale, etc. commands
• We want the same

behavior though
• Apply the matrix at the top of the CTM to the vertices of the object created
Ref: Computer Graphics Through OpenGL by Guha

Humanoid Figure
• Torso; upper arm, lower arm; upper leg, lower leg

Building the Model
• Draw each part as a function: torso(), left_upper_arm(), etc.
• Transform matrices: transform of a node with respect to its parent
  • Mlla positions the left lower arm with respect to the left upper arm
• Stack-based traversal (push, pop)

Draw Humanoid using Stack

    figure() {
        PushMatrix();   // save present model-view matrix
        torso();        // draw torso

Draw Humanoid using Stack

    figure() {
        PushMatrix();
        torso();
        Rotate(...);    // (Mh) transformation of head relative to torso
        head();         // draw head

Draw Humanoid using Stack

    figure() {
        PushMatrix();
        torso();
        Rotate(...);
        head();
        PopMatrix();    // go back to torso matrix,

                        // and save it again
        PushMatrix();
        Translate(...); // (Mlua) transformation(s) of left upper arm relative to torso
        Rotate(...);
        left_upper_arm();  // draw left-upper arm
        ...                // rest of code
    }

Complete Humanoid Tree with Matrices
• Scene graph of the Humanoid Robot

VRML
• Scene graph introduced by SGI Open Inventor
• Used in many graphics applications (Maya, etc.)
• Want a scene graph for the World Wide Web
• Need to link scene parts in distributed databases
• Virtual Reality Markup Language
  • Based on the Inventor database
  • Implemented with OpenGL

VRML World Example

References
• Angel and Shreiner, Interactive Computer Graphics (6th edition), Chapter 8

Computer Graphics (CS 4731)
Lecture 12: Linear Algebra for Graphics (Points, Scalars, Vectors)

Prof Emmanuel Agu
Computer Science Dept.
Worcester Polytechnic Institute (WPI)

Points, Scalars and Vectors
• Points, vectors defined relative to a coordinate system
• Point: a location in the coordinate system
• Example: Point (5,4)

Vectors
• Magnitude
• Direction
• NO position
• Can be added, scaled, rotated
• CG vectors: 2, 3 or 4 dimensions

Points
• Cannot add or scale points
• Subtracting 2 points = vector

Vector-Point Relationship
• Difference between 2 points = vector: v = Q - P
• Point + vector = point: P + v = Q

Vector Operations
• Define vectors a = (a1, a2, a3), b = (b1, b2, b3)
• Then vector addition: a + b = (a1 + b1, a2 + b2, a3 + b3)

Vector Operations
• Define a scalar s
• Scaling a vector by a scalar: as = (a1·s, a2·s, a3·s)
• Note vector subtraction: a - b = (a1 - b1,

a2 - b2, a3 - b3)

Vector Operations: Examples
• Scaling a vector by a scalar: as = (a1·s, a2·s, a3·s)
• Vector addition: a + b = (a1 + b1, a2 + b2, a3 + b3)
• For example, if a = (2,5,6), b = (-2,7,1) and s = 6, then

    a + b = (a1 + b1, a2 + b2, a3 + b3) = (0, 12, 7)
    as = (a1·s, a2·s, a3·s) = (12, 30, 36)

Affine Combination
• Given a vector a = (a1, a2, a3, ..., an)
• Affine combination: sum of all components = 1

    a1 + a2 + ... + an = 1

• Convex affine = affine + no negative components, i.e. a1, a2, ..., an non-negative

Magnitude of a Vector
• Magnitude of a:  |a| = sqrt(a1² + a2² + ... + an²)
• Normalizing a vector (unit vector):  â = a / |a|  (vector / magnitude)
• Note: magnitude of a normalized vector = 1, i.e. a1² + a2² + ... + an² = 1

Magnitude of a Vector

• Example: if a = (2, 5, 6)
• Magnitude: |a| = sqrt(2² + 5² + 6²) = sqrt(65)
• Normalizing: â = (2/sqrt(65), 5/sqrt(65), 6/sqrt(65))

Convex Hull
• Smallest convex object containing P1, P2, ..., Pn
• Formed by "shrink wrapping" the points

Dot Product (Scalar Product)
• Dot product: d = a·b = a1·b1 + a2·b2 + a3·b3
• For example, if a = (2,3,1) and b = (0,4,-1), then

    a·b = (2 x 0) + (3 x 4) + (1 x -1) = 0 + 12 - 1 = 11

Properties of Dot Products
• Symmetry (or commutative): a·b = b·a
• Linearity: (a + c)·b = a·b + c·b
• Homogeneity: (sa)·b = s(a·b)
• And |b|² = b·b

Angle Between Two Vectors
• b = (|b| cos θb, |b| sin θb), c = (|c| cos θc, |c| sin θc)
• b·c = |b| |c| cos θ
• Sign

of b·c:
  • b·c > 0: angle between b and c is less than 90°
  • b·c = 0: b and c are perpendicular
  • b·c < 0: angle between b and c is greater than 90°

Angle Between Two Vectors
• Find the angle between the vectors b = (3,4) and c = (5,2)
• Step 1: Find the magnitudes of b and c

    |b| = sqrt(3² + 4²) = sqrt(25) = 5
    |c| = sqrt(5² + 2²) = sqrt(29)

• Step 2: Normalize b and c

    b̂ = (3/5, 4/5)
    ĉ = (5/sqrt(29), 2/sqrt(29))

• Step 3: Find the cosine of the angle as the dot product b̂·ĉ

    b̂·ĉ = 15/(5·sqrt(29)) + 8/(5·sqrt(29)) = 23/(5·sqrt(29)) = 0.85422

• Step 4: Find the angle as the inverse cosine

    θ = cos⁻¹(0.85422) = 31.326°

Standard Unit Vectors
• Define i = (1,0,0), j = (0,1,0), k = (0,0,1)
• So that any vector

v = (a, b, c) = a·i + b·j + c·k

Cross Product (Vector Product)
• If a = (ax, ay, az) and b = (bx, by, bz), then

    a x b = (ay·bz - az·by)i - (ax·bz - az·bx)j + (ax·by - ay·bx)k

• Remember using the determinant:

    | i  j  k  |
    | ax ay az |
    | bx by bz |

• Note: a x b is perpendicular to a and b

Cross Product
• Note: a x b is perpendicular to both a and b

Cross Product
• Calculate a x b if a = (3,0,2) and b = (4,1,8)

Cross Product (Vector Product)
• Calculate a x b if a = (3,0,2) and b = (4,1,8), using the determinant:

    | i j k |
    | 3 0 2 |
    | 4 1 8 |

• Then a x b = (0·8 - 2·1)i - (3·8 - 2·4)j + (3·1 - 0·4)k = -2i - 16j + 3k

Normal for Triangle using Cross Product Method
• Plane: n·(p - p0) = 0
• n = (p2 - p0) x (p1 - p0)
• Normalize: n ← n / |n|
• Note

that the right-hand rule determines the outward face

Newell Method for Normal Vectors
• Problems with the cross product method:
  • Calculation difficult by hand, tedious
  • If 2 vectors are almost parallel, the cross product is small
  • Numerical inaccuracy may result
• Proposed by Martin Newell at Utah (teapot guy)
  • Uses formulae, suitable for computer
  • Compute during mesh generation
  • Robust!

Newell Method Example
• Example: find the normal of the polygon with vertices P0 = (6,1,4), P1 = (7,0,9) and P2 = (1,1,2)
• Using the simple cross product: ((7,0,9) - (6,1,4)) X ((1,1,2) - (6,1,4)) = (2,-23,-5)

Newell Method for Normal Vectors
• Formulae: Normal N = (mx, my, mz), summing over i = 0 .. N-1:

    mx = Σ (yi - ynext(i))(zi + znext(i))
    my = Σ (zi - znext(i))(xi + xnext(i))
    mz = Σ (xi - xnext(i))(yi + ynext(i))

Newell Method for Normal Vectors
• Calculate the x component of the normal:

    mx = Σ (yi - ynext(i))(zi + znext(i))
    mx = (1)(13) + (-1)(11) + (0)(6)
    mx = 13 - 11 + 0 = 2

• Calculate the y component of the normal:

    my = Σ (zi - znext(i))(xi + xnext(i))
    my = (-5)(13) + (7)(8) + (-2)(7)
    my = -65 + 56 - 14 = -23

• Calculate the z component of the normal:

    mz = Σ (xi - xnext(i))(yi + ynext(i))
    mz = (-1)(1) + (6)(1) + (-5)(2)
    mz = -1 + 6 - 10 = -5

  Vertex table (x, y, z):
    P0: 6 1 4
    P1: 7 0 9
    P2: 1 1 2
    P0: 6 1 4

• Note: Using the Newell method

yields the same result as the cross product method: (2, -23, -5)

Finding Vector Reflected From a Surface
• a = original vector
• n = normal vector
• r = reflected vector
• m = projection of a along n
• e = projection of a orthogonal to n
• Note: Θ1 = Θ2

    e = a - m
    r = e - m = a - 2m

Lines
• Consider all points of the form P(α) = P0 + αd
• Line: the set of all points that pass through P0 in the direction of vector d

Parametric Form
• Two-dimensional forms of a line:
  • Explicit: y = mx + h
  • Implicit: ax + by + c = 0
  • Parametric: x(α) = αx0 + (1-α)x1,  y(α) = αy0 + (1-α)y1
• The parametric form of a line is:
  • More robust and general than the other forms
  • Extends to curves and surfaces

Convexity
• An object is convex iff for any two points in

the object, all points on the line segment between these points are also in the object

Curves and Surfaces
• Curves: 1-parameter non-linear functions of the form P(α)
• Surfaces: two-parameter functions P(α, β)
  • Linear functions give planes and polygons

References
• Angel and Shreiner, Interactive Computer Graphics, 6th edition, Chapter 3
• Hill and Kelley, Computer Graphics using OpenGL, 3rd edition, Sections 4.2 - 4.4

Computer Graphics (CS 4731)
Lecture 13: Viewing & Camera Control

Prof Emmanuel Agu
Computer Science Dept.
Worcester Polytechnic Institute (WPI)

3D Viewing?
• Objects inside the view volume are drawn to the viewport (screen)
• Objects outside the view volume are clipped (not drawn)!
  1. Set camera position
  2. Set view volume (3D region of interest)
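Several of the worked vector examples from the linear-algebra lecture above (the angle between (3,4) and (5,2), the Newell normal of P0P1P2, and the reflected vector r = a - 2m) can be checked in a few lines. A standalone sketch; the helper names are hypothetical:

```cpp
#include <cassert>
#include <cmath>

// Angle in degrees between 2D vectors b and c, via the dot product
double angleDeg(double bx, double by, double cx, double cy) {
    double dot = bx*cx + by*cy;
    double lb = sqrt(bx*bx + by*by), lc = sqrt(cx*cx + cy*cy);
    return acos(dot / (lb * lc)) * 180.0 / acos(-1.0);
}

// Newell normal of a triangle with vertices p[0], p[1], p[2]
void newell(const double p[3][3], double n[3]) {
    n[0] = n[1] = n[2] = 0.0;
    for (int i = 0; i < 3; i++) {
        const double* a = p[i];
        const double* b = p[(i + 1) % 3];
        n[0] += (a[1] - b[1]) * (a[2] + b[2]);
        n[1] += (a[2] - b[2]) * (a[0] + b[0]);
        n[2] += (a[0] - b[0]) * (a[1] + b[1]);
    }
}

// Reflect a about unit normal nrm: r = a - 2m, where m = (a . nrm) nrm
void reflect(const double a[3], const double nrm[3], double r[3]) {
    double d = a[0]*nrm[0] + a[1]*nrm[1] + a[2]*nrm[2];
    for (int i = 0; i < 3; i++) r[i] = a[i] - 2.0 * d * nrm[i];
}
```

These reproduce the slides' numbers: roughly 31.33° for the angle, and (2, -23, -5) for the Newell normal, matching the cross-product result.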

Different View Volume Shapes
• Orthogonal view volume vs. perspective view volume
• Different view volume => different look
• Foreshortening? Near objects look bigger
  • Perspective projection has foreshortening
  • Orthogonal projection: no foreshortening

The World Frame
• Objects/scene initially defined in the world frame
• World frame origin at (0,0,0)
• Objects positioned, oriented (translate, scale, rotate transformations) applied to objects in the world frame

Camera Frame
• More natural to describe object positions relative to the camera (eye)
• Think about:
  • Our view of the world
  • First person shooter games

Camera Frame
• Viewing: after the user chooses the camera (eye) position, represent objects in the camera frame (origin at the eye position)
• Viewing transformation: changes object positions from the world frame to positions in the camera frame

using the model-view matrix

Default OpenGL Camera
• Initially the camera is at the origin: object and camera frames are the same
• The camera is located at the origin and points in the negative z direction
• The default view volume is a cube with sides of length 2 (objects in the volume are seen; the rest is clipped out)

Moving Camera Frame
• Same relative distance => same result/look:
  • translate objects +5 away from the camera, or
  • translate the camera -5 away from the objects

Moving the Camera
• We can move the camera using a sequence of rotations and translations
• Example: side view
  • Rotate the camera
  • Move it away from the origin
  • Model-view matrix C = TR

    // Using mat.h
    mat4 t = Translate(0.0, 0.0, -d);
    mat4 ry = RotateY(90.0);
    mat4 m = t*ry;

Moving the Camera Frame
• Object distances relative to the camera

are determined by the model-view matrix
  • Transforms (scale, translate, rotate) go into the modelview matrix
  • Camera transforms also go into the modelview matrix (CTM)

The LookAt Function
• Previously, the command gluLookAt positioned the camera
• gluLookAt deprecated!!
• Homegrown mat4 method LookAt() in mat.h
  • Can concatenate with modeling transformations

    void display( ) {
        mat4 mv = LookAt(vec4 eye, vec4 at, vec4 up);
        ...
    }
    // builds a 4x4 matrix for positioning, orienting the camera
    // and puts it into variable mv

LookAt
• LookAt(eye, at, up). But why do we set an Up direction?
• Programmer defines:
  • eye position
  • LookAt point (at), and
  • Up vector (Up direction usually (0,1,0))

Nate Robbins LookAt Demo

What does LookAt do?
• Programmer defines eye, lookAt and Up
• LookAt method:
  • Form new axes (u, v, n) at the camera

Transform objects from world to eye camera frame World coordinate Frame Eye coordinate Frame Source: http://www.doksinet Camera with Arbitrary Orientation and Position  Define new axes (u, v, n) at eye      v points vertically upward, n away from the view volume, u at right angles to both n and v. The camera looks toward ‐n. All vectors are normalized. Eye coordinate Frame (new) World coordinate Frame (old) Source: http://www.doksinet LookAt: Effect of Changing Eye Position or LookAt Point   Programmer sets LookAt(eye, at, up) If eye, lookAt point changes => u,v,n changes Source: http://www.doksinet Viewing Transformation Steps 1. 2.  Form camera (u,v,n) frame Transform objects from world frame (Composes matrix for coordinate transformation) Next, let’s form camera (u,v,n) frame (0,1,0) v y lookAt world x z (1,0,0) u n (0,0,1) (0,0,0) Source: http://www.doksinet Constructing U,V,N Camera Frame    Lookat arguments:

LookAt(eye, at, up). Known: eye position, LookAt point, up vector. Derive: new origin and three basis vectors (u, v, n).

Eye Coordinate Frame: new origin: eye position (that was easy). 3 basis vectors: one is the normal vector (n) of the viewing plane; the other two (u and v) span the viewing plane. n points away from the scene, toward the eye, so the camera looks along -n. (u, v, n should all be orthogonal unit vectors.)

N = eye - LookAt point
n = N / |N|

How about u and v? We can get u first: u is a vector perpendicular to the plane spanned by n and the view-up vector Vup:

U = Vup x n
u = U / |U|

How about v? Knowing n and u, getting v is easy: v = n x u, and v is already normalized.

Eye Coordinate Frame: put it all together. Eye space origin: (Eye.x, Eye.y, Eye.z). Basis vectors:

n = (eye - LookAt) / |eye - LookAt|
u = (Vup x n) / |Vup x n|
v = n x u

Step 2: World to Eye Transformation. Next, use u, v, n to compose the LookAt matrix. Transformation matrix (Mw2e)? P' = Mw2e x P. 1. Come up with a transformation sequence that lines up the eye frame with the world frame. 2. Apply this transform sequence to point P in reverse order.

World to Eye Transformation: 1. Rotate the eye frame to "align" it with the world frame. 2. Translate (-ex, -ey, -ez) to align the origin with the eye.

Rotation:             Translation:
[ ux uy uz 0 ]        [ 1 0 0 -ex ]
[ vx vy vz 0 ]        [ 0 1 0 -ey ]
[ nx ny nz 0 ]        [ 0 0 1 -ez ]
[ 0  0  0  1 ]        [ 0 0 0   1 ]

World to Eye Transformation: transformation order: apply the transformation to the object in reverse order: translation first, and then rotation.

Mw2e = Rotation x Translation =
[ ux uy uz -e.u ]
[ vx vy vz -e.v ]
[ nx ny nz -e.n ]
[ 0  0  0    1  ]

Multiplied together, this is the lookAt transform. Note:
e.u = ex*ux + ey*uy + ez*uz
e.v = ex*vx + ey*vy + ez*vz
e.n = ex*nx + ey*ny + ez*nz

lookAt Implementation (from mat.h):

mat4 LookAt( const vec4& eye, const vec4& at, const vec4& up )
{
    vec4 n = normalize( eye - at );
    vec4 u = normalize( cross(up, n) );
    vec4 v = normalize( cross(n, u) );
    vec4 t = vec4( 0.0, 0.0, 0.0, 1.0 );
    mat4 c = mat4( u, v, n, t );
    return c * Translate( -eye );
}

References:

Interactive Computer Graphics, Angel and Shreiner, Chapter 4; Computer Graphics using OpenGL (3rd edition), Hill and Kelley.

Other Camera Controls: the LookAt function is only for positioning the camera. Other ways to specify camera position/movement: yaw, pitch, roll; elevation, azimuth, twist; direction angles.

Flexible Camera Control: sometimes we want the camera to move, like controlling an airplane's orientation. Adopt aviation terms: pitch: nose up‐down; roll: roll body of plane; yaw: move nose side to side.

Yaw, Pitch and Roll Applied to Camera: similarly, yaw, pitch and roll can be applied to a camera.

Flexible Camera Control: create a camera class:

class Camera {
private:
    Point3 eye;
    Vector3 u, v, n;
    // ... etc.
};

Camera functions to specify pitch, roll, yaw. E.g.:

cam.slide(1, 0, 2);  // slide camera right 1 and backward 2
cam.roll(30);        // roll camera 30 degrees
cam.yaw(40);         // yaw it 40 degrees
cam.pitch(20);       // pitch it 20 degrees

Recall: Final LookAt Matrix: sliding along u, v or n changes the eye position, i.e. the -e.u, -e.v, -e.n entries; pitch, yaw and roll rotate u, v or n, i.e. change the rotation rows.

Implementing Flexible Camera Control: the Camera class maintains the current (u, v, n) and eye position. User inputs a desired roll, pitch, yaw angle or slide: 1. roll, pitch, yaw: calculate the modified vectors (u', v', n'); 2. slide: calculate the new eye position; 3. update the lookAt matrix and load it into the CTM.

Example: Camera Slide: recall the axes are unit vectors. The user changes eye by delU, delV or delN: eye = eye + changes (delU, delV, delN). Note: the function below combines all slides into one. E.g.

moving the camera by D along its u axis: eye = eye + D*u.

void Camera::slide(float delU, float delV, float delN)
{ // slide the camera along its u, v, n axes
    eye.x += delU*u.x + delV*v.x + delN*n.x;
    eye.y += delU*u.y + delV*v.y + delN*n.y;
    eye.z += delU*u.z + delV*v.z + delN*n.z;
    setModelViewMatrix( );
}

Load Matrix into CTM:

void Camera::setModelViewMatrix(void)
{ // load modelview matrix with camera values
    mat4 m;
    Vector3 eVec(eye.x, eye.y, eye.z); // eye as vector
    m[0] = u.x; m[4] = u.y; m[8]  = u.z; m[12] = -dot(eVec, u);
    m[1] = v.x; m[5] = v.y; m[9]  = v.z; m[13] = -dot(eVec, v);
    m[2] = n.x; m[6] = n.y; m[10] = n.z; m[14] = -dot(eVec, n);
    m[3] = 0;   m[7] = 0;   m[11] = 0;   m[15] = 1.0;
    CTM = m; // Finally, load matrix m into CTM matrix
}

Slide changes eVec; roll, pitch and yaw change u, v, n. Call setModelViewMatrix after each slide, roll, pitch or yaw.

Example: Camera Roll. Rolling through angle θ rotates u and v about n (matching the code below):

u' = cos(θ) u - sin(θ) v
v' = sin(θ) u + cos(θ) v

void Camera::roll(float angle)
{ // roll the camera through angle degrees
    float cs = cos(3.14159265/180 * angle);
    float sn = sin(3.14159265/180 * angle);
    Vector3 t = u; // remember old u
    u.set(cs*t.x - sn*v.x, cs*t.y - sn*v.y, cs*t.z - sn*v.z);
    v.set(sn*t.x + cs*v.x, sn*t.y + cs*v.y, sn*t.z + cs*v.z);
    setModelViewMatrix( );
}

Computer Graphics (CS 4731) Lecture 14: Projection (Part I). Prof Emmanuel Agu, Computer Science Dept., Worcester Polytechnic Institute (WPI)

Recall: 3D Viewing and View Volume. Previously: LookAt( ) to set the camera position. Now: set the view volume.

Recall: Different View Volume Shapes. Orthogonal view volume (no foreshortening) vs. perspective view volume (exhibits foreshortening). Different view volume => different look. Foreshortening? Near objects bigger.

View Volume Parameters: need to set

view volume parameters: projection type (perspective, orthographic, etc.); field of view and aspect ratio; near and far clipping planes.

Field of View: a view volume parameter; determines how much of the world is in the picture (vertically). Larger field of view = smaller objects drawn.

Near and Far Clipping Planes: only objects between the near and far planes are drawn.

Viewing Frustum: near plane + far plane + field of view = viewing frustum. Objects outside the frustum are clipped.

Setting up View Volume/Projection Type: previous OpenGL projection commands are deprecated!! Perspective view volume/projection: gluPerspective(fovy, aspect, near, far) or glFrustum(left,

right, bottom, top, near, far). Orthographic: glOrtho(left, right, bottom, top, near, far).

Useful functions, so we implement similar ones in mat.h: Perspective(fovy, aspect, near, far) or Frustum(left, right, bottom, top, near, far); Ortho(left, right, bottom, top, near, far). What are these arguments? Next!

Perspective(fovy, aspect, near, far): the aspect ratio is used to calculate the window width: aspect = w / h.

Frustum(left, right, bottom, top, near, far): can use Frustum( ) in place of Perspective( ). Same view volume shape, different arguments. near and far are measured from the camera.

Ortho(left, right, bottom, top, near, far): for orthographic projection. near and far are measured from the camera.

Example Usage: Setting View Volume/Projection Type:

void display()
{
    // clear screen
    glClear(GL_COLOR_BUFFER_BIT);
    ...
    // Set up camera position
    LookAt(0,0,1, 0,0,0, 0,1,0);
    ...
    // set up perspective transformation
    Perspective(fovy, aspect, near, far);
    ...
    // draw something
    display_all(); // your display routine
}

Demo: Nate Robbins demo on projection.

Perspective Projection: after setting the view volume, then the projection transformation. Projection? Classic: converts a 3D object to its corresponding 2D image on screen. How? Draw a line (projector) from each object point to the center of projection (COP); calculate where each intersects the projection plane.

Orthographic Projection: How? Draw parallel lines from each object vertex; the projection center is at infinity. In short, use the (x,y) coordinates and just drop the z coordinates. Figure: triangle in 3D and its projection in 2D.

Default View Volume/Projection? What if the user does not set up a projection? The default OpenGL projection is orthogonal (Ortho( )). To project points within the default view volume: xp = x, yp = y, zp = 0.

Homogeneous Coordinate Representation: default orthographic projection: xp = x, yp = y, zp = 0, wp = 1. Vertices before projection (3D); pp = M p; vertices after projection (2D). Default projection matrix:

M =
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 0 0 ]
[ 0 0 0 1 ]

In practice, can let M = I and set the z term to zero later.

The Problem with Classic Projection: keeps the (x,y) coordinates for drawing and drops z. But we may need z. Why? Classic projection loses the z value of each vertex (figure: triangle in 3D and its projection in 2D).

Normalization: Keeps z Value. Most graphics systems use view normalization. Normalization: convert all other projection types to orthogonal projections with the default view volume (apply a perspective or ortho transform matrix, then clip against the default view volume).

References: Interactive Computer Graphics (6th edition), Angel and Shreiner; Computer Graphics using OpenGL (3rd edition), Hill and Kelley.

Computer Graphics (CS 4731) Lecture 15: Projection (Part 2): Derivation. Prof Emmanuel Agu, Computer Science Dept., Worcester Polytechnic Institute (WPI)

Parallel Projection: normalization: find a 4x4 matrix to transform the user‐specified view volume, glOrtho(left, right, bottom, top, near, far), to the canonical view volume (cube).

Parallel Projection: Ortho. Parallel projection: 2 parts. 1. Translation: centers the view volume at the origin. 2. Scaling: reduces the user‐selected cuboid to the canonical cube (dimension 2, centered at the origin).

Translation lines up midpoints: e.g. midpoint of x = (right + left)/2. Thus translation factors: -(right + left)/2, -(top + bottom)/2, -(far + near)/2. Translation matrix T:

[ 1 0 0 -(right+left)/2 ]
[ 0 1 0 -(top+bottom)/2 ]
[ 0 0 1 -(far+near)/2   ]
[ 0 0 0 1               ]

Scaling factor: ratio of canonical cube to ortho view volume dimensions. Scaling factors: 2/(right - left), 2/(top - bottom), 2/(far - near). Scaling matrix S:

[ 2/(right-left) 0              0            0 ]
[ 0              2/(top-bottom) 0            0 ]
[ 0              0              2/(far-near) 0 ]
[ 0              0              0            1 ]

Concatenating translation and scaling, we get the ortho projection matrix:

P = ST =
[ 2/(right-left) 0              0            -(right+left)/(right-left) ]
[ 0              2/(top-bottom) 0            -(top+bottom)/(top-bottom) ]
[ 0              0              2/(far-near) -(far+near)/(far-near)     ]
[ 0              0              0            1                          ]

Final Ortho Projection: set z = 0. Equivalent to the homogeneous coordinate transformation

Morth =
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 0 0 ]
[ 0 0 0 1 ]

Hence, the general orthogonal projection in 4D is P = Morth ST.

Perspective Projection: projection maps the object from 3D space to the 2D screen: Perspective( ) or Frustum( ).

Perspective Projection: Classical. Based on similar triangles, a projector from the eye (COP) through (x, y, z) hits the near plane (at distance N) at:

x' = x * N/(-z)    y' = y * N/(-z)

So the (x*, y*) projection of a point (x, y, z) onto the near plane N is given as:

(x*, y*) = ( x * N/(-z), y * N/(-z) )

Numerical example: Q. Where on the viewplane does P = (1, 0.5, -1.5) lie for a near plane at N = 1? (x*, y*) = (1 * 1/1.5, 0.5 * 1/1.5) = (0.666, 0.333)

Pseudodepth: classical perspective projection projects the (x,y) coordinates to (x*, y*) and drops the z coordinate: two points on the same projector map to the same (x*, y*). How would we compare their z values? We need z to find the closest object (depth testing)!!!

Perspective Transformation: maps the actual z distance of the perspective view volume (from -Near to -Far) to the range [-1, 1] (pseudodepth) for the canonical view volume. We want a perspective transformation, NOT classical projection!! Set pseudodepth = az + b; next solve for a and b.

Perspective Transformation: we want to transform the viewing frustum volume into the canonical view volume, with corners (1, 1, -1) and (-1, -1, 1).

Perspective Transformation using Pseudodepth:

(x*, y*, z*) = ( x * N/(-z), y * N/(-z), (az + b)/(-z) )

Choose a, b so that as z varies from -Near to -Far, pseudodepth varies from -1 to 1 (canonical cube). Boundary conditions: z* = -1 when z = -N; z* = 1 when z = -F.

Transformation of z: Solve for a and b. Solving: z* = (az + b)/(-z). Use the boundary conditions to set up simultaneous equations:

-1 = (-aN + b)/N  =>  -N = -aN + b .......(1)
 1 = (-aF + b)/F  =>   F = -aF + b .......(2)

Multiply both sides of (1) by -1:

 N = aN - b .......(3)

Add equations (2) and (3):

F + N = aN - aF = -a(F - N)
a = -(F + N)/(F - N) .......(4)

Now put (4) back into (3):

b = aN - N = N(a - 1) = N( -(F + N)/(F - N) - 1 )
  = N( -(F + N) - (F - N) ) / (F - N)
  = -2NF / (F - N)

So:

a = -(F + N)/(F - N)    b = -2FN/(F - N)

What does this mean? An original point z in the original view volume is transformed into z* = (az + b)/(-z) in the canonical view volume, with a and b as above.

Homogeneous Coordinates: we want to express the projection transform as a 4x4 matrix. Previously, homogeneous coordinates of P = (Px, Py, Pz) => (Px, Py, Pz, 1). Introduce an arbitrary scaling factor w, so that P = (wPx, wPy, wPz, w) (note: w is non‐zero). For example, the point P = (2,4,6) can be expressed as (2,4,6,1), or (4,8,12,2) where w = 2, or (6,12,18,3) where w = 3, and so on. To convert from homogeneous back to ordinary coordinates, divide all four terms by w and discard the 4th term.

term Source: http://www.doksinet Perspective Projection Matrix N   0  0   0  0 N 0 0 a 0 0 a 1 N     x 0   wP x   wNP x   z       N   y 0   wP y   wNP y    z       w ( aP z  b ) b wP z  az  b        0   w    wP z   z   1    (F  N ) FN b  2 FN FN  In perspective transform matrix, already solved for a and b:  So, we have transform matrix to transform z values Source: http://www.doksinet Perspective Projection     Not done yet!! Can now transform z! Also need to transform the x = (left, right) and y = (bottom, top) ranges of viewing frustum to [‐1, 1] Similar to glOrtho, we need to translate and scale previous matrix along x and y to get final projection transform matrix we translate by    –(right + left)/2 in x

‐(top + bottom)/2 in y y top Scale by:   2/(right – left) in x 2/(top – bottom) in y x bottom left 1 -1 right Source: http://www.doksinet Perspective Projection  Translate along x and y to line up center with origin of CVV    –(right + left)/2 in x ‐(top + bottom)/2 in y Multiply by translation matrix: 1  0 0  0  0 0 1 0 0 1 0 0  ( right  left ) / 2    ( top  bottom ) / 2   0   1  y top x Line up centers Along x and y 1 bottom left -1 right Source: http://www.doksinet Perspective Projection  To bring view volume size down to size of of CVV, scale by    2/(right – left) in x 2/(top – bottom) in y Multiply by scale matrix: 2    right  left  0   0   0  0 2 top  bottom 0 0  0 0   0 0  1 0 0 1  y top x Scale size down along x and y bottom left 1 -1 right Source: http://www.doksinet

Perspective Projection Matrix 2    right  left  0   0   0  Translate Scale 0 0 2 top  bottom 0 1 0 0 2N    x max  x min  0    0   0  0  0 1   0  0   0 0 0  1  0 2N top  bottom 0 0 0 1 0 0 0 1 0 0 Previous Perspective Transform Matrix  ( right  left ) / 2   N    ( top  bottom ) / 2   0  0 0     0 1   right  left right  left top  bottom top  bottom  (F  N ) FN 1 glFrustum(left, right, bottom, top, N, F)     0    2 FN  FN  0  0 N 0 0 0 0 a 1 0  0 b  0  0 Final Perspective Transform Matrix N = near plane, F = far plane Source: http://www.doksinet Perspective Transformation  After perspective transformation, viewing frustum volume is transformed into canonical view volume (1, 1, -1) y z x (-1, -1,

1) Canonical View Volume Source: http://www.doksinet Geometric Nature of Perspective Transform a) b) Lines through eye map into lines parallel to z axis after transform Lines perpendicular to z axis map to lines perp to z axis after transform Source: http://www.doksinet Normalization Transformation distorted object projects correctly original clipping volume original object new clipping volume Source: http://www.doksinet Implementation   Set modelview and projection matrices in application program Pass matrices to shader void display( ){ Build 4x4 projection matrix . model view = LookAt(eye, at, up); projection = Ortho(left, right, bottom,top, near, far); // pass model view and projection matrices to shader glUniformMatrix4fv(matrix loc, 1, GL TRUE, model view); glUniformMatrix4fv(projection loc, 1, GL TRUE, projection); . } Source: http://www.doksinet Implementation  And the corresponding shader in vec4 vPosition; in vec4 vColor; Out vec4 color; uniform

mat4 model view; Uniform mat4 projection; void main( ) { gl Position = projection*model viewvPosition; color = vColor; } Source: http://www.doksinet References   Interactive Computer Graphics (6th edition), Angel and Shreiner Computer Graphics using OpenGL (3rd edition), Hill and Kelley Source: http://www.doksinet Computer Graphics (CS 4731) Lecture 16: Lighting, Shading and Materials (Part 1) Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Source: http://www.doksinet Why do we need Lighting & shading?  Sphere without lighting & shading  We want (sphere with shading):  Has visual cues for humans (shape, light position, viewer position, surface orientation, material properties, etc) Source: http://www.doksinet What Causes Shading?  Shading caused by different angles with light, camera at different points Source: http://www.doksinet Lighting?    Problem: Calculate surface color based on angle that

surface makes with light, viewer Programmer writes vertex shader code to calculate lighting at vertices! Equation for lighting calculation = lighting model 3. 1. Light attributes: intensity, color, position, direction, shape Interaction between lights and objects 2. Surface attributes color, reflectivity, transparency, etc Source: http://www.doksinet Shading?  After triangle is rasterized (drawn in 2D)     Triangle converted to pixels Per‐vertex lighting calculation means we know color of pixels coinciding with vertices (red dots) Shading: Graphics hardware figures out color of interior pixels How? Assume linear change => interpolate Lighting (done at vertices in vertex shader) Rasterization Find pixels belonging to each object Shading (done in hardware during rasterization) Source: http://www.doksinet Global Illumination (Lighting) Model  Global illumination: model interaction of light from all surfaces in scene (track multiple bounces)

shadow multiple reflection translucent surface Source: http://www.doksinet Rendering Equation  The infinite reflection, scattering and absorption of light is described by the rendering equation   Includes many effects (Reflection, Shadows, etc) Mathematical basis for all global illumination algorithms Lo         L ( x,     fr ( x, , ) Li ( x, )(   n )d  e       Li Lo is outgoing radiance fr Li incident radiance Le Le emitted radiance, fr is bidirectional reflectance distribution function (BRDF)  Fraction of incident light reflected by a surface Lo Source: http://www.doksinet Local Illumination (Lighting) Model  One bounce!  Doesn’t track inter‐reflections, transmissions   Global Illumination (GI) is accurate, looks real  But raster graphics pipeline (e.g OpenGL) renders each polygon independently (local rendering), no GI Source: http://www.doksinet

Light Sources   General light sources are difficult to model (e.g light bulb) Why? We must compute effect of light coming from all points on light source Source: http://www.doksinet Light Sources Abstractions   We generally use simpler light sources Abstractions that are easier to model Point light Spot light Directional light Area light Light intensity can be independent or dependent of the distance between object and the light source Source: http://www.doksinet Light‐Material Interaction   Light strikes object, some absorbed, some reflected Fraction reflected determines object color and brightness  Example: A surface looks red under white light because red component of light is reflected, other wavelengths absorbed Source: http://www.doksinet Phong Model      Simple lighting model that can be computed quickly 3 components  Diffuse  Specular  Ambient Compute each component separately Vertex Illumination = ambient +

diffuse + specular Materials reflect each component differently Source: http://www.doksinet Phong Model   Compute lighting (components) at each vertex (P) Uses 4 vectors, from vertex  To light source (l)  To viewer (v)  Normal (n)  Mirror direction (r) Source: http://www.doksinet Mirror Direction?   Angle of reflection = angle of incidence Normal is determined by surface orientation r = 2 (l · n ) n - l Source: http://www.doksinet Surface Roughness   Smooth surfaces: more reflected light concentrated in mirror direction Rough surfaces: reflects light in all directions smooth surface rough surface Source: http://www.doksinet Diffuse Lighting Example Source: http://www.doksinet Diffuse Light Calculation   How much light received from light source? Based on Lambert’s Law Receive more light Receive less light Source: http://www.doksinet Diffuse Light Reflected  Illumination surface receives from a light source and

reflects equally in all directions Eye position does not matter Source: http://www.doksinet Diffuse Light Calculation light vector (from object to light)   N : surface normal Lambert’s law: radiant energy D a small surface patch receives from a light source is: D = I x kD cos ()    I: light intensity : angle between light vector and surface normal kD: Diffuse reflection coefficient. Controls how much diffuse light surface reflects Source: http://www.doksinet Specular light example Specular? Bright spot on object Source: http://www.doksinet Specular light contribution     Incoming light reflected out in small surface area Specular bright in mirror direction Drops off away from mirror direction Depends on viewer position relative to mirror direction Mirror direction: lots of specular Away from mirror direction A little specular specular highlight Source: http://www.doksinet Specular light calculation     Perfect

reflection surface: all specular seen in mirror direction Non‐perfect (real) surface: some specular still seen away from mirror direction  is deviation of view angle from mirror direction Small  = more specular   p Mirror direction Source: http://www.doksinet Modeling Specular Relections  incoming intensity Mirror direction Is = ks I cos reflected intensity shininess coef Absorption coef Source: http://www.doksinet The Shininess Coefficient,     controls falloff sharpness High sharper falloff = small, bright highlight Low slow falloff = large, dull highlight    between 100 and 200 = metals  between 5 and 10 = plastic look cos  -90  90 Source: http://www.doksinet Specular light: Effect of ‘α’ Is = ks I cos α = 10 α = 30 α = 90 α = 270 Source: http://www.doksinet Ambient Light Contribution    Very simple approximation of global illumination (Lump

2nd, 3rd, 4th, . etc bounce into single term) Assume to be a constant No direction!  Independent of light position, object orientation, observer’s position or orientation object 4 object 3 object 2 object 1 Ambient = Ia x Ka constant Source: http://www.doksinet Ambient Light Example Ambient: background light, scattered by environment Source: http://www.doksinet Light Attentuation with Distance Light reaching a surface inversely proportional to square of distance d  We can multiply by factor of form 1/(ad + bd +cd2) to diffuse and specular terms  Source: http://www.doksinet Adding up the Components Adding all components (no attentuation term) , phong model for each light source can be written as diffuse + specular + ambient I = kd Id cos + ks Is cos + ka Ia  = kd Id (l · n) + ks Is (v · r )+  Note:  cos = l · n  cos = v · r  ka Ia  Source: http://www.doksinet Separate

RGB Components   We can separate red, green and blue components Instead of 3 light components Id, Is, Ia,     E.g Id = Idr, Idg, Idb 9 coefficients for each point source Idr, Idg, Idb, Isr, Isg, Isb, Iar, Iag, Iab Instead of 3 material components kd, ks, ka,    E.g kd = kdr, kdg, kdb 9 material absorption coefficients kdr, kdg, kdb, ksr, ksg, ksb, kar, kag, kab Source: http://www.doksinet Put it all together Can separate red, green and blue components. Instead of: I = kd Id (l · n) + ks Is (v · r )+ ka Ia  We computing lighting for RGB colors separately Red Ir = kdr Idr l · n + ksr Isr (v · r )+ kar Iar Ig = kdg Idg l · n + ksg Isg (v · r )+ kag Iag Green Blue Ib = kdb Idb l · n + ksb Isb (v · r )+ kab Iab   Above equation is just for one light source!!  For N lights, repeat calculation for each light Total illumination for a point P =  (Lighting for all lights) Source:

http://www.doksinet Coefficients for Real Materials Material Ambient Kar, Kag,kab Diffuse Kdr, Kdg,kdb Specular Ksr, Ksg,ksb Exponent, Black plastic 0.0 0.0 0.0 0.01 0.01 0.01 0.5 0.5 0.5 32 Brass 0.329412 0.223529 0.027451 0.780392 0.568627 0.113725 0.992157 0.941176 0.807843 27.8974 Polished Silver 0.23125 0.23125 0.23125 0.2775 0.2775 0.2775 0.773911 0.773911 0.773911 89.6 Figure 8.17, Hill, courtesy of McReynolds and Blythe  Source: http://www.doksinet References   Interactive Computer Graphics (6th edition), Angel and Shreiner Computer Graphics using OpenGL (3rd edition), Hill and Kelley Source: http://www.doksinet Modified Phong Model I = kd Id l · n + ks Is (v · r )+ ka Ia I = kd Id l · n + ks Is (n · h ) + ka Ia   Used in OpenGL Blinn proposed using halfway vector, more efficient h is normalized vector halfway between l and v h = ( l + v )/ | l + v | Source: http://www.doksinet Example Modified Phong model

gives Similar results as original Phong Source: http://www.doksinet Computer Graphics (CS 4731) Lecture 17: Lighting, Shading and Materials (Part 2) Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Source: http://www.doksinet Computation of Vectors   To calculate lighting at vertex P Need l, n, r and v vectors at vertex P User specifies:       Light position Viewer (camera) position Vertex (mesh position) l: Light position – vertex position v: Viewer position – vertex position Normalize all vectors! Source: http://www.doksinet Specifying a Point Light Source   For each light source component, set RGBA and position alpha = transparency Red vec4 vec4 vec4 vec4 Green Blue Alpha diffuse0 =vec4(1.0, 00, 00, 10); ambient0 = vec4(1.0, 00, 00, 10); specular0 = vec4(1.0, 00, 00, 10); light0 pos =vec4(1.0, 20, 3,0, 10); x y z w Source: http://www.doksinet Distance and Direction vec4 light0 pos =vec4(1.0,

20, 3,0, 10); x  y z Position is in homogeneous coordinates w Source: http://www.doksinet Recall: Mirror Direction Vector r    Can compute r from l and n l, n and r are co‐planar What about determining vertex normal n? r = 2 (l · n ) n - l Source: http://www.doksinet Finding Normal, n  Normal calculation in application, passed to vertex shader OpenGL Application Calculates n n vertex Shader Source: http://www.doksinet Recall: Newell Method for Normal Vectors  Formulae: Normal N = (mx, my, mz) N 1 mx    yi  ynext ( i ) zi  z next (i )  i 0 N 1 m y   zi  z next (i ) xi  xnext ( i )  i 0 N 1 mz   xi  xnext (i )  yi  ynext (i )  i 0 Source: http://www.doksinet OpenGL shading Need     Normals material properties Lights State‐based shading functions now deprecated   (glNormal, glMaterial, glLight) deprecated Source:

Material Properties
- Need to specify material properties of scene objects
- Material properties also have ambient, diffuse and specular components
- Material properties are specified as RGBA + reflectivities; the w component gives opacity (transparency)
- Default: all surfaces are opaque

vec4 ambient  = vec4(0.2, 0.2, 0.2, 1.0);  // R, G, B, opacity
vec4 diffuse  = vec4(1.0, 0.8, 0.0, 1.0);
vec4 specular = vec4(1.0, 1.0, 1.0, 1.0);
GLfloat shine = 100.0;                     // material shininess

Recall: CTM Matrix Passed into Shader
- Recall: the CTM matrix is concatenated in the application:

mat4 ctm = ctm * LookAt(vec4 eye, vec4 at, vec4 up);

- The CTM matrix passed in contains the object transform + camera
- Connected to the ModelView matrix in the shader:

in vec4 vPosition;
uniform mat4 ModelView;   // CTM passed in

void main( )
{
    // Transform vertex position into eye coordinates
    vec3 pos = (ModelView * vPosition).xyz;
    ...
}

Computation of Vectors
- The CTM transforms the vertex position into eye coordinates
- Eye coordinates? Object and light distances are measured from the eye
- Normalize all vectors (magnitude = 1); GLSL has a normalize function
- Note: vector lengths are affected by scaling

// Transform vertex position into eye coordinates
vec3 pos = (ModelView * vPosition).xyz;
vec3 L = normalize( LightPosition.xyz - pos );  // light vector
vec3 E = normalize( -pos );                     // view vector
vec3 H = normalize( L + E );                    // halfway vector

Spotlights
- Derived from a point source
- Direction l (of the lobe center)
- Cutoff angle θ: no light outside θ
- Attenuation: proportional to cos φ within the cone

Recall: Lighting Calculated Per Vertex
- Phong model (ambient + diffuse + specular) calculated at each vertex to determine the vertex color
- Per-vertex calculation? Usually done in the vertex shader
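The per-vertex lighting described above can also be sketched outside GLSL. Below is a minimal C++ sketch of the modified Phong (Blinn-Phong) evaluation for a single vertex and a single color channel; the Vec3 type and the function name vertexIntensity are illustrative stand-ins, not part of the course code:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x/len, v.y/len, v.z/len};
}

// I = kd*Id*(l.n) + ks*Is*(n.h)^shininess + ka*Ia, one scalar channel
float vertexIntensity(Vec3 p, Vec3 n, Vec3 lightPos, Vec3 eyePos,
                      float ka, float kd, float ks, float shininess,
                      float Ia, float Id, float Is) {
    Vec3 l = normalize(sub(lightPos, p));   // light vector
    Vec3 v = normalize(sub(eyePos, p));     // view vector
    Vec3 h = normalize(add(l, v));          // halfway vector
    float diff = std::fmax(dot(l, n), 0.0f);
    float spec = std::pow(std::fmax(dot(n, h), 0.0f), shininess);
    if (dot(l, n) < 0.0f) spec = 0.0f;      // light behind the surface
    return ka*Ia + kd*Id*diff + ks*Is*spec;
}
```

With the light and eye directly above the surface point, l, v and h all equal n, so the result is simply ka*Ia + kd*Id + ks*Is.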

Per-Vertex Lighting Shaders I

// vertex shader
in vec4 vPosition;
in vec3 vNormal;
out vec4 color;   // vertex shade

// light and material properties
// ambient, diffuse, specular products (light * reflectivity) specified by user
uniform vec4 AmbientProduct, DiffuseProduct, SpecularProduct;  // ka Ia, kd Id, ks Is
uniform mat4 ModelView;
uniform mat4 Projection;
uniform vec4 LightPosition;
uniform float Shininess;   // exponent of the specular term

Per-Vertex Lighting Shaders II

void main( )
{
    // Transform vertex position into eye coordinates
    vec3 pos = (ModelView * vPosition).xyz;

    vec3 L = normalize( LightPosition.xyz - pos );
    vec3 E = normalize( -pos );
    vec3 H = normalize( L + E );   // halfway vector

    // Transform vertex normal into eye coordinates
    vec3 N = normalize( (ModelView * vec4(vNormal, 0.0)).xyz );

Per-Vertex Lighting Shaders III

    // Compute terms in the illumination equation
    vec4 ambient = AmbientProduct;                 // ka Ia
    float cos_theta = max( dot(L,

N), 0.0 );
    vec4 diffuse = cos_theta * DiffuseProduct;     // kd Id (l · n)

    float cos_phi = pow( max(dot(N, H), 0.0), Shininess );
    vec4 specular = cos_phi * SpecularProduct;     // ks Is (n · h)^β
    if( dot(L, N) < 0.0 ) specular = vec4(0.0, 0.0, 0.0, 1.0);

    gl_Position = Projection * ModelView * vPosition;

    color = ambient + diffuse + specular;
    color.a = 1.0;
}

I = kd Id (l · n) + ks Is (n · h)^β + ka Ia

Per-Vertex Lighting Shaders IV

// in vertex shader, we declared color as out, set it
...
color = ambient + diffuse + specular;
color.a = 1.0;
}

// in fragment shader
in vec4 color;
void main()
{
    gl_FragColor = color;
}

- The color set in the vertex shader is interpolated by the graphics hardware, then used in the fragment shader

References
- Interactive Computer Graphics (6th edition), Angel and Shreiner
- Computer Graphics using OpenGL (3rd edition), Hill and Kelley

Shading?
- After a triangle is

rasterized/drawn:
- Per-vertex lighting calculation means we know the color of pixels coinciding with vertices (red dots)
- Shading determines the color of interior surface pixels
- Lighting calculation at vertices (in the vertex shader): I = kd Id (l · n) + ks Is (n · h)^β + ka Ia

Shading?
- Two types of shading:
  - Assume linear change => interpolate (smooth shading)
  - No interpolation (flat shading)

Flat Shading
- Compute lighting once for each face, assign that color to the whole face

Flat Shading
- Use only the face normal for all vertices in the face, plus the material property, to compute the face color
- Benefit: fast!
- Used when:
  - the polygon is small enough
  - the light source is far away (why?)
  - the eye is very far away (why?)
- Previous OpenGL command:

glShadeModel(GL_FLAT), now deprecated!

Mach Band Effect
- Flat shading suffers from the "mach band effect"
- Mach band effect: human eyes accentuate the discontinuity at the boundary (perceived intensity overshoots at the edges of a polygonal surface)

Smooth Shading
- Fixes the mach band effect: removes the edge discontinuity
- Compute lighting for more points on each face
- 2 popular methods: Gouraud shading and Phong shading

Gouraud Shading
- Lighting calculated for each polygon vertex
- Colors are interpolated for interior pixels
- Interpolation? Assume linear change from one vertex color to another
- Gouraud shading (interpolation) is the OpenGL default

Flat Shading Implementation
- Default is smooth shading: colors set in the vertex shader are interpolated
- Flat shading? Prevent color interpolation
- In the vertex shader, add

the keyword flat to the output color:

flat out vec4 color;   // vertex shade
...
color = ambient + diffuse + specular;
color.a = 1.0;

Flat Shading Implementation
- Also, in the fragment shader, add the keyword flat to the color received from the vertex shader:

flat in vec4 color;
void main()
{
    gl_FragColor = color;
}

Gouraud Shading
- Compute the vertex color in the vertex shader
- Shade interior pixels by vertex color interpolation, for all scanlines:

  Ca = lerp(C1, C2)
  Cb = lerp(C1, C3)
  pixel color = lerp(Ca, Cb)
  (* lerp: linear interpolation)

Linear Interpolation Example
- Point x lies between v1 and v2, at distance a = 60 from v1 and b = 40 from v2 (out of 100)
- RGB color at v1 = (0.1, 0.4, 0.2); RGB color at v2 = (0.15, 0.3, 0.5)
- Red value of v1 = 0.1, red value of v2 = 0.15
- Red value of x = 40/100 * 0.1 + 60/100 * 0.15 = 0.04 + 0.09 = 0.13
- Similar calculations for the green and blue values

Gouraud Shading
- Interpolate

the triangle color:
1. Interpolate along y between the edge endpoints (green dots) to get the colors of the two scanline endpoints (red dots)
2. Interpolate along x between the two ends of the scanline (red dots) to get the color of the pixel (blue dot)

Gouraud Shading Function (pg. 433 of Hill)

for(int y = ybott; y < ytop; y++)   // for each scanline
{
    find xleft and xright
    find colorleft and colorright
    colorinc = (colorright - colorleft) / (xright - xleft);
    for(int x = xleft, c = colorleft; x < xright; x++, c += colorinc)
    {
        put c into the pixel at (x, y)
    }
}

Gouraud Shading Implementation
- Vertex lighting is interpolated across the entire face's pixels if passed to the fragment shader in the following way:
1. Vertex shader: calculate the output color ( I = kd Id (l · n) + ks Is (n · h)^β + ka Ia ); declare the output vertex color as out
2. Fragment

shader: declare color as in and use it; it is already interpolated!!

Calculating Normals for Meshes
- For meshes, we already know how to calculate face normals (e.g. using the Newell method)
- For polygonal models, Gouraud proposed using the average of the normals around a mesh vertex:

n = (n1 + n2 + n3 + n4) / |n1 + n2 + n3 + n4|

Gouraud Shading Problem
- Assumes linear change across a face
- If polygon mesh surfaces have high curvatures, Gouraud shading in the polygon interior can be inaccurate
- Phong shading may look smoother

Phong Shading
- Need vectors n, l, v, r for all pixels, not provided by the user
- Instead of interpolating the vertex color, interpolate the vertex normal and vectors
- Use the per-pixel normal and vectors to calculate Phong lighting at each pixel (per-pixel lighting)
- Phong shading computes lighting in the fragment shader

Phong Shading (Per Fragment)
- Normal

interpolation (also interpolate l, v):

  na = lerp(n1, n2)
  nb = lerp(n1, n3)
  pixel normal = lerp(na, nb)

- At each pixel, we need to interpolate the normals (n) and the vectors v and l

Gouraud vs Phong Shading Comparison
- Phong shading is more work than Gouraud shading
- Move the lighting calculation to the fragment shader
- Just set up the vectors (l, n, v, h) in the vertex shader

a. Gouraud shading:
   Vertex shader: set vectors (l, n, v, h); calculate vertex colors ( I = kd Id (l · n) + ks Is (n · h)^β + ka Ia )
   Hardware: interpolates vertex color
   Fragment shader: read/set fragment color (already interpolated)

b. Phong shading:
   Vertex shader: set vectors (l, n, v, h)
   Hardware: interpolates vectors (l, n, v, h)
   Fragment shader: read in interpolated vectors (l, n, v, h); calculate fragment lighting ( I = kd Id (l · n) + ks Is (n · h)^β + ka Ia )

Per-Fragment Lighting Shaders I

// vertex shader
in vec4 vPosition;
in vec3 vNormal;

// output values that will be interpolated

per-fragment
out vec3 fN;   // declare variables n, v, l as out in the vertex shader
out vec3 fE;
out vec3 fL;

uniform mat4 ModelView;
uniform vec4 LightPosition;
uniform mat4 Projection;

Per-Fragment Lighting Shaders II

void main()
{
    // set variables n, v, l in the vertex shader
    fN = vNormal;
    fE = -vPosition.xyz;
    fL = LightPosition.xyz;

    if( LightPosition.w != 0.0 ) {
        fL = LightPosition.xyz - vPosition.xyz;
    }

    gl_Position = Projection * ModelView * vPosition;
}

Per-Fragment Lighting Shaders III

// fragment shader
// per-fragment interpolated values from the vertex shader
// declare vectors n, v, l as in in the fragment shader (hardware interpolates them)
in vec3 fN;
in vec3 fL;
in vec3 fE;

uniform vec4 AmbientProduct, DiffuseProduct, SpecularProduct;
uniform mat4 ModelView;
uniform vec4 LightPosition;
uniform float Shininess;

Per-Fragment Lighting Shaders IV

void main()
{
    // Normalize the input lighting

vectors
    // use interpolated variables n, v, l in the fragment shader
    vec3 N = normalize(fN);
    vec3 E = normalize(fE);
    vec3 L = normalize(fL);

    vec3 H = normalize( L + E );
    vec4 ambient = AmbientProduct;

Per-Fragment Lighting Shaders V

    float Kd = max(dot(L, N), 0.0);
    vec4 diffuse = Kd * DiffuseProduct;

    float Ks = pow(max(dot(N, H), 0.0), Shininess);
    vec4 specular = Ks * SpecularProduct;

    // discard the specular highlight if the light is behind the vertex
    if( dot(L, N) < 0.0 )
        specular = vec4(0.0, 0.0, 0.0, 1.0);

    gl_FragColor = ambient + diffuse + specular;
    gl_FragColor.a = 1.0;
}

I = kd Id (l · n) + ks Is (n · h)^β + ka Ia

Toon (or Cel) Shading
- Non-Photorealistic (NPR) effect
- Shade in bands of color

Toon (or Cel) Shading
- How? Consider the (l · n) diffuse term (or cos Θ) term

in I = kd Id (l · n) + ks Is (n · h)^β + ka Ia
- Clamp values to the minimum value of each range to get the toon shading effect:

  l · n                  Value used
  Between 0.75 and 1     0.75
  Between 0.5 and 0.75   0.5
  Between 0.25 and 0.5   0.25
  Between 0.0 and 0.25   0.0

BRDF Evolution
- BRDFs have evolved historically:
- 1970s: empirical models (Phong's illumination model)
- 1980s: physically based models; microfacet models (e.g. the Cook-Torrance model)
- 1990s: physically-based appearance models of specific effects (materials, weathering, dust, etc.)
- Early 2000s: measurement & acquisition of static materials/lights (wood, translucence, etc.)
- Late 2000s: measurement & acquisition of time-varying BRDFs (ripening, etc.)

Physically-Based Shading Models
- The Phong model produces pretty pictures
- Cons: empirical (fudged?) (cos)^α term, plastic look
- Shaders can implement better

lighting/shading models
- Big trend towards physically-based lighting models
- Physically-based? Based on the physics of how light interacts with the actual surface; apply optics/physics theories
- Classic: the Cook-Torrance shading model (TOGS 1982)

Cook-Torrance Shading Model
- Same ambient and diffuse terms as Phong
- New, better specular component than (cos φ)^f:

  specular = F(θ, λ) D G / (n · v)

  (D: microfacet distribution term, G: geometric attenuation, F: Fresnel term)
- Idea: surfaces have small V-shaped microfacets (grooves), each with its own normal about the average normal n
- Many grooves at each surface point
- Distribution term D: grooves facing a given direction contribute (e.g. half of the grooves face 30 degrees, etc.)

BV BRDF Viewer
- BRDF viewer (view the distribution of light bounce)

BRDF Evolution
- BRDFs have evolved historically:
- 1970s: empirical models (Phong's illumination model)
- 1980s:

physically based models; microfacet models (e.g. the Cook-Torrance model)
- 1990s: physically-based appearance models of specific effects (materials, weathering, dust, etc.)
- Early 2000s: measurement & acquisition of static materials/lights (wood, translucence, etc.)
- Late 2000s: measurement & acquisition of time-varying BRDFs (ripening, etc.)

Measuring BRDFs
- Murray-Coleman and Smith gonioreflectometer (copied and modified from [Ward92])

Measured BRDF Samples
- Mitsubishi Electric Research Lab (MERL): http://www.merlcom/brdf/
- Wojciech Matusik, MIT PhD Thesis, 100 samples

BRDF Evolution
- 1970s: empirical models (Phong's illumination model)
- 1980s: physically based models; microfacet models (e.g. the Cook-Torrance model)
- 1990s: physically-based appearance models of specific effects

(materials, weathering, dust, etc.)
- Early 2000s: measurement & acquisition of static materials/lights (wood, translucence, etc.)
- Late 2000s: measurement & acquisition of time-varying BRDFs (ripening, etc.)

Time-Varying BRDF
- BRDF: how different materials reflect light
- Time-varying? How reflectance changes over time
- Examples: weathering, ripening fruits, rust, etc.

References
- Interactive Computer Graphics (6th edition), Angel and Shreiner
- Computer Graphics using OpenGL (3rd edition), Hill and Kelley

Computer Graphics (4731) Lecture 19: Texturing
Prof Emmanuel Agu, Computer Science Dept., Worcester Polytechnic Institute (WPI)

The Limits of Geometric Modeling
- Although graphics cards can render over 10 million polygons per second, many phenomena are even more detailed:
  - Clouds
  - Grass
  - Terrain
  - Skin

- Images: a computationally inexpensive way to add detail
- Image complexity does not affect the complexity of geometry processing (transformation, clipping)

Textures in Games
- Everything is a texture except foreground characters that require interaction
- Even detail on foreground characters (e.g. clothes) is texture

Types of Texturing
1. Geometric model
2. Texture mapped: paste an image (marble) onto the polygon

Types of Texturing
3. Bump mapping: simulate surface roughness (dimples)
4. Environment mapping: picture of the sky/environment over the object

Texture Mapping
1. Define texture position on the geometry
2. Projection (3D geometry -> 2D projection of 3D geometry)
3. Texture lookup into the 2D image (s, t)
4. Patch the texel

Texture Representation
- Bitmap (pixel map) textures: images (jpg, bmp, etc.) loaded
- Procedural textures: e.g. a fractal

picture generated in the .cpp file
- Textures are applied in shaders

Bitmap texture:
- A 2D image: a 2D array texture[height][width]
- Each element (or texel) has coordinates (s, t)
- s and t are normalized to the [0, 1] range, from (0,0) to (1,1)
- Any (s, t) => [red, green, blue] color

Texture Mapping
- Map? Each (x, y, z) point on the object has a corresponding (s, t) point in the texture:

  s = s(x, y, z)
  t = t(x, y, z)

  (world coordinates (x, y, z) -> texture coordinates (s, t))

6 Main Steps to Apply Texture
1. Create a texture object
2. Specify the texture (read or generate the image, assign it to a texture (hardware) unit, enable texturing (turn on))
3. Assign texture (corners) to object corners
4. Specify texture parameters (wrapping, filtering)
5. Pass textures to shaders
6. Apply textures in shaders

Step 1: Create Texture Object
- OpenGL has texture objects (multiple objects possible)
- 1 object stores 1

texture image + texture parameters
- First set up the texture object:

GLuint mytex[1];
glGenTextures(1, mytex);                 // get texture identifier
glBindTexture(GL_TEXTURE_2D, mytex[0]);  // form new texture object

- Subsequent texture functions use this object
- Another call to glBindTexture with a new name starts a new texture object

Step 2: Specifying a Texture Image
- Define the input picture to paste onto the geometry
- Define the texture image as an array of texels in CPU memory:

GLubyte my_texels[512][512][3];

- Read in scanned images (jpeg, png, bmp, etc. files)
- If uncompressed (e.g. bitmap): read into the array from disk
- If compressed (e.g. jpeg): use third-party libraries (e.g. Qt, DevIL) to uncompress + load bmp, jpeg, png, etc.

Step 2: Specifying a Texture Image
- Procedural texture: generate the pattern in application code
- Enable texture mapping: glEnable(GL_TEXTURE_2D)
- OpenGL supports 1-4 dimensional texture maps
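As a concrete instance of a procedural texture, a checkerboard texel array can be generated in application code. A minimal sketch (the size and names are my own, not from the slides); the resulting array could be handed to glTexImage2D exactly like an image read from disk:

```cpp
#include <cassert>

// A 64x64 RGB texel array filled with an 8x8-block checkerboard.
const int TEX_SIZE = 64;
unsigned char my_texels[TEX_SIZE][TEX_SIZE][3];

void makeCheckerboard() {
    for (int i = 0; i < TEX_SIZE; i++) {
        for (int j = 0; j < TEX_SIZE; j++) {
            // flips between 0 and 1 every 8 texels in each direction
            int on = ((i / 8) + (j / 8)) % 2;
            unsigned char c = on ? 255 : 0;
            my_texels[i][j][0] = c;   // red
            my_texels[i][j][1] = c;   // green
            my_texels[i][j][2] = c;   // blue
        }
    }
}
```

Note that 64 is a power of 2, so the array needs no padding or scaling before being specified as a texture.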

Specify Image as a Texture
- Tell OpenGL: this image is a texture!!

glTexImage2D( target, level, components, w, h, border, format, type, texels );

  target:       type of texture, e.g. GL_TEXTURE_2D
  level:        used for mipmapping (0: highest resolution; more later)
  components:   elements per texel
  w, h:         width and height of texels in pixels
  border:       used for smoothing (discussed later)
  format, type: describe the texels
  texels:       pointer to the texel array

Example:
glTexImage2D(GL_TEXTURE_2D, 0, 3, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, my_texels);

Fix Texture Size
- OpenGL textures must be a power of 2
- If the texture dimensions are not a power of 2, either: 1) pad with zeros, or 2) scale the image (e.g. 60x100 -> 64x128)
- Remember to adjust the target polygon corners: you don't want black texels in your final picture

6 Main Steps. Where are we?
1. Create a texture object
2. Specify the texture (read or generate the image, assign it to a texture (hardware) unit, enable texturing (turn on))
3. Assign texture (corners) to object corners

4. Specify texture parameters (wrapping, filtering)
5. Pass textures to shaders
6. Apply textures in shaders

Step 3: Assign Object Corners to Texture Corners
- Each object corner (x, y, z) => image corner (s, t)
- E.g. object corner (200, 348, 100) => (1, 1) in the image
- The programmer establishes this mapping
- The target polygon can be any size/shape

Step 3: Assigning Texture Coordinates
- After specifying the corners, interior (s, t) ranges are also mapped
- Example? With the corners mapped as below, the abc subrange is also mapped:
  texture space (s, t): a = (0.2, 0.8), b = (0.4, 0.2), c = (0.8, 0.4), mapped to the object-space triangle ABC

Step 3: Code for Assigning Texture Coordinates
- Example: trying to map a picture to a quad
- For each quad corner (vertex), specify

the vertex (x, y, z) and the corresponding corner of the texture (s, t)
- May generate an array of vertices + an array of texture coordinates:

points[i] = point3(2, 4, 6);
tex_coord[i] = point2(0.0, 1.0);

(points array: x y z per vertex, Position 1 (A), Position 2 (B), Position 3 (C);
 tex_coord array: s t per vertex, Tex0 (a), Tex1 (b), Tex2 (c))

Step 3: Code for Assigning Texture Coordinates

void quad( int a, int b, int c, int d )
{
    quad_colors[Index] = colors[a];        // specify vertex color
    points[Index] = vertices[a];           // specify vertex position
    tex_coords[Index] = vec2( 0.0, 0.0 );  // specify corresponding texture corner
    Index++;

    quad_colors[Index] = colors[b];
    points[Index] = vertices[b];
    tex_coords[Index] = vec2( 0.0, 1.0 );
    Index++;

    // other vertices
}

(Parallel arrays: colors array (r g b per vertex), points array (x y z per vertex), tex_coord array (s t per vertex))
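The quad() pattern above can be sketched in a self-contained form. This simplified version (colors omitted; Point3/Point2 are stand-ins for the book's point3/vec2 types) shows how positions and texture corners stay in lock-step as a quad is split into two triangles:

```cpp
#include <cassert>

struct Point3 { float x, y, z; };
struct Point2 { float s, t; };

const int MAX_VERTS = 6;
Point3 points[MAX_VERTS];
Point2 tex_coords[MAX_VERTS];
int Index = 0;

// append one vertex position together with its matching texture corner
void emit(Point3 p, Point2 tc) {
    points[Index] = p;
    tex_coords[Index] = tc;
    Index++;
}

// corners a, b, c, d of the quad, given counter-clockwise
void quad(Point3 a, Point3 b, Point3 c, Point3 d) {
    // triangle 1: a, b, c
    emit(a, {0.0f, 0.0f});
    emit(b, {0.0f, 1.0f});
    emit(c, {1.0f, 1.0f});
    // triangle 2: a, c, d (shared corners reuse the same texture coordinates)
    emit(a, {0.0f, 0.0f});
    emit(c, {1.0f, 1.0f});
    emit(d, {1.0f, 0.0f});
}
```

Shared corners (a and c) must receive the same (s, t) in both triangles, otherwise a seam appears along the diagonal.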

Step 5: Passing Texture to Shader
- Pass vertex and texture coordinate data as vertex arrays
- Set the texture unit
- "vPosition" and "vTexCoord" are the variable names in the shader:

offset = 0;
GLuint vPosition = glGetAttribLocation( program, "vPosition" );
glEnableVertexAttribArray( vPosition );
glVertexAttribPointer( vPosition, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(offset) );

offset += sizeof(points);
GLuint vTexCoord = glGetAttribLocation( program, "vTexCoord" );
glEnableVertexAttribArray( vTexCoord );
glVertexAttribPointer( vTexCoord, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(offset) );

// Set the value of the fragment shader texture sampler variable
// ("texture") to the appropriate texture unit
glUniform1i( glGetUniformLocation(program, "texture"), 0 );

Step 6: Apply Texture in Shader (Vertex Shader)
- The vertex shader receives the data and outputs texture coordinates to the fragment shader:

in vec4 vPosition;    // vertex position in

object coordinates
in vec4 vColor;       // vertex color from application
in vec2 vTexCoord;    // texture coordinate from application

out vec4 color;       // output color to be interpolated
out vec2 texCoord;    // output tex coordinate to be interpolated

texCoord = vTexCoord;
color = vColor;
gl_Position = projection * modelview * vPosition;

Step 6: Apply Texture in Shader (Fragment Shader)
- Textures are applied in the fragment shader
- Samplers return a texture color from a texture object:

in vec4 color;               // color from rasterizer
in vec2 texCoord;            // texture coordinate from rasterizer
uniform sampler2D texture;   // texture object from application

void main()
{
    // output fragment color = original object color * color looked up at texCoord (s,t) in the texture
    gl_FragColor = color * texture2D( texture, texCoord );
}

Map Textures to Surfaces
- Texture mapping is performed during rasterization
- For each pixel, its texture coordinates (s, t)

are interpolated based on the corners' texture coordinates (why not just interpolate the color?)
- The interpolated (s, t) coordinates are then used to perform the texture lookup

Texture Mapping and the OpenGL Pipeline
- Images and geometry flow through separate pipelines that join during fragment processing
- Object geometry: geometry pipeline; image: pixel pipeline
- "Complex" textures do not affect geometric complexity

6 Main Steps to Apply Texture
1. Create a texture object
2. Specify the texture (read or generate the image, assign it to a texture (hardware) unit, enable texturing (turn on))
3. Assign texture (corners) to object corners
4. Specify texture parameters (wrapping, filtering)
5. Pass textures to shaders
6. Apply textures in shaders

- We still haven't talked about setting texture parameters
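Before looking at the wrapping parameters in Step 4, the two wrap modes can be expressed as plain functions over a texture coordinate; a small sketch (the function names are mine, not OpenGL's):

```cpp
#include <cassert>
#include <cmath>

// GL_CLAMP-style wrapping: an s outside [0, 1] is pinned to the border value
float clampWrap(float s) {
    if (s < 0.0f) return 0.0f;
    if (s > 1.0f) return 1.0f;
    return s;
}

// GL_REPEAT-style wrapping: use s modulo 1, tiling the texture
float repeatWrap(float s) {
    return s - std::floor(s);   // fractional part, always in [0, 1)
}
```

For example, s = 1.25 clamps to 1.0 under GL_CLAMP but wraps to 0.25 under GL_REPEAT, which is what produces the tiled pattern in the GL_REPEAT figure.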

Step 4: Specify Texture Parameters
- Texture parameters control how the texture is applied
- Wrapping parameters are used if s, t fall outside the (0, 1) range:
  - Clamping: if s,t > 1 use 1; if s,t < 0 use 0
  - Wrapping: use s,t modulo 1

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );

Step 4: Specify Texture Parameters: Mipmapped Textures
- Mipmapping pre-generates prefiltered (averaged) texture maps of decreasing resolutions
- Declare the mipmap level during texture definition:

glTexImage2D( GL_TEXTURE_*D, level, ... );

References
- Angel and Shreiner, Interactive Computer Graphics, 6th edition
- Hill and Kelley, Computer Graphics using OpenGL, 3rd edition
- UIUC CS 319, Advanced Computer Graphics Course
- David Luebke, CS 446, U. of Virginia, slides
- Chapters 1-6 of RT

Rendering
- Hanspeter Pfister, CS 175 Introduction to Computer Graphics, Harvard Extension School, Fall 2010 slides
- Christian Miller, CS 354, Computer Graphics, U. of Texas, Austin slides, Fall 2011
- Ulf Assarsson, TDA361/DIT220, Computer Graphics 2011, Chalmers Institute of Tech, Sweden

Recall: 6 Main Steps to Apply Texture
1. Create a texture object
2. Specify the texture (read or generate the image, assign it to a texture (hardware) unit, enable texturing (turn on))
3. Assign texture (corners) to object corners
4. Specify texture parameters (wrapping, filtering)
5. Pass textures to shaders
6. Apply textures in shaders

- We still haven't talked about setting texture parameters

Recall: Step 4: Specify Texture Parameters
- Texture parameters control how the texture is applied
- Wrapping parameters are used if s, t fall outside the (0, 1) range:
  - Clamping: if s,t > 1 use 1; if s,t < 0 use 0
  - Wrapping: use s,t modulo 1

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );

Magnification and Minification
- Magnification: stretch a small texture to fill many pixels
- Minification: shrink a large texture to fit few pixels

Step 4: Specify Texture Parameters: Texture Value Lookup
- How about coordinates that are not exactly at the intersection (texel) positions?
  A) Nearest neighbor
  B) Linear interpolation
  C) Other filters

Example: Texture Magnification
- 48 x 48 image projected (stretched) onto 320 x 320 pixels:
  - Nearest neighbor filter
  - Bilinear filter (avg of 4 nearest texels)
  - Cubic filter (weighted avg of 5 nearest texels)

Texture Mapping Parameters
1) Nearest neighbor (lower image

quality):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

2) Linearly interpolate the neighbors (better quality, slower):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

(or GL_TEXTURE_MAG_FILTER for magnification)

Dealing with Aliasing
- Point sampling of the texture can lead to aliasing errors: point samples in texture space may miss the blue stripes when mapped to point samples in u,v (or x,y,z) space

Area Averaging
- A better but slower option is area averaging: average the texture over the pixel's preimage

Other Stuff
- Wrapping texture onto curved surfaces, e.g. a cylinder:

  s = (θ - θa) / (θb - θa),   t = (z - za) / (zb - za)

- Wrapping texture onto a sphere:

  s = (θ - θa) / (θb - θa),   t = (φ - φa) / (φb - φa)

- Bump mapping: perturb the surface normal by a quantity proportional to the texture

Computer Graphics (CS 4731) Lecture 20: Environment Mapping (Reflections and

Refractions)
Prof Emmanuel Agu (adapted from slides by Ed Angel), Computer Science Dept., Worcester Polytechnic Institute (WPI)

Environment Mapping
- Environment mapping is a way to create the appearance of highly reflective and refractive surfaces without ray tracing

Reflecting the Environment
- Reflection vector R is computed from the view vector V and the normal N
- Sphere of environment around the object, or cube of environment around the object

Types of Environment Maps
- Assumes the environment is infinitely far away
- Options: store the object's environment as
  a) a sphere around the object (sphere map)
  b) a cube around the object (cube map)
- OpenGL supports cube maps and sphere maps

Cube Mapping
- Need to compute the reflection vector, r
- Use r for the environment map lookup

Cube Map: How to Store
- Stores the "environment" around objects as the 6 sides of a cube (1

texture)
- Load 6 textures separately into 1 OpenGL cubemap

Cube Maps
- A loaded cube map texture can be accessed in GLSL through a cubemap sampler:

vec4 texColor = textureCube(mycube, texcoord);

- Texture coordinates must be 3D

Creating Cube Map
- Use 6 camera directions from the scene center, each with a 90 degree angle of view

Indexing into Cube Map
- Compute R = 2(N·V)N - V (object at origin)
- Perform the lookup:

vec4 texColor = textureCube(mycube, R);

- The largest-magnitude component of R = (x, y, z) determines the face of the cube
- The other 2 components give the texture coordinates (more on this later)

Declaring Cube Maps in OpenGL

glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, level, rows, columns, border, GL_RGBA, GL_UNSIGNED_BYTE, image1);

- Repeat similarly for the other 5 images (sides)
- Make 1 cubemap texture

object from 6 images
- Parameters apply to all six images, e.g.:

glTexParameteri( GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_REPEAT );

- Note: texture coordinates are in 3D space (s, t, r)

Cube Map Example (init)
- This example generates simple colors as a texture; you can also just load 6 pictures of the environment

// colors for sides of cube
GLubyte red[3]     = {255, 0, 0};
GLubyte green[3]   = {0, 255, 0};
GLubyte blue[3]    = {0, 0, 255};
GLubyte cyan[3]    = {0, 255, 255};
GLubyte magenta[3] = {255, 0, 255};
GLubyte yellow[3]  = {255, 255, 0};

glEnable(GL_TEXTURE_CUBE_MAP);

// Create texture object
glGenTextures(1, tex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_CUBE_MAP, tex[0]);

Cube Map (init II)
- Load 6 different pictures into 1 cube map of the environment:

glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, 3, 1, 1, 0, GL_RGB, GL_UNSIGNED_BYTE, red);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, 3, 1, 1, 0, GL_RGB, GL_UNSIGNED_

BYTE, green);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y, 0, 3, 1, 1, 0, GL_RGB, GL_UNSIGNED_BYTE, blue);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, 3, 1, 1, 0, GL_RGB, GL_UNSIGNED_BYTE, cyan);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, 3, 1, 1, 0, GL_RGB, GL_UNSIGNED_BYTE, magenta);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, 3, 1, 1, 0, GL_RGB, GL_UNSIGNED_BYTE, yellow);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

Cube Map (init III)
- Connect the texture map (bound to texture unit 1 above) to the variable texMap in the fragment shader (texture mapping is done in the fragment shader):

GLuint texMapLocation;
GLuint tex[1];
texMapLocation = glGetUniformLocation(program, "texMap");
glUniform1i(texMapLocation, 1);   // pass the texture *unit* (GL_TEXTURE1), not the texture name

Adding Normals

void quad(int a, int b, int c, int d)
{
    static int i = 0;
    normal = normalize(cross(vertices[b] - vertices[a],
                             vertices[c] - vertices[b]));
    normals[i] = normal;
    points[i] = vertices[a];
    i++;
    // rest of data

- Calculate and set the quad normals

Vertex Shader

out vec3 R;
in vec4 vPosition;
in vec4 Normal;
uniform mat4 ModelView;
uniform mat4 Projection;

void main()
{
    gl_Position = Projection * ModelView * vPosition;
    vec4 eyePos = vPosition;         // calculate view vector V
    vec4 NN = ModelView * Normal;    // transform normal
    vec3 N = normalize(NN.xyz);      // normalize normal
    R = reflect(eyePos.xyz, N);      // calculate reflection vector R
}

Fragment Shader

in vec3 R;
uniform samplerCube texMap;

void main()
{
    vec4 texColor = textureCube(texMap, R);   // look up texture map using R
    gl_FragColor = texColor;
}

Refraction using Cube Map
- Can also use the cube map for refraction (transparent objects)

Reflection vs Refraction
- Reflection bounces light off the surface; refraction bends light through it

Reflection and Refraction
- At each vertex: I = I_amb + I_diff + I_spec + I_refl + I_tran
- (Diagram: incident ray along dir hits surface point Ph with normal m; reflected ray I_R along r, viewer along v)

IR dir v s Ph IT  t Refracted component IT is along transmitted direction t Source: http://www.doksinet Finding Transmitted (Refracted) Direction   Transmitted direction obeys Snell’s law Snell’s law: relationship holds in diagram below m 1 faster slower sin( 2 ) sin(1 )  c2 c1 Ph 2 t c1, c2 are speeds of light in medium 1 and 2 Source: http://www.doksinet Finding Transmitted Direction     If ray goes from faster to slower medium (e.g air to glass), ray is bent towards normal If ray goes from slower to faster medium (e.g glass to air), ray is bent away from normal c1/c2 is important. Usually measured for medium‐to‐ vacuum. Eg water to vacuum Some measured relative c1/c2 are:      Air: 99.97% Glass: 52.2% to 59% Water: 75.19% Sapphire: 56.50% Diamond: 41.33% Source: http://www.doksinet Transmission Angle  Vector for transmission angle can be found as  c2  c2 t  dir   (m  dir )

 cos( 2 ) m c1  c1  where m dir 1 c1 Medium #1 Medium #2 Ph  c2  cos( 2 )  1    1  (m  dir ) 2  c1    c2 2 t Or just use GLSL built‐in function refract to get T Source: http://www.doksinet Refraction Vertex Shader out vec3 T; in vec4 vPosition; in vec4 Normal; uniform mat4 ModelView; uniform mat4 Projection; void main() { gl Position = Projection*ModelViewvPosition; vec4 eyePos = vPosition; // calculate view vector V vec4 NN = ModelView*Normal; // transform normal vec3 N =normalize(NN.xyz); // normalize normal T = refract(eyePos.xyz, N, iorefr); // calculate refracted vector T } Was previously R = reflect(eyePos.xyz, N); Source: http://www.doksinet Refraction Fragment Shader in vec3 T; uniform samplerCube RefMap; void main() { vec4 refractColor = textureCube(RefMap, T); // look up texture map using T refractcolor = mix(refractColor, WHITE, 0.3); // mix pure color with 03 white gl FragColor =

refractColor;
}

References
Interactive Computer Graphics (6th edition), Angel and Shreiner
Computer Graphics using OpenGL (3rd edition), Hill and Kelley
Real Time Rendering by Akenine-Moller, Haines and Hoffman

Recall: Indexing into Cube Map
• Compute R = 2(N·V)N − V
• Object at origin
• Use largest magnitude component of R to determine face of cube
• Other 2 components give texture coordinates
(figure: view vector V reflecting off object as R into the surrounding cube)

Cube Map Layout
(figure: the six cube faces unfolded into a cross layout)

Example
R = (−4, 3, −1)
Same as R = (−1, 0.75, −0.25)
Use face x = −1 and y = 0.75, z = −0.25
Not quite right since cube defined by x, y, z = ±1 rather than [0, 1] range needed for texture coordinates
Remap from [−1,1] to [0,1] range: s = ½ + ½y, t = ½ + ½z
Hence, s = 0.875, t = 0.375

Sphere Environment Map
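The cube-map indexing example above (R = (−4, 3, −1) giving s = 0.875, t = 0.375 on the x = −1 face) can be sketched as code. The struct, function name, and per-face (s, t) assignment are illustrative assumptions, not from the slides; real cube-map hardware fixes a specific orientation per face:

```cpp
#include <cassert>
#include <cmath>

// Select the cube-map face from the largest-magnitude component of R,
// then remap the other two components from [-1,1] to [0,1] texture coords
// via s = 1/2 + 1/2 * u (and likewise for t).
struct FaceCoords { int axis; double sign; double s, t; };

FaceCoords cubeMapLookup(double rx, double ry, double rz) {
    double ax = std::fabs(rx), ay = std::fabs(ry), az = std::fabs(rz);
    FaceCoords out{};
    if (ax >= ay && ax >= az) {           // x-face dominates
        out.axis = 0;
        out.sign = (rx < 0) ? -1.0 : 1.0;
        out.s = 0.5 + 0.5 * (ry / ax);    // scale so |x| component = 1,
        out.t = 0.5 + 0.5 * (rz / ax);    // then remap [-1,1] -> [0,1]
    } else if (ay >= az) {                // y-face dominates
        out.axis = 1;
        out.sign = (ry < 0) ? -1.0 : 1.0;
        out.s = 0.5 + 0.5 * (rx / ay);
        out.t = 0.5 + 0.5 * (rz / ay);
    } else {                              // z-face dominates
        out.axis = 2;
        out.sign = (rz < 0) ? -1.0 : 1.0;
        out.s = 0.5 + 0.5 * (rx / az);
        out.t = 0.5 + 0.5 * (ry / az);
    }
    return out;
}
```

With R = (−4, 3, −1), the dominant component is x (magnitude 4, negative sign), and the scaled remainder (0.75, −0.25) remaps to (s, t) = (0.875, 0.375), matching the worked example.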

Cube can be replaced by a sphere (sphere map) Source: http://www.doksinet Sphere Mapping      Original environmental mapping technique Proposed by Blinn and Newell Uses lines of longitude and latitude to map parametric variables to texture coordinates OpenGL supports sphere mapping Requires a circular texture map equivalent to an image taken with a fisheye lens Source: http://www.doksinet Sphere Map Source: http://www.doksinet Source: http://www.doksinet Capturing a Sphere Map Source: http://www.doksinet Normal Mapping   Store normals in texture Very useful for making low‐resolution geometry look like it’s much more detailed Source: http://www.doksinet Computer Graphics (CS 4731) Lecture 21: Shadows and Fog Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Source: http://www.doksinet Introduction to Shadows  Shadows give information on relative positions of objects Use ambient + diffuse + specular

components Use just ambient component Source: http://www.doksinet Introduction to Shadows  Two popular shadow rendering methods: 1. 2.  Shadows as texture (projection) Shadow buffer Third method used in ray‐tracing (covered in grad class) Source: http://www.doksinet Projective Shadows   Oldest method: Used in early flight simulators Projection of polygon is polygon called shadow polygon Actual polygon Shadow polygon Source: http://www.doksinet Projective Shadows    Works for flat surfaces illuminated by point light For each face, project vertices V to find V’ of shadow polygon Object shadow = union of projections of faces Source: http://www.doksinet Projective Shadow Algorithm   Project light‐object edges onto plane Algorithm:   First, draw ground plane/scene using specular+diffuse+ambient components Then, draw shadow projections (face by face) using only ambient component Source: http://www.doksinet Projective Shadows

for Polygon
1. If light is at (xl, yl, zl)
2. Vertex at (x, y, z)
3. Would like to calculate shadow polygon vertex V projected onto ground at (xp, 0, zp)
(figure: light, vertex (x, y, z), shadow point (xp, 0, zp) on ground plane y = 0)

Projective Shadows for Polygon
If we move original polygon so that light source is at origin, matrix M projects a vertex V to give its projection V' in shadow polygon:

    | 1     0    0  0 |
M = | 0     1    0  0 |
    | 0     0    1  0 |
    | 0  -1/yl   0  0 |

Building Shadow Projection Matrix
1. Translate source to origin with T(-xl, -yl, -zl)
2. Perspective projection (matrix M above)
3. Translate back by T(xl, yl, zl)

| 1 0 0 xl |   | 1     0    0  0 |   | 1 0 0 -xl |
| 0 1 0 yl | x | 0     1    0  0 | x | 0 1 0 -yl |
| 0 0 1 zl |   | 0     0    1  0 |   | 0 0 1 -zl |
| 0 0 0  1 |   | 0  -1/yl   0  0 |   | 0 0 0   1 |

Final matrix that projects Vertex V onto V' in shadow polygon

Code snippets?
Set up projection matrix in OpenGL application:

float light[3];  // location of light
mat4 m;          // shadow projection matrix initially identity
m[3][1] = -1.0/light[1];

    | 1     0    0  0 |
M = | 0     1    0  0 |
    | 0     0    1  0 |
    | 0  -1/yl   0  0 |

Projective Shadow Code
Set up object (e.g. a square) to be drawn (the four distinct corner values below are reconstructed; the extracted slide repeated one vertex):

point4 square[4] = {vec4(-0.5, 0.5, -0.5, 1.0),
                    vec4(-0.5, 0.5,  0.5, 1.0),
                    vec4( 0.5, 0.5,  0.5, 1.0),
                    vec4( 0.5, 0.5, -0.5, 1.0)};

Copy square to VBO
Pass modelview, projection matrices to vertex shader

What next?
Next, we load model view as usual then draw original polygon. Then load shadow projection matrix, change color to black, re-render polygon.
1. Load modelview, draw polygon as usual
2. Modify modelview with shadow projection matrix, re-render as black (or ambient)
Source:

http://www.doksinet Shadow projection Display( ) Function

void display( )
{
    mat4 mm;
    // clear the window
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // render red square (original square) using modelview
    // matrix as usual (previously set up)
    glUniform4fv(color_loc, 1, red);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

Shadow projection Display( ) Function

    // modify modelview matrix to project square
    // and send modified model view matrix to shader
    mm = model_view * Translate(light[0], light[1], light[2])
         * m * Translate(-light[0], -light[1], -light[2]);
    glUniformMatrix4fv(matrix_loc, 1, GL_TRUE, mm);
    // and re-render square as
    // black square (or using only ambient component)
    glUniform4fv(color_loc, 1, black);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glutSwapBuffers( );
}

(The matrix product shown on the slide is M = T(xl, yl, zl) · M · T(-xl, -yl, -zl), the shadow projection matrix built earlier.)

Shadow Buffer Approach
Uses second depth buffer called shadow buffer
Pros: not limited to plane surfaces
Cons: needs lots of memory
Depth buffer?

OpenGL Depth Buffer (Z Buffer)
Depth: While drawing objects, depth buffer stores distance of each polygon from viewer
Why? If multiple polygons overlap a pixel, only closest one polygon is drawn
(figure: depth buffer grid initialized to 1.0, with the Z = 0.3 polygon overwriting the Z = 0.5 polygon where they overlap; eye at left)

Setting up OpenGL Depth Buffer
Note: You did this in order to draw solid cube, meshes
1. glutInitDisplayMode(GLUT_DEPTH | GLUT_RGB) instructs OpenGL to create depth buffer
2. glEnable(GL_DEPTH_TEST) enables depth testing
3. glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) initializes depth buffer every time we draw a new

picture Source: http://www.doksinet Shadow Buffer Theory  Along each path from light    Only closest object is lit Other objects on that path in shadow Shadow buffer stores closest object on each path Lit In shadow Source: http://www.doksinet Shadow Buffer Approach  Rendering in two stages:   Loading shadow buffer Render the scene Source: http://www.doksinet Loading Shadow Buffer     Initialize each element to 1.0 Position a camera at light source Rasterize each face in scene updating closest object Shadow buffer tracks smallest depth on each path Source: http://www.doksinet Shadow Buffer (Rendering Scene)   Render scene using camera as usual While rendering a pixel find:     If d[i][j] < D (other object on this path closer to light)    pseudo‐depth D from light source to P Index location [i][j] in shadow buffer, to be tested Value d[i][j] stored in shadow buffer point P is in shadow lighting =

ambient Otherwise, not in shadow  Lighting = amb + diffuse + specular D[i][j] D In shadow Source: http://www.doksinet Loading Shadow Buffer     Shadow buffer calculation is independent of eye position In animations, shadow buffer loaded once If eye moves, no need for recalculation If objects move, recalculation required Source: http://www.doksinet Soft Shadows    Point light sources => simple hard shadows, unrealistic Extended light sources => more realistic Shadow has two parts:  Umbra (Inner part) => no light  Penumbra (outer part) => some light Source: http://www.doksinet Fog example  Fog is atmospheric effect  Better realism, helps determine distances Source: http://www.doksinet Fog   Fog was part of OpenGL fixed function pipeline Programming fixed function fog      Parameters: Choose fog color, fog model Enable: Turn it on Fixed function fog deprecated!! Shaders can implement even better

fog. Shaders implementation: fog applied in fragment shader just before display.

Rendering Fog
Mix some color of fog c_f with color of surface c_s:

c_p = f·c_f + (1 − f)·c_s,   f ∈ [0,1]

If f = 0.25, output color = 25% fog + 75% surface color.
f computed as function of distance z. 3 ways: linear, exponential, exponential-squared.

Linear: f = (z_end − z_p) / (z_end − z_start)

Fog Shader Fragment Shader Example
f = (z_end − z_p) / (z_end − z_start)

float dist = abs(Position.z);
float fogFactor = (Fog.maxDist - dist) / (Fog.maxDist - Fog.minDist);
fogFactor = clamp(fogFactor, 0.0, 1.0);
vec3 shadeColor = ambient + diffuse + specular;
vec3 color = mix(Fog.color, shadeColor, fogFactor);
FragColor = vec4(color, 1.0);

(Note: in mix(), fogFactor weights the surface color, so it plays the role of (1 − f) in c_p = f·c_f + (1 − f)·c_s.)

Fog
Exponential: f = e^(−d_f · z_p)
Squared exponential: f = e^(−(d_f · z_p)²)
Exponential derived

from Beer’s law  Beer’s law: intensity of outgoing light diminishes exponentially with distance Source: http://www.doksinet Fog Optimizations    f values for different depths ( z P )can be pre‐computed and stored in a table on GPU Distances used in f calculations are planar Can also use Euclidean distance from viewer or radial distance to create radial fog Source: http://www.doksinet References    Interactive Computer Graphics (6th edition), Angel and Shreiner Computer Graphics using OpenGL (3rd edition), Hill and Kelley Real Time Rendering by Akenine‐Moller, Haines and Hoffman Source: http://www.doksinet Computer Graphics (CS 4731) Lecture 22: 2D Clipping Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Source: http://www.doksinet OpenGL Stages   After projection, several stages before objects drawn to screen These stages are NOT programmable Vertex shader: programmable Transform Projection In

hardware: NOT programmable Primitive Assembly Hidden Surface Removal Clipping Rasterization Source: http://www.doksinet Hardware Stage: Primitive Assembly    Up till now: Transformations and projections applied to vertices individually Primitive assembly: After transforms, projections, individual vertices grouped back into primitives E.g v6, v7 and v8 grouped back into triangle v3 v4 v1 v6 v5 v6 v2 v7 v8 Source: http://www.doksinet Hardware Stage: Clipping   After primitive assembly, subsequent operations are per‐primitive Clipping: Remove primitives (lines, polygons, text, curves) outside view frustum (canonical view volume) Clipping lines Clipping polygons Source: http://www.doksinet Rasterization  Determine which pixels that primitives map to   Fragment generation Rasterization or scan conversion Source: http://www.doksinet Fragment Processing  Some tasks deferred until fragment processing Hidden Surface Removal

Transformation Projection Antialiasing Hidden surface Removal Antialiasing Source: http://www.doksinet Clipping  2D and 3D clipping algorithms    2D against clipping window 3D against clipping volume 2D clipping     Lines (e.g dinodat) Polygons Curves Text Source: http://www.doksinet Clipping 2D Line Segments  Brute force approach: compute intersections with all sides of clipping window  Inefficient: one division per intersection Source: http://www.doksinet 2D Clipping   Better Idea: eliminate as many cases as possible without computing intersections Cohen‐Sutherland Clipping algorithm y = ymax x = xmin x = xmax y = ymin Source: http://www.doksinet Clipping Points (xmax, ymax) Determine whether a point (x,y) is inside or outside of the world window? If (xmin <= x <= xmax) and (ymin <= y <= ymax) (xmin, ymin) then the point (x,y) is inside else the point is outside Source: http://www.doksinet Clipping

Lines 3 cases: 2 (xmax, ymax) 1 (xmin, ymin) 3 Case 1: All of line in Case 2: All of line out Case 3: Part in, part out Source: http://www.doksinet Clipping Lines: Trivial Accept (Xmax, Ymax) p1 Case 1: All of line in Test line endpoints: Xmin <= P1.x, P2x <= Xmax and Ymin <= P1.y, P2y <= Ymax p2 Note: simply comparing x,y values of endpoints to x,y values of rectangle (Xmin, Ymin) Result: trivially accept. Draw line in completely Source: http://www.doksinet Clipping Lines: Trivial Reject p1 Case 2: All of line out Test line endpoints:  p1.x, p2x <= Xmin OR  p1.x, p2x >= Xmax OR  p1.y, p2y <= ymin OR  p1.y, p2y >= ymax p2 Note: simply comparing x,y values of endpoints to x,y values of rectangle Result: trivially reject. Don’t draw line in Source: http://www.doksinet Clipping Lines: Non‐Trivial Cases p2 Case 3: Part in, part out d e p1 delx Two variations: dely One point in, other out Both points out, but part of line

cuts through viewport Need to find inside segments Use similar triangles to figure out length of inside segments d e  dely delx Source: http://www.doksinet Clipping Lines: Calculation example If chopping window has (left, right, bottom, top) = (30, 220, 50, 240), what happens when the following lines are chopped? p2 d e p1 dely (a) p1 = (40,140), p2 = (100, 200) delx (b) p1 = (20,10), p2 = (20, 200) d e  dely delx (c) p1 = (100,180), p2 = (200, 250) Source: http://www.doksinet Cohen‐Sutherland pseudocode (Hill) int clipSegment(Point2& p1, Point2& p2, RealRect W) { do{ if(trivial accept) return 1; // whole line survives if(trivial reject) return 0; // no portion survives // now chop if(p1 is outside) // find surviving segment { if(p1 is to the left) chop against left edge else if(p1 is to the right) chop against right edge else if(p1 is below) chop against the bottom edge else if(p1 is above) chop against the top edge } Source: http://www.doksinet
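Before the second half of the pseudocode, the trivial accept/reject tests it relies on can be sketched standalone. The struct names and window representation are assumptions; the logic mirrors the endpoint comparisons on the preceding slides:

```cpp
#include <cassert>

struct Pt { double x, y; };
struct Rect { double xmin, xmax, ymin, ymax; };

// Trivial accept: both endpoints inside the window in x and y.
bool trivialAccept(Pt p1, Pt p2, Rect w) {
    return w.xmin <= p1.x && p1.x <= w.xmax &&
           w.xmin <= p2.x && p2.x <= w.xmax &&
           w.ymin <= p1.y && p1.y <= w.ymax &&
           w.ymin <= p2.y && p2.y <= w.ymax;
}

// Trivial reject: both endpoints outside the SAME window edge.
bool trivialReject(Pt p1, Pt p2, Rect w) {
    return (p1.x < w.xmin && p2.x < w.xmin) ||
           (p1.x > w.xmax && p2.x > w.xmax) ||
           (p1.y < w.ymin && p2.y < w.ymin) ||
           (p1.y > w.ymax && p2.y > w.ymax);
}
```

Applied to the calculation example's window (left, right, bottom, top) = (30, 220, 50, 240): segment (a) (40,140)-(100,200) is trivially accepted; (b) (20,10)-(20,200) is trivially rejected since both endpoints lie left of x = 30; (c) (100,180)-(200,250) passes neither test and must be chopped.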

Cohen‐Sutherland pseudocode (Hill) else // p2 is outside // find surviving segment { if(p2 is to the left) chop against left edge else if(p2 is to right) chop against right edge else if(p2 is below) chop against the bottom edge else if(p2 is above) chop against the top edge } }while(1); } Source: http://www.doksinet Computer Graphics (CS 4731) Lecture 22: 3D Clipping Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Source: http://www.doksinet Liang‐Barsky 3D Clipping   Goal: Clip object edge-by-edge against Canonical View volume (CVV) Problem:   2 end-points of edge: A = (Ax, Ay, Az, Aw) and C = (Cx, Cy, Cz, Cw) If edge intersects with CVV, compute intersection point I =(Ix,Iy,Iz,Iw) Source: http://www.doksinet Determining if point is inside CVV  y=1 Problem: Determine if point (x,y,z) is inside or outside CVV? Point (x,y,z) is inside CVV if (-1 <= x <= 1) and (-1 <= y <= 1) and (-1 <= z <= 1) y= -1 x =

-1  x=1 else point is outside CVV CVV == 6 infinite planes (x=‐1,1; y=‐1,1; z=‐1,1) Source: http://www.doksinet Determining if point is inside CVV  y/w = 1 - If point specified as (x,y,z,w) Test (x/w, y/w , z/w)! Point (x/w, y/w, z/w) is inside CVV y/w = -1 x/w = -1 x /w = 1 if (-1 <= x/w <= 1) and (-1 <= y/w <= 1) and (-1 <= z/w <= 1) else point is outside CVV Source: http://www.doksinet Modify Inside/Outside Tests Slightly Our test: (-1 < x/w < 1) y/w = 1 Point (x,y,z,w) inside plane x = 1 if x/w < 1 => w – x > 0 y/w = -1 Point (x,y,z,w) inside plane x = -1 if x/w = -1 x /w = 1 -1 < x/w => w + x > 0 Source: http://www.doksinet Numerical Example: Inside/Outside CVV Test  Point (x,y,z,w) is   inside plane x=-1 if w+x > 0 inside plane x=1 if w – x > 0 -1    1 x/w Example Point (0.5, 02, 07) inside planes (x = -1,1) because - 1 <= 05 <= 1 If w = 10, (0.5, 02, 07) = (5,

2, 7, 10) Can either divide by w then test: – 1 <= 5/10 <= 1 OR To test if inside x = - 1, w + x = 10 + 5 = 15 > 0 To test if inside x = 1, w- x= 10 - 5 = 5 > 0 Source: http://www.doksinet 3D Clipping  Do same for y, z to form boundary coordinates for 6 planes as: Boundary coordinate (BC) Homogenous coordinate Clip plane Example (5,2,7,10) BC0 w+x x=-1 15 BC1 w-x x=1 5 BC2 w+y y=-1 12 BC3 w-y y=1 8 BC4 w+z z=-1 17 BC5 w-z z=1 3 Consider line that goes from point A to C  Trivial accept: 12 BCs (6 for pt. A, 6 for pt C) > 0  Trivial reject: Both endpoints outside (-ve) for same plane Source: http://www.doksinet References   Angel and Shreiner, Interactive Computer Graphics, 6th edition Hill and Kelley, Computer Graphics using OpenGL, 3rd edition Source: http://www.doksinet Recall: Liang‐Barsky 3D Clipping   Goal: Clip object edge-by-edge against Canonical View volume (CVV) Problem:   2

end-points of edge: A = (Ax, Ay, Az, Aw) and C = (Cx, Cy, Cz, Cw) If edge intersects with CVV, compute intersection point I =(Ix,Iy,Iz,Iw) Source: http://www.doksinet Recall: Determining if point is inside CVV  y=1 Problem: Determine if point (x,y,z) is inside or outside CVV? Point (x,y,z) is inside CVV if (-1 <= x <= 1) and (-1 <= y <= 1) and (-1 <= z <= 1) y= -1 x = -1  x=1 else point is outside CVV CVV == 6 infinite planes (x=‐1,1; y=‐1,1; z=‐1,1) Source: http://www.doksinet Recall: Determining if point is inside CVV  y/w = 1 - If point specified as (x,y,z,w) Test (x/w, y/w , z/w)! Point (x/w, y/w, z/w) is inside CVV y/w = -1 x/w = -1 x /w = 1 if (-1 <= x/w <= 1) and (-1 <= y/w <= 1) and (-1 <= z/w <= 1) else point is outside CVV Source: http://www.doksinet Recall: Modify Inside/Outside Tests Slightly Our test: (-1 < x/w < 1) y/w = 1 Point (x,y,z,w) inside plane x = 1 if x/w < 1 => w – x >

0 y/w = -1 Point (x,y,z,w) inside plane x = -1 if x/w = -1 x /w = 1 -1 < x/w => w + x > 0 Source: http://www.doksinet Recall: Numerical Example: Inside/Outside CVV Test  Point (x,y,z,w) is   inside plane x=-1 if w+x > 0 inside plane x=1 if w – x > 0 -1    1 x/w Example Point (0.5, 02, 07) inside planes (x = -1,1) because - 1 <= 05 <= 1 If w = 10, (0.5, 02, 07) = (5, 2, 7, 10) Can either divide by w then test: – 1 <= 5/10 <= 1 OR To test if inside x = - 1, w + x = 10 + 5 = 15 > 0 To test if inside x = 1, w- x= 10 - 5 = 5 > 0 Source: http://www.doksinet Recall: 3D Clipping  Do same for y, z to form boundary coordinates for 6 planes as: Boundary coordinate (BC) Homogenous coordinate Clip plane Example (5,2,7,10) BC0 w+x x=-1 15 BC1 w-x x=1 5 BC2 w+y y=-1 12 BC3 w-y y=1 8 BC4 w+z z=-1 17 BC5 w-z z=1 3 Consider line that goes from point A to C  Trivial accept: 12 BCs (6 for pt.

A, 6 for pt C) > 0  Trivial reject: Both endpoints outside (-ve) for same plane Source: http://www.doksinet Edges as Parametric Equations F ( x, y )  0  Implicit form  Parametric forms:  points specified based on single parameter value  Typical parameter: time t P(t )  P0  ( P1  P0 ) * t   0  t 1 Some algorithms work in parametric form  Clipping: exclude line segment ranges  Animation: Interpolate between endpoints by varying t Represent each edge parametrically as A + (C – A)t  at time t=0, point at A  at time t=1, point at C Source: http://www.doksinet Inside/outside?  Test A, C against 6 walls (x=-1,1; y=-1,1; z=-1,1)  There is an intersection if BCs have opposite signs. ie if either   A is outside (< 0), C is inside ( > 0) or  A inside (> 0) , C outside (< 0) Edge intersects with plane at some t hit between [0,1] t hit A t=0 C t=1 A t=0 t hit C t=1 Source:

http://www.doksinet Calculating hit time (t hit)
How to calculate t hit? Represent an edge parametrically as:

Edge(t) = ( Ax + (Cx − Ax)t, Ay + (Cy − Ay)t, Az + (Cz − Az)t, Aw + (Cw − Aw)t )

E.g. if x = 1 (the wall x/w = 1):

(Ax + (Cx − Ax)t) / (Aw + (Cw − Aw)t) = 1

Solving for t above:

t = (Aw − Ax) / ((Aw − Ax) − (Cw − Cx))

Inside/outside?
t hit can be "entering (t in)" or "leaving (t out)"
Define "entering" if A outside, C inside. Why? As t goes [0,1], edge goes from outside (at A) to inside (at C)
Define "leaving" if A inside, C outside. Why? As t goes [0,1], edge goes from inside (at A) to outside (at C)
(figure: entering edge with t in between A (t=0) and C (t=1); leaving edge with t out)

Chop step by Step against 6 planes
Initially t in = 0, t out = 1, Candidate Interval (CI) = [0 to 1]
(figure: edge from A at t=0 to C at t=1)
Chop against each of 6 planes
t out = 0.74 Plane y = 1

C t in = 0, t out = 0.74 Candidate Interval (CI) = [0 to 0.74] Why t out? A t=0 Source: http://www.doksinet Chop step by Step against 6 planes  Initially t out = 0.74 C t in = 0, t out = 0.74 Candidate Interval (CI) = [0 to 0.74] A t=0  Plane x = -1 Then t out = 0.74 t in= 0.36 A Why t in? C t in = 0.36, t out = 0.74 Candidate Interval (CI) CI = [0.36 to 074] Source: http://www.doksinet Candidate Interval  Candidate Interval (CI): time interval during which edge might still be inside CVV. ie CI = t in to t out  Initialize CI to [0,1]  For each of 6 planes, calculate t in or t out, shrink CI CI 0 1 t t in  t out Conversely: values of t outside CI = edge is outside CVV Source: http://www.doksinet Shortening Candidate Interval  Algorithm:  Test for trivial accept/reject (stop if either occurs)  Set CI to [0,1]  For each of 6 planes:  Find hit time t hit  If t in, new t in = max(t in,t hit)  If t out, new t

out = min(t out, t hit)  If t in > t out => exit (no valid intersections) CI 0 t in t out t 1 Note: seeking smallest valid CI without t in crossing t out Source: http://www.doksinet Calculate choppped A and C  If valid t in, t out, calculate adjusted edge endpoints A, C as  A chop = A + t in ( C – A) (calculate for Ax,Ay, Az) C chop = A + t out ( C – A) (calculate for Cx,Cy,Cz)  0 A chop CI C chop 1 t t in t out Source: http://www.doksinet 3D Clipping Implementation     Function clipEdge( ) Input: two points A and C (in homogenous coordinates) Output:  0, if AC lies complete outside CVV  1, complete inside CVV  Returns clipped A and C otherwise Calculate 6 BCs for A, 6 for C 0 A C ClipEdge () 1 A chop, C chop Source: http://www.doksinet Store BCs as Outcodes   Use outcodes to track in/out  Number walls x = +1, ‐1; y = +1, ‐1, and z = +1, ‐1 as 0 to 5  Bit i of A’s outcode = 1 if A is outside

ith wall  1 otherwise Example: outcode for point outside walls 1, 2, 5 Wall no. 0 1 2 3 4 5 OutCode 0 1 1 0 0 1 Source: http://www.doksinet Trivial Accept/Reject using Outcodes  Trivial accept: inside (not outside) any walls Wall no. 0 A Outcode 0 1 2 3 4 5 0 0 0 0 0 C OutCode 0 0 0 0 0 0 Logical bitwise test: A | C == 0  Trivial reject: point outside same wall. Example Both A and C outside wall 1 Wall no. 0 A Outcode 0 1 2 3 4 5 1 0 0 1 0 C OutCode 0 1 1 0 0 0 Logical bitwise test: A & C != 0 Source: http://www.doksinet 3D Clipping Implementation    Compute BCs for A,C store as outcodes Test A, C outcodes for trivial accept, trivial reject If not trivial accept/reject, for each wall:  Compute tHit  Update t in, t out  If t in > t out, early exit Source: http://www.doksinet 3D Clipping Pseudocode int clipEdge(Point4& A, Point4& C) { double tIn = 0.0, tOut = 10, tHit; double aBC[6],

cBC[6]; int aOutcode = 0, cOutcode = 0; .find BCs for A and C .form outcodes for A and C if((aOutCode & cOutcode) != 0) // trivial reject return 0; if((aOutCode | cOutcode) == 0) // trivial accept return 1; Source: http://www.doksinet 3D Clipping Pseudocode for(i=0;i<6;i++) // clip against each plane { if(cBC[i] < 0) // C is outside wall i (exit so tOut) { tHit = aBC[i]/(aBC[i] – cBC[I]); // calculate tHit Aw  Ax tOut = MIN(tOut, tHit); t ( Aw  Ax)  (Cw  Cx) } else if(aBC[i] < 0) // A is outside wall I (enters so tIn) { tHit = aBC[i]/(aBC[i] – cBC[i]); // calculate tHit tIn = MAX(tIn, tHit); } if(tIn > tOut) return 0; // CI is empty: early out } Source: http://www.doksinet 3D Clipping Pseudocode Point4 tmp; // stores homogeneous coordinates If(aOutcode != 0) // A is outside: tIn has changed. Calculate A chop { tmp.x = Ax + tIn * (C.x – Ax); // do same for y, z, and w components } If(cOutcode != 0) // C is outside: tOut has changed. Calculate C

chop { C.x = Ax + tOut * (C.x – Ax); // do same for y, z and w components } A = tmp; Return 1; // some of the edges lie inside CVV } Source: http://www.doksinet Polygon Clipping  Not as simple as line segment clipping    Clipping a line segment yields at most one line segment Clipping a concave polygon can yield multiple polygons Clipping a convex polygon can yield at most one other polygon 23 Source: http://www.doksinet Clipping Polygons  Need more sophisticated algorithms to handle polygons:   Sutherland‐Hodgman: clip any given polygon against a convex clip polygon (or window) Weiler‐Atherton: Both clipped polygon and clip polygon (or window) can be concave Source: http://www.doksinet Tessellation and Convexity   One strategy is to replace nonconvex (concave) polygons with a set of triangular polygons (a tessellation) Also makes fill easier 25 Source: http://www.doksinet Computer Graphics (CS 4731) Lecture 23: Viewport

Transformation & Hidden Surface Removal Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Source: http://www.doksinet Viewport Transformation  After clipping, do viewport transformation User implements in Vertex shader Manufacturer implements In hardware Source: http://www.doksinet Viewport Transformation  Maps CVV (x, y) ‐> screen (x, y) coordinates glViewport(x,y, width, height) y Screen coordinates y 1 height -1 1 x -1 Canonical View volume (x,y) width x Source: http://www.doksinet Viewport Transformation: What of z?   Also maps z (pseudo‐depth) from [‐1,1] to [0,1] [0,1] pseudo‐depth stored in depth buffer,  Used for Depth testing (Hidden Surface Removal) y z x -1 0 1 Source: http://www.doksinet Hidden surface Removal     Drawing polygonal faces on screen consumes CPU cycles User cannot see every surface in scene To save time, draw only surfaces we see Surfaces we cannot see and

elimination methods? Back face 1. Occluded surfaces: hidden surface removal (visibility) 2. Back faces: back face culling Source: http://www.doksinet Hidden surface Removal  Surfaces we cannot see and elimination methods:  3. Faces outside view volume: viewing frustrum culling Clipped  Classes of HSR techniques:   Not Clipped Object space techniques: applied before rasterization Image space techniques: applied after vertices have been rasterized Source: http://www.doksinet Visibility (hidden surface removal)   Overlapping opaque polygons Correct visibility? Draw only the closest polygon  (remove the other hidden surfaces) wrong visibility Correct visibility Source: http://www.doksinet Image Space Approach     Start from pixel, work backwards into the scene Through each pixel, (nm for an n x m frame buffer) find closest of k polygons Complexity O(nmk) Examples:   Ray tracing z‐buffer : OpenGL Source:

http://www.doksinet OpenGL ‐ Image Space Approach  Paint pixel with color of closest object for (each pixel in image) { determine the object closest to the pixel draw the pixel using the object’s color } Source: http://www.doksinet Z buffer Illustration Z = 0.5 Z = 0.3 Correct Final image eye Top View Source: http://www.doksinet Z buffer Illustration Step 1: Initialize the depth buffer 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 Largest possible z values is 1.0 Source: http://www.doksinet Z buffer Illustration Step 2: Draw blue polygon (actually order does not affect final result) Z = 0.5 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.5 0.5 1.0 1.0 0.5 0.5 1.0 1.0 Z = 0.3 eye 1. Determine group of pixels corresponding to blue polygon 2. Figure out z value of blue polygon for each covered pixel (05) 3. For each covered pixel, z = 05 is less than 10 1. Smallest z so far = 05, color = blue Source: http://www.doksinet Z

buffer Illustration Step 3: Draw the yellow polygon 1.0 1.0 1.0 1.0 1.0 0.3 0.3 0.5 0.3 0.3 1.0 0.5 0.5 1.0 1.0 Z = 0.5 Z = 0.3 1.0 eye 1. Determine group of pixels corresponding to yellow polygon 2. Figure out z value of yellow polygon for each covered pixel (03) 3. For each covered pixel, z = 03 becomes minimum, color = yellow z-buffer drawback: wastes resources drawing and redrawing faces Source: http://www.doksinet OpenGL HSR Commands  3 main commands to do HSR  glutInitDisplayMode(GLUT DEPTH | GLUT RGB) instructs openGL to create depth buffer  glEnable(GL DEPTH TEST) enables depth testing  glClear(GL COLOR BUFFER BIT | GL DEPTH BUFFER BIT) initializes depth buffer every time we draw a new picture Source: http://www.doksinet Z‐buffer Algorithm      Initialize every pixel’s z value to 1.0 rasterize every polygon For each pixel in polygon, find its z value (interpolate) Track smallest z value so far through each pixel As

we rasterize polygon, for each pixel in polygon:
If polygon's z through this pixel < current min z through pixel, paint pixel with polygon's color
Find depth (z) of every polygon at each pixel

Z (depth) Buffer Algorithm
z polygon pixel(x, y): depth of polygon being rasterized at pixel (x, y)
depth buffer(x, y): smallest depth seen so far through pixel (x, y)

For each polygon {
    for each pixel (x,y) in polygon area {
        if (z polygon pixel(x,y) < depth buffer(x,y)) {
            depth buffer(x,y) = z polygon pixel(x,y);
            color buffer(x,y) = polygon color at (x,y);
        }
    }
}

Note: know depths at vertices. Interpolate for interior z polygon pixel(x, y) depths

Perspective Transformation: Z-Buffer Depth Compression
Pseudodepth calculation: Recall we chose parameters (a and b) to map z from range [near, far] to pseudodepth range [−1,1]
(figure: canonical view volume with corners (1, 1, −1) and (−1, −1, 1))

| 2N/(right − left)        0           (right + left)/(right − left)        0       |   | x |
|        0          2N/(top − bottom)  (top + bottom)/(top − bottom)        0       | · | y |
|        0                 0              −(F + N)/(F − N)            −2FN/(F − N)  |   | z |
|        0                 0                     −1                         0       |   | 1 |

These values map z values of original view volume to [−1, 1] range (Canonical View Volume)

Z-Buffer Depth Compression
This mapping is almost linear close to eye
Non-linear further from eye, approaches asymptote
Also limited number of bits
Thus, two z values close to far plane may map to same pseudodepth: Errors!!

a = −(F + N)/(F − N),   b = −2FN/(F − N)
Mapped z (pseudodepth) = (a·Pz + b) / (−Pz)
(figure: mapped z vs. actual z = −Pz, rising from −1 at N toward 1 at F along an asymptotic curve)

References
Angel and Shreiner, Interactive Computer Graphics, 6th edition
Hill and Kelley, Computer Graphics using OpenGL, 3rd edition, Chapter 9

Recall: OpenGL - Image Space Approach
Paint pixel with color of closest object
for (each pixel in image) {

    determine the object closest to the pixel
    draw the pixel using the object's color
}

Recall: Z (depth) Buffer Algorithm
- z_polygon_pixel(x, y): depth of polygon being rasterized at pixel (x, y)
- depth_buffer(x, y): smallest (closest) depth seen so far through pixel (x, y)

for each polygon {
    for each pixel (x,y) in polygon area {
        if (z_polygon_pixel(x,y) < depth_buffer(x,y)) {
            depth_buffer(x,y) = z_polygon_pixel(x,y);
            color_buffer(x,y) = polygon color at (x,y);
        }
    }
}

Note: depths are known at vertices. Interpolate for interior z_polygon_pixel(x, y) depths

Painter's HSR Algorithm
- Render polygons farthest to nearest
- Similar to how a painter layers oil paint
- Viewer sees B behind A: render B, then A

Depth Sort
- Requires sorting polygons (based on depth)
- O(n log n) complexity to sort n polygon depths
- Not every polygon is clearly in front of or behind other polygons
- Polygons sorted by distance from COP Source: http://www.doksinet
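The depth-sort step can be sketched in miniature. Below is an illustrative Python painter's-algorithm rendering of two overlapping "polygons", each reduced to a made-up set of 1-D pixel positions and a single representative depth (real renderers sort actual polygons and rasterize them):

```python
# Painter's algorithm sketch: sort polygons by depth (farthest first) and
# draw them in that order; nearer polygons simply overwrite farther ones.
# Works only when polygons can be cleanly depth-ordered (no cyclic overlap).

def painters_render(polygons, width):
    color = ["bg"] * width
    # sort by representative depth, farthest (largest z) first
    for poly in sorted(polygons, key=lambda p: p["z"], reverse=True):
        for x in poly["pixels"]:
            color[x] = poly["color"]
    return color

blue   = {"z": 0.5, "color": "blue",   "pixels": [1, 2, 3]}
yellow = {"z": 0.3, "color": "yellow", "pixels": [2, 3, 4]}
print(painters_render([yellow, blue], 6))
# -> ['bg', 'blue', 'yellow', 'yellow', 'yellow', 'bg']
```

The nearer (yellow) polygon wins wherever the two overlap, because it is drawn last; the input order of the list does not matter, only the depths do.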

Easy Cases
- Case a: A lies behind all polygons
- Case b: Polygons overlap in z but not in x or y

Hard Cases
- Cyclic overlap
- Overlap in (x,y) and z ranges
- Penetration

Back Face Culling
- Back faces: faces of an opaque object that are "pointing away" from the viewer
- Back face culling: do not draw back faces (saves resources)
- How to detect back faces?

Back Face Culling
- Goal: test whether a face F is a back face
- How? Form vectors:
- View vector, V
- Normal N to face F
- Backface test: F is a back face if N.V < 0. Why?

Back Face Culling: Draw mesh front faces

void drawFrontFaces( )
{
    for(int f = 0; f < numFaces; f++)
    {
        if(isBackFace(f, ...)) continue;   // skip if N.V < 0
        glDrawArrays(GL_POLYGON, 0, N);
    }
}

View-Frustum Culling
- Goal: Remove objects outside the view frustum
- Done by 3D clipping algorithm

(e.g. Liang-Barsky)

Ray Tracing
- Ray tracing is another image space method
- Ray tracing: Cast a ray from eye through each pixel into the world.
- Ray tracing algorithm figures out: what object is seen in the direction through a given pixel?
- Topic of grad class

Combined z-buffer and Gouraud Shading (Hill)
- Can combine shading and HSR through a scan line algorithm

for(int y = ybott; y <= ytop; y++)   // for each scan line
{
    for(each polygon)
    {
        find xleft and xright
        find dleft, dright, and dinc
        find colorleft and colorright, and colorinc
        for(int x = xleft, c = colorleft, d = dleft; x <= xright;
            x++, c += colorinc, d += dinc)
            if(d < d[x][y])
            {
                put c into the pixel at (x, y)
                d[x][y] = d;   // update closest depth
            }
    }
}

[Figure: polygon spanning scan lines ybott..ytop with per-vertex colors color1..color4; scan line ys crosses the span from xleft to xright]

Computer Graphics (CS 4731) Lecture 24: Rasterization: Line Drawing Prof Emmanuel Agu Computer

Science Dept. Worcester Polytechnic Institute (WPI)

Rasterization
- Rasterization produces a set of fragments
- Implemented by graphics hardware
- Rasterization algorithms for primitives (e.g. lines, circles, triangles, polygons)
- Rasterization: Determine pixels (fragments) each primitive covers

Line drawing algorithm
- Programmer specifies (x,y) of end pixels
- Need algorithm to determine pixels on the line path
[Figure: pixel grid showing line from (3,2) to (9,6)]
- Line: (3,2) -> (9,6). Which intermediate pixels to turn on?

Line drawing algorithm
- Pixel (x,y) values constrained to integer values
- Computed intermediate values may be floats
- Rounding may be required, e.g. (10.48, 20.51) rounded to (10, 21)
- Rounded pixel value is off the actual line path (jaggy!!)
- Sloped lines end up having jaggies
- Vertical, horizontal lines: no jaggies
Source:

http://www.doksinet Line Drawing Algorithm
- Slope-intercept line equation: y = mx + b
- Given 2 end points (x0,y0), (x1,y1), how to compute m and b?

    m = dy/dx = (y1 - y0)/(x1 - x0)
    y0 = m*x0 + b  =>  b = y0 - m*x0

Line Drawing Algorithm
- Numerical example of finding slope m:
- (Ax, Ay) = (23, 41), (Bx, By) = (125, 96)

    m = (By - Ay)/(Bx - Ax) = (96 - 41)/(125 - 23) = 55/102 = 0.5392

Digital Differential Analyzer (DDA): Line Drawing Algorithm
Consider the slope of the line, m (cases m < 1, m = 1, m > 1):
o Step through line, starting at (x0,y0)
o Case a: (m < 1) x increments faster
o Step in x = 1 increments, compute y (a fraction) and round
o Case b: (m > 1) y increments faster
o Step in y = 1 increments, compute x (a fraction) and round

DDA Line Drawing Algorithm (Case a: m < 1)

    dy/dx = (y(k+1) - y(k))/(x(k+1) - x(k)) = m, with x(k+1) - x(k) = 1
    =>  y(k+1) = y(k) + m

x = x0, y = y0
Illuminate pixel (x, round(y))
x = x + 1; y = y + m
Illuminate pixel (x, round(y))
... repeat until x == x1

Example, if the first end point is (0,0) and m = 0.2:
Step 1: x = 1, y = 0.2 => shade (1, 0)
Step 2: x = 2, y = 0.4 => shade (2, 0)
Step 3: x = 3, y = 0.6 => shade (3, 1)
etc

DDA Line Drawing Algorithm (Case b: m > 1)

    dy/dx = m, with y(k+1) - y(k) = 1
    =>  x(k+1) = x(k) + 1/m

x = x0, y = y0
Illuminate pixel (round(x), y)
x = x + 1/m; y = y + 1
Illuminate pixel (round(x), y)
... repeat until y == y1

Example, if the first end point is (0,0) and 1/m = 0.2:
Step 1: y = 1, x = 0.2 => shade (0, 1)
Step 2: y = 2, x = 0.4 => shade (0, 2)
Step 3: y = 3, x = 0.6 => shade (1, 3)
etc

Source: http://www.doksinet DDA Line Drawing Algorithm Pseudocode

compute m;
if(m < 1)
{
    float y = y0;                    // initial value
    for(int x = x0; x <= x1; x++, y += m)
        setPixel(x, round(y));
}
else                                 // m > 1
{
    float x = x0;                    // initial value
    for(int y = y0; y <= y1; y++, x += 1/m)
        setPixel(round(x), y);
}

- Note: setPixel(x, y) writes the current color into the pixel in column x and row y in the frame buffer

Line Drawing Algorithm Drawbacks
- DDA is the simplest line drawing algorithm
- Not very efficient
- Round operation is expensive
- Optimized algorithms typically used:
- Integer DDA
- E.g. Bresenham algorithm
- Bresenham algorithm:
- Incremental algorithm: current value uses previous value
- Integers only: avoids floating point arithmetic
- Several versions of the algorithm: we'll describe the midpoint version

Bresenham's Line-Drawing Algorithm
- Problem: Given endpoints (Ax, Ay) and (Bx,

By) of line, determine intervening pixels First make two simplifying assumptions (remove later):  (Ax < Bx) and  (0 < m < 1)  Define    (Bx,By) Width W = Bx – Ax Height H = By ‐ Ay H (Ax,Ay) W Source: http://www.doksinet Bresenham’s Line‐Drawing Algorithm (Bx,By) H (Ax,Ay)  Based on assumptions (Ax < Bx) and (0 < m < 1) W, H are +ve  H<W Increment x by +1, y incr by +1 or stays same Midpoint algorithm determines which happens    W Source: http://www.doksinet Bresenham’s Line‐Drawing Algorithm What Pixels to turn on or off? Consider pixel midpoint M(Mx, My) = (x + 1, y + ½) Build equation of actual line, compare to midpoint (x1,y1) Case a: If midpoint (red dot) is below line, Shade upper pixel, (x + 1, y + 1) M(Mx,My) (x1,y1) Case b: If midpoint (red dot) is above line, Shade lower pixel, (x + 1, y) (x0, y0) Source: http://www.doksinet References   Angel and Shreiner, Interactive Computer

Graphics, 6th edition Hill and Kelley, Computer Graphics using OpenGL, 3rd edition, Chapter 9 Source: http://www.doksinet Recall: Bresenham’s Line‐Drawing Algorithm  Problem: Given endpoints (Ax, Ay) and (Bx, By) of line, determine intervening pixels First make two simplifying assumptions (remove later):  (Ax < Bx) and  (0 < m < 1)  Define    (Bx,By) Width W = Bx – Ax Height H = By ‐ Ay H (Ax,Ay) W Source: http://www.doksinet Recall: Bresenham’s Line‐Drawing Algorithm (Bx,By) H (Ax,Ay)  Based on assumptions (Ax < Bx) and (0 < m < 1) W, H are +ve  H<W Increment x by +1, y incr by +1 or stays same Midpoint algorithm determines which happens    W Source: http://www.doksinet Recall: Bresenham’s Line‐Drawing Algorithm What Pixels to turn on or off? Consider pixel midpoint M(Mx, My) = (x + 1, y + ½) Build equation of actual line, compare to midpoint (x1,y1) Case a: If midpoint (red dot) is

below line, shade upper pixel, (x + 1, y + 1)
Case b: If midpoint (red dot) is above line, shade lower pixel, (x + 1, y)

Build Equation of the Line
- Using similar triangles, for a point (x, y) on the line from (Ax, Ay) to (Bx, By):

    (y - Ay)/(x - Ax) = H/W
    H(x - Ax) = W(y - Ay)
    -W(y - Ay) + H(x - Ax) = 0

- The above is the equation of the line from (Ax, Ay) to (Bx, By)
- Thus, any point (x,y) that lies on the ideal line makes the eqn = 0
- Double the expression (to avoid floats later), and call it F(x,y):

    F(x,y) = -2W(y - Ay) + 2H(x - Ax)

Bresenham's Line-Drawing Algorithm
- So, F(x,y) = -2W(y - Ay) + 2H(x - Ax)
- Algorithm:
- F(x, y) < 0: (x, y) above line
- F(x, y) > 0: (x, y) below line
- Hint: F(x, y) = 0 is on the line
- Increase y keeping x constant, and F(x, y) becomes more negative

Bresenham's Line-Drawing Algorithm
- Example: to find line segment

between (3, 7) and (9, 11):

    F(x,y) = -2W(y - Ay) + 2H(x - Ax) = (-12)(y - 7) + (8)(x - 3)

- For points on the line, e.g. (7, 29/3), F(x, y) = 0
- A = (4, 4) lies below the line since F = 44
- B = (5, 9) lies above the line since F = -8

Bresenham's Line-Drawing Algorithm
What pixels to turn on or off? Consider pixel midpoint M(Mx, My) = (x0 + 1, y0 + ½)
Case a: If M is below the actual line, F(Mx, My) > 0: shade upper pixel (x + 1, y + 1)
Case b: If M is above the actual line, F(Mx, My) < 0: shade lower pixel (x + 1, y)

Can compute F(x,y) incrementally
Initially, midpoint M = (Ax + 1, Ay + ½)

    F(Mx, My) = -2W(y - Ay) + 2H(x - Ax)
    i.e. F(Ax + 1, Ay + ½) = 2H - W

Can compute F(x,y) for the next midpoint incrementally.
If we increment to (x + 1, y), compute the new F(Mx, My):

    F(Mx, My) += 2H
    i.e. F(Ax + 2, Ay + ½) - F(Ax + 1, Ay + ½) = 2H
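The sign convention for F can be checked by plugging in the example's numbers (a small Python sketch):

```python
# Check the worked example: line from (3,7) to (9,11), so W = 6, H = 4.
# F(x,y) = -2W(y - Ay) + 2H(x - Ax); F = 0 on the line, > 0 below, < 0 above.

def F(x, y, ax=3, ay=7, w=6, h=4):
    return -2 * w * (y - ay) + 2 * h * (x - ax)

print(F(7, 29/3))   # on the line  -> ~0 (up to float rounding)
print(F(4, 4))      # below line   -> 44
print(F(5, 9))      # above line   -> -8
```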

Source: http://www.doksinet Can compute F(x,y) incrementally
If we increment to (x + 1, y + 1):

    F(Mx, My) += 2(H - W)
    i.e. F(Ax + 2, Ay + 3/2) - F(Ax + 1, Ay + ½) = 2(H - W)

Bresenham's Line-Drawing Algorithm

Bresenham(IntPoint a, IntPoint b)
{   // restriction: a.x < b.x and 0 < H/W < 1
    int y = a.y, W = b.x - a.x, H = b.y - a.y;
    int F = 2 * H - W;               // current error term
    for(int x = a.x; x <= b.x; x++)
    {
        setpixel at (x, y);          // to desired color value
        if(F < 0)                    // y stays the same
            F = F + 2 * H;
        else {
            y++; F = F + 2 * (H - W); // increment y
        }
    }
}

- Recall: F is the equation of the line

Bresenham's Line-Drawing Algorithm
- Final words: we developed the algorithm with restrictions 0 < m < 1 and Ax < Bx
- Can add code to remove restrictions:
- When Ax > Bx (swap and draw)
- Lines having m > 1 (interchange x with y)
- Lines with m < 0 (step x++, decrement y

not incr)
- Horizontal and vertical lines (pretest a.x == b.x and skip tests)

Computer Graphics CS 4731 Lecture 25 Polygon Filling & Antialiasing Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI)

Defining and Filling Regions of Pixels
- Methods of defining a region:
- Pixel-defined: specifies pixels in color or geometric range
- Symbolic: provides a property pixels in the region must have
- Examples of symbolic:
- Closeness to some pixel
- Within circle of radius R
- Within a specified polygon

Pixel-Defined Regions
- Definition: Region R is the set of all pixels having color C that are connected to a given pixel S
- 4-adjacent: pixels that lie next to each other horizontally or vertically, NOT diagonally
- 8-adjacent: pixels that lie next to each other horizontally, vertically OR diagonally
- 4-connected: if there is unbroken

path of 4-adjacent pixels connecting them
- 8-connected: unbroken path of 8-adjacent pixels connecting them

Recursive Flood-Fill Algorithm
- Recursive algorithm
- Starts from an initial pixel of color intColor
- Recursively set 4-connected neighbors to newColor
- Flood-Fill: floods region with newColor
- Basic idea:
- start at "seed" pixel (x, y)
- If (x, y) has color intColor, change it to newColor
- Do the same recursively for all 4 neighbors: (x, y+1), (x-1, y), (x+1, y), (x, y-1)

Recursive Flood-Fill Algorithm
- Note: getPixel(x,y) is used to interrogate the pixel color at (x, y)

void floodFill(short x, short y, short intColor)
{
    if(getPixel(x, y) == intColor)
    {
        setPixel(x, y);                 // set to new (current) color
        floodFill(x - 1, y, intColor);  // left pixel
        floodFill(x + 1, y, intColor);  // right pixel
        floodFill(x, y + 1, intColor);  // down pixel
        floodFill(x, y - 1, intColor);  // up pixel
    }
}

Recursive Flood-Fill Algorithm
- Recursive flood-fill is blind
- Some pixels are retested several times
- Region coherence is the likelihood that an interior pixel is most likely adjacent to another interior pixel
- Coherence can be used to improve algorithm performance
- A run: group of adjacent pixels lying on the same scanline
- Fill runs (adjacent, on same scan line) of pixels

Region Filling Using Coherence
- Example: start at s, the initial seed
Pseudocode:
Push address of seed pixel onto stack
while(stack is not empty)
{
    Pop stack to provide next seed
    Fill in run defined by seed
    In row above find reachable interior runs
    Push address of their rightmost pixels
    Do same for row below current run
}
Note: algorithm is most efficient if there is span coherence (pixels on a scanline have the same value) and scan-line coherence (consecutive scanlines are similar)

Filling Polygon-Defined

Regions
- Problem: Region defined by a polygon with vertices Pi = (Xi, Yi), for i = 1..N, specifying the sequence of P's vertices

Filling Polygon-Defined Regions
- Solution: Progress through the frame buffer scan line by scan line, filling in the appropriate portions of each line
- Filled portions defined by intersection of scan line and polygon edges
- Runs lying between edges inside P are filled

Pseudocode:
for(each scan line L)
{
    Find intersections of L with all edges of P
    Sort the intersections by increasing x-value
    Fill pixel runs between all pairs of intersections
}

Filling Polygon-Defined Regions
- Example: scan line y = 3 intersects 4 edges e3, e4, e5, e6
- Sort x values of intersections and fill runs in pairs
- Note: at each intersection we switch from inside to outside (parity), or vice versa

Data Structure Source:
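The scan-line fill pseudocode above can be sketched for a single scan line. The half-open [ymin, ymax) edge test below is one common way to get the vertex parity right (illustrative Python; horizontal edges fall out automatically because they never satisfy the test):

```python
# Scan-line fill sketch: intersect one scan line with every polygon edge,
# sort the x values, and pair them up into inside runs.

def scanline_spans(poly, y):
    xs = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        # count the edge only if the scan line crosses it (lower endpoint
        # inclusive, upper endpoint exclusive -> correct parity at vertices)
        if (y1 <= y < y2) or (y2 <= y < y1):
            xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
    xs.sort()
    return list(zip(xs[0::2], xs[1::2]))   # pairs of intersections = runs

square = [(1, 1), (6, 1), (6, 4), (1, 4)]
print(scanline_spans(square, 2.5))   # one run across the square: [(1.0, 6.0)]
```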

http://www.doksinet Filling Polygon-Defined Regions
- Problem: What if two polygons A, B share an edge?
- Algorithm behavior could result in:
- setting the edge first in one color and then the other
- drawing the edge twice: too bright
- Make a rule: when two polygons share an edge, each polygon owns its left and bottom edges
- E.g. below, draw the shared edge with the color of polygon B

Filling Polygon-Defined Regions
- Problem: How to handle cases where a scan line intersects polygon endpoints, to avoid wrong parity?
- Solution: Discard intersections with horizontal edges and with the upper endpoint of any edge
[Figure: vertex cases labeled with the number of intersections counted: "see 0", "see 1", "see 2"]

Antialiasing
- Raster displays have pixels as rectangles
- Aliasing: the discrete nature of pixels introduces "jaggies"

Antialiasing
- Aliasing effects:
- Distant objects may disappear entirely
- Objects can blink

on and off in animations Antialiasing techniques involve some form of blurring to reduce contrast, smoothen image Three antialiasing techniques:    Prefiltering Postfiltering Supersampling Source: http://www.doksinet References   Hill and Kelley, chapter 11 Angel and Shreiner, Interactive Computer Graphics, 6th edition Source: http://www.doksinet Recall: Antialiasing   Raster displays have pixels as rectangles Aliasing: Discrete nature of pixels introduces “jaggies” Source: http://www.doksinet Recall: Antialiasing  Aliasing effects:     Distant objects may disappear entirely Objects can blink on and off in animations Antialiasing techniques involve some form of blurring to reduce contrast, smoothen image Three antialiasing techniques:    Prefiltering Postfiltering Supersampling Source: http://www.doksinet Prefiltering  Basic idea:    Example: if polygon covers ¼ of the pixel   compute area of

polygon coverage
- use proportional intensity value
- Pixel color = ¼ polygon color + ¾ adjacent region color
- Cons: computing polygon coverage can be time consuming

Supersampling
- Assumes we can compute the color of any location (x,y) on screen
- Sample (x,y) in fractional (e.g. ½) increments, average samples
- Example: double sampling = increments of ½ = 9 color values averaged for each pixel
- Average 9 (x, y) values to find pixel color

Postfiltering
- Supersampling weights all samples equally
- Post-filtering: use unequal weighting of samples
- Compute pixel value as weighted average
- Samples close to pixel center given more weight

Sample weighting:
    1/16  1/16  1/16
    1/16  1/2   1/16
    1/16  1/16  1/16

Antialiasing in OpenGL
- Many alternatives
- Simplest: accumulation buffer
- Accumulation buffer: extra storage, similar to frame buffer
- Samples are accumulated

When all slightly perturbed samples are done, copy results to the frame buffer and draw

Antialiasing in OpenGL
- First initialize:
- glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_ACCUM | GLUT_DEPTH);
- Zero out accumulation buffer:
- glClear(GL_ACCUM_BUFFER_BIT);
- Add samples to accumulation buffer using:
- glAccum( )

Antialiasing in OpenGL
- Sample code
- jitter[] stores randomized slight displacements of the camera; factor f controls the amount of overall sliding

glClear(GL_ACCUM_BUFFER_BIT);
for(int i = 0; i < 8; i++)
{
    cam.slide(f*jitter[i].x, f*jitter[i].y, 0);
    display( );
    glAccum(GL_ACCUM, 1/8.0);
}
glAccum(GL_RETURN, 1.0);

(jitter.h sample entries: -0.3348, 0.4353; 0.2864, -0.3934)

Computer Graphics CS 4731 Lecture 26 Curves Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI)

So Far
- Dealt with straight lines and flat

surfaces
- Real world objects include curves
- Need to develop:
- Representations of curves (mathematical)
- Tools to render curves

Interactive Curve Design
- Mathematical formula unsuitable for designers
- Prefer to interactively give a sequence of points (control points)
- Write a procedure:
- Input: sequence of points
- Output: parametric representation of curve

Interactive Curve Design
- 1 approach: curves pass through control points (interpolate)
- Example: Lagrangian Interpolating Polynomial
- Difficulty with this approach:
- Polynomials always have "wiggles"
- For straight lines wiggling is a problem
- Our approach: approximate control points (Bezier, B-Splines)

De Casteljau Algorithm
- Consider a smooth curve that approximates a sequence of control points [p0, p1, ...]

    p(u) = (1 - u)p0 + u*p1,    0 <= u <= 1

- The artist provides the control points; the system generates curve points using this math
- Blending functions: u and (1 - u) are non-negative and sum to one

De Casteljau Algorithm
- Now consider 3 points
- 2 line segments, P0 to P1 and P1 to P2

    p01(u) = (1 - u)p0 + u*p1
    p11(u) = (1 - u)p1 + u*p2

De Casteljau Algorithm
Substituting known values of p01(u) and p11(u):

    p(u) = (1 - u)p01(u) + u*p11(u)
         = (1 - u)^2 p0 + 2u(1 - u) p1 + u^2 p2

Blending functions for the degree 2 Bezier curve:

    b02(u) = (1 - u)^2
    b12(u) = 2u(1 - u)
    b22(u) = u^2

Note: blending functions are non-negative and sum to 1

De Casteljau Algorithm
- Extend to 4 control points P0, P1, P2, P3

    p(u) = (1 - u)^3 p0 + 3u(1 - u)^2 p1 + 3u^2(1 - u) p2 + u^3 p3

The final result above is the Bezier curve of degree 3, with blending functions b03(u), b13(u), b23(u), b33(u)
Source:
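De Casteljau's repeated linear interpolation can be coded directly. A small Python sketch (illustrative; scalar control points for brevity, with the explicit cubic formula included for comparison):

```python
# De Casteljau evaluation: repeated lerps between adjacent control points.
# Works for any degree; with 4 points it reproduces the cubic formula above.

def lerp(a, b, u):
    return (1 - u) * a + u * b

def de_casteljau(points, u):
    pts = list(points)
    while len(pts) > 1:
        pts = [lerp(pts[i], pts[i + 1], u) for i in range(len(pts) - 1)]
    return pts[0]

def bezier3(p0, p1, p2, p3, u):   # explicit Bernstein form for comparison
    return ((1 - u)**3 * p0 + 3*u*(1 - u)**2 * p1
            + 3*u**2*(1 - u) * p2 + u**3 * p3)

P = [0.0, 1.0, 3.0, 4.0]
print(de_casteljau(P, 0.0))                    # -> 0.0  (= P[0])
print(de_casteljau(P, 1.0))                    # -> 4.0  (= P[3])
print(de_casteljau(P, 0.5), bezier3(*P, 0.5))  # both -> 2.0
```

The curve interpolates the first and last control points and only approximates the middle ones, exactly as the blending functions predict.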

http://www.doksinet De Casteljau Algorithm

    p(u) = (1 - u)^3 p0 + 3u(1 - u)^2 p1 + 3u^2(1 - u) p2 + u^3 p3

- The blending functions are polynomial functions called Bernstein's polynomials:

    b03(u) = (1 - u)^3
    b13(u) = 3u(1 - u)^2
    b23(u) = 3u^2(1 - u)
    b33(u) = u^3

Subdividing Bezier Curves
- OpenGL renders flat objects
- To render curves, approximate with small linear segments
- Subdivide the surface into polygonal patches
- Bezier curves can either be straightened or curved recursively in this way

Bezier Surfaces
- Bezier surfaces: interpolate in two dimensions
- This is called bilinear interpolation
- Example: 4 control points P00, P01, P10, P11, and 2 parameters u and v
- Interpolate between:
- P00 and P01 using u
- P10 and P11 using u
- P00 and P10 using v
- P01 and P11 using v

    p(u, v) = (1 - v)((1 - u)p00

+ u*p01) + v((1 - u)p10 + u*p11)

Problems with Bezier Curves
- Bezier curves are elegant, but to achieve a smoother curve:
- = more control points
- = higher order polynomial
- = more calculations
- Global support problem:
- All blending functions are non-zero for all values of u
- All control points contribute to all parts of the curve
- Means after modelling a complex surface (e.g. a ship), if one control point is moved, recalculate everything!

B-Splines
- B-splines designed to address Bezier shortcomings
- B-Spline given by blending control points
- Local support: each spline contributes in a limited range
- Only non-zero splines contribute in a given range of u

    p(u) = sum over i = 0..m of Bi(u) pi

(B-spline blending functions, order 2)

NURBS
- Non-uniform Rational B-splines (NURBS)
- Rational function means a ratio of two polynomials
- Some curves can be expressed as rational functions but not as simple polynomials
- No known exact polynomial for a circle
- Rational parametrization of the unit circle on the xy-plane:

    x(u) = (1 - u^2)/(1 + u^2)
    y(u) = 2u/(1 + u^2)
    z(u) = 0

Tesselation
- Far = less detailed mesh; near = more detailed mesh (simplification)
- Previously: pre-generate mesh versions offline
- Tesselation shader unit new to GPU in DirectX 10 (2007)
- Subdivide faces on-the-fly to yield finer detail, generate new vertices, primitives
- Mesh simplification/tesselation on GPU = real-time LoD

Tessellation Shaders
- Can subdivide curves, surfaces on the GPU

Where Does Tesselation Shader Fit?
- Fixed number of vertices in/out before it; it can change the number of vertices

Geometry Shader
- After Tesselation shader. Can:
- Handle whole primitives
- Generate new primitives

Generate no primitives (cull) Source: http://www.doksinet References    Hill and Kelley, chapter 11 Angel and Shreiner, Interactive Computer Graphics, 6th edition, Chapter 10 Shreiner, OpenGL Programming Guide, 8th edition Source: http://www.doksinet Computer Graphics (CS 4731) Lecture 26: Image Manipulation Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Source: http://www.doksinet Image Processing   Graphics concerned with creating artificial scenes from geometry and shading descriptions Image processing    Input is an image Output is a modified version of input image Image processing operations include altering images, remove noise, super‐impose images Source: http://www.doksinet Image Processing  Example: Sobel Filter Original Image  Sobel Filter Image Proc in OpenGL: Fragment shader invoked on each element of texture  Performs calculation, outputs color to pixel in color buffer Source:
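The Sobel filter mentioned above is two 3x3 convolutions per pixel. A CPU-side Python sketch of the per-texel computation a fragment shader would perform (illustrative; grayscale input):

```python
# Sobel edge filter sketch: per-pixel gradient magnitude, as a fragment
# shader would compute it from neighboring texels (grayscale input).

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def sobel(img, x, y):
    gx = gy = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            v = img[y + dy][x + dx]
            gx += GX[dy + 1][dx + 1] * v
            gy += GY[dy + 1][dx + 1] * v
    return (gx * gx + gy * gy) ** 0.5

# vertical step edge: dark left half, bright right half
img = [[0, 0, 1, 1, 1] for _ in range(5)]
print(sobel(img, 2, 2))   # -> 4.0, strong response on the edge
print(sobel(img, 3, 2))   # -> 0.0, no response in the flat region
```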

http://www.doksinet Luminance
- Luminance of a color is its overall brightness (grayscale)
- Compute luminance from RGB as:

    Luminance = R * 0.2125 + G * 0.7154 + B * 0.0721

Image Negative
- Another example

Edge Detection
- Compare adjacent pixels
- If the difference is "large", this is an edge
- If the difference is "small", not an edge
- Comparison can be done in color or luminance

Embossing
- Embossing is similar to edge detection
- Replace pixel color with grayscale proportional to contrast with the neighboring pixel
- Add highlights depending on angle of change

Toon Rendering for Non-Photorealistic Effects

Geometric Operations
- Examples: translating, rotating, scaling an image

Non-Linear Image Warps

Original Twirl Ripple Spherical Source: http://www.doksinet References      Mike Bailey and Steve Cunningham, Graphics Shaders (second edition) Wilhelm Burger and Mark Burge, Digital Image Processing: An Algorithmic Introduction using Java, Springer Verlag Publishers OpenGL 4.0 Shading Language Cookbook, David Wolff Real Time Rendering (3rd edition), Akenine‐Moller, Haines and Hoffman Suman Nadella, CS 563 slides, Spring 2005 Source: http://www.doksinet Computer Graphics CS 4731 – Final Review Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Source: http://www.doksinet Exam Overview      Thursday, October 16, 2014 in‐class Midterm covered up to lecture 13 (Viewing & Camera Control) Final covers lecture 14 till today’s class (lecture 26) Can bring:  1 page cheat‐sheet, hand‐written (not typed)  Calculator Will test:  Theoretical concepts  Mathematics  Algorithms  Programming

 OpenGL/GLSL knowledge (program structure and commands) Source: http://www.doksinet Topics          Projection Lighting, shading and materials Shadows and fog Texturing & Environment mapping Image manipulation Clipping (2D and 3D clipping) and viewport transformation Hidden surface removal Rasterization (line drawing, polygon filling, antialiasing) Curves Source: http://www.doksinet Computer Graphics CS 4731 – Midterm Review Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Source: http://www.doksinet Exam Overview     Thursday, September 25, 2014, in‐class Will cover up to lecture 13 (Viewing) Can bring:  One page cheat‐sheet, hand‐written (not typed)  Calculator Will test:  Theoretical concepts  Mathematics  Algorithms  Programming  OpenGL/GLSL knowledge (program structure and some commands) Source: http://www.doksinet What am I Really Testing?  Understanding of

concepts (NOT only programming)
- programming (pseudocode/syntax)
- Test that:
- you can plug in numbers by hand to check your programs
- you did the projects
- you understand what you did in projects

General Advice
- Read your projects and refresh your memory of what you did
- Read the slides: worst case, if you understand the slides, you're more than 50% prepared
- Focus on mathematical results, concepts, algorithms
- Try to predict subtle changes to algorithms. What ifs?
- Past exams: one sample midterm is on the website
- All lectures have references. Look at the refs to focus reading
- Do all readings I asked you to do on your own

Grading Policy
- I try to give as much partial credit as possible
- Under time constraints, laying out an outline of the solution gets you a healthy chunk of points
- Try to write something for each question
- Many questions will be easy; it gets exponentially harder to score

higher in exam Source: http://www.doksinet Introduction     Motivation for CG Uses of CG (simulation, image processing, movies, viz, etc) Elements of CG (polylines, raster images, filled regions, etc) Device dependent graphics libraries (OpenGL, DirectX, etc) Source: http://www.doksinet OpenGL/GLUT      High‐level:  What is OpenGL?  What is GLUT?  What is GLSL  Functionality, how do they work together? Design features: low‐level API, event‐driven, portability, etc Sequential Vs. Event‐driven programming OpenGL/GLUT program structure (create window, init, callback registration, etc) GLUT callback functions (registration and response to events) Source: http://www.doksinet OpenGL Drawing       Vertex Buffer Objects glDrawArrays OpenGL :  Drawing primitives: GL POINTS, GL LINES, etc (should be conversant with the behaviors of major primitives)  Data types  Interaction: keyboard, mouse (GLUT LEFT

BUTTON, etc)  OpenGL state GLSL Command format/syntax Vertex and fragments shaders Shader setup, How GLSL works Source: http://www.doksinet 2D Graphics: Coordinate Systems     Screen coordinate system/Viewport World coordinate system/World window Setting Viewport Tiling, aspect ratio Source: http://www.doksinet Fractals  What are fractals?    Mandelbrot set      Self similarity Applications (clouds, grass, terrain etc) Complex numbers: s, c, orbits, complex number math Dwell function Assigning colors Mapping mandelbrot to screen Koch curves, gingerbread man, hilbert transforms Source: http://www.doksinet Points, Scalars Vectors  Vector Operations:         Addition, subtraction, scaling Magnitude Normalization Dot product Cross product Finding angle between two vectors Standard unit vector Normal of a plane Source: http://www.doksinet Transforms         Homogeneous

coordinates Vs. Ordinary coordinates 2D/3D affine transforms: rotation, scaling, translation, shearing Should be able to take problem description and build transforms and apply to vertices 2D: rotation (scaling, etc) about arbitrary center:  T(Px,Py) R() T(‐Px,‐Py) * P Composing transforms OpenGL transform commands (Rotate, Translate, Scale) 3D rotation:  x‐roll, y‐roll, z‐roll, about arbitrary vector (Euler theorem) if given azimuth, latitude of vector or (x, y, z) of normalized vector Matrix multiplication!! Source: http://www.doksinet Modeling and 3D Viewing  Implementing transforms (what goes in .cpp, what goes in shader) Drawing with Polygonal meshes  Hierarchical 3D modeling  Finding vertex normals Lookat(Eye, COI, Up ) to set camera     How to build 3 new vectors for axes How to build world‐to‐eye transformation
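As a concrete companion to the last bullets, a Python sketch of building the three camera axes for lookAt (illustrative; it assumes the usual convention n = eye - COI, u = up x n, v = n x u, which should be checked against the course's notation):

```python
# Build camera axes for lookAt(eye, COI, up): n points from the scene back
# toward the eye, u is the camera's right vector, v its true up vector.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(a):
    m = sum(x * x for x in a) ** 0.5
    return tuple(x / m for x in a)

def camera_axes(eye, coi, up):
    n = norm(sub(eye, coi))      # backward axis: eye - COI, normalized
    u = norm(cross(up, n))       # right axis
    v = cross(n, u)              # true up (already unit length)
    return u, v, n

u, v, n = camera_axes(eye=(0, 0, 5), coi=(0, 0, 0), up=(0, 1, 0))
print(u, v, n)   # eye on +z looking at origin: u=(1,0,0), v=(0,1,0), n=(0,0,1)
```

The rows u, v, n (together with translation by -eye) form the world-to-eye rotation, which is exactly what a lookAt matrix packs together.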