I contacted IT Services regarding Penn State branding on my webpages; they redirected me to the Office of Strategic Communications, from which I have yet to receive a response. I have requested the use of a small thumbnail image for all webpage icons (i.e. "favicon.ico") and the use of the full Penn State logo on the "index.html" webpage. Additionally, I would include the Penn State Harrisburg campus logo on the "about.html" webpage. Furthermore, I have communicated to the server admin my desire to have the HTTPS protocol enabled for this personal webspace; as far as I can tell, this feature has not yet been enabled for the specified directory.
Additionally, I have been researching the PHP reference for managing inter-process communication in the hope that it will enable real-time communication on the "comments.php" webpage. I have updated the "SSE.php" webpage to utilize both the "set_time_limit" and "ignore_user_abort" functions to explicitly manage the process lifetime; this invocation should alleviate the frequent disconnections due to timeout that were evident in "SSE-view.php". Additionally, the output to the "logs.txt" file has been updated to include more verbose information.
This week I have researched CSS attributes for use in my visual webpages assignment, and I discovered that CSS supports animations natively. I have created the "animations.html" webpage to experiment with animation properties. I assembled several stock images of fractals using the photo effects in the "paint.net" software for use in the visual webpages. The "animations.html" webpage uses the Julia fractal effect, and the "ambiance.html" webpage uses the Mandelbrot fractal effect layered over a cloud effect.
OpenGL is an open standard library for rendering computer graphics. The library provides an API for programmer interaction with graphics acceleration hardware (i.e. the GPU). In particular, the OpenGL pipeline defines a vertex shader and a fragment shader which together compose a shader program. This program is run by the hardware to process graphics primitives. Typically, the vertex shader of a program is run in parallel over the elements of a buffer of resources; this buffer usually comprises the vertex data of a 3D object. The output of the vertex shader is a vector which describes the position of a point in the scene. As such, the vertex shader is mostly used for applying transformations to each vertex, such as translations and rotations. The usual configuration of the OpenGL library uses collections of three points to construct triangles. The fragment shader is run on each pixel lying within the interior of these triangles, and its output is a vector value indicating the color of the pixel.
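The per-vertex work described above amounts to a matrix-vector product. As a minimal sketch (plain JavaScript standing in for shader code; the function and matrix names are illustrative, not from any of my project files), a "vertex shader" applying a translation looks like:

```javascript
// Multiply a 4x4 matrix (row-major, as an array of 16 numbers)
// by a homogeneous vertex [x, y, z, w].
function transformVertex(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[row * 4 + col] * v[col];
    }
  }
  return out;
}

// A translation by (2, 3, 4) expressed as a 4x4 matrix.
const translate = [
  1, 0, 0, 2,
  0, 1, 0, 3,
  0, 0, 1, 4,
  0, 0, 0, 1,
];

transformVertex(translate, [1, 1, 1, 1]); // → [3, 4, 5, 1]
```

In the real pipeline this product is evaluated by the GPU in parallel for every vertex in the buffer.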
Pixel fragments are recorded in a raster image buffer which includes a depth indicator. Primarily, the depth information is used to determine which pixel fragments should mask pixels from background objects; a pixel fragment overrides the value in the raster image if its depth value is less than the depth already present in the image. This depth is computed with the assumption that the three points form a triangle on a plane in 3D space; however, this is not always accurate when applying a perspective view transformation. Indeed, a perspective transformation will not even map triangular regions of the plane to triangular regions in the perspective space; nevertheless, this technique still generates very good approximations of perspective images.
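The depth test described above can be sketched in a few lines (this is a conceptual model, not the actual buffer layout used by the hardware; the names are illustrative):

```javascript
// A fragment replaces the stored pixel only when its depth is less
// than the depth already recorded in the buffer.
function writeFragment(colorBuf, depthBuf, index, color, depth) {
  if (depth < depthBuf[index]) {
    depthBuf[index] = depth;
    colorBuf[index] = color;
    return true;  // fragment is visible
  }
  return false;   // fragment is masked by nearer geometry
}

const colors = ["background"];
const depths = [Infinity];
writeFragment(colors, depths, 0, "far", 10);    // → true
writeFragment(colors, depths, 0, "near", 5);    // → true, overrides "far"
writeFragment(colors, depths, 0, "hidden", 8);  // → false, masked
```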
I have been working to fix some of the issues with the "viewer.html" webpage. Specifically, there were some problems with the mapping between the clip space and normalized device coordinates; the problem manifested itself in some of the geometry being clipped by the far plane. I have fixed the issue by performing the transformation manually within the vertex shader.
Additionally, I have made some improvements to the interface for the "viewer.html" webpage. It now includes a menu for selecting a model to view in the window. Upon selecting a model from the drop-down list, the model is dynamically loaded; the progress of the load is displayed using a progress meter. The viewer now includes a "bunny.obj" model, a "dinosaur.obj" model, and an "armadillo.obj" model which can be selected from the drop-down menu.
I have been experimenting with the "_SESSION" variable in PHP, which is used for easily managing persistent state between requests. A webpage session allows stateful content to be managed within the HTTP protocol, which is itself connectionless. A state is established during an HTTP request by using the "Set-Cookie" HTTP header; the contents of this header are a list of key/value pairs of the form "KEY=VALUE", potentially with some additional options. The browser stores the contents of this header and includes them in the "Cookie" header of subsequent HTTP requests. By including a session identifier as an element of the list in the initial "Set-Cookie" HTTP header, the server can look up the state of the previous communication based on a unique session identifier.
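To make the key/value structure concrete, here is a small sketch of parsing a "Cookie" request header back into pairs (a simplified illustration in JavaScript; "PHPSESSID" is PHP's customary session cookie name, and the function name is invented):

```javascript
// Split a Cookie header of the form "KEY=VALUE; KEY2=VALUE2" into
// an object of key/value pairs.
function parseCookieHeader(header) {
  const pairs = {};
  for (const part of header.split(";")) {
    const eq = part.indexOf("=");
    if (eq === -1) continue; // skip bare flags with no value
    const key = part.slice(0, eq).trim();
    pairs[key] = part.slice(eq + 1).trim();
  }
  return pairs;
}

parseCookieHeader("PHPSESSID=abc123; theme=dark");
// → { PHPSESSID: "abc123", theme: "dark" }
```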
In PHP, each session is recorded in a separate file; the filename includes an identifying mark so that it may be easily located in the file system when subsequent requests arrive. The contents of this file are the serialized information from the "_SESSION" variable; any information saved in the "_SESSION" variable will be available during the next request from the same client. To demonstrate this effect, I have created a very simple "views.html" webpage which updates the value of a "views" variable in the "_SESSION" variable to indicate the number of times the current client has loaded the page. Eventually, both the browser and the server must decide when they should discard the contents of their local session information. Therefore, the "_SESSION" variable should only be used for information that persists between connections; it should not be used to maintain information indefinitely.
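The view-counting mechanism can be sketched as follows (a conceptual model only: an in-memory map stands in for PHP's file-backed session store, and "handleRequest" is an invented name, not part of my project):

```javascript
// One entry per session id; each entry plays the role of _SESSION.
const sessions = new Map();

function handleRequest(sessionId) {
  if (!sessions.has(sessionId)) sessions.set(sessionId, { views: 0 });
  const session = sessions.get(sessionId);
  session.views += 1; // persists into the next request from this client
  return `You have viewed this page ${session.views} time(s).`;
}

handleRequest("abc"); // → "You have viewed this page 1 time(s)."
handleRequest("abc"); // → "You have viewed this page 2 time(s)."
handleRequest("xyz"); // → a different client starts at 1
```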
This week, I have created a very simple ray-tracer implementation within the "viewer.html" webpage. At this time, the application only renders an infinite plane with a simple "grid texture" to demonstrate the calculation of intersection points on the plane. The application has very poor efficiency, so the resolution of the image is much worse than the "WebGL" render. The camera position is animated to move forward into the screen.
The ray-tracer algorithm for computer graphics rendering simulates the interaction of light particles with a virtual camera viewing window. In particular, instead of tracing particles of light from the scene to the camera, the camera casts virtual rays out into the world, where intersections with world geometry are calculated; rays that are incident on geometry may be recast into the world to simulate reflections. Each pixel in the viewport casts a ray into the world, and the closest geometry intersection determines the color of the source pixel.
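The intersection step for the infinite plane case can be sketched as follows (a simplified illustration, not the code from "viewer.html"; the function name and the choice of the plane y = 0 are mine):

```javascript
// Intersect a ray origin + t * dir with the plane y = 0.
// Returns the distance t along the ray, or null when there is no hit.
function intersectGroundPlane(origin, dir) {
  if (dir[1] === 0) return null;   // ray is parallel to the plane
  const t = -origin[1] / dir[1];   // solve origin.y + t * dir.y = 0
  return t > 0 ? t : null;         // only count hits in front of the camera
}

// A camera at height 1 looking diagonally downward hits the plane at t = 1.
intersectGroundPlane([0, 1, 0], [0, -1, 1]); // → 1
intersectGroundPlane([0, 1, 0], [0, 1, 0]);  // → null (pointing away)
```

The intersection point, origin + t * dir, is then sampled against the grid texture to decide the pixel color.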
Ray-tracing techniques generally produce images with better realism than most rasterization techniques; however, ray-tracing typically requires many more calculations and is not usually supported by hardware-accelerated graphics devices. As a result, rasterization techniques are much more commonly used in production software.
The "viewer.html" for the raytracer project has been updated to use gaussian elimination with partial pivoting to solve the linear system for the plane equation. Additionally, the project now supports the sphere geometry. Also The animation has been removed so that the image can be rendered at a higher resolution. A "save_image" link is now available to download a capture of the image generated by the experiment.
The "viewer.html" for the WebGL project has been updated with a new interface for manipulating the model. Specifically, the primary mouse button now rotates the model using householder transformations; the mouse is assumed to exist at the eye of the observer with corrdinates determined by the near plane which determines a vector that maps a fixed point in the space. The "arrow.obj" model was designed to test the "look at" functionality provided by the householder transformation; it is now included as a predefined model. Furthermore, the control bar has added the capability to halt the automatic animation, reset the orientation of the model, modify the field-of-view (FOV) of the observer, and set the displacement of the model. Additionally, the model loader now supports the loading of models from the local disk.
This week, I have created the "viewer.html" webpage, which implements a very simple rasterization algorithm for rendering 3D graphics. The project includes a "fill_triangle_tester.html" webpage which demonstrates the "fill triangle" algorithm for drawing triangle primitives. Overall, the final project displays a 3D cube which is animated through continuous rotation about the diagonal of the cube; this transformation is an axis-angle rotation using a composition of Householder transformations and a Givens rotation. In order to alleviate some of the computational complexity in the rendering loop and reduce animation delays, the vertices are mapped to screen space through a simple orthographic projection (i.e. ignoring the depth component) after the rotation.
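The axis-angle construction can be sketched as follows: conjugate a Givens rotation about the z-axis by a Householder reflection that maps the desired axis onto the z-axis. This is a generic illustration of the technique, not the project's code; all names are mine, and because a reflection flips orientation, the sense of the resulting rotation is reversed relative to the Givens angle.

```javascript
function matMulVec(m, x) { // 3x3 matrix (array of rows) times a vector
  return m.map(row => row[0] * x[0] + row[1] * x[1] + row[2] * x[2]);
}

function matMul(a, b) {    // 3x3 matrix product
  return a.map(row => [0, 1, 2].map(j =>
    row[0] * b[0][j] + row[1] * b[1][j] + row[2] * b[2][j]));
}

// Reflection across the plane orthogonal to v: H = I - 2 v vᵀ / (v·v).
function householderMatrix(v) {
  const d = v[0] * v[0] + v[1] * v[1] + v[2] * v[2];
  return [0, 1, 2].map(i => [0, 1, 2].map(j =>
    (i === j ? 1 : 0) - (2 * v[i] * v[j]) / d));
}

// axis must be a unit vector other than the z-axis itself.
function axisAngle(axis, theta) {
  const v = [axis[0], axis[1], axis[2] - 1]; // H maps axis onto [0, 0, 1]
  const H = householderMatrix(v);
  const c = Math.cos(theta), s = Math.sin(theta);
  const G = [[c, -s, 0], [s, c, 0], [0, 0, 1]]; // Givens rotation about z
  return matMul(H, matMul(G, H)); // H is its own inverse
}

// Rotating [0, 1, 0] by 90 degrees about the x-axis yields [0, 0, -1]
// here, because the reflection reverses the usual right-handed sense.
matMulVec(axisAngle([1, 0, 0], Math.PI / 2), [0, 1, 0]);
```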
The "fill_triangle_tester.htmlwebpage is an interface for interacting with a single triangle primitive. Dragging the cursor over the canvas moves the selected point within the view; each vertex can be selected using its corresponding key which inclues any of "1", "2", and "3" on the number keys. Dragging a vertex off of the ends of the display region will cause the drawing to wrap around to the next row of pixels because the pixel data is drawn to a linear image data array where the bounding width is not clipped. The "fill triangle" algorithm uses scan-line rasterization with simple lattice point containment to determine weather a pixel is contained in the triangle; it does not perform any explicity super-sampling, antialiasing, or interpolation of the lattice points. The algorithm supports back-face culling by determining the orientation of the boundary of the triangle; the triangle is not filled when the orientation is not right-handed.
Back-face culling is an optimization to the rasterization algorithm by which each face has exactly one visible side; when the face is directed away from the observer, it is not rendered thereby saving computation. As many 3D models are closed surfaces, there is almost always a set of faces which are directed away from the observer.
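The orientation test behind this optimization reduces to the sign of a 2D cross product. A minimal sketch (generic, not the project's code; the names are illustrative, and note that in screen coordinates, where y points downward, the sense of the winding is flipped):

```javascript
// Twice the signed area of triangle (a, b, c); positive when the
// vertices wind counter-clockwise in standard y-up coordinates.
function signedArea2(a, b, c) {
  return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
}

function isFrontFacing(a, b, c) {
  return signedArea2(a, b, c) > 0; // cull when not counter-clockwise
}

isFrontFacing([0, 0], [1, 0], [0, 1]); // → true (counter-clockwise)
isFrontFacing([0, 0], [0, 1], [1, 0]); // → false (clockwise, culled)
```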
This week, I have updated the "viewer.html" webpage for the raytracer project to include reflections. There is now a gallery of pregenerated images provided on the "gallery.html" webpage.
The "viewer.html" webpage of the rasterizer project now performs a perspective transfomation on the model verticies before rendering them to the screen. Additionally, the project now includes a link to the "texture_map_tester.html" webpage; this webpage demonstrates texture mapping of image data onto the surface of a plane. The tilt of the plane can be modified by dragging the cursor over the canvas.
This is my final blog entry to satisfy the requirements of ART 101: Introduction to Web Design at the Pennsylvania State University.