• Category Archives: Math
  • Gui transformation and curved busses

In my last post I told you about Node Edit, the project I recently started working on. The image I showed there had flawed positioning of the node names and node pin buttons. In this post I'll explain what caused that. I'll also introduce curved bus rendering: I'll explain the math behind it and show the shader I use to render the curves.

    The problem

    Here is the image from the first version again:

The problem here is that the text and pin buttons don't align properly with the node background images. At first I thought the problem had to do with the text and buttons not rendering at the proper location. I was rendering the node backgrounds with an in-app renderer, while the text and buttons were rendered using the engine's gui system. However, the engine's gui system renders directly onto the screen, while the in-app renderer puts the node backgrounds through an application-provided view matrix. I had already started using this view matrix to prepare for panning and zooming (something that's most definitely needed for bigger graphs).

    The solution

After some modifications, the gui system now also uses matrices to render to the screen, rather than relying on a custom shader to do the transformation from gui coordinates to window coordinates.
Using the World/View/Projection matrices from the engine's renderer, the gui now applies the matrices like this:

// View: flip the y-axis and move the origin to the top-left of the window,
// then invert so it maps window coordinates back into view space.
Matrix4x4 viewMatrix = Matrix4x4::BuildSRT( Vector3f( 1.0f, -1.0f, 1.0f ), Vector3f(),
	                                    Vector3f( screenSize[ 0 ] * 0.5f, screenSize[ 1 ] * 0.5f, 0.0f ) ).Inverse();
// Projection: ordinary orthographic projection over the full window, depth 0..100.
Matrix4x4 projMatrix = Matrix4x4::BuildOrthoMatrix( screenSize[ 0 ], screenSize[ 1 ], 0.0f, 100.0f );
renderer.ApplyMatrix( Render::MT_WORLD, Matrix4x4() );      // identity; widgets still transform themselves
renderer.ApplyMatrix( Render::MT_VIEW, viewMatrix );
renderer.ApplyMatrix( Render::MT_PROJECTION, projMatrix );


I think the projection matrix speaks for itself. It is just an ordinary orthographic matrix mapping the screen's pixel dimensions to -1..+1, with a depth range from 0.0 to 100.0. The world matrix is set to an identity matrix. For now widgets still manually translate and rotate themselves; later on they could build a matrix for their transformation and fill that into the world matrix.
The most interesting part is that the view matrix can be used to solve two problems:

• The center of the screen has coordinate 0, 0. For ui we would like the top-left corner of the window to be 0, 0.
• In OpenGL the y-axis points up in the window, meaning that if we increase the y location of a widget, the widget is rendered closer to the top of the window. This is the opposite of what we want: an increase in y position should move the widget down.

The first problem is solved by mixing a translation into the view matrix. We can make a translation matrix that moves everything up by half the screen height and left by half the screen width. Applying this to the view matrix effectively repositions the origin from the center of the screen to its top-left corner.
The second problem is solved by having the view matrix invert the y-axis, which is as simple as negating the y scale. This renders the world upside down, which is exactly what we want for the gui.
After we've built this matrix we only need to invert it, so that it isn't a normal world matrix but an actual view matrix that can be used to 'subtract' a transformation from the widgets.
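To double-check the effect of that inverse, here is a tiny standalone sanity check with plain floats. It assumes BuildSRT composes scale, rotation and translation in that order; GuiToViewSpace and the numbers are mine for illustration, not engine code:

#include <cstdio>

// Hypothetical stand-in for the inverted view transform above:
// subtract the half-screen translation, then undo the (1, -1, 1) scale.
void GuiToViewSpace( float x, float y, float screenW, float screenH,
                     float& outX, float& outY )
{
	outX = ( x - screenW * 0.5f ) /  1.0f;   // undo translation, unit x scale
	outY = ( y - screenH * 0.5f ) / -1.0f;   // undo translation, flip y
}

int main()
{
	float x, y;
	GuiToViewSpace( 0.0f, 0.0f, 1280.0f, 720.0f, x, y );
	std::printf( "top-left     -> %.0f, %.0f\n", x, y );  // -640, 360: upper-left in GL view space
	GuiToViewSpace( 1280.0f, 720.0f, 1280.0f, 720.0f, x, y );
	std::printf( "bottom-right -> %.0f, %.0f\n", x, y );  //  640, -360: lower-right
}

The orthographic projection then maps these view-space coordinates to -1..+1, so the gui origin ends up at the top-left of the window, as intended.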

    Result

Once I had done this and properly matched the target size of the gui rendering with the size used for the node backgrounds, the text and gui buttons aligned perfectly with the node backgrounds:

    So there we go. The node titles now properly fit into their title boxes, and the pin buttons properly show up inside the nodes themselves.

    Curved lines

As you can see I've also worked on the bus lines. I didn't like the straight lines anymore, so I wanted to make them curve a bit at the beginning and the end. It is quite hard to determine where to place the line segments of a curved line, so I got a little help from Mr. Sine:

[Figure: plot of a sine wave]

A sine wave is nicely curved around its peaks and valleys, so why not borrow that property to define our curves? If we take just the area between 0.5π and 1.5π, we can use it to determine where to draw our lines:

[Figure: sine wave with the area between 0.5π and 1.5π highlighted]

If we sample the sine wave over this area and use its value to determine the height offset for a specific line segment, we're set. However, I didn't like one property of the area we need to sample: it doesn't start at 0 but at 0.5π. This would mean the math always has to apply this offset when sampling the sine wave. There's one easy solution to this though, Mr. Cosine:

[Figure: plot of a cosine wave]

The cosine wave nicely starts at its maximum intensity when sampled at position 0. This works because cos(x) = sin(x + 0.5π): the cosine is simply the sine shifted left by exactly the offset we wanted to get rid of. If we use the cosine wave instead we don't have to apply the offset, and we still get exactly the same shape for our busses.

    Vertex streaming

Since node/pin positions can change every frame, we need a dynamic system that can render the lines from/to any arbitrary position. The easiest way is to create a vertex buffer and fill it with vertices at the correct positions for every bus we want to draw. We iterate over the number of line segments we want to draw the bus with and calculate the phase at each segment edge. This phase is passed into the cosine evaluation and the result determines the vertex position. Then we upload the vertex buffer and render it. A sketch of this approach follows below.
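Here is roughly what that per-frame tessellation could look like in plain C++. BusVertex and TessellateBus are hypothetical names for illustration, not engine code:

#include <cmath>
#include <vector>

struct BusVertex { float x, y; };  // hypothetical CPU-side vertex layout

// Tessellate one bus from 'start' to 'end' into 'segments' line segments.
std::vector< BusVertex > TessellateBus( BusVertex start, BusVertex end, int segments )
{
	std::vector< BusVertex > verts;
	verts.reserve( segments + 1 );
	for ( int i = 0; i <= segments; ++i )
	{
		// Phase runs from 0 to 1 along the bus.
		float phase  = float( i ) / float( segments );
		// Cosine easing: remap cos from +1..-1 to a 0..1 height factor.
		float height = -std::cos( phase * 3.14159265f ) * 0.5f + 0.5f;
		verts.push_back( { start.x + ( end.x - start.x ) * phase,
		                   start.y + ( end.y - start.y ) * height } );
	}
	return verts;  // would then be uploaded into a dynamic vertex buffer every frame
}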

The problem is that doing this for every bus, every frame, would create a major pipeline stall. The stall is caused by us uploading new data into the vertex buffer over and over again: the driver doesn't buffer our vertices for us, so the upload has to wait until the gpu is done rendering and can accept the change.

    Shader based lines

A better idea is to use constant vertex data and render the lines with a shader that takes the start/end position as uniforms. You can make a simple vertex buffer containing vertices with just xyz coordinates, filling one of these coordinates with the phase of that point (I chose z).
Then during rendering you set the uniforms and let the vertex shader apply the transformation for the lines:

uniform mat4 ViewProj;
uniform vec2 startPos;   // start of the bus, in gui coordinates
uniform vec2 deltaPos;   // end position minus start position

void main()
{
	vec4 vPos = vec4( 0.0, 0.0, 0.0, 1.0 );

	// gl_Vertex.z carries the phase (0..1) along the bus.
	vPos.x = startPos.x + deltaPos.x * gl_Vertex.z;
	// Remap the cosine from +1..-1 to 0..1 to ease the y movement.
	vPos.y = startPos.y + deltaPos.y * ( -cos( gl_Vertex.z * 3.1415 ) * 0.5 + 0.5 );

	gl_Position = ViewProj * vPos;
}


As I've chosen the vertex z coordinate to contain the phase, I have to read the phase from gl_Vertex.z. For the x position you simply use the phase to linearly interpolate from start to end. For the y position, however, you go from start to end using the remapped cosine wave. One important thing to note is that the result of the cosine evaluation needs to go from 0.0 to 1.0. To make that happen you negate the cosine so it runs from -1 to +1 instead of from +1 to -1 (as the image above shows, a cosine of 0 yields 1, not -1). After this flip only a rescale is required: multiply by 0.5 to get a range of -0.5 to 0.5, then add 0.5 to make it range from 0.0 to 1.0.
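For completeness, here is roughly what the CPU side could look like: the static buffer is built once with the phase stored in z, and per bus only two uniforms change before the draw. The GL calls are sketched in comments to match the legacy GLSL above; names like startPosLoc are placeholders, not engine code:

#include <vector>

struct LineVertex { float x, y, z; };  // x and y unused here, z carries the phase

// Built once at startup: 'segments + 1' vertices whose z runs from 0 to 1.
std::vector< LineVertex > BuildStaticBusVertices( int segments )
{
	std::vector< LineVertex > verts;
	verts.reserve( segments + 1 );
	for ( int i = 0; i <= segments; ++i )
		verts.push_back( { 0.0f, 0.0f, float( i ) / float( segments ) } );
	return verts;
}

// Per bus, per frame: only two vec2 uniforms change, no vertex upload.
// glUniform2f( startPosLoc, startPin.x, startPin.y );
// glUniform2f( deltaPosLoc, endPin.x - startPin.x, endPin.y - startPin.y );
// glDrawArrays( GL_LINE_STRIP, 0, segments + 1 );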


  • Ray to Plane intersection

Sometimes you may want to find the intersection point between a ray and a plane. Finding the point of intersection is really easy if you approach it mathematically. We've got the formulas for any point on the ray and on the plane:

    Ray: P = P0 + tV
    Plane: 0 = P dot N + d

The formulas above hold if P represents a point on the ray or plane, P0 the ray's origin, V the ray's direction, N the plane's normal and d the plane's distance from the origin. The variable t represents the distance from the ray's origin to a point along the ray (assuming V is normalized).

If we want to find the intersection P between the ray and the plane, we need to combine the two formulas. We do this by substituting the ray equation for P:

    0 = (P0 + tV) dot N + d

We can now expand this to:

    0 = P0 dot N + tV dot N + d

Move everything except tV dot N to the other side:

tV dot N = -(P0 dot N + d)

Divide both sides by V dot N to isolate t:

    t = -(P0 dot N + d) / (V dot N)

We have now found t, which represents the distance from the ray's origin to the point of intersection (note that this only works when V dot N is not zero; if it is zero, the ray is parallel to the plane and never crosses it). This result might be useful for other things, but you can also use it to find the intersection point itself by multiplying t by the ray's direction and adding that to the ray's origin.
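In code the whole derivation collapses to a few lines. A minimal sketch with a hypothetical Vec3 type; note the divide, which is why V dot N must be checked first:

#include <cmath>

struct Vec3 { float x, y, z; };  // hypothetical minimal vector type

float Dot( const Vec3& a, const Vec3& b ) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true and fills 'hit' when the ray P0 + tV crosses the plane
// P dot N + d = 0 in front of the ray origin (t >= 0).
bool RayPlaneIntersect( const Vec3& P0, const Vec3& V,
                        const Vec3& N, float d, Vec3& hit )
{
	float denom = Dot( V, N );
	if ( std::fabs( denom ) < 1e-6f )
		return false;  // ray is parallel to the plane

	float t = -( Dot( P0, N ) + d ) / denom;
	if ( t < 0.0f )
		return false;  // plane lies behind the ray origin

	hit = { P0.x + V.x * t, P0.y + V.y * t, P0.z + V.z * t };
	return true;
}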



