• Category Archives: Node Edit Editor
  • Texture stretching

    In this post I'll be writing about a technique I call texture stretching. I use this technique to draw nodes of various sizes using a single texture. This approach solves the problems with differing texture scale factors that are caused by the varying sizes of the nodes.

    The problem

    The problem when scaling 2d images while rendering them to the screen is that you need to retain the original image's aspect ratio. The aspect ratio is the ratio between the image's width and its height. An image's aspect ratio needs to be retained because otherwise the image's contents will be deformed. For example, if you have an image of a circle and you scale it without preserving its aspect ratio, the scaled image will show an oval.

    To scale an image while retaining its aspect ratio you can only use uniform scaling. This means you have to scale the width of the image by exactly as much as you scale the height. The problem with nodes is that they come in all sorts of widths and heights. The width is determined by the length of the node name and the names of its pins; this is required to properly fit the pin names within the node. The height of the node is determined by its number of input and output pins. Since nodes can specify their own name, their pin names and their number of pins, we cannot use uniform scaling for nodes. We need a scaling mechanism that is non-uniform, but retains the aspect ratio of the important parts of the image.

    The solution

    I've already hinted at the solution in the problem description. What you want to do is target the specific parts of the image that need to retain their aspect ratio, and use the 'unimportant' parts of the image to do the scaling. Our node textures are made of rounded rectangles. The most important parts of the image are the corners: those cannot be scaled non-uniformly, because that would throw the roundings out of proportion. Between the corners there are four edges. These contain the borders at the sides of the node, where we show pins, and at the top of the node, where we show the node's title. We want these edges to be scaled in a single direction only: the horizontal edges can only be scaled horizontally, while the vertical edges can only be scaled vertically. If we apply these scaling restrictions, we're sure that the area in between the borders doesn't change size in the directions that matter.

    To respect these restrictions we can no longer render a single quad that stretches the entire texture over it. Instead we're now required to render nine quads: four quads for the corners, four for the edges, and the remaining quad in the center. Here's how it'll look:

    In the image above you can see two sample nodes rendered with a single quad, and the same nodes rendered with nine quads, each respecting the scaling restrictions of the image.
    Take the Add node for example. On the left you can see that if we render the node texture using a single quad, the corners are blown up and the borders between the areas we defined before are much thicker than intended. In the second Add node, nine quads are used. The corner quads aren't scaled at all, which leaves the roundings nice and smooth. The horizontal scaling is only applied in the horizontal edges and the center area, so no thick borders appear in the vertical edges. Similarly, the vertical scaling is only applied in the vertical edges and the center area, so no thick borders appear in the horizontal edges either. The center area is scaled in both the horizontal and vertical directions, but since this area doesn't contain any shape that needs to retain its aspect ratio, you don't see any issue here.
    As a reference the Sine Wave node is in the image as well; you can see that it shows the exact same corners and borders as the Add node, while providing extra room for the longer pin names.
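    The nine-quad split can be sketched with plain rectangle math. A minimal sketch, assuming a hypothetical Rect struct and a square corner region of fixed size (these names are illustrative, not the editor's actual code):

```cpp
#include <array>

// Axis-aligned rectangle: top-left position plus size.
struct Rect { float x, y, w, h; };

// Split a destination rectangle into the nine quads described above:
// four fixed-size corners, four edges that stretch in one direction,
// and a center that stretches in both. 'corner' is the size of the
// unscaled corner region (assumed square here for simplicity).
std::array<Rect, 9> NineSlice( const Rect& dst, float corner )
{
    const float xs[ 3 ] = { dst.x, dst.x + corner, dst.x + dst.w - corner };
    const float ys[ 3 ] = { dst.y, dst.y + corner, dst.y + dst.h - corner };
    const float ws[ 3 ] = { corner, dst.w - 2.0f * corner, corner };
    const float hs[ 3 ] = { corner, dst.h - 2.0f * corner, corner };

    std::array<Rect, 9> quads;
    for ( int row = 0; row < 3; ++row )
        for ( int col = 0; col < 3; ++col )
            quads[ row * 3 + col ] = Rect{ xs[ col ], ys[ row ], ws[ col ], hs[ row ] };
    return quads;
}
```

    Because the four corner quads always keep the fixed corner size no matter how the node is stretched, the roundings never deform.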


  • Gui transformation and curved busses

    In my last post I told you about Node Edit, the project I recently started working on. The image I showed there had flawed positioning of the node names and node pin buttons. In this post I'll explain what caused that. I'll also introduce curved bus rendering: I'll explain the math behind it and show the shader I use for rendering the busses.

    The problem

    Here is the image from the first version again:

    The problem here is that the text and pin buttons don't align properly with the node background images. At first I thought the problem had to do with the text and buttons not rendering at the proper location. I was rendering the node backgrounds with an in-app renderer, while the text and buttons were rendered using the engine's gui system. However, the engine's gui system renders directly onto the screen, while the in-app renderer pushes the node backgrounds through an application-provided view matrix. I had already started using this view matrix to prepare for panning and zooming (something that's most definitely needed for bigger graphs).

    The solution

    After some modifications, the gui system now also uses matrices to render to the screen, rather than relying on a custom shader to do the transformation from gui coordinates to window coordinates.
    Using the World/View/Projection matrix slots of the engine's renderer, the gui now applies its matrices like this:

    Matrix4x4 viewMatrix = Matrix4x4::BuildSRT( Vector3f( 1.0f, -1.0f, 1.0f ), Vector3f(),
    	                                    Vector3f( screenSize[ 0 ] * 0.5f, screenSize[ 1 ] * 0.5f, 0.0f ) ).Inverse();
    Matrix4x4 projMatrix = Matrix4x4::BuildOrthoMatrix( screenSize[ 0 ], screenSize[ 1 ], 0.0f, 100.0f );
    renderer.ApplyMatrix( Render::MT_WORLD, Matrix4x4() );
    renderer.ApplyMatrix( Render::MT_VIEW, viewMatrix );
    renderer.ApplyMatrix( Render::MT_PROJECTION, projMatrix );


    I think the projection matrix speaks for itself: it's just an ordinary orthographic matrix mapping the screen's pixel dimensions to -1..+1, with a depth range from 0.0 to 100.0. The world matrix is set to the identity matrix. For now widgets still translate and rotate themselves manually; later on they could build a matrix for their transformation and fill that into the world matrix.
    The most interesting part about these matrices is that the view matrix can be used to solve two problems:

    • The center of the screen has coordinate 0, 0. For ui we would like the top-left corner of the window to be 0, 0.
    • In OpenGL the y-axis points up in the window, meaning that if we increase the y location of a widget, the widget is rendered more towards the top of the window. This is the opposite of what we want: we want an increase in y position to move the widget down.

    The first problem is solved by mixing a translation into the view matrix. We can make a translation matrix that translates everything up by half the screen height and left by half the screen width. If we fold this matrix into the view matrix, we effectively reposition the origin from the center of the screen to its top-left.
    The second problem is solved by having the view matrix invert the y-axis, which is as simple as negating the y scale. This renders the world upside-down, which is exactly what we want for the gui.
    After we've built this matrix we only need to invert it, so that it isn't a normal world matrix but an actual view matrix that can be used to 'subtract' a transformation from the widgets.
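    The combined effect of the two fixes can be checked with plain per-axis arithmetic. A minimal sketch of what the inverted view matrix does, written out as scalar math rather than the engine's Matrix4x4 (it assumes BuildSRT composes translation after scale; the GuiView name and the 800x600 window are hypothetical):

```cpp
// Per-axis equivalent of the inverted SRT view matrix from above: first
// subtract the translation (half the screen size), then undo the scale
// (whose y component of -1 flips the axis). Maps gui coordinates (origin
// top-left, y down) to centered coordinates (origin center, y up).
struct GuiView
{
    float halfW, halfH;

    void Apply( float guiX, float guiY, float& outX, float& outY ) const
    {
        outX = ( guiX - halfW ) / 1.0f;   // translate; x scale stays 1
        outY = ( guiY - halfH ) / -1.0f;  // translate; y scale of -1 inverts the axis
    }
};
```

    The gui origin lands at the centered top-left, and increasing gui y now moves a widget down, exactly the two problems listed above.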


    Once I had done this and properly matched the target size of the gui rendering with the size used for the node backgrounds, the text and gui buttons aligned perfectly with the node backgrounds:

    So there we go. The node titles now properly fit into their title boxes, and the pin buttons properly show up inside the nodes themselves.

    Curved lines

    As you can see I've also worked on the bus lines. I didn't like the straight lines anymore, so I wanted to make them curve a bit at the beginning and the end. It is quite hard to determine where to draw the line segments for a curved line, so I got a little help from Mr. Sine:

    A sine wave is nicely curved around its peaks and valleys, so why not borrow that property to define our curves. If we take just the area between 0.5π and 1.5π, we can use it to determine where we should draw our lines:

    If we sample the sine wave in this area and use its value to determine the height offset of each line segment, we're set. However, I didn't like one property of the area we need to sample: it doesn't start at 0, but at 0.5π. This would mean the math always needs to apply this offset when sampling the sine wave. There's one easy solution to this though: Mr. Cos:

    The cosine wave nicely starts at its maximum intensity when sampled at position 0. If we use the cosine wave instead, we don't have to apply the offset and we still get exactly the same shape for our busses.

    Vertex streaming

    Since node/pin positions can change every frame, we need a dynamic system that allows rendering the lines from/to any arbitrary position. The easiest way is to create a vertex buffer and fill it with vertices at the correct positions for every bus we want to draw. We can iterate over the number of line segments we want to draw the bus with and calculate the phase for each line edge. This phase can then be passed into the cosine evaluation and used to determine the vertex position. Then we upload the vertex buffer and render it out.
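    The per-segment evaluation described above can be sketched like this (Vec2 and BuildBusVertices are hypothetical names, not the editor's actual code):

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Build the line-strip vertices for one bus from 'start' to 'end'.
// Each segment edge gets a phase in 0..1; x interpolates linearly,
// while y follows the cosine wave remapped from -1..+1 to 0..1.
std::vector<Vec2> BuildBusVertices( Vec2 start, Vec2 end, int segments )
{
    std::vector<Vec2> verts;
    verts.reserve( segments + 1 );
    for ( int i = 0; i <= segments; ++i )
    {
        float phase = static_cast<float>( i ) / segments;
        float eased = -std::cos( phase * 3.14159265f ) * 0.5f + 0.5f;
        verts.push_back( Vec2{ start.x + ( end.x - start.x ) * phase,
                               start.y + ( end.y - start.y ) * eased } );
    }
    return verts;
}
```

    Filling and re-uploading a buffer like this for every bus on every frame is exactly the approach that causes the stall described next.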

    The problem is that doing this for every bus, every frame, would create a major pipeline stall. The stall is caused by us uploading new data into the vertex buffer over and over again: the driver doesn't properly buffer our vertices, so it makes us wait with uploading until the gpu is done rendering and can accept the change.

    Shader-based lines

    A better idea is to use constant vertex data and render the lines with a shader, using uniforms for the start/end position. You make a simple vertex buffer containing vertices with just xyz coordinates, and fill one of these coordinates with the phase of that point (I chose z).
    Then during rendering you set the uniforms and let the vertex shader apply the transformation for the lines:

    uniform mat4 ViewProj;
    uniform vec2 startPos;
    uniform vec2 deltaPos;
    void main()
    {
    	vec4 vPos = vec4( 0.0, 0.0, 0.0, 1.0 );
    	vPos.x = startPos.x + deltaPos.x * gl_Vertex.z;
    	vPos.y = startPos.y + deltaPos.y * ( -cos( gl_Vertex.z * 3.1415 ) * 0.5 + 0.5 );
    	gl_Position = ViewProj * vPos;
    }

    As I've chosen the vertex z coordinate to contain the phase, I have to read the phase from gl_Vertex.z. You can see that for the x position you just use the phase to linearly interpolate from start to end. For the y position, however, you go from start to end using the factor of the cosine wave. One important thing to note is that the result of the cosine evaluation needs to run from 0.0 to 1.0. To make that happen, you negate the cosine so it goes from -1 to +1 rather than from +1 to -1 (you can see in the image above that a cosine of 0 yields 1, not -1). After this flip, only a rescale is required: multiply by 0.5 to get a range of -0.5 to 0.5, then add 0.5 to make it range from 0.0 to 1.0.
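    The remap walked through above can be mirrored on the CPU to check its endpoints; a small sketch (CurveFactor is a hypothetical name for the shader's y factor):

```cpp
#include <cmath>

// The shader's y factor, written out step by step: negate the cosine so
// the curve runs from -1 at phase 0 to +1 at phase 1, scale that into
// -0.5..0.5, then shift it into 0.0..1.0.
float CurveFactor( float phase )
{
    float flipped = -std::cos( phase * 3.1415f ); // -1 at phase 0, +1 at phase 1
    return flipped * 0.5f + 0.5f;                 // remapped to 0..1
}
```

    Evaluating it at phases 0, 0.5 and 1 gives 0, 0.5 and 1: the bus starts exactly at the start pin, passes halfway, and ends exactly at the end pin.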

©2017 Echo Gaming