• Category Archives: Node Edit
  • Texture stretching

    In this post I'll be writing about a technique I call texture stretching. I use this technique to draw nodes of various sizes with a single texture. This approach solves the problems with differing texture scale factors that are caused by the varying sizes of the nodes.

    The problem

    The problem with scaling 2D images while rendering them to the screen is that you need to retain the original image's aspect ratio. The aspect ratio is the ratio between the image's width and its height. An image's aspect ratio needs to be retained because otherwise the image's contents will be deformed. For example, if you scale an image of a circle without preserving its aspect ratio, the scaled image will show an oval.

    To scale an image while retaining its aspect ratio you can only use uniform scaling, meaning you have to scale the width of the image by just as much as you scale the height. The problem with nodes is that they come in all sorts of widths and heights. A node's width is determined by the length of the node name and the names of its pins, which is required to properly fit the pin names within the node. Its height is determined by its number of input and output pins. Since nodes themselves specify their name, their pin names and their number of pins, we cannot use uniform scaling for nodes. We need a scaling mechanism that is non-uniform, yet retains the aspect ratio of the important parts of the image.

    The solution

    I've already hinted a bit at the solution in the problem description. What you want to do is identify the parts of the image that need to retain their aspect ratio, and then use the 'unimportant' parts of the image to do the scaling. Our node textures are rounded rectangles. The most important parts of the image are the corners: those cannot be scaled non-uniformly because that would push the roundings out of proportion. Between the corners there are four edges, which contain the borders at the sides of the node where we show pins and at the top of the node where we show the node's title. We want these edges to be scaled in a single direction only: the horizontal edges may only be scaled horizontally, while the vertical edges may only be scaled vertically. If we apply these scaling restrictions, we're sure that the size of the area between the borders doesn't change in the directions that matter.

    To respect these restrictions we can no longer render a single quad that stretches the entire texture over it. Instead we now render nine quads: four for the corners, four for the edges, and the remaining quad for the center. Here's how it'll look:

    In the image above you can see two samples: nodes rendered with a single quad, and nodes rendered with nine quads, each respecting the scaling restrictions of the image.
    Take the Add node for example. On the left you can see that if we render the node texture using a single quad, the corners are blown up and the borders between the areas we defined before are much thicker than intended. In the second Add node nine quads are used. The corner quads aren't scaled at all, which leaves the roundings nice and smooth. The horizontal scaling is only applied to the horizontal edges and the center area, so no thick borders appear in the vertical edges. Similarly, the vertical scaling is only applied to the vertical edges and the center area, so no thick borders appear in the horizontal edges either. The center area is scaled in both the horizontal and vertical directions, but since this area doesn't contain any shape that needs to retain its aspect ratio, you don't see any issue here.
    As a reference the Sine Wave node is in the image as well; you can see that it shows the exact same corners and borders as the Add node, but it provides extra room for the longer pin names.
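    To make the layout concrete, here's a minimal sketch of how the nine quads could be computed. The Rect and Quad structs, the function name and the fixed corner size parameter are hypothetical rather than the actual code used here; the point is only that the corner cells keep the texture's corner size, the edge cells stretch in one direction, and the center cell stretches in both:

    #include <vector>

    struct Rect { float x, y, w, h; };
    struct Quad { Rect dest; Rect src; };

    // Hypothetical sketch: compute the nine quads for a node of size ( w, h ) rendered with a
    // texture of size ( texW, texH ), where 'c' is the fixed corner size in pixels.
    std::vector< Quad > BuildNineQuads( float w, float h, float texW, float texH, float c )
    {
        // Split destination and source rectangles into three columns and three rows.
        float dstX[ 4 ] = { 0.0f, c, w - c, w };
        float dstY[ 4 ] = { 0.0f, c, h - c, h };
        float srcX[ 4 ] = { 0.0f, c, texW - c, texW };
        float srcY[ 4 ] = { 0.0f, c, texH - c, texH };

        std::vector< Quad > quads;
        for ( int row = 0; row < 3; ++row )
            for ( int col = 0; col < 3; ++col )
            {
                Quad q;
                // Corners keep size c x c, edges stretch along one axis, the center stretches along both.
                q.dest = Rect{ dstX[ col ], dstY[ row ], dstX[ col + 1 ] - dstX[ col ], dstY[ row + 1 ] - dstY[ row ] };
                q.src  = Rect{ srcX[ col ], srcY[ row ], srcX[ col + 1 ] - srcX[ col ], srcY[ row + 1 ] - srcY[ row ] };
                quads.push_back( q );
            }
        return quads;
    }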

     


  • Bus data traversal

    Recently I've been working on getting data to traverse the graph. I've already made the nodes, the input and output pins and the busses, so all I have to do now is make output pins output data which can be consumed by input pins. The problem is that the type of data that pins will be outputting isn't known to the engine, so we can't simply get the data from one pin and pass it to another.

    Data interfaces

    To enable nodes to output any kind of data I've introduced a single shared interface which needs to be implemented by every datatype. This interface enables the engine to pass data objects between pins without knowing the actual data type. Within the nodes the shared interface can be queried for the interface to the actual data type. For example, when a node needs to use the Real datatype it would look something like this:
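    (A minimal sketch of the idea; IData, IRealData and the function below are hypothetical names, and a plain dynamic_cast stands in for whatever query mechanism the engine actually uses.)

    // Hypothetical shared data interface and a concrete Real interface built on top of it.
    struct IData
    {
        virtual ~IData() {}
    };

    struct IRealData : public IData
    {
        virtual double GetValue() const = 0;
        virtual void SetValue( double value ) = 0;
    };

    // Inside a node: the engine hands us the shared interface, we query it for the Real interface.
    void ProcessRealInput( IData* input )
    {
        if ( IRealData* real = dynamic_cast< IRealData* >( input ) )
        {
            double value = real->GetValue();
            // ... use the Real value in the node's processing ...
            (void)value;  // silence unused-variable warnings in this sketch
        }
    }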

    By using this interfacing approach we can support any number of different data types: plugins can simply register a datatype, and the engine doesn't need to be updated as long as that datatype implements the shared interface.

    Graph traversal

    There are several steps that nodes and the engine need to take to make data traverse the graph:

    1. A loaded plugin needs to expose a datatype. It does this by implementing an IDataFactory interface. When loading the plugin, the engine retrieves the data factory and asks it what kinds of datatypes it exposes. The data factory returns a Universally Unique IDentifier (UUID) for each exposed datatype (a sketch of this follows the list).
    2. The node's output pins need to request an instance of a datatype. They do this by asking the engine's plugin library whether it can create a data object for a certain UUID. The engine will invoke the data factory from the plugin that exposed the datatype; the data factory is responsible for actually instantiating it.
    3. While the graph is running, nodes simply set values on their output pins' data instances. They do this from within the node's Process call.
    4. When the engine encounters a bus between an output and an input pin it will get the data instance from the output pin. In normal circumstances it directly sets this instance on the connected node's input pin.
    5. Sometimes the datatype of an output pin doesn't match that of the input pin it is connected to. In this case the engine asks the datatype to convert itself to the required datatype. If this is not possible it instead asks the required datatype to convert itself from the provided datatype. This way, when new datatypes are added later they can still be converted to/from old datatypes.
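    Below is a minimal sketch of what steps 1, 2 and 5 could look like. IDataFactory matches the name used above, but the member functions, the UUID stand-in and the conversion hooks are hypothetical. In the normal case (step 4) the engine simply hands the output pin's data instance to the input pin, so the helper at the bottom only covers the mismatch case of step 5:

    #include <string>
    #include <vector>

    typedef std::string UUID;  // stand-in for a real UUID type

    // Shared data interface from the previous section, extended with conversion hooks for step 5.
    struct IData
    {
        virtual ~IData() {}
        virtual bool ConvertTo( IData& target ) const = 0;    // convert this value into 'target'
        virtual bool ConvertFrom( const IData& source ) = 0;  // fill this value from 'source'
    };

    // Step 1: a plugin exposes its datatypes through a factory.
    struct IDataFactory
    {
        virtual ~IDataFactory() {}
        virtual std::vector< UUID > ExposedDataTypes() const = 0;
        virtual IData* CreateData( const UUID& type ) = 0;  // step 2: instantiate on request
    };

    // Step 5: when the datatypes on the two ends of a bus don't match, try both directions.
    bool ConvertAcrossBus( const IData& output, IData& input )
    {
        if ( output.ConvertTo( input ) )     // ask the output's datatype first
            return true;
        return input.ConvertFrom( output );  // otherwise ask the input's datatype
    }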

    Here is the first graph where data actually traverses from node to node:

    What you see here is the Sine Wave node outputting a Real value. The engine is moving this value to the In pin of the Grapher node, which outputs it as a graph to the console.


  • Gui transformation and curved busses

    In my last post I told you about Node Edit, the project I recently started working on. The image I showed there had flawed positioning of the node names and node pin buttons. In this post I'll explain what caused that. I'll also introduce curved bus rendering, explain the math behind it and show the shader I use to render the busses.

    The problem

    Here is the image from the first version again:

    The problem here is that the text and pin buttons don't align properly with the node background images. At first I thought the problem had to do with the text and buttons not rendering at the proper location. I was rendering the node backgrounds with an in-app renderer, while the text and buttons were rendered using the engine's gui system. However, the engine's gui system renders directly onto the screen, while the in-app renderer passes the node backgrounds through an application-provided view matrix. I had already started using this view matrix to prepare for panning and zooming (something that's most definitely needed for bigger graphs).

    The solution

    After some modifications, the gui system now also uses matrices to render to the screen, rather than relying on a custom shader to do the transformation from gui coordinates to window coordinates.
    Using the engine renderer's World/View/Projection matrices, the gui now applies them like this:

    // Flip the y-axis, move the origin to the top-left corner, then invert to turn it into a view matrix.
    Matrix4x4 viewMatrix = Matrix4x4::BuildSRT( Vector3f( 1.0f, -1.0f, 1.0f ), Vector3f(),
                                                Vector3f( screenSize[ 0 ] * 0.5f, screenSize[ 1 ] * 0.5f, 0.0f ) ).Inverse();
    // Orthographic projection mapping the window's pixel extents to -1..+1, with a 0..100 depth range.
    Matrix4x4 projMatrix = Matrix4x4::BuildOrthoMatrix( screenSize[ 0 ], screenSize[ 1 ], 0.0f, 100.0f );
    renderer.ApplyMatrix( Render::MT_WORLD, Matrix4x4() );      // identity; widgets still transform themselves
    renderer.ApplyMatrix( Render::MT_VIEW, viewMatrix );
    renderer.ApplyMatrix( Render::MT_PROJECTION, projMatrix );

     

    I think the projection matrix speaks for itself. It is just an ordinary orthographic matrix mapping the screen's pixel dimensions to -1..+1, with a depth range from 0.0 to 100.0. The world matrix is set to an identity matrix. For now widgets still manually translate and rotate themselves; later on they could build a matrix for their transformation and fill that into the world matrix.
    The most interesting part is that the view matrix can be used to solve two problems:

    • The center of the screen has coordinate 0, 0. For ui we would like the top-left corner of the window to be 0, 0.
    • In OpenGL the y-axis points up in the window, meaning that if we increase a widget's y position, the widget is rendered closer to the top of the window. This is the opposite of what we want: an increase in y position should move the widget down.

    The first problem is solved by mixing a translation matrix into the view matrix. We can make a translation matrix that translates everything up by half of the screen height and left by half of the screen width. Folding this into the view matrix effectively moves the origin from the center of the screen to its top-left corner.
    The second problem can be solved by having the view matrix invert the y-axis. This is as simple as negating the y scale, which renders the world upside down, exactly what we want for the gui.
    After we've built this matrix we only need to invert it, so that it isn't a normal world matrix but an actual view matrix that can be used to 'subtract' a transformation from the widgets.
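    As a quick sanity check (assuming BuildSRT applies the scale first and the translation last, as the name suggests): the inverted matrix first shifts a gui point by ( -width/2, -height/2 ) and then negates its y. A widget at gui coordinate ( 0, 0 ) therefore ends up at ( -width/2, +height/2 ), which the orthographic projection maps to ( -1, +1 ): the top-left corner of the window, exactly where we want the gui origin to be.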

    Result

    Once I had done this and properly matched the target size of the gui rendering with the size used for the node backgrounds, the text and gui buttons aligned perfectly with the node backgrounds:

    So there we go. The node titles now properly fit into their title boxes, and the pin buttons properly show up inside the nodes themselves.

    Curved lines

    As you can see I've also worked on the bus lines. I didn't like the straight lines anymore, so I wanted to make them curve a bit at the beginning and the end. It is quite hard to determine where to draw the line segments of a curved line, so I used a little help from Mr. Sine:

    [Figure: plot of a sine wave]

    A sine wave is nicely curved around its peaks and valleys, so why not borrow that property to define our curves? If we take just the area between 0.5π and 1.5π we can use it to determine where we should draw our lines:

    [Figure: the section of the sine wave between 0.5π and 1.5π]

    If we sample the sine wave in this area and use its value to determine the height offset for a specific line segment, then we're set. However, there is one property of this area I didn't like: it doesn't start at 0, but at 0.5π. This means the math would always need to apply this offset when sampling the sine wave. There's one easy solution to this though, Mr. Cos:

    [Figure: plot of a cosine wave]

    The cosine wave conveniently starts at its maximum if we sample it at position 0. If we use the cosine wave instead we don't have to apply the offset, and we still get exactly the same shape for our busses.
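    Written out as a formula, with t running from 0 to 1 along the bus, the height factor becomes:

    height( t ) = ( 1 - cos( π · t ) ) / 2

    This starts at 0, ends at 1 and flattens out at both ends, which gives the bus its curved start and finish. It is the same expression the vertex shader later in this post evaluates per vertex.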

    Vertex streaming

    Since node and pin positions can change every frame, we need a dynamic system that allows rendering the lines from and to any arbitrary position. The easiest way is to create a vertex buffer and put vertices at the correct positions in it for every bus we want to draw. We can iterate over the number of line segments we want to draw the bus with and calculate the phase at each segment edge. This phase can then be passed into the cosine evaluation and used to determine the vertex position. Then we would upload the vertex buffer and render it out.
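    A rough sketch of that per-frame approach (the struct and function names here are illustrative, not the renderer's actual API):

    #include <cmath>
    #include <vector>

    struct Vertex { float x, y, z; };

    // Naive per-frame fill: recompute every bus vertex on the CPU and re-upload the buffer.
    std::vector< Vertex > BuildBusVertices( float startX, float startY,
                                            float endX, float endY, int segments )
    {
        std::vector< Vertex > verts;
        for ( int i = 0; i <= segments; ++i )
        {
            float phase  = float( i ) / float( segments );                   // 0..1 along the bus
            float height = ( 1.0f - std::cos( phase * 3.1415f ) ) * 0.5f;    // cosine ease, 0..1
            Vertex v;
            v.x = startX + ( endX - startX ) * phase;
            v.y = startY + ( endY - startY ) * height;
            v.z = 0.0f;
            verts.push_back( v );
        }
        return verts;
    }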

    The problem is that doing this for every bus, every frame, would create a major pipeline stall. The stall is caused by uploading new data into the vertex buffer over and over again: the driver doesn't buffer our vertices for us, so it makes us wait with the upload until the gpu is done rendering and can accept the change.

    Shader based lines

    A better idea is to use constant vertex data and render the lines with a shader that takes the start and end positions as uniforms. You can make a simple vertex buffer containing vertices with just xyz coordinates and fill one of these coordinates with the phase of that point (I chose z).
    Then, during rendering, you set the uniforms and let the vertex shader apply the transformation for the lines:

    uniform mat4 ViewProj;
    uniform vec2 startPos;   // start position of the bus line
    uniform vec2 deltaPos;   // end position minus start position
    
    void main()
    {
    	vec4 vPos = vec4( 0.0, 0.0, 0.0, 1.0 );
    
    	// gl_Vertex.z holds the phase (0..1) along the bus.
    	// x moves linearly from start to end.
    	vPos.x = startPos.x + deltaPos.x * gl_Vertex.z;
    	// y follows the cosine, remapped from -1..+1 to 0..1 so the curve eases in and out.
    	vPos.y = startPos.y + deltaPos.y * ( -cos( gl_Vertex.z * 3.1415 ) * 0.5 + 0.5 );
    
    	gl_Position = ViewProj * vPos;
    }

     

    Since I chose the vertex z coordinate to contain the phase, I read the phase through gl_Vertex.z. You can see that for the x position you simply use the phase to linearly interpolate from start to end. For the y position you instead go from start to end using the factor derived from the cosine wave. One important thing to note is that this factor needs to go from 0.0 to 1.0. To make that happen you negate the result of the cosine so that it runs from -1 to +1 rather than from +1 to -1 (you can see in the cosine figure above that a cosine of 0 yields 1, not -1). After this flip, only a rescale is required: multiply by 0.5 to get a range of -0.5 to 0.5, then add 0.5 to make it range from 0.0 to 1.0.
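    The matching vertex buffer only has to be built once, since the shader does all the per-bus work. A sketch of what that one-time fill could look like (names hypothetical):

    #include <vector>

    struct Vertex { float x, y, z; };

    // Build the constant line-strip vertices once: x and y are unused here, z carries the
    // phase (0..1) that the shader reads through gl_Vertex.z.
    std::vector< Vertex > BuildBusTemplate( int segments )
    {
        std::vector< Vertex > verts;
        for ( int i = 0; i <= segments; ++i )
            verts.push_back( Vertex{ 0.0f, 0.0f, float( i ) / float( segments ) } );
        return verts;
    }

    Per bus, only the startPos and deltaPos uniforms change, so the buffer never has to be re-uploaded and the pipeline stall from the previous section disappears.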


  • Node Edit a Graphical Programming Environment

    What started it

    Recently at work I had to start writing a DirectShow video codec. Since DirectShow is pretty old by now, I started off by researching the possibility of using other, newer techniques; for example, I knew there is now the newer Media Foundation API. During my research I found out about DMOs (DirectX Media Objects). DMOs are classes that are generally implemented in a class library. These classes inherit from a publicly known interface, which makes them usable by other programs. Using COM you can even create instances of these classes from within another program.
    The great part about this is that it is a great realization of modularity: you can add or update these DMOs without changing the software that uses them, yet that software still benefits from the change. For example, if you use one of these DMOs in your program and the creator of the DMO fixes a bug or improves the implementation, they can update the class library and you benefit without having to change anything.

    DirectShow Filter Graph

    Let's visualize how you would use these classes with a very simple yet powerful DirectShow FilterGraph. A FilterGraph is just a set of filters (DMOs) that are connected to each other. By combining multiple filters you can create a program. For example, a simple video player would look something like this:
    On the left you can see a File Source filter. All this filter does is read data from your hard drive and output it onto its output pin. Then you see the AVI Splitter, which takes the File Source's output and splits it into video data and audio data. The remaining filters play the audio data over your speakers and show the video data on your screen (after decompression).
    As you can see, it only takes a couple of filters to make a simple video player. Each of these filters is very simple and understandable on its own as well. The benefit is that when implementing such a filter there's little chance for bugs, and the filters are reusable too. For example, we could replace the File Source with a Web Source, which would let us play video from the internet.

    The Idea

    Learning these concepts inspired me to start working on a proof of concept, to see if they could be used to reach other goals as well. For example, wouldn't it be a lot easier if, when I want to make a simple program, I could do it by connecting filters like these instead of writing all the hairy code myself?
    Well, let's try and make that possible. As long as we make the nodes simple yet powerful enough, we should be able to create a lot of different applications just by connecting them in a certain way.

    Proof of Concept

    Since I already have my game engine around, I have a great framework for a quick test. Here's what I came up with after a weekend:

    So what is all this? I thought that at the very least we would need three areas: one area for editing the graph (dark purple), one area that lists all available nodes (green), and an area that shows node properties. What you see here is just a dummy graph (no actual node interaction yet) with a couple of nodes and how they could be connected.

    Don't mind the graphical issues with the node names and pin buttons, I'll fix those in the next post :)



