Nodal geometry
Layers
At this stage the conceptual structure of the project's interface is a 2D layer model, where layers are stacked in z-order and can exhibit transparency. Layers are arranged as nested data structures, so each layer can be composed from an ordered list of n child layers. The position and clipping area of a child layer are governed by its parent. The peerd.c interface uses SDL's OpenGL context with an orthographic projection matrix.
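A minimal sketch of what such a nested layer record could look like in C; the names and fields here are illustrative assumptions, not the actual peerd.c definitions:

    #include <SDL/SDL.h>          /* for Uint32; the header path may vary per platform */

    /* Hypothetical layer record, for illustration only. */
    typedef struct layer {
        int x, y, w, h;           /* position and clipping area, governed by the parent */
        Uint32 fill;              /* fill colour, with alpha for transparency */
        struct layer *parent;     /* NULL for a root-level layer */
        struct layer **children;  /* ordered list of n child layers (z-order) */
        int nChildren;
    } layer;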
This milestone was completed some time back and "retro-fitted" for motivational reasons! It extended the basic IO functionality to allow the ordered stacking of rectangular layers, each of which can exhibit a fill colour and an image.
The next milestone is recursive rectangles, which integrates the layers (rectangles) with the nodal model, giving them containment as a tree of layers, and integrates mouse input with that tree, allowing a particular rectangle in the tree to be clicked on. The render method of the rectangles and the collision-detection test for the mouse click are both nodal workflows passed around the tree until completed.
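As a rough illustration of the render side of this, a depth-first walk over the hypothetical layer struct sketched above might look like the following; in peerd.c the equivalent work is expressed as a nodal workflow passed around the tree rather than a plain recursive call:

    void drawFilledRect(int x, int y, int w, int h, Uint32 fill);  /* hypothetical drawing helper */

    /* Sketch only: render a layer, then its children in z-order above it,
       with child positions interpreted relative to the parent. */
    void layerRender(layer *l, int originX, int originY) {
        int x = originX + l->x, y = originY + l->y;
        drawFilledRect(x, y, l->w, l->h, l->fill);
        for (int i = 0; i < l->nChildren; i++)
            layerRender(l->children[i], x, y);
    }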
Textures
Later we'll allow any layer to be cached as a texture, rather than only the layers at the root level. This way processing load would be reduced by utilising the large, mostly unused resource of texture memory on the video card - usually around 256MB. These cached layer-composites are 32-bit with an alpha channel and are called sprites.
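As a hedged sketch, uploading a composited 32-bit RGBA buffer into texture memory with standard OpenGL calls could look like this (how the layer is composited into the buffer is outside the example):

    #include <SDL/SDL_opengl.h>

    /* Sketch: cache a 32-bit RGBA pixel buffer (the composited layer) as a texture. */
    GLuint cacheLayerAsTexture(const void *rgbaPixels, int w, int h) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);
        return tex;
    }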
Nodal reduction
The root-level layers are hooked into the reduction loop permanently, and they in turn hook in various children (based on redraw or collision queries) or metrics updates which must be resolved; these queries are guaranteed to be resolved by the end of the frame. The queries exhibit workflow, and so are passed around amongst the nodes to be processed, and passed on again until resolved.
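In outline, the per-frame loop might simply hand a quantum of work to each hooked layer, roughly as in the sketch below; the names are assumptions, not the actual reduction code:

    void layerQuantum(layer *l);   /* hypothetical: one quantum of work for a layer */

    /* Sketch only: give each currently hooked layer its quantum for this frame.
       Root-level layers are always in the hooked list; children come and go
       as redraw, collision or metrics queries hook them in. */
    void reduceFrame(layer **hooked, int nHooked) {
        for (int i = 0; i < nHooked; i++)
            layerQuantum(hooked[i]);
    }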
Each layer receives a quantum and performs the following set of single operations (considered reasonably atomic):
Sprites
    #include <SDL/SDL.h>          /* Uint32 */
    #include <SDL/SDL_opengl.h>   /* GLuint */

    typedef struct {
        int x, y, w, h, t;
        double scale, rotation;   /* transform applied when the sprite is drawn */
        Uint32 fill;              /* fill colour */
        GLuint texture;           /* OpenGL texture handle for the cached layer composite */
    } spriteInfo;
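For illustration, drawing one of these sprites under the orthographic projection might look something like the fixed-function OpenGL sketch below; this is an assumed rendering approach, not the actual peerd.c render code:

    /* Sketch: draw a spriteInfo as a textured quad. Assumes the orthographic
       projection is already set up and the texture holds the cached composite. */
    void drawSprite(const spriteInfo *s) {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, s->texture);
        glPushMatrix();
        glTranslatef((GLfloat)s->x, (GLfloat)s->y, 0);
        glRotatef((GLfloat)s->rotation, 0, 0, 1);
        glScalef((GLfloat)s->scale, (GLfloat)s->scale, 1);
        glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(0, 0);
            glTexCoord2f(1, 0); glVertex2f((GLfloat)s->w, 0);
            glTexCoord2f(1, 1); glVertex2f((GLfloat)s->w, (GLfloat)s->h);
            glTexCoord2f(0, 1); glVertex2f(0, (GLfloat)s->h);
        glEnd();
        glPopMatrix();
    }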
Container
Collision and Blitting
Bit blitting is the low-level method of building sprite functionality. The details of blitting aren't currently necessary, since our first two target interface environments are OpenGL and SWF, which both include sprite-like functionality.
Collision detection is a means of determining which items' rectangular bounds intersect with a given rectangle. It is used for determining which object should receive mouse-click events, or which objects are encompassed by a selection rectangle.
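The underlying test is just axis-aligned rectangle intersection; a minimal sketch:

    /* Sketch: do two axis-aligned rectangles (x,y,w,h) overlap? A mouse click
       can be treated as a 1x1 rectangle for the same test. */
    int rectsIntersect(int ax, int ay, int aw, int ah,
                       int bx, int by, int bw, int bh) {
        return ax < bx + bw && bx < ax + aw &&
               ay < by + bh && by < ay + ah;
    }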
Blitting and collision detection are similar processes, and both are nodal workflows which propagate amongst children and siblings in the loops of a node space until resolved. There's no actual movement involved in this "passing around" process though, because it's all achieved by adjusting references to the workflow node, not manipulating the workflow itself. The workflow is guaranteed to be resolved by the end of the frame.
Mouse click event
When a mouse click is detected by communications, the click process is hooked into the first child of the desktop node. When a click first occurs, the recipient which should act on it is unknown, so the collision workflow described above is resolved first, which results in a known recipient. If this recipient node has no click association, its parent is checked, and so on up to the desktop.
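Once the collision workflow has produced a hit node, the climb up the parent chain could be as simple as the following sketch; the node type and its onClick field are hypothetical:

    /* Sketch only: a hypothetical node with an optional click handler. */
    typedef struct clickNode {
        struct clickNode *parent;             /* NULL at the desktop node */
        void (*onClick)(struct clickNode *);  /* NULL if no click association */
    } clickNode;

    /* Walk up from the hit node towards the desktop until a handler is found. */
    clickNode *resolveClickRecipient(clickNode *hit) {
        while (hit && hit->onClick == NULL)
            hit = hit->parent;
        return hit;   /* NULL if not even the desktop handles the click */
    }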
Interface event model
The interface aspects of the event model are handled by Geometry, because it's oriented around which container exhibits the keyboard input focus, and which container's bounds the mouse pointer is within. So this is essentially a collision-detection issue which, although modelled nodally (so that execution recursion can be shared amongst recursive processes), comes down to the usual Octree method.
See also
- Box Model (W3C)
- Layout Engines (Wikipedia)