The nodal model

The nodal model is a distributed computing environment designed for the project in accord with its philosophical foundations. These philosophies claim that all change in the universe, right down to the functioning of space and time, works in accord with a single unified principle. Distributed computing is a perfect context in which to work with these philosophies, because it is all about creating a real space and defining the laws that determine how its content changes.

In the nodal model, the philosophy is used to unify the runtime environment with the class hierarchy, yielding a single global, self-contained network of concepts (classes, prototypes, templates, forms) and their occurrences (instances), each a unique node. The physical resource aspect of this network is similarly unified, spanning RAM, file system and WAN, so applications, data and their persistent storage are all part of a single changing space of nodes called the nodal network. The nodes play a similar role to objects in the common object-oriented paradigm.
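As a rough sketch of this unification (hypothetical Python names used purely for illustration, not the project's implementation), a concept and one of its occurrences are both plain nodes in the same network, and even the link between them is just an association:

    # Sketch: concepts and their occurrences are the same kind of thing, a node.
    # Hypothetical illustration only; the nodal model is not tied to any language.
    class Node:
        def __init__(self, **assoc):
            self.assoc = dict(assoc)    # associations: key -> value

    person = Node(kind="concept")       # a concept (class/template/form)
    alice  = Node(kind="occurrence")    # one of its occurrences (instance)
    alice.assoc[person] = True          # the link is an association keyed by a
                                        # reference to the concept node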

Any object is related to other associated objects (associations, a.k.a. key-value pairs, attributes, properties etc.), which can represent active threads operating on a schedule. Active threads consume processing resource to manipulate various resources within their context in accord with their scheduled workflow. Processes at all levels are built from these workflow rules, whether they're part of an application or a real-world organisation.
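A minimal sketch of this, assuming hypothetical names (the model itself is language-neutral): a node's associations are ordinary key-value pairs, and a node becomes "active" simply by carrying scheduling and workflow associations that the reduction process will notice:

    # Sketch only: associations as key-value pairs, some of which mark a node
    # as active on a schedule (hypothetical names, not the actual codebase).
    import time

    class Node:
        def __init__(self):
            self.assoc = {}

    report = Node()
    report.assoc["workflow"] = ["gather data", "render", "publish"]
    report.assoc["schedule"] = time.time() + 3600   # due to run in an hour
    report.assoc["active"]   = True                 # the scheduler treats this
                                                    # node as an active thread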

This functionality, which is common to all the objects of the nodal network, is called nodal reduction and is essentially a scheduler which replaces the traditional program-flow system. Even though the nodal network is a complex application in terms of its functional requirements, it can nevertheless be modelled completely as a nodally reducible structure of nodes, which means that any other application or organisation can also be modelled in this way. In fact, the components of the nodal network are specifically designed as a reusable universal template called generic organisation, which can be extended and refined for the needs of any context.
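The following sketch (hypothetical code, building on the simple node shape above) shows the general flavour of such a scheduler-driven reduction loop, as opposed to a conventional call-stack-driven program flow:

    # Sketch of a reduction-style scheduler: on every cycle, each active node
    # gets a slice of processing in which to advance its own workflow.
    # Hypothetical illustration, not the project's actual algorithm.
    class Node:
        def __init__(self, workflow=None):
            self.assoc = {"workflow": list(workflow or []), "active": True}

    def reduce_once(nodes):
        """One pass of the scheduler over all active nodes."""
        for node in nodes:
            if not node.assoc.get("active"):
                continue
            workflow = node.assoc["workflow"]
            if workflow:
                step = workflow.pop(0)          # advance this node by one rule
                print("applied:", step)
            else:
                node.assoc["active"] = False    # nothing left to do; go quiescent

    space = [Node(["gather", "render"]), Node(["notify"])]
    while any(n.assoc["active"] for n in space):
        reduce_once(space)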

Foundations

The nodal model is designed to offer a high-level environment appropriate for representing semantic networks and executing applications directly from high-level semantic definitions. Another important aspect is the ability for the nodal model's definition to be as context-independent as possible. This is achieved by building the nodal model entirely from a small set of conceptually simple "atomic" components, similar in nature to machine-code instructions. There are two main abstraction layers involved: list space and node space.

The node space level is the high-level, semantic-network-like environment in which we can define executable models of organisations and applications. All the functionality required to implement a node space is defined in terms of the list-space functions.

List space provides a small set of functions which operate on a binary address space and offer a generic associative array. Generic means that for any item in the space, the keys of its key:value pairs (associations) can be of any type, such as URIs, MD5 hashes, names (i.e. sequences of items from a particular set of symbols) or, most importantly, references to other items. An associative array in which keys are references to other items in the array is fundamental to semantic networks, but very few modern programming languages allow their array keys to be object references, which makes runtime implementations of semantic networks inefficient. List space provides an associative memory specifically designed for runtime semantic networks, and is defined to integrate with systems directly at the machine level. It has very few functions and all are simple to port to any executable environment.
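A rough sketch of the idea follows (hypothetical function names; the real list space is defined at the machine level, not in a high-level language): a space of items where the key of an association may itself be a reference to another item.

    # Sketch of a list-space-like associative memory (hypothetical names).
    # Items are identified by references; an association's key may itself be
    # a reference to another item, which is what a semantic network needs.
    space = {}           # reference -> {key: value}
    next_ref = 0

    def item_new():
        """Allocate a new, empty item and return its reference."""
        global next_ref
        next_ref += 1
        space[next_ref] = {}
        return next_ref

    def item_set(ref, key, value):
        space[ref][key] = value

    def item_get(ref, key):
        return space[ref].get(key)

    # Keys can be plain values (names, hashes) or references to other items:
    subject   = item_new()
    predicate = item_new()
    obj       = item_new()
    item_set(subject, predicate, obj)       # key and value are both references
    item_set(subject, "label", "example")   # key is an ordinary name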

Although the list-space functions are very simple conceptually, they cannot be implemented directly in terms of list-space functions themselves, because those functions are specific to building a list space, not to describing algorithms, however simple they may be. But the next level up, node space, is designed to generically describe algorithms, so the implementations and interfaces making up the list-space functions can be defined within a node space.

At first glance that seems like a futile idea, since having each layer defined in terms of the other looks like a paradox. But it's actually not a paradox, because each uses the description for a different purpose: the node space executes its list-space functions in "real time", whereas the list space uses its node-space definitions only at compile time. The semantic web is all about connecting with existing software in a context-independent and service-oriented way, and this, used in conjunction with self-description, allows a running node space to build other node spaces in any environment that implements its compilation process as a semantic web service. Our first priority is for each node space to be capable of using a local C compiler to maintain its own binary; later we would extend this functionality to allow compilation for foreign environments and/or remote locations via SSH etc.
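How a node space might drive a local C compiler to maintain its own binary can be sketched as follows. This is entirely hypothetical: the file names, the stub translation step and the compiler invocation are assumptions for illustration, not the project's actual tool-chain.

    # Hypothetical sketch: emit C source from a (stub) node-space definition and
    # compile it with the local C compiler, assumed to be available as "cc".
    import os
    import subprocess
    import tempfile

    def emit_c(definition):
        # Pretend translation: a real node space would generate the list-space
        # functions here; this stub just embeds the definition's name.
        template = ('#include <stdio.h>\n'
                    'int main(void) { puts("%s"); return 0; }\n')
        return template % definition

    workdir  = tempfile.mkdtemp()
    c_path   = os.path.join(workdir, "listspace.c")
    bin_path = os.path.join(workdir, "listspace")
    with open(c_path, "w") as f:
        f.write(emit_c("list-space stub"))

    # The node space only needs this step at compile time, not while reducing.
    subprocess.run(["cc", c_path, "-o", bin_path], check=True)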

Abstraction layers

Distributed spaces exhibit two fundamental abstraction layers called the logical and the physical. The logical layer is a set of unique nodes which conform to a unified addressing mechanism and make up the content of the network; this layer is akin to the objects at runtime, or the files in a filesystem or network. The physical layer is composed of a network of peers which together constitute the storage, processing and bandwidth resource that maintains the existence of the logical content.

In the case of the nodal model, each peer maintains a microcosmic version of the whole network called a node space. The node space is locally available in RAM and is essentially a cut-down cache of the network content. Each peer runs the nodal reduction algorithm, which determines the way the node-space content undergoes change, and these changes then propagate out from local RAM to other peers and to persistent storage. This means that the network architecture can be (and is) a nodal application, defined as a nodally reducible structure and executed by nodal reduction.
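A crude sketch of the peer level (hypothetical names; the actual propagation mechanism, storage format and peer protocol are not shown here):

    # Sketch: each peer holds a partial, in-RAM cache of the network's nodes,
    # reduces it locally, and queues the resulting changes for propagation to
    # other peers and to persistent storage. Hypothetical illustration only.
    class Peer:
        def __init__(self):
            self.node_space = {}      # local cut-down cache: guid -> associations
            self.outgoing   = []      # changes not yet propagated

        def reduce(self):
            """One local reduction cycle over the cached nodes."""
            for guid, assoc in self.node_space.items():
                if assoc.get("active"):
                    assoc["cycles"] = assoc.get("cycles", 0) + 1
                    self.outgoing.append((guid, "cycles", assoc["cycles"]))

        def propagate(self, other):
            """Push locally made changes out to another peer's cache."""
            for guid, key, value in self.outgoing:
                other.node_space.setdefault(guid, {})[key] = value
            self.outgoing.clear()

    a, b = Peer(), Peer()
    a.node_space["node-1"] = {"active": True}
    a.reduce()
    a.propagate(b)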

Node

A node is like a hash table or associative array; another way of looking at a node is as a file folder. Internally, nodes are referred to by GUIDs or node references, which are meaningful only by their distinction from each other. In the user interface, nodes are referred to by names appropriate to the context in which they're being used. GUIDs are used for referring to nodes persistently for storage and communications, but direct implementation-specific memory references are used at runtime.

Like a hash table, a node is a container of keys and values, but because all runtime node referencing is direct, the keys and values are references rather than textual names as is usual.
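As a sketch (hypothetical names again), the split between persistent GUIDs and direct runtime references might look like this:

    # Sketch: at runtime keys and values are direct object references; GUIDs
    # only appear when a node is serialised for storage or communication.
    # Hypothetical illustration, not the project's actual implementation.
    import uuid

    class Node:
        def __init__(self):
            self.guid  = uuid.uuid4().hex   # persistent identity
            self.assoc = {}                 # runtime: reference -> reference

    title, article = Node(), Node()
    article.assoc[title] = Node()           # key and value are node references

    def serialise(node):
        """Replace direct references with GUIDs for storage/communication."""
        return {
            (k.guid if isinstance(k, Node) else k):
            (v.guid if isinstance(v, Node) else v)
            for k, v in node.assoc.items()
        }

    print(serialise(article))               # {'<title guid>': '<value guid>'}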

Tree rewriting

It is perceived that one of the biggest problems in maintaining software quality is the above linear growth in complexity compared to the size of a program, resulting in the programmer's cognitive loss of an overview and ultimately the degradation of the quality of software. This thesis tries to counter that by introducing a language (Aardappel) with a new sharing model that makes dependencies in a program explicit at the language level, and local to one specific language construct (the tree space). We introduce the language which is based on tree rewriting and Linda style tuple spaces and comes with a graphical programming notation. We discuss the worth of its design, precisely specify it using a formal semantics, and report on experience with the model using a real world implementation.
Wouter van Oortmerssen (abstract from PhD thesis)