Foundation ontology

From Organic Design wiki
{{glossary}}<onlyinclude>A [[system]] of operating as an [[Evolution|evolving]] organisation is common to all nodes in the [[Ontology]]. The conceptual structure that captures these principles of collaboration and [[self-governance]] is considered to be the [[Foundation Ontology]] for the [[Organic Design]] system. The Foundation Ontology is a common form of organisation that spans real-world organisations and informational systems alike; it defines what it is to be a node in the unified [[Ontology]]. Since the Foundation Ontology defines the attributes that are common to all organisations in the [[Platform network|network]], it's important that the [[bottom line]]s designed in its [[system]] remain [[alignment|aligned]] with the [[core values]] and [[common vision]].</onlyinclude><noinclude>
  
 
The [[W:OOP|OOP]] paradigm was designed so that software and systems can be built in which the description of the program that is interpreted and acted upon by the computer has a direct relationship with the high-level ''model'' of the system. [[W:Prototype-based language|Prototype-based language]]s make objects even more isomorphic to the real world by allowing any collection of functionality and information to be used either as an [[instance]] or as a [[class]] on which other instances are based. The [[web3|semantic web]] also extends the object paradigm by creating a universal concept network which can be knitted together in a uniform way to create standard ontologies.
 
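The prototype idea above can be made concrete with a short sketch (illustrative only; the objects and the clone helper are hypothetical, not drawn from any particular prototype-based language):

```python
# A minimal sketch of the prototype idea: any object can act as an
# instance, or as the "class" that new instances are cloned from.

def clone(prototype, **overrides):
    """Create a new object based on an existing one (prototype as class)."""
    obj = dict(prototype)   # inherit all slots from the prototype
    obj.update(overrides)   # specialise the new instance
    return obj

# 'fruit' is just an object, but it serves as a class for 'apple' ...
fruit = {"edible": True, "colour": "unknown"}
apple = clone(fruit, colour="red")

# ... and 'apple' can in turn serve as the class for further instances.
granny_smith = clone(apple, colour="green")

print(granny_smith["edible"], granny_smith["colour"])  # True green
```

The point of the sketch is that there is no separate class construct: the instance/class distinction is just a role an object plays in a given relationship.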
  
== 2023 description ==
The most fundamental concepts common to all cognitive agency can be described by a recursive dichotomy: a process model based on two orthogonal dichotomies that together give rise to subjective agency in four-quadrant form.
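The two orthogonal dichotomies can be illustrated as a simple product of two axes (the axis labels are borrowed from the Yin/Yang analogy given in this article and are illustrative only, carrying no formal weight):

```python
# A minimal sketch of how two orthogonal dichotomies combine into four
# quadrants. The labels (class/instance, local/non-local) follow the
# Yin/Yang analogy in this article; the model itself is abstract.
from itertools import product

abstraction = ("class", "instance")   # patterns vs images
locality = ("local", "non-local")     # inner-world vs outer-world

quadrants = list(product(abstraction, locality))
for q in quadrants:
    print(q)  # four combinations in total
```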
More specifically, the process extends itself into a self-organising singleton instance, inside which there exist many subjective four-quadrant POVs all embedded within a unified shared arena. The shared arena is formed from established subjective meaning, but seen from all perspectives as "objective external reality" because it's common to all, and not under direct control of any (it's the result of an unseen collaborative process called the bottom-up synchronous domain).
This is a modern formulation of an ancient idea that can be found in many philosophical and spiritual traditions throughout history; for example, the ancient Chinese Yin/Yang dichotomy derives a system of patterns (classes) and images (instances) interacting in accord with inner-world (local) and outer-world (non-local) dynamics.
  
Creating a software process that reflects this general system of agency does not require that we define the system from the most fundamental dichotomy (self-reference), because all computational contexts offer access to high-level data structures. This means we can start our definition at a higher level of abstraction, leaving the lower levels for philosophical discussion.
  
Process in general depends on space (material, information) and energy (change, agency) resources, and our system of agency needs to be defined in terms of interactions involving information and agency. In other words, space and attention in their most fundamental forms can be re-organised into an evolving agent-arena complex.
  
The purely dichotomous form of this process, and the universal nature of the four quadrants it yields, lend strong support to the case that the four-quadrant evolving agent-arena system of subjective meaning is a kind of ''universal default application of the fundamental resources of information and agency''.
  
== 2016 description ==
As of April 2016, a new attempt is being made at refining all of the concepts of [[the nodal model]] in a more recent context, where the ideas are more refined and technology now exists that covers nearly all of its requirements: technologies like [[Telehash]], [http://nanomsg.org NanoMsg], [[Filament]] and [[Docker]].
  
The role of the foundation ontology is to define the [[system]] by which we can all organise our resources together in a fair and harmonious way (i.e. [[alignment|aligned]] with the [[core values]]). We can think of this ontology as defining a ''multiplexer'' or ''scheduler'': a tree of events triggering connections between roles and resources (also in the same tree). Any resource that needs to be connected with this multiplexer needs to be accessible by a compatible API. This unified tree of changing resource connections is the tree of moments and can be clearly defined in terms of information technology; see [[moment]].
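The multiplexer idea can be sketched as a toy program: a tree whose nodes may carry events, and whose traversal during one "moment" fires those events, connecting roles to resources. All names here (Node, tick, the example events) are hypothetical illustrations, not an API defined by this ontology:

```python
# A hypothetical sketch of the multiplexer/scheduler: walking the tree
# once is one "moment", and each event fired connects a role to a resource.

class Node:
    def __init__(self, name, children=(), event=None):
        self.name = name
        self.children = list(children)
        self.event = event  # optional (role, resource) pair to connect

def tick(node, connections):
    """One moment: walk the tree depth-first, firing any events found."""
    if node.event is not None:
        connections.append(node.event)
    for child in node.children:
        tick(child, connections)
    return connections

tree = Node("root", children=[
    Node("accounts", event=("bookkeeper", "ledger")),
    Node("kitchen", children=[Node("lunch", event=("cook", "oven"))]),
])

print(tick(tree, []))  # [('bookkeeper', 'ledger'), ('cook', 'oven')]
```

Each call to tick is one traversal of the tree of moments; repeated calls with changed events model the changing resource connections over time.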
The foundation ontology is the basic functionality common to the whole network that is inherited by every node (moment) in the tree. The ontology can be thought of as a fundamental group of operating patterns that apply regardless of the specifics of the resources involved, and thus is the foundation of every node's functionality. These patterns together perform the task of resource allocation, which requires the ability to know the current state in terms of resources, and the potential patterns made available by this state in terms of their resource cost/benefit.
  
This article represents the [[:Category:Foundation ontology|Foundation ontology category]], which contains all the concepts that make up the meaning and functionality of the foundation ontology. This set of concepts forms the initial nodes of the foundation ontology and is also the root of the whole [[unified ontology]]. Note that not all of these nodes necessarily perform a function; some of them are simply descriptions of words we find important that have a definite meaning within the context of this system.
  
== In relation to Organic Design (from 2011) ==
Many of the original ideals of [[the project]] are now manifest in the technologies we see in use today. For example, [[w:Distributed Hash Tables|Distributed Hash Tables]] are now being used as semantic [[w:overlay network|overlay network]]s. The new distributed computational spaces are moving from older [[w:Tuple space|tuple space]] models to modern semantic ''triple-space'' or ''triple-store'' models using [[w:RDF triple|RDF triple]]-based structures as associations. All this is complemented by [[w:Grid computing|grid]] [[w:middleware|middleware]] technology, which is built on exactly defined [[w:Service Level Agreement|Service Level Agreement]]s between all entities (human, machine, resource, process etc.), allowing the automated balanced reduction of workload.
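To make the triple-based associations concrete, here is a toy triple-store sketch. It uses a naive scan over a list (real triple-stores index all three positions), and the example data is hypothetical:

```python
# A minimal triple-store: each association is a subject-predicate-object
# ("RDF triple") tuple, and queries are patterns with wildcards.

triples = [
    ("alice", "fills-role", "treasurer"),
    ("treasurer", "manages", "budget"),
    ("alice", "member-of", "orgX"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(s="alice"))    # everything known about alice
print(query(p="manages"))  # [('treasurer', 'manages', 'budget')]
```

The same pattern-matching query shape works whether the store is a local list, as here, or distributed across a DHT-based overlay network.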
  
The "supreme ultimate killer application" that seems to be trying to emerge from this whole ''semantic-p2p-grid-OS'' movement is nowadays a question about [[w:upper ontology|foundation ontologies]] and ''universal middleware''.
  
A foundation ontology (also called a ''top-level ontology'' or ''upper ontology'') is an attempt to create an [[w:Ontology (computer science)|ontology]] which describes very general concepts that are the same across all domains of knowledge and organisation. The aim is to have a large number of ontologies accessible under this upper ontology. It is usually a [[w:hierarchy|hierarchy]] of entities and associated rules that attempts to describe those general entities that do not belong to a specific problem domain.
  
There are many contenders for the position of most general top-level ontology, but surely all of them describe the same set of fundamental concepts, such as resources, processes, roles, relationships, space and time? So at the end of the day, one can simply choose the upper ontology that best suits the situation; any patterns can easily be mapped across to other upper ontologies when necessary.
The direction of [[Organic Design]], in its goal of using the available technology to implement the principles of [[the project]], has shifted from development of the lower level to modelling the application using one of these new ontologies. The application to be modelled is an [[RDF|RDF-triple-space]] built on an existing [[DHT]] (probably Chimera). The processing layer reduces the workload generated by the interface, which is a wiki-like organisational system with a lean towards more application-level content such as accounts and contacts rather than just text and media.
  
The question to be answered, then, is which upper ontology is best to use. Here is some information on a couple of the key contenders; see the [[semantic web]] article for more links.
;General Formal Ontology (GFO)
The [http://www.onto-med.de/en/theories/gfo/index.html GFO] is an upper ontology integrating processes and objects. GFO has been developed by Heinrich Herre, Barbara Heller and collaborators (research group [http://www.onto-med.de/ Onto-Med]) in Leipzig. Although GFO provides one taxonomic tree, different axiom systems may be chosen for its modules. In this sense, GFO provides a framework for building custom, domain-specific ontologies. GFO exhibits a three-layered meta-ontological architecture consisting of an abstract top level, an abstract core level, and a basic level.
;Suggested Upper Merged Ontology (SUMO)
[http://www.ontologyportal.org/ SUMO] and its domain ontologies form the largest formal public ontology in existence today. They are being used for research and applications in search, linguistics and reasoning. SUMO is the only formal ontology that has been mapped to all of the [http://www.cogsci.princeton.edu/~wn/ WordNet] lexicon. SUMO is written in the [http://sigmakee.cvs.sourceforge.net/*checkout*/sigmakee/sigma/suo-kif.pdf SUO-KIF] language. SUMO is free and owned by the IEEE. The ontologies that extend SUMO are available under [http://www.gnu.org/copyleft/gpl.html GPL]. [http://home.earthlink.net/~adampease/professional/ Adam Pease] is the Technical Editor of SUMO.
 
 
 
== See also ==
*[[Ontology]]
*[[Unified ontology]]
*[[Holarchy]]
*[[Bottom line]]
*[[Wikipedia:One-instruction set computer]] ''- subtract and branch if negative''
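The one-instruction set computer linked above can be made concrete with a tiny interpreter. The list item says "subtract and branch if negative"; the sketch below implements the common ''subleq'' variant, which branches when the result is less than or equal to zero:

```python
# A runnable sketch of a one-instruction machine. Each instruction is
# three numbers (a, b, c): subtract mem[a] from mem[b], and if the
# result is <= 0, jump to c. A negative jump target halts the machine.

def run(mem, max_steps=10_000):
    pc = 0
    for _ in range(max_steps):
        if pc < 0:
            break  # halted
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Count the value at address 6 down to zero:
#   addr 0-2: subtract ONE (addr 7) from X (addr 6); halt (-1) when <= 0
#   addr 3-5: subtract Z (addr 8) from itself (always 0): jump back to 0
program = [7, 6, -1,  8, 8, 0,  3, 1, 0]
print(run(program)[6])  # 0
```

Despite having only one operation, this machine is Turing complete, which is what makes it relevant as a minimal foundation for arbitrarily complex functionality.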
 
 
== Whiteboard Notes ==
 
[[File:FoundationOntologyNotes.jpg]]
 
</noinclude>
 

Latest revision as of 16:54, 22 July 2023
