Difference between revisions of "Linearizability"

From Organic Design wiki
Revision as of 00:48, 31 July 2006

On Wikipedia it is stated that:

"The easiest way to achieve linearizability is by forcing groups of primitive operations to run sequentially using critical sections and mutexes. Strictly independent operations can then be carefully permitted to overlap their critical sections, provided this does not violate linearizability. Such an approach must balance the cost of large numbers of mutexes against the benefits of increased parallelism.

Another approach, favoured by researchers but usually ignored by real programmers as too complex, is to design a linearizable object using the native atomic primitives provided by the hardware. This has the potential to maximise available parallelism and minimise synchronisation costs. Unfortunately, it also generally requires correctness proofs that are publishable results, as almost every conference on concurrent programming since the start of the 90s has demonstrated."
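The first approach the quote describes can be sketched in a few lines. This is a generic illustration, not the project's code: a shared counter whose increment is made linearizable "the easy way", by forcing every read-modify-write to run sequentially inside one critical section.

```python
import threading

class LockedCounter:
    """Counter made linearizable via a mutex-protected critical section."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        # The mutex forces increments to run one at a time, so each
        # read-modify-write appears to take effect atomically.
        with self._lock:
            self._value += 1

    def value(self):
        with self._lock:
            return self._value

counter = LockedCounter()

def worker():
    for _ in range(10_000):
        counter.increment()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter.value())  # 40000: no increments are lost
```

Without the lock, two threads could both read the same old value and one increment would be lost; the critical section rules that interleaving out, at the cost of serialising all four threads through one mutex.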

This is the method the project is using: the nodal core provides the atomic primitives, which integrate with the hardware level as closely as possible.
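The atomic primitive underlying this second approach is typically compare-and-swap (CAS). Python exposes no hardware CAS, and the nodal core's own primitives are not shown here, so the sketch below simulates one: the `SimulatedAtomic` class is a hypothetical stand-in for an atomic machine word (the internal lock plays the role the CPU plays for a real CAS instruction), and `lock_free_increment` shows the standard retry-loop pattern built on top of it.

```python
import threading

class SimulatedAtomic:
    """Stand-in for a hardware atomic word. The internal lock only
    simulates the atomicity a real CAS instruction would provide."""

    def __init__(self, value=0):
        self._lock = threading.Lock()
        self._value = value

    def load(self):
        with self._lock:
            return self._value

    def compare_and_swap(self, expected, new):
        # Atomically: if the value still equals `expected`, store `new`
        # and report success; otherwise leave it unchanged and fail.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def lock_free_increment(atom):
    # Classic CAS retry loop: read, compute, attempt to publish.
    # If another thread published first, re-read and try again.
    while True:
        old = atom.load()
        if atom.compare_and_swap(old, old + 1):
            return

atom = SimulatedAtomic()

def worker():
    for _ in range(10_000):
        lock_free_increment(atom)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(atom.load())  # 40000
```

No thread ever holds a lock across the whole operation; a failed CAS just means another thread made progress, which is why this style avoids the lockups that mutexes can cause, at the price of the correctness subtleties the quote mentions.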

Starvation and lockups, remember, are reduced by the ability to distribute resource use over the network - i.e. optimisation of the parts is handled by the whole.

The OS part of the project (peerix) will use the local nodal space as its filing system and a nodal interface, but it still runs over the Linux kernel, and therefore we have no control over the wait/lock performance at that level.