From Organic Design

Progress report 1 Nov 2006

Compiled a custom kernel today, which let me test the fbcon.c program in basic framebuffer mode. Now that I've been through the whole process of building the OS I can see some ways it could be made more streamlined. There is still too much scripting involved and too many makefiles to edit. Projects that use configure (and use it properly) are usually OK, but the problem is with poorly set up projects that have too much hard-wired configuration. The ld phase of building does not always go smoothly.
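As a concrete sketch of the kind of raw framebuffer access a program like fbcon.c exercises, here is a minimal example. It assumes a 32-bits-per-pixel mode; the device path, resolution and pixel layout are illustrative assumptions, not details taken from the build described above.

```python
# Minimal sketch of writing a pixel to a Linux framebuffer device.
# Assumes 32bpp; line_length is the bytes-per-row value the driver reports.
import struct

def pixel_offset(x, y, line_length, bytes_per_pixel=4):
    """Byte offset of pixel (x, y) within the framebuffer."""
    return y * line_length + x * bytes_per_pixel

def pack_bgrx(r, g, b):
    """Pack an RGB colour into the little-endian BGRX layout common at 32bpp."""
    return struct.pack("<BBBB", b, g, r, 0)

def put_pixel(fb, x, y, colour, line_length):
    fb.seek(pixel_offset(x, y, line_length))
    fb.write(pack_bgrx(*colour))

# Example (needs a real framebuffer, so left commented out):
# draw a short red line on a 640x480, 32bpp display.
# with open("/dev/fb0", "r+b") as fb:
#     for x in range(100, 200):
#         put_pixel(fb, x, 240, (255, 0, 0), line_length=640 * 4)
```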

I have some ideas about how to optimise this.

This first step of getting a framebuffer working, however low-res, is a good one. We're placed around c1989-1992. The next step is to go beyond VESA and into proper hardware acceleration. For this we need userland drivers; GeeXbox is a good reference here. Also, I'm surprised how much hardware video code is in the kernel. It seems it might not be as difficult as I first thought. Rob 20:39, 1 Nov 2006 (NZDT)

Progress report Oct 2006

A short note to document the state of things. After reviewing this talk page I see that many of these ideas have resulted in useful work, and we now have a repository that contains the necessary toolchain and sources to build the OS. At present this is maintained on a University server for their purposes (a boot-from-PXE image).

At this stage we have results in these areas:

  • Build from source: we use BuildRoot with some customised packages to give us a toolchain, dependency checking, and file system creation.
  • Results
    • Tested with basic Unix functionality, network, ssh
    • boots nice and fast (5 sec)
    • shuts down nice and fast (1 sec)
    • small at around 7MB uncompressed
  • Next steps
    • Include native toolchain (trivial - turn a switch on)
    • create BuildRoot package for the peer (should be trivial once I have the makefile)
    • framebuffer SDL - currently supported by BuildRoot but untested
    • framebuffer OpenGL - will be a mission - needs Mesa plus video drivers for the card in use.
    • sound - should be reasonably straightforward

Self containment

The resulting OS will contain binary versions of all the tools needed to build itself, given the source. The sources need not be provided in the basic image, provided we have access to a peer to obtain them.

  • This is untested but I am confident that it should not present a major problem

The suite of binaries required to build the majority of GNU (and other C-based) software is small: under 10 programs. The toolchain becomes larger when the software source tree in use requires other tools, Perl for example. In most cases this is unnecessary, or the source tree could be simplified to use only the basic tools. In the nodal environment we could more easily evolve the build process in this direction.
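The "under 10 programs" point can be made concrete with a rough checklist. The tool list below is an assumption for illustration, not the project's definitive toolchain inventory.

```python
# Hedged sketch: a plausible core set of programs covering most GNU-style
# builds, and a helper to check which of them are available on PATH.
import shutil

CORE_TOOLS = ["gcc", "cpp", "as", "ld", "ar", "ranlib", "make", "strip"]

def missing_tools(tools=CORE_TOOLS):
    """Return the subset of tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]
```

A self-hosting image would ship these binaries and fetch anything extra (Perl, autotools) from a peer only when a particular source tree demands it.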

Implementation steps

How do we bind a compiler into the nodal space? This seems difficult, but I think it will be easier to implement than some of the GUI aspects. The traditional Unix pipe/stream workflow is well suited to the nodal environment. In some cases we may be able to completely replace GNU programs with nodal logic; make is a prime candidate for this.
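To make the make-replacement idea concrete, here is a toy sketch of the core logic make implements: a depth-first walk of the dependency graph that yields prerequisites before their targets. The rule names are hypothetical; a nodal implementation could hold the same graph as nodes rather than as a Makefile.

```python
# Toy dependency resolver: the essence of what make does, minus timestamps,
# pattern rules and recipes. deps maps each target to its prerequisites.

def build_order(target, deps, order=None, seen=None):
    """Depth-first post-order walk: prerequisites come before their targets."""
    order = [] if order is None else order
    seen = set() if seen is None else seen
    if target in seen:
        return order
    seen.add(target)
    for d in deps.get(target, []):
        build_order(d, deps, order, seen)
    order.append(target)
    return order

# Hypothetical graph for building a "peer" binary.
deps = {
    "peer": ["peer.o"],
    "peer.o": ["peer.c", "peer.h"],
}
```

Layering "rebuild only if a prerequisite is newer" on top of this walk gives the familiar make behaviour; in the nodal model the same graph and rules would just be ordinary nodes.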

Once this is complete the peer software will conform to the "closed loop" of self-containment: it contains all it needs to build itself.

  • This step, at least in its non-nodal implementation, is not too far away

Current thinking

After reviewing this article I see that quite a bit of it is inconsistent and does not really reflect my current ideas. It does, however, accurately reflect the rapidly shifting conceptual process that is my attempt to solve this problem.

Right now my feeling is summarised thus:

It is not enough to be able to generate a usable operating system once; it must be possible to build it dynamically from source packages that are subject to change.

With the large number of source packages needed to build even a simple OS, it is safe to assume that in even one development cycle something will change. Pre-built packages are out (Debian etc.) because they simply do not give us the flexibility needed to create a cutting-edge OS. Debian in particular is necessarily conservative in its adoption of new software.

My analysis of the GeeXbox build process has given me an insight into how a project comparable to the Peerix project would be maintained. More work is needed here to simplify the build process; this is ongoing. make seems significant to the logic behind any build system. The logical dependency state machine is something that perhaps could work in a wiki-like way.

Branches of the OS tree should be able to be synced with their sources, with downstream modifications re-applied where possible. This technology is mature in patch, tar and diff, but needs some kind of friendly interface to control it. If we patch the kernel, for example, we will want to apply our patch to each new version of the source as it comes out, and be told in a friendly way when this is not possible and manual examination of the code is required. GeeXbox maintains a series of diff patches that are applied in order to a source when it is updated from an external repository. Maybe this is the way to go.
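The re-apply-and-report idea can be sketched very simply. Real patches carry context lines and hunks; this toy version stores downstream changes as (original line, replacement) pairs, so the only point it illustrates is the failure mode: when the anchor line has vanished upstream, flag it for manual examination rather than guessing. All names here are hypothetical.

```python
# Toy model of re-applying downstream changes to a new upstream version.
# Each change is (original line, replacement line). Changes whose original
# line no longer exists are reported as conflicts needing a human.

def reapply(source_lines, changes):
    """Apply each change in order; collect changes whose anchor is gone."""
    result = list(source_lines)
    conflicts = []
    for old, new in changes:
        if old in result:
            result[result.index(old)] = new
        else:
            conflicts.append(old)
    return result, conflicts

old_kernel = ["init();", "probe_video();", "start();"]
changes = [("probe_video();", "probe_video_accel();")]
new_kernel = ["init();", "start();"]  # upstream dropped the probe call

patched, conflicts = reapply(new_kernel, changes)
# conflicts == ["probe_video();"], so the maintainer is told nicely
# that this change needs manual examination.
```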

Up to here I'm totally with you (except changed "wiki" to "wiki-like"), and also that even make is a specific instance of this maintaining-build-trees idea since it would work for any kinds of compilers and procedures/rules --Nad 15:18, 26 May 2006 (NZST)
I figured that would be the case. After all, make is simple compared to what the nodal model can do.--Rob 15:24, 26 May 2006 (NZST)

Perhaps wiki could be used to manage builds by interacting with svn/arch/cvs, and in that way version not only the source, object and binary files, but also the script logic that builds them. It seems that at present XML-wiki could not reasonably be expected to handle the array of documents required to build a Linux kernel, for example, so some of this could be delegated to the versioning system.

But my concept of managing these builds would not use a wiki-engine or any of the current versioning systems. I'm focussing all development on the proper nodal solution, and managing builds, file-structures and runtime-object-models is part of its core functionality. --Nad 15:18, 26 May 2006 (NZST)

Is this practical? Probably not at this stage. However, it seems some kind of system is required to collaborate on a build (svn or whatever). --Rob 15:07, 26 May 2006 (NZST)

Logo idea

Penguin logo made from a patchwork of fabrics, each visually linking through use of colour to other distros, e.g. yellow for Linux, red for BSD, coffee for Debian, etc.

See archive for line art of penguin stencil.