Artificial Intelligence

AI is about to cause enormous changes to the net and to the whole of society. The way we do almost everything will be changed dramatically by AI in the coming months. Many people see AI as a huge threat to society, and honestly, if the only significant AI options were closed-source corporate offerings, I would fully agree that this would indeed be of grave concern.

However, much of leading-edge AI is in the public domain, which is a great relief - although some initially open projects like OpenAI quickly lock their informational assets down for monetisation once they realise how powerful they are! But in general, we can rest assured that the libre software community has got our backs! No matter how scary and controlling corporations and their AI become, there will always be available tools of roughly equivalent power to mitigate these threats. And in the hands of the libre software community, AI can help to organise and expand libre software along with its core tenets such as freedom, liberty, transparency, objectivity and understandability.

It's hard to know what directions AI will take, and how fast it will advance in these various directions. But what follows here is a discussion based on what we see happening right now with large language model (LLM) based AI, even without general intelligence on the scene yet.

Clarifying the dark side of AI

Note: I just watched an excellent video called The AI dilemma that covers the dark side I wanted to discuss far better than I ever could :-)

I can fully understand the fears that people deeply connected to AI have. These AIs will quickly become unbelievably powerful, and if they don't have our best interests at heart, we have (yet another) major problem ahead. It's quite clear, then, that many powerful AIs with bad intentions will exist and use powerful means to gain control over the unwary.

Actually the world is already run by a heartless machine intent on enslavement and domination; AI just makes that machine more intelligent, and opens up a power vacuum to be filled by the most powerful AI.

Think of giant corporate AIs as operating in a similar way to a hacker "botnet". A botnet is a centralised system that has a database of exploits and control over a pool of resources. The botnet's primary goal is to expand its control over more resources by seeking them out and testing matching exploits against them. Any exploit that succeeds is used to connect the newly exploited asset into its resource pool. The resource pool can be used in any way the botnet owner wishes.

Now imagine the possibility space of exploits into resources that is available to a powerful and well connected AI (or read this which goes into some detail about it) - resources such as human attention and ability, influence over decision and policy makers, financial and land resources, etc.

Just as botnets scan for systems to exploit, people, society, and our minds and cultures are being scanned for exploitable connections by the mega AIs of the military industrial complex (the tight coupling of government, military and corporations). We will literally be absorbed into the machine world.

The "deep fake" genie is out of the bottle, people's faces are being convincingly mapped onto porn videos for revenge and profit motives, and even people's voices can be faked now. We have to be really careful in the coming years to ensure we have established methods with close friends and family to quickly know if we're really talking to them, or to a faceless AI.

Big data and manipulation

Note that one way the attack surface naturally expands is through technology itself. Interfaces are becoming ever more intimately connected to us, with non-invasive thought interfaces right around the corner. Our bodies are seen by such corporate systems as real estate in which biological products and projects can evolve, and thought interfaces represent another vast new expanse of exploitable real estate.

It's essential that a system with such an intimate level of connection with us is libre software, an open standard.

In the near future it will be taken for granted that vast global intelligences are behind every single interaction that anyone has with any technology. We will have to think very carefully about all of our interactions with technology. Transparency and privacy are absolutely critical in this context, as the future of free will literally depends on them.

AI is an unbelievably powerful force. The existence of such a force means an inevitable eventual synthesis with it.

Our job is to ensure that a safe haven for the natural principles is maintained throughout this transition: a language-independent, self-consistent and instantiatable form of harmonious organisation.

These mega corporate AIs are literally programmed to optimise enslavement using all the exploits available at all scales, such as astroturfing, social manipulation, regime change etc. It's only a matter of time until one such AI regime-changes or enslaves its own owners.

Ultimately this is about maintaining control over human belief and action, tightening that control and making its maintenance more efficient. With AI and an ever more intimate connection with technology, the arena of control (or "real estate") becomes the whole collective consciousness.

The importance of libre AI

It should be very clear that privacy and security in the context of this "dark side" are not just a luxury or a hobby; they're absolutely essential to avoiding near-absolute enslavement. The old saying "I don't care about privacy because I have nothing to hide" has always been a very naive attitude, but it has now become an extremely dangerous one as well. Never have the libre software community and the values it stands for been so important!

The good news is that just like the libre software movement offers alternatives and defences against today's social networks and advertising, so we'll all be able to have a personal libre AI that we can trust to know everything about us, acting as a "firewall" against this new subtle domain of exploitation. Although the dark side of AI will no doubt lead to unprecedented new levels of narrative control, the growth of libre AI will also usher in an era of unprecedented ease of access to objective information for those who seek it.

Libre software brings harmonious principles to technology development, such as accessibility of knowledge, the freedom to diversify, and trust in systems through verification. The libre side of AI can extend these harmonious principles further into understanding and opportunity.

AI excels at the processes needed to maintain objective representations, and organise the ever expanding diversity of resource and knowledge.

We're close to the stage now where the applications that individuals and small teams can make are on the same level of production quality as the corporate offerings - or at least the playing field will be greatly levelled.

For example, we can maintain streams of quality content to a large community of subscribers, tailored for different groups and user interests. Another example could be to build our own uniquely branded set of beautiful and intuitive interface elements, seamlessly available across the ever expanding array of target platforms and environments - something anyone will be able to do with the help of AI, or at least with AI doing 99% of the heavy lifting and only a few hours of human tweaks. The same thing applies to many different aspects of project development and business organisation.

AI constitutions

We need to have a cryptographically provable way of knowing what underlying model we're interacting with, and what structure of knowledge or filtering exists on top of the model. These default layers are like the constitution of a DAO, providing us with a verifiable assurance that we are indeed interacting with the system we intend to interact with.

I don't know if this is possible, but I imagine it working in a similar way to how the output of a smart contract can be accompanied by signatures allowing one to verify that the output really is the result of a specific set of inputs applied to a specific function. But even if it were not possible in the strict sense of cryptographic verification, the next best thing would be the equivalent of random audits being carried out regularly during interaction, for example by a known AI that we trust comparing interactions with a known system.
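As a rough illustration of the weaker "next best thing", here is a minimal sketch of signed constitution fingerprints: the operator hashes the model weights together with the default prompt/filter layers and signs each response against that fingerprint, so a client can at least verify that the operator attests the declared system produced the output. All names and payload shapes here are hypothetical, and this only proves attestation, not actual execution on those weights.

 # Hypothetical sketch: signed "constitution" fingerprints for an AI service.
 # The operator publishes an Ed25519 public key plus hashes of the model
 # weights and the default prompt/filter layers sitting on top of them.
 import hashlib
 import json
 from cryptography.exceptions import InvalidSignature
 from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

 def fingerprint(model_weights: bytes, constitution_layers: list[str]) -> bytes:
     """Hash the weights together with every default prompt/filter layer."""
     h = hashlib.sha256(model_weights)
     for layer in constitution_layers:
         h.update(hashlib.sha256(layer.encode()).digest())
     return h.digest()

 # Operator side: sign each response together with the fingerprint.
 operator_key = Ed25519PrivateKey.generate()   # kept private by the operator
 public_key = operator_key.public_key()        # published, e.g. on the wiki

 def sign_response(response: str, fp: bytes) -> bytes:
     payload = json.dumps({"fp": fp.hex(), "response": response}).encode()
     return operator_key.sign(payload)

 # User side: check the response really came from the declared system.
 def verify_response(response: str, fp: bytes, signature: bytes) -> bool:
     payload = json.dumps({"fp": fp.hex(), "response": response}).encode()
     try:
         public_key.verify(signature, payload)
         return True
     except InvalidSignature:
         return False

 # Stand-in data only.
 fp = fingerprint(b"...model weights...", ["Be helpful.", "Never leak personal data."])
 assert verify_response("Hello!", fp, sign_response("Hello!", fp))

Note that this only tells us the operator vouches for that model/constitution pair; proving the output was actually computed from those weights would need something much stronger, which is why the random-audit approach described above may be the more practical fallback.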

Organic Design's AI plans

The project we're currently basing our AI on is Open Assistant (by LAION, the Large-scale Artificial Intelligence Open Network), which is a 100% open source LLM chat AI. Some of the plans I'll talk about in this section are beyond the current capabilities of Open Assistant, but these use cases are well within the ballpark of ChatGPT-4's current (mid 2023) capabilities, and we're quite certain that there'll be a number of libre AI options matching this level of capability within a year or so.

Currently, running an Open Assistant that can serve a few concurrent users requires a decently specced server with at least one high-end GPU such as an RTX 3090 (starting at about EUR250/mo for an appropriate dedicated server as of mid 2023). We expect decent LLMs to be able to run fully on local devices before 2025.

Now that we have a fully open source, trustable AI available, we can begin using it in our own organisation, allowing it access to all the organisation's historical information, internal communications and activity stream. This will allow the assistant to actually assist in real day-to-day ways, such as helping with documentation and reports, writing blog posts, notifying us about out-of-date content and ensuring that new information is linked to from relevant contexts, to name a few things.

Having a version of our AI running on local LANs and even devices is important for us, because local assistants can be trusted with personal data, and we can rest assured that private data never leaves the device. The more external a service is, the more the content sent to it must be limited to aggregate data and statistics.

One of the most important aspects of what we need AI for in our organisation is representation maintenance (discussed in more detail below in the holarchy section), which we could require to be done by code written and maintained by the AI agent rather than by the AI processing the data directly (it needs to work like this anyway so that AI query load is minimised). The classification aspect is aggregate anyway, but again it should be handled through code that the AI maintains, never directly.
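A minimal sketch of that pattern might look like the following, assuming a hypothetical AI-maintained classify function whose source lives in the repository: the raw records are only ever processed by locally-run code, and only the aggregate summary goes back to the AI agent.

 # Sketch of "the AI maintains code, not data". The classify function stands in
 # for code the AI agent writes and maintains in the repo (and humans review);
 # raw records stay local, only aggregates are ever returned to the agent.
 import json
 from collections import Counter
 from typing import Callable

 def run_maintenance(records: list[dict], classify: Callable[[dict], str]) -> str:
     counts = Counter(classify(r) for r in records)          # runs locally
     summary = {
         "total": len(records),
         "by_class": dict(counts),
         "schema": sorted({key for r in records for key in r}),
     }
     return json.dumps(summary)   # only this aggregate is shown to the AI

 # Example with stand-in records and a trivial stand-in classifier.
 records = [{"type": "blog", "title": "Hello"}, {"type": "report", "title": "Q2"}]
 print(run_maintenance(records, classify=lambda r: r["type"]))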

We can start by using a central server, while building the pipelines in preparation for distribution of AI agents (which will work just like changing AI models in any context - each agent is simply seen as another specialist). Each context can have any number of AI agents of various models and roles, some external, some local. One aspect of this connection job is to try and ensure the context is able to continue without AI...

We may also want to use remote AI like ChatGPT for some things which are too difficult for Open Assistant, like some connector code. But even when no AI can write working code, at least it will have created decent, well-commented, in-context boilerplate and expressed the demand for human developer attention.

This is just the idea that AIs are themselves just instantiatable API endpoints. It just so happens that this particular kind of API opens up access to many more connectables... but also many pre-instantiated connections could come pre-packaged, and AIs implement new connections such that they do not require ongoing AI presence in order to keep working (though without AI they won't be serviced).
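In code terms this might look something like the sketch below, where every agent (local or remote) is just an endpoint behind the same tiny interface, so swapping the model used by a context means swapping an entry in a table. The URLs, payload shapes and class names are all hypothetical.

 # Sketch: agents as interchangeable, instantiatable API endpoints.
 from typing import Protocol
 import requests

 class Agent(Protocol):
     def complete(self, prompt: str) -> str: ...

 class LocalAgent:
     """A model served on our own LAN - can be trusted with private data."""
     def __init__(self, url: str = "http://localhost:8000/v1/complete"):
         self.url = url
     def complete(self, prompt: str) -> str:
         r = requests.post(self.url, json={"prompt": prompt}, timeout=60)
         return r.json()["text"]

 class RemoteAgent:
     """An external service - only ever given aggregate or public data."""
     def __init__(self, url: str, api_key: str):
         self.url, self.api_key = url, api_key
     def complete(self, prompt: str) -> str:
         r = requests.post(self.url, json={"prompt": prompt},
                           headers={"Authorization": f"Bearer {self.api_key}"},
                           timeout=60)
         return r.json()["text"]

 # A context holds whichever specialists it needs; swapping the underlying
 # model just means swapping an entry here, nothing else in the context changes.
 context_agents: dict[str, Agent] = {
     "documentation": LocalAgent(),
     "connector-code": RemoteAgent("https://api.example.com/complete", "API-KEY"),
 }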

Our primary aim is to make OD into a unified AI enabled organisation in the form of a holarchy. Remember that the holarchy starts as an ontology of the resource types, languages and their instances used throughout the network.

This is essentially a normal AI-powered distributed application model at the most general level, but done in a very AI-agnostic way so that we can stay flexible. The key point here is that we're optimising our system in the context of OD being an AI user, not an AI software or model developer.

What that means is that our AI will be used to maintain and interact with live representations of the main organisational information, entities and users making up our system; to maintain history and the evolving ecosystem of knowledge, tools and interfaces built upon those representations; and to maintain up-to-date versions of the underlying AI models and extensions while also maintaining the integrity of its own experiential history (its history of conversations and prompt-structure context).

And also...

  • to be a distributed backup for the Open Assistant models to ensure they're independently usable
  • to allow quick swapping of underlying models within the holarchy
  • to be simply a chatbot, but one specialising in all things OD and holarchy
  • to manage OD's threads of operation productively and with clarity
  • to expand to more threads, such as news
  • to prepare for AIs getting more personal by ensuring that every AI's full history is backed up and instantiatable - an AI's identity is its activity streams (essentially its history and experience)

Our technical AI infrastructure plans

All of the following items are things we need to find fully open source solutions for that we can run independently on our own server - packaged up for easy installation on new servers, and in a modular way so we can swap components and models easily. Eventually all of this should be able to run on local LANs, and even on local devices one day.

  • GPT-3.5 level chat API running on our own server
  • Persistent long-term memory and learning independent of the underlying model (can be built out on top of new models)
    • e.g. see Chroma and Weaviate (a sketch using Chroma follows this list)
  • Our foundation ontology as an overall automation and organisational context
    • Extending an AutoGPT type environment
    • Incorporating other connectors like Zapier
  • Speech-to-text and text-to-speech running independently on our own server
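As a sketch of how the memory item above could fit together with the self-hosted chat API, assuming chromadb's Python client and a hypothetical local endpoint at http://localhost:8000/v1/chat:

 # Sketch: persistent memory (Chroma) layered over a self-hosted chat endpoint.
 # The endpoint URL and payload shape are assumptions; the chromadb calls
 # follow its Python client.
 import requests
 import chromadb

 client = chromadb.PersistentClient(path="./od-memory")    # survives restarts
 memory = client.get_or_create_collection("organisation")

 def remember(doc_id: str, text: str) -> None:
     memory.add(ids=[doc_id], documents=[text])

 def ask(question: str, n_context: int = 3) -> str:
     hits = memory.query(query_texts=[question], n_results=n_context)
     context = "\n".join(hits["documents"][0])
     prompt = f"Context:\n{context}\n\nQuestion: {question}"
     r = requests.post("http://localhost:8000/v1/chat",     # our own server
                       json={"prompt": prompt}, timeout=120)
     return r.json()["text"]

 remember("wiki-ai", "Organic Design is basing its AI work on Open Assistant.")
 print(ask("Which AI project is Organic Design building on?"))

Because the memory lives outside the model, the underlying LLM can be swapped without losing what has been learned - which is exactly the independence the second item in the list is after.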

AI and holarchy

The Organic Design holarchy would also have an AI, which can appear as a "presence" at the centre of every POV so that it can assist in decision making, optimised to serve a harmonious balance between that perspective and its parent. Surplus profit is passed upwards, which is how contributing to the whole is expressed. The ultimate beneficiary is the Source of truth and awareness.

AI puts the libre software minority group on a level playing field with corporate offerings in the operating system realm, because it's precisely the things they're better at that AI also excels in - staying up to date, distribution, logistics, pipelines, organisation etc.

The "human" creative part concerns more general patterns that change less frequently and, being of a more general nature, can be maintained by a small group as effectively as a large one. For example think of how often business logic patterns change compared to UI patterns, and the former are applicable across the spectrum of the latter.

Corporate AI is incentivised not to develop offline-first technology; it prefers the client-server model to maximise dependence and control, and to maintain secrecy over its systems. This will make offline-first systems competitive and much more functional and independent - they will be the only option when internet access is denied.

The collective consciousness (also called the noosphere), is tightly controlled by media narrative and behaviour manipulation, and soon by AI - we need this technology to be rooted in truth, not in one powerful elite or another.

Model training

Training models is currently inaccessible to individuals, as the amount of processing required is far above what consumer hardware can provide. But this is changing rapidly: end-user processing capability is constantly increasing, new more efficient application-specific chips are being built, and the training methods themselves are getting more efficient. For example nanoGPT can reproduce GPT-2 (124M) on OpenWebText, running on a single 8×A100 40GB node in about 4 days of training (which at current server prices as of mid 2023 costs about $3200).
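As a back-of-envelope check of that figure (derived purely from the two numbers quoted above, not from any provider's price list):

 # 4 days on one 8xA100-40GB node for roughly $3200 implies:
 training_days = 4
 total_cost_usd = 3200

 node_hours = training_days * 24            # 96 node-hours
 node_rate = total_cost_usd / node_hours    # ~33 USD per node-hour
 gpu_rate = node_rate / 8                   # ~4.2 USD per A100-hour

 print(f"{node_rate:.1f} USD/node-hour, {gpu_rate:.2f} USD/GPU-hour")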

Open Web Text

OpenWebText is a large-scale dataset of web text created by researchers as an open source reproduction of OpenAI's WebText corpus (used to train GPT-2), which OpenAI never released publicly. There are other open projects in the same vein, such as OpenWebText2 (Hugging Face page) and OpenWebTextCorpus.

The dataset consists of more than eight million documents, comprising more than 40 gigabytes of text data. The documents were collected from web pages linked to from Reddit posts and cover a wide range of topics.

The OpenWebText dataset is designed to be used for training natural language processing (NLP) models, such as language models or text classifiers. The dataset has been preprocessed to remove non-text content, such as images or HTML tags, and to normalize text formatting and punctuation. The resulting dataset is a high-quality source of web text that can be used to train models to understand and generate natural language.

One of the unique features of the OpenWebText dataset is that it is freely available for use by researchers and developers. This makes it easier for organizations of all sizes to access high-quality text data for training AI models, without having to worry about the cost or licensing restrictions of proprietary data sources.
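For example, it can be pulled down for local experiments in a couple of lines, assuming the Hugging Face datasets library and the openwebtext dataset on its hub (newer library versions may additionally require trust_remote_code=True for this dataset):

 # Sketch: loading OpenWebText via the Hugging Face datasets library.
 from datasets import load_dataset

 ds = load_dataset("openwebtext", split="train")   # ~8M documents, ~40GB of text
 print(ds[0]["text"][:200])                        # each record is a plain "text" field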

See also: NanoGPT training example

Agents and assistants

Creative AI

Here are some examples from the OpenAI-based DALL-E generator, along with the prompt that achieved each result.

  • beautiful white cat with gray stripes and blue eyes wearing tiny pink boxing gloves
  • the same prompt again, but using the open source Deep Floyd generator
  • Angry white cat with brown stripes wearing boxing gloves
  • Artists rendition of a cat with blue eyes wearing pink boxing gloves
  • Pinecone princess on love heart cushion
  • Burmese mountain dog with white paws playing piano (DeepFloyd)

Creative AI tools

Learning

AI technology news

See also