I've seen that a lot of projects and organisations are using Docker now. Containerisation on Linux has been around since 2008 in the form of LXC, but it wasn't until Docker was released as open source in March 2013 that the concept became accessible and understandable enough to start gaining widespread use.
Useful docker commands
To list all running containers, use ps. You can add the -a option to see stopped containers as well, and --no-trunc to see the full output without it being chopped to fit the screen width.
docker ps -a --no-trunc
To list all images:
docker images
To upgrade an image, stop its container, pull the image again and start a new container with the same launch command; it will use the latest image that matches the repo name. You may want to delete the old image, but to do that you'll first need to delete the old container that uses it, so it's best to make sure everything's working before cleaning up. Use docker rmi to remove an image, and docker rm to remove a container.
docker stop abcd1234
docker pull foo
docker run -d --restart unless-stopped foo
Note: it's best to use unless-stopped rather than always for the --restart option, so that containers you stop manually stay stopped rather than coming back on reboot etc.
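Putting the steps above together, a minimal upgrade sketch might look like this (the container and image names "foo" are hypothetical placeholders, and docker image prune is one way to clean up the leftover image):

```shell
docker pull foo                                        # fetch the latest image matching the repo name
docker stop foo                                        # stop the running container
docker rm foo                                          # remove the old container so the old image can be deleted
docker run -d --name foo --restart unless-stopped foo  # new container picks up the newest image
# once everything is confirmed working, clean up dangling old image layers:
docker image prune
```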
You can get a shell inside a running container to see what's going on in there and execute commands from within its context:
docker exec -it <CONTAINER_ID> bash
or if you need root in the container:
docker exec -u root -it <CONTAINER_ID> bash
You can check the logs of a running container by running the docker logs command on the container ID; there are --follow, --tail and --timestamps options.
docker logs <CONTAINER-ID>
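For example, to follow the last hundred lines of output with timestamps prepended (the container ID is a placeholder):

```shell
# Stream new log output, starting from the last 100 lines, with timestamps.
docker logs --follow --tail 100 --timestamps <CONTAINER_ID>
```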
To nuke all stopped containers, and all networks and images not in use, run docker system prune (note that unused volumes are only removed if you add the --volumes option):
docker system prune -a
Organic Design work-flow
Docker can make Organic Design's day-to-day work a lot more efficient in a number of key areas:
Server: The most obvious is to have Docker images for all the main components of the server, such as code, web, mail, IRC, blockchain etc., which would allow for a very quick server installation or migration process. It would also make it much simpler to work on the site locally to refine the image or test configuration changes.
Clients: Each client or project should also have its own Docker container so that it's easy to work on their system locally and offline. Each client system should include both server and client aspects, so that the image improves team development and also makes the server environment more portable and reproducible.
Relation to the Organic Design vision
It was only recently (as of April 2016) that Sven mentioned that Docker is really important and that I should know about it and incorporate it into our processes. So I started looking into it, and it quickly got me thinking about how it might apply to the Organic Design vision statement:
|Our vision is to see all of our world's inhabitants governing ourselves with an open, accessible and understandable global system which has as its bottom line the common good, and which we define and operate ourselves by effectively utilising and allocating our common expertise and resource.|
Containerisation as a simple-to-use component is an important piece of the ultimate puzzle of how we at Organic Design could implement a system that supports this vision. For example, containerisation offers a means of implementing self-containment and "all aspects changeable", which are core values of the Organic Design project.
The core values of openness and completeness refer to the fact that every facet of our reality and how we relate to it is covered by this system, and that its state is available to all and made as understandable as possible. Of course that doesn't mean that nothing is private; this refers to the ways we go about things, not to the personal details being managed in these ways.
The core value of "all aspects changeable" means that any part of the system, including the system environment itself, can be developed on by the community. An important part of being able to understand and change things is having an exact local copy of the thing that you can use and experiment with, and Linux containers allow us to do this. Within this context, the core value of "think global, act local" is a general guideline for keeping the system evolving in a selfless way that has the common good as its bottom line.
Of course containers aren't the whole picture: for the system to be truly open and transparent we must "own" it ourselves, which is where peer-to-peer comes in. And to be truly independent we must be able to maintain and own our identities, accounts and reputations together, which requires the best available methods for privacy and security within this system. Meeting this requirement wasn't possible in a fully distributed environment until 2008, when Satoshi Nakamoto proposed his solution to the "Byzantine generals" problem.
What we have here is quite a clear requirement: a peer-to-peer environment that we can log in to with our own distributed identity to access a network of public and private containers (i.e. complete environments) that we can collaborate on together. This network is formed by relationships whose basic geometry is defined to give a general class-instance relationship (i.e. what is an instance, or live copy, of what, and what implements what). We can see the combinations that others make for any aspect and have enough information to make informed decisions about how we'd like those things to be in our view.
One cool thing about this idea is that we don't need to define the ultimate set of components, we can just define any set of things that we know about and can get to work together. Once we have something functional, then its own collaborative nature can apply to every aspect of itself, so new ways of using it with better components will emerge and become dominant.
Initially, in addition to Docker, I think I'd choose something like the InterPlanetary File System (IPFS) for the peer-to-peer storage layer, with EncFS to encrypt private content and OneName for authenticating users to unlock access to their content.
Things to refine/define
But what other aspects are required to pull this off, and what options are there for those?
- Class/instance relation (including is_a and implements relations)
- Reporting on state including adjustable options (voting) and defaults (e.g. defer to group, majority etc)
Docker broken on Linodes after upgrade
Linode uses custom kernels for a lot of its images, which can sometimes break things that depend on certain kernel modules being in place. This happened with Docker, but even after they had put the missing module (called overlay) in place, things were still broken. You can change to the normal GRUB 2 kernel selection by editing your Linode's configuration.
In my case I still had trouble after that and the Docker daemon would not start, but I was able to get it going by explicitly setting the host to the Unix socket that the Docker client was expecting to see the daemon on: set DOCKER_OPTS to "-H unix:///var/run/docker.sock" in /etc/init.d/docker, and ExecStart to "/usr/bin/dockerd -H unix:///var/run/docker.sock" in /lib/systemd/system/docker.service.
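Concretely, the two settings described above would look roughly like this (paths as given above; the exact layout of the init script and unit file may differ between Docker versions):

```ini
# /etc/init.d/docker (excerpt)
DOCKER_OPTS="-H unix:///var/run/docker.sock"

# /lib/systemd/system/docker.service (excerpt)
[Service]
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

After editing the systemd unit, run systemctl daemon-reload and then restart the docker service for the change to take effect.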
Pulling an old image
You can pull old images by tag or by their immutable digest, for example:
docker pull ahdinosaur/ssb-pub@sha256:5104c28ab05d6c33cfdd0c516add72259b1bc3802450feeed52d6f13bd5ac47c
docker pull ahdinosaur/ssb-pub:v1.0.8
As far as I know, you can only get the digests while you have the image locally (with docker images --digests), but you can use the registry API to get a list of tags:
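As a sketch, the Docker Hub v2 API exposes a paginated tag list per repository; the endpoint shape here is my assumption and may change, and the repository name is taken from the example above:

```shell
# List tags for a repository via the Docker Hub v2 API (endpoint shape assumed),
# extracting just the tag names from the JSON "results" array.
curl -s "https://hub.docker.com/v2/repositories/ahdinosaur/ssb-pub/tags/" \
  | python3 -c 'import json,sys; [print(t["name"]) for t in json.load(sys.stdin)["results"]]'
```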
See also
- Dockerfile reference
- Understanding volumes in Docker
- Running an IPFS node in a Docker container and mount with Fuse
- Docker on Raspberry Pi - by Resin
- Docker in Raspberry Pi - by Hypriot
- rPi IPFS - Docker image for IPFS on Raspberry Pi