{{#ev:youtube|PivpCKEiQOQ|300|right}}
I've seen that a lot of projects and organisations are using [[w:Docker (software)|Docker]] now. [[w:Operating-system-level virtualization|Containerisation]] in Linux has been around since 2008 in the form of [[w:LXC|LXC]], but it wasn't until Docker was released as open source in March 2013 that it became a generally accessible and understandable enough concept to start gaining widespread use.
  
== Installation ==
Installing Docker on Debian is done by installing the ''docker.io'' package, or from the [https://store.docker.com/editions/community/docker-ce-server-debian Docker store]. You may also want to install [https://docs.docker.com/compose/install Docker Compose], which is used by a number of projects we use such as [[Mastodon]].
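
A minimal sketch of installing both from the distribution's own packages (assuming a root shell and the standard Debian package names):

<source lang="bash">
# Install Docker and Docker Compose from the Debian repositories
apt-get update
apt-get install -y docker.io docker-compose

# Check that the daemon is running and which version was installed
systemctl status docker
docker --version
</source>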
== Useful docker commands ==
To list all running containers, use '''ps'''; you can add the '''-a''' option to see stopped ones as well, and use '''--no-trunc''' to see all output without it being chopped to the screen size.
<source lang="bash">
docker ps -a --no-trunc
</source>

To list all images:
<source lang="bash">
docker images
</source>

To upgrade an image, stop its container, pull the image again and start a new container with the same launch command; it will choose the latest image that matches the repo name. You may want to delete the old image, but to do that you'll also need to delete the old container that uses it first, so it's best to make sure everything's working before cleaning up. Use ''docker rmi'' to remove an image, and ''docker rm'' to remove a container.
<source lang="bash">
docker stop abcd1234
docker pull foo
docker run -d --restart unless-stopped foo
</source>
'''Note:''' It's best to always use ''unless-stopped'' rather than ''always'' for the ''--restart'' option, so that you can stop containers manually whenever you want without them restarting on reboot etc.
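
If a container was already created with ''always'', you can switch its restart policy in place using the standard ''docker update'' command:

<source lang="bash">
# Change the restart policy of an existing container without recreating it
docker update --restart unless-stopped <CONTAINER_ID>
</source>
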
You can enter a running container to see what's going on inside it and execute commands from within its context:
<source lang="bash">
docker exec -it <CONTAINER_ID> bash
</source>

or if you need root in the container:
<source lang="bash">
docker exec -u root -it <CONTAINER_ID> bash
</source>

If you just need to view or modify specific files within the container, you can use ''docker cp'':
<source lang="bash">
docker cp <CONTAINER_ID>:/path/to/foo.txt foo.txt
nano foo.txt
docker cp foo.txt <CONTAINER_ID>:/path/to/foo.txt
</source>

You can check the logs of a running container by running the '''docker logs''' command on the container ID; there are ''--follow'', ''--tail'' and ''--timestamps'' options.
<source lang="bash">
docker logs <CONTAINER_ID>
</source>
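
For example, to follow the last 100 lines of output with timestamps prefixed:

<source lang="bash">
docker logs --follow --tail 100 --timestamps <CONTAINER_ID>
</source>
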
To nuke all unused networks, containers and images, use [https://docs.docker.com/engine/reference/commandline/system_prune/ docker system prune] (note that volumes are only removed if you add the ''--volumes'' option):
<source lang="bash">
docker system prune -a
</source>
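
To remove unused volumes as well:

<source lang="bash">
docker system prune -a --volumes
</source>
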
== Organic design work-flow ==
Docker can make Organic Design's day-to-day work a lot more efficient in a number of key areas:

'''Server:''' The most obvious is to have Docker images for all the main components of the server, such as code, web, mail, IRC, blockchain etc., which would allow for a very quick server installation or migration process. It would also make it much simpler to work on the site locally to refine the image or work on configuration changes etc.

'''Clients:''' Each client or project should also have its own Docker container so that it's easy to work on their system locally and offline. Each client system should include both server and client aspects so that the image can make team development better and also make the server environment more portable and reproducible.

== Relation to the Organic Design vision ==
It was only recently (as of April 2016) that [[User:Sven|Sven]] mentioned that Docker is really important and that I should know about it and incorporate it into our processes. So I started looking into it, and it quickly got me thinking about how it might apply to the Organic Design [[vision statement]]:

<i>{{quote|{{:vision statement}}}}</i>

Containerisation as a simple-to-use component is an important piece of the ultimate puzzle of how we at Organic Design could implement a [[system]] that supports this vision. For example, containerisation offers a means for implementing [[self containment]] and [[All aspects changeable]], which are core [[values]] for the Organic Design project.

The core values of openness and completeness refer to the fact that every facet of our reality and how we relate to it is covered by this system, and that state is available to all and made as understandable as possible. Of course that doesn't mean that nothing is private; this just refers to the ways we go about things, not to the personal details being managed in these ways.
  
 
The core value of "all aspects changeable" means that any part of the system, including the system environment itself, can be developed on by the community. An important part of being able to understand and change things is having an exact copy of the thing locally that you can use and experiment with, and Linux containers allow us to do this. Within this context, the core value of "think global, act local" is a general guideline to keep the system evolving in a selfless way that has the common good as its bottom line.
  
 
Of course containers aren't the whole picture; for the system to be truly open and transparent we must "own" it ourselves, which is where [[peer-to-peer]] comes in. And to be truly [[Independence|independent]] we must be able to maintain and own our identities, accounts and reputations together, which requires the best methods available for our privacy and security within this system. This requirement wasn't possible in a fully distributed environment until 2008, when Satoshi Nakamoto proposed his solution to the "Byzantine generals" problem.

What we have here is quite a clear requirement: a peer-to-peer environment that we can log in to with our own distributed identity to access a network of public and private containers (i.e. complete environments) that we can collaborate on together. This network is formed by relationships where the basic geometry of relationships is defined to give a general class-instance relationship (i.e. what is an instance, or live copy, of what, and what implements what). We can see the combinations that others make for any aspect and have enough information to make informed decisions about how we'd like those things to be in our view.

One cool thing about this idea is that we don't need to define the ultimate set of components, we can just define any set of things that we know about and can get to work together. Once we have something functional, then its own collaborative nature can apply to every aspect of itself, so new ways of using it with better components will emerge and become dominant.

Initially, in addition to Docker, I think I'd choose something like the [[w:InterPlanetary File System|InterPlanetary File System]] (IPFS) for the peer-to-peer storage layer, with [[w:EncFS|EncFS]] to encrypt private content and [http://onename.io OneName] for authenticating users to unlock access to their content.

== Things to refine/define ==
But what other aspects are required to pull this off, and what options are there for those?
*Class/instance relation (including is_a and implements relations)
*Reporting on state including adjustable options (voting) and defaults (e.g. defer to group, majority etc)

== Troubleshooting ==
=== Docker broken on Linodes after upgrade ===
Linodes use custom kernels for a lot of their images, which can sometimes break things that depend on certain kernel modules being in place. This happened with Docker, but even after they had put the missing module (called ''overlay'') in place things were still broken. You can change to the normal Grub2 selection of kernel by editing your Linode's configuration.

In my case I still had trouble after that and the docker daemon would still not start, but I was able to get it going by explicitly setting the host to the unix socket that the docker client was expecting to see the daemon on: I set ''DOCKER_OPTS'' to "-H unix:///var/run/docker.sock" in ''/etc/init.d/docker'', and ''ExecStart'' to "/usr/bin/dockerd -H unix:///var/run/docker.sock" in ''/lib/systemd/system/docker.service''.
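
For reference, those two settings look like this in their respective files:

<source lang="bash">
# In /etc/init.d/docker:
DOCKER_OPTS="-H unix:///var/run/docker.sock"

# In /lib/systemd/system/docker.service (a systemd unit file, shown here as comments):
# ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
</source>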

=== Pulling an old image ===
You can pull old images by their tag or by the immutable digest, for example:
<source>
docker pull ahdinosaur/ssb-pub@sha256:5104c28ab05d6c33cfdd0c516add72259b1bc3802450feeed52d6f13bd5ac47c
docker pull ahdinosaur/ssb-pub:v1.0.8
</source>

As far as I know, you can only get the digests while you have the image locally, with '''docker images --digests''', but you can use the registry API to get a list of tags:
<source>
https://registry.hub.docker.com/v2/repositories/<AUTHOR>/<REPO>/tags/
</source>
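
For example, to list the tags of the image above from the command line (assuming ''curl'' and ''jq'' are available; the ''results'' and ''name'' fields are as returned by the Docker Hub API at the time of writing):

<source lang="bash">
# Query the Docker Hub registry API and print the available tag names
curl -s https://registry.hub.docker.com/v2/repositories/ahdinosaur/ssb-pub/tags/ | jq -r '.results[].name'
</source>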

== Docker resources ==
*[https://docs.docker.com/engine/reference/builder/ Dockerfile reference]
*[http://container-solutions.com/understanding-volumes-docker/ Understanding volumes in Docker]
*[https://github.com/jesseduffield/lazydocker/blob/master/README.md LazyDocker] ''- a simple terminal UI for docker and docker-compose''
*[https://ipfs.io/blog/1-run-ipfs-on-docker/ Running an IPFS node in a Docker container] and [https://github.com/ipfs/go-ipfs/blob/master/docs/fuse.md mount with Fuse]
*[https://resin.io/blog/docker-on-raspberry-pi-in-4-simple-steps/ Docker on Raspberry Pi] ''- by Resin''
*[http://blog.hypriot.com/getting-started-with-docker-on-your-arm-device/ Docker on Raspberry Pi] ''- by Hypriot''
*[https://hub.docker.com/r/mrcstvn/rpi-ipfs/ rPi IPFS] ''- Docker image for IPFS on Raspberry Pi''
*[https://www.docker.com/blog/understanding-docker-networking-drivers-use-cases/ Docker network drivers] & [https://docs.docker.com/compose/networking/ Networking in Compose]

== News ==
*2019-11-13: [https://techcrunch.com/2019/11/13/mirantis-acquires-docker-enterprise/ Mirantis acquires Docker Enterprise]
  
 
== See also ==
*[[IPFS]]
*[https://github.com/ipfs/notes/issues/57 Muneeb-ali talking about IPFS backend to OneName]
*[https://github.com/firecracker-microvm/firecracker/blob/main/docs/design.md Firecracker] ''- a new light VM container model''

[[Category:Software]][[Category:Linux]]
