Tahoe-LAFS is a Least Authority File Store: a good system for backing up your data safely with redundancy, as files are encrypted and then split across many servers (think RAID). The standard setup usually involves:
- A client, usually set up on your own computer, though it may be set up on a server that you trust with your data.
- Several storage nodes; these will be set up on servers that you don't need to trust (other than to keep the data available).
- An introducer node; this is the node that the client and storage nodes connect to in order to find each other (so the client doesn't need to find all the storage nodes manually).
Tahoe keeps track of root files and directories with special URIs (these differ from normal URIs, and are often embedded inside a standard URL when using the web API) that represent 'read-caps' and 'write-caps'. If you lose one of these special URIs, that data is no longer retrievable.
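For a rough idea of the shape of these capability URIs (as I understand the cap format; the angle-bracket fields are schematic placeholders, not real caps):

```
URI:DIR2:<writekey>:<fingerprint>        # directory write-cap
URI:DIR2-RO:<readkey>:<fingerprint>      # directory read-cap
URI:CHK:<key>:<hash>:<needed>:<total>:<size>   # immutable file read-cap
```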
Mint seems to be missing the APT package, and the Python install fails, so I found it needed to be built from source.
Sneakernet, in my opinion, is not a particularly strong point of Tahoe, although it does technically support it. Sneakernet involves setting up a storage node (on a server or locally) that uses a removable drive for storage. The drive can then be removed, plugged into another Tahoe system, and the data pulled off on that side. The main problem is that you need to transport enough shares of the data (depending on how you have configured redundancy) to rebuild it on the other system.
Tahoe's infrastructure needs three different kinds of nodes set up:
- Client - This is the node you connect from, probably just running locally on your computer.
- Introducer - This node facilitates communication between the client nodes and the storage nodes.
- Storage - This is a node that provides the disk space where shares are stored.
```shell
# If you have a domain:
tahoe create-introducer --hostname=example.net
# Otherwise:
tahoe create-introducer --port=PORT --location=IP:PORT
```
Then to start run:
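A sketch, assuming a recent Tahoe-LAFS release (older releases used `tahoe start`/`tahoe stop` to daemonize instead):

```shell
tahoe run ~/.tahoe
```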
Note: The furl needed by the client and storage nodes can be found in ~/.tahoe/private/introducer.furl.
```shell
# If you have a domain:
tahoe create-node --hostname=example.net --introducer=pb://ID@HOST:PORT/ID2
# Otherwise:
tahoe create-node --port=PORT --location=IP:PORT --introducer=pb://ID@HOST:PORT/ID2
```
Then to start run:
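A sketch of the start command (this runs in the foreground; use your init system of choice to daemonize):

```shell
tahoe run ~/.tahoe
```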
Then (after creating the client node, e.g. with `tahoe create-client`) edit the introducers file ~/.tahoe/private/introducers.yaml:
```yaml
introducers:
  petname:
    furl: "pb://ID@HOST:PORT/ID2"
```
Then to start run:
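The client starts the same way as the other node types (sketch, assuming a recent release):

```shell
tahoe run ~/.tahoe
```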
And navigate to http://127.0.0.1:3456
Configuration for a Single Node
Edit ~/.tahoe/tahoe.cfg on your client and change the following under the [client] section:
```
shares.needed = 1
shares.happy = 1
shares.total = 1
```
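For contrast, a multi-server grid would use something closer to the defaults (3-of-10 encoding, if I recall them correctly), where any 3 of the 10 shares are enough to rebuild a file:

```
shares.needed = 3
shares.happy = 7
shares.total = 10
```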
Configuration for SSL
Edit ~/.tahoe/tahoe.cfg on your client and change the following under the [node] section:
```
web.port = ssl:3456:privateKey=mykey.pem:certKey=cert.pem
```
Configuration for the nodes can be found in ~/.tahoe/tahoe.cfg
The only way I see to do something along the lines of "sneakernet" in this system is to run a local storage node with the "storage_dir" option (under "[storage]") pointed at a removable drive, which can then be unplugged and transported to another (physical) location with a similar setup. The catch is that you need to transport enough shares (depending on how you have configured redundancy) to rebuild the data on the other side, which likely means running and carrying several such drives. That seems like a pretty painful way to do it when it could be simpler to encrypt the files directly and transport them on a single drive.
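In the storage node's ~/.tahoe/tahoe.cfg that would look something like this (the mount path is just an example):

```
[storage]
enabled = true
storage_dir = /media/usb-drive/tahoe-storage
```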
Tahoe can be used from the client. When you create a directory or upload a file you MUST write down its URI (not the web URI, but the Tahoe URI); if the URI is lost, I don't know of a way to easily retrieve the file. The URI is the handle for interacting with a file.
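The web-API calls below all share the same URL shape, so as an illustrative sketch (the helper name is mine; this assumes the default web port 3456):

```python
# Minimal sketch: build Tahoe web-API URLs from a cap and an optional path.
# The helper is hypothetical; it just mirrors the curl endpoints shown below.
from urllib.parse import quote

BASE = "http://127.0.0.1:3456"  # default Tahoe web.port

def webapi_url(cap, path=(), t=None):
    """Return BASE/uri/<cap>[/<path parts>][?t=...]."""
    url = f"{BASE}/uri/{quote(cap, safe='')}"  # caps contain ':', so escape them
    for part in path:
        url += "/" + quote(part, safe="")
    if t:
        url += f"?t={t}"
    return url

# e.g. webapi_url("URI:DIR2:abc:def", ("docs", "notes.txt"), t="json")
```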
```shell
curl -X POST \
  "127.0.0.1:3456/uri?t=mkdir"
# Returns the new directory's cap/URI
```
```shell
curl -X POST \
  "127.0.0.1:3456/uri/$DIRCAP/[SUBDIRS../]SUBDIR?t=mkdir"
# Creates all the sub-directories needed to make that path valid.
```
Query a directory:
```shell
curl -X GET \
  "127.0.0.1:3456/uri/$DIRCAP?t=json"
```
Read a file:
```shell
# Metadata as JSON (drop ?t=json to download the raw contents):
curl -X GET \
  "127.0.0.1:3456/uri/$FILECAP?t=json"
curl -X GET \
  "127.0.0.1:3456/uri/$DIRCAP/[SUBDIRS../]FILENAME?t=json"
```
Upload a file:
```shell
curl -X POST \
  -F "file=@/path/to/file" \
  -F "name=filename" \
  "127.0.0.1:3456/uri/$DIRCAP?t=upload"
# Re-uploading a file will update it but...
# Note: the filecap/URI may change.
```