Libre software (blog)

From Organic Design wiki

Fixing my laptop microphone[edit]

Posted by Nad on 17 February 2019 at 00:02
This post has the following tags: Libre software
Last year I got a Purism Librem 13 laptop, which is made for Linux and is very privacy-focused hardware. But about a month after I got it, the microphone stopped working. It wasn't a problem with the built-in kill switch that allows you to completely disable the camera and microphone at the hardware level, because it still activated and deactivated the camera properly. I took it in to some laptop specialists we know in Caxias who are very reliable, and they said that the problem was most likely that the wire had come off the microphone, since all the connections on the main board were fine. This is a big problem because the microphone is next to the camera above the screen and is very inaccessible, so I left it, because I didn't want to send it all the way back to the US, which would take months and probably also involve a huge tax when it came back into Brazil!

But then we got another Purism for Beth, and after about a month of use exactly the same thing happened!!! We asked the Purism engineers if they could explain better how to access the microphone so that maybe our laptop guys could have a go at fixing them. But later an idea occurred to me - why not just replace the microphone with another one in a much more convenient location?

So first I located the connectors, which Purism show clearly on their site here. It looked a little different on my model, as can be seen in the first photo below, but it seemed doable as there was a convenient space right next to the connector where I could fit a new microphone to the left of the keyboard, which would do just fine. Next I pulled apart some old unused earphones that had a microphone on them, and removed the microphone and some of the wire.

Purism-microphone-fix-2.jpg
Purism-microphone-fix-1.jpg
Purism-microphone-fix-1.5.jpg


Then I removed the original microphone and kill switch from the connector, soldered on my replacement microphone via a short piece of the earphone wire, and tested that it worked. It did! So the next step was to make it a permanent feature of the laptop. To do this I drilled a small hole in my chosen location to the left of the keyboard, and then made a small housing for the new microphone out of a rubber grommet from a roofing nail. I then superglued the housing to the inside of the case so that the microphone would be positioned right over the hole when inserted into the housing.

Purism-microphone-fix-3.jpg
Purism-microphone-fix-3.5.jpg
Purism-microphone-fix-5.jpg

And that's all there was to it, it works really well!

I should really have connected the kill switch up to the new microphone, but I decided to quit while I was ahead because my eyesight's too bad and my soldering iron's too big! Later I'll take both our laptops in to our guys in Caxias to fix Beth's and get both kill switches working. As a final finishing touch I countersunk the hole a little to tidy it up and dabbed a black permanent marker on it to make it a bit more subtle :-)

Purism-microphone-fix-4.jpg

Mastodon[edit]

Posted by Nad on 28 October 2018 at 14:44
This post has the following tags: Libre software
Mastodon is a free, open-source social network server based on ActivityPub. Follow friends and discover new ones. Publish anything you want: links, pictures, text, video. All Mastodon servers are interoperable as a federated network, i.e. users on one server can seamlessly communicate with users on another one. This includes non-Mastodon software that also implements ActivityPub, such as GNUsocial, Friendica, Hubzilla and PeerTube! The easiest way to get started on Mastodon is to join one of the existing instances, but here at OD we're running our own to get familiar with it all, which we're documenting here.

Mastodon is federated which means that you can interact with users who reside on other servers that also use the ActivityPub protocol. Users can export their data including connections and toots, and can easily move everything over to a new instance at any time. It's becoming increasingly clear that corporate controlled "walled gardens" like Twitter and Facebook are not worth spending time and effort building up a following in since they can delete accounts at any time for whatever reasons they like, including simply not falling in line with the mainstream narrative.

Personally I never had much of a following on Facebook or Twitter - it felt like a futile thing to do since all the data I built up was in somebody else's hands, but in the "fediverse" (the universe of connections using the ActivityPub protocol) you control all the information that composes your posts and connections yourself.

Mastodon is similar to Twitter but has some differences. It uses "toots" instead of "tweets", and favourite stars instead of Twitter's hearts or Facebook's likes. Instead of retweets, Mastodon uses a concept called boosting, which works the same way except that you can't add your own text to the boosted message. This has been done deliberately so that only the original message's intent gets spread and trolling in the network is reduced. To mention a remote user in a toot you need to also include their domain, such as @fred@example.com. Another thing they've done differently is that the favourite and reply counts are not shown; they didn't want to encourage competitive rating behaviour on toots, which I guess I can understand. Here's @Gargron talking about the reasoning behind this.

To follow a remote user, you go to the user's profile page on their server and follow them there; that brings up an option to enter your @name@server ID so the remote server can request the follow action from your local server that you're logged in to. To unfollow a remote user, you can do it from the follow notification if they've followed you as well, otherwise you need to block and then unblock them.

For a more detailed look into how to use Mastodon and why it's been made the way it has, see this excellent introductory guide by @kev@fosstodon.org. The rest of this article will focus on the installation and administration of a Mastodon instance.

General architecture[edit]

The main application is written in Rails and runs in the "web" container. There are also two other containers for the application: "streaming", which handles the long-lived web-socket connections, and "sidekiq", for background processing such as mailing and push notifications. The main database is PostgreSQL, but there's also a Redis database which is heavily used throughout the application and is best backed up as well, even though the loss of its data can be survived. The web-server is not in the Docker containers; instead it's expected that your main web-server will reverse proxy to the ports exposed by the application containers.

Installation[edit]

Mastodon has a lot of dependencies that we don't have installed on our server, such as PostgreSQL and Ruby, so for us the Docker image is definitely the preferred route, but it's still quite complicated and needs to be done via Docker Compose. This is our procedure, which is based on the official installation.

Clone the Docker repo[edit]

First, create a mastodon group with GID 991, which is used by the project, then create a directory for the persistent data that will be used by the containers (we're putting our repo and data in /var/www/domains along with other web applications - this is not under our document root!), clone the Mastodon Docker repo and check out the latest stable version.

groupadd -g 991 mastodon
useradd -u 991 -g 991 -c "Mastodon User" -s /usr/bin/nologin -d /var/www/domains/mastodon-data mastodon
mkdir /var/www/domains/mastodon-data
cd /var/www/domains
git clone https://github.com/tootsuite/mastodon.git mastodon-docker
chown -R mastodon:mastodon /var/www/domains/mastodon*
cd mastodon-docker
git checkout $(git tag -l | grep -v 'rc[0-9]*$' | sort -V | tail -n 1)

Dockerfile[edit]

The Dockerfile has a chown -R command in it that takes up to an hour to run whenever docker-compose build is run. This issue is known, but they've chosen to keep it like this for now so that it doesn't break on older versions of Docker. But this is a real show-stopper, so to get around it you can comment out the slow separate chown command and add it as an option to the preceding COPY command instead, as follows:

COPY --chown=mastodon:mastodon . /mastodon
# RUN chown -R mastodon:mastodon /mastodon

docker-compose.yml[edit]

docker-compose.yml is the file that determines what services will be included in the instance, along with their versions and data locations. This file should be backed up in case a full rebuild of your Mastodon instance is required at some point. There is nothing private in this file, so you can store it in your configuration repo or wiki etc.

Before running any docker-compose commands we need to edit the docker-compose.yml file. Change all the images to use the version of the repo you chose above, e.g. "image: tootsuite/mastodon:v2.5.2". Note also that there are three services that use the mastodon image - web, streaming and sidekiq - and all of them need to have the version added. You may want to enable the elastic search section too. I like to change the restart options from "always" to "unless-stopped" as well.

Uncomment all the volume path lines for data persistence. By default the host part (the path before the colon) of each is just a relative path, which means that the data will end up residing in directories within the docker repo directory. We've decided to use a separate mastodon-data directory instead to keep the data separate from the main codebase, so the relative paths need to be changed to the absolute paths we set above (/var/www/domains/mastodon-data).
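Putting the image and volume changes together, the web service section of our docker-compose.yml ended up looking something like this excerpt (only the relevant lines are shown; streaming and sidekiq get the same image line):

  web:
    image: tootsuite/mastodon:v2.5.2
    restart: unless-stopped
    # ...build, env_file, networks, ports etc. unchanged...
    volumes:
      - /var/www/domains/mastodon-data/public/system:/mastodon/public/system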

But since the data is in another location, a symlink needs to be created in the repo's public directory pointing to the public/system directory in the data directory, since the web-server's document_root will be pointing at the repo's public directory, which contains all the web files.
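With the paths used above, creating the symlink looks something like this (a sketch - adjust if your directories differ):

cd /var/www/domains/mastodon-docker/public
rm -rf system    # remove the repo's empty placeholder directory if present
ln -s /var/www/domains/mastodon-data/public/system system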

.env.production[edit]

.env.production is the file that represents your Mastodon instance's basic configuration such as domain, secret keys, database and SMTP connections and other services. This file is best backed up (at least the non-reproducible parts such as the secrets and keys) in a secure location as it is needed if you ever need to do a complete rebuild of your mastodon instance.

Copy .env.production.sample to .env.production and run the setup wizard. Note that most of the questions can just be left at their defaults by entering nothing. Answer "yes" to save the configuration, create the schema and admin user etc.

docker-compose run --rm web bundle exec rake mastodon:setup

For the email configuration, it's best to run through the live tests, because the settings can be very temperamental. I've found that localhost doesn't work and an actual external domain that resolves to the SMTP server is needed. No login details are needed though because the mail server still sees that the request is local and allows it to be sent without credentials or TLS.

The configuration that will be used is also output to the screen, and I've found that it's best to copy this so that if there are any problems you can manually put these values into .env.production and then run docker-compose build to make the changes take effect. You can also re-run the setup script by deleting all the persistent data from mastodon-data to create a fresh install, and then building again.
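For reference, the mail-related part of our .env.production ended up along these lines (the variable names come from .env.production.sample; the server and from-address shown here are just illustrative placeholders):

# external domain that resolves to our SMTP server (placeholder value)
SMTP_SERVER=mail.example.com
SMTP_PORT=25
# no credentials needed since the mail server sees the connection as local
SMTP_LOGIN=
SMTP_PASSWORD=
SMTP_FROM_ADDRESS=notifications@organicdesign.pub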

Note: There's a step that runs the chown -R command over the whole Mastodon directory structure, which for some reason takes a very long time to run; just let it be and it eventually ends.

Start the instance[edit]

Then if all has gone well, you can now run the main Mastodon instance with docker-compose up -d which should give something like the following. To stop the instance use docker-compose down. Note that the docker-compose commands must be run from within the mastodon-docker directory.

# docker-compose up -d
Creating network "mastodon-docker_internal_network" with the default driver
Creating network "mastodon-docker_external_network" with the default driver
Creating mastodon-docker_es_1    ... done
Creating mastodon-docker_redis_1 ... done
Creating mastodon-docker_db_1    ... done
Creating mastodon-docker_sidekiq_1   ... done
Creating mastodon-docker_web_1       ... done
Creating mastodon-docker_streaming_1 ... done


You can now see all the containers running with docker ps. The container you can enter or read the logs of for debugging is mastodon-docker_web_1, which is shown in the last column of the docker ps table.

CONTAINER ID  IMAGE                       COMMAND                  PORTS                                NAMES
01d48ec588c0  tootsuite/mastodon:v2.5.2   "/sbin/tini -- bundl…"   3000/tcp, 4000/tcp                   mastodon-docker_sidekiq_1
1aeb232bfd5d  tootsuite/mastodon:v2.5.2   "/sbin/tini -- yarn …"   3000/tcp, 127.0.0.1:4000->4000/tcp   mastodon-docker_streaming_1
6973bc5bb637  tootsuite/mastodon:v2.5.2   "/sbin/tini -- bash …"   127.0.0.1:3000->3000/tcp, 4000/tcp   mastodon-docker_web_1
6335b6ff1f8e  docker.elastic.co/elas...   "/usr/local/bin/dock…"                                        mastodon-docker_es_1
ede8d06a420f  postgres:9.6-alpine         "docker-entrypoint.s…"                                        mastodon-docker_db_1
0ca51e27180c  redis:4.0-alpine            "docker-entrypoint.s…"                                        mastodon-docker_redis_1

Connecting with Nginx[edit]

Now that we have a running Mastodon instance in a container, we need to connect it to our web-server outside the container. This simply involves creating an appropriate server block to connect requests to our Mastodon domain to the ports exposed by the containers. I'm basing my server block on this page of the official documentation, see also this excellent guide by Dave Lane (but note that CSP and XSS headers are added by Mastodon automatically now).
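A heavily trimmed sketch of such a server block follows, based on the official example - the SSL directives, caching headers and the other locations from the official config are left out, and the domain and document root here are our own values:

server {
	listen 443 ssl;
	listen [::]:443 ssl;
	server_name organicdesign.pub;
	root /var/www/domains/mastodon-docker/public;
	# ssl_certificate etc. omitted here

	location / {
		try_files $uri @proxy;
	}

	location @proxy {
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-Proto https;
		proxy_pass http://127.0.0.1:3000;
	}

	location /api/v1/streaming {
		proxy_set_header Host $host;
		proxy_set_header X-Forwarded-Proto https;
		proxy_http_version 1.1;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection "upgrade";
		proxy_pass http://127.0.0.1:4000;
	}
}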

Theming[edit]

The basics of adjusting themes are shown here, but these instructions only cover how to make adjustments to the default theme. We're running the mastodon-light theme, so in our case we created an app/javascript/styles/organicdesign.scss containing our new CSS rules and changed variables etc. The variable settings from your existing theme go at the top before your custom variable changes, and the other CSS files your parent theme originally included go at the bottom after your variable changes; check the Github repo to see what your original theme includes. You can also see all the variables used by your theme and the common application variables in the repo. You may also want to add your own custom CSS rules, which can go before or after the original CSS includes as required. Here's a short example; our full scss file is here.

/* Original theme variable definitions */
@import 'mastodon-light/variables';

/* My new variable definitions */
$classic-secondary-color: #f2f2f9;
$ui-base-color: #f2f2f9;
$ui-highlight-color: #4e507f;

/* Additional CSS imported by the theme */
@import 'application';
@import 'mastodon-light/diff';

For these changes to take effect, you also need to edit the config/themes.yml file and change the mastodon-light entry to point to your newly created custom scss file instead of the original one, as sketched below. After you've made your changes, you'll need to stop the system, rebuild it and bring it up again with the commands that follow.
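Our themes.yml ended up along these lines (a sketch; the paths are relative to app/javascript and any other theme entries are left untouched):

default: styles/application.scss
# ...other theme entries unchanged...
mastodon-light: styles/organicdesign.scss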

docker-compose build
docker-compose down
docker-compose up -d

Images: One slightly annoying thing about the way the Mastodon skin is done is that it uses a lot of img elements instead of using div elements with background images. But you can actually make an image invisible and change it to show only a css background image as shown here by Marcel Shields.

Using the API[edit]

Mastodon comes with a web API so that bots and other apps can interact with instances and their content and users. The main documentation is here. Some calls are publicly available, but others, such as posting a status, require an access token. Although there is a workflow for creating applications via the API, the easiest way is to just log in to the account (or create one) manually, then go to the "development" item in the account settings and add an application. Clicking on the newly created application in the list shows a Client Key, a Client Secret and the API Access Token.
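As a quick sanity check, a status can then be posted with something like the following (the token placeholder is the Access Token shown in the application details, and the instance domain is ours):

curl -s -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
     --data-urlencode "status=Hello from the API" \
     https://organicdesign.pub/api/v1/statuses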

Administration[edit]

The first thing to do is add a cronjob to remove old (default 7 days) remote media from the local cache because it grows huge over time:

0    1 * * * root cd /var/www/domains/mastodon-docker && docker exec mastodon-docker_web_1 bin/tootctl media remove --days=N


You can check the logs of the Mastodon instance by running the docker logs command on the main container; there are --follow, --tail and --timestamps options. See the Docker page for other Docker-specific commands.

docker logs mastodon-docker_web_1


To list the available rake tasks (admin functions):

docker-compose run --rm web rake -T


But note that most of the application-specific tasks have now been migrated into the tootctl utility; do the following to get a list of available tasks:

docker exec -it mastodon-docker_web_1 bin/tootctl help

Upgrade[edit]

See the upgrading section of the documentation. It basically just involves updating the repo and checking out the latest tag while preserving your docker-compose.yml settings (but changing the image versions in it). Then run the db:migrate task too in case any changes have been made to the database schema.

Note: I like to manually back up the changed files (check with git status what's changed; mainly docker-compose.yml and Dockerfile are the important ones to get right), then do a git reset --hard, then fetch and check out the new tag. You can then compare the new config files with the old ones and migrate the necessary changes across. This is better than simply stashing and popping because the configs can undergo a lot of change from version to version.
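Concretely, the update step looks something like this (the tag shown is only an example - use whatever the latest stable tag is):

cd /var/www/domains/mastodon-docker
git status                                # see which local files have changed
cp docker-compose.yml Dockerfile /tmp/    # keep copies of the important ones
git reset --hard
git fetch --tags
git checkout v2.6.1                       # example tag only
# now diff /tmp/docker-compose.yml and /tmp/Dockerfile against the new versions
# and migrate your settings (image versions, volume paths, chown workaround) across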

docker-compose build
docker-compose run --rm web bundle exec rake db:migrate
docker-compose up -d

Backup[edit]

The mastodon-data directory holds everything needed for a backup, but it's important to back up the PostgreSQL database with a proper dump as well, since the files can become corrupted and unusable. The Redis database is also very important, but a dump is automatically maintained in the redis/dump.rdb file in the data directory and is safe to use for live backup; see this post about backing up and restoring Redis databases. Both databases should be backed up at least daily, but the rest of the files can be backed up less regularly.

docker exec mastodon-docker_db_1 pg_dump -Fc -U postgres postgres > dump.pgsql
7za a mastodon-backup.7z dump.pgsql /path/to/data/redis/dump.rdb

Note: To back up the directories, it's probably best to do a docker-compose stop first and a start afterwards, and to do a database restore you'll need to stop everything but the PostgreSQL container.
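A restore would then look something like the following sketch (details may vary between versions, so treat it as a guide rather than a recipe):

docker-compose stop                       # stop all the containers
docker-compose start db                   # bring only the PostgreSQL container back up
docker exec -i mastodon-docker_db_1 pg_restore -U postgres -c -d postgres < dump.pgsql
docker-compose stop db
# restore the mastodon-data files (including redis/dump.rdb) from the backup archive
docker-compose up -d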

Tips & Tricks[edit]

  • To see a list of your muted/blocked users click the triple-dot more menu above the compose toot input. This is not really a tip, but it can be really hard to find!
  • To do a remote unfollow, either use the icon on the notification if they've followed you too, or block/unblock them.
  • Blocking: Blocking someone also stops them from following you, but you have to first follow them to be able to then block them. If you then unblock them again, they're still not following you, but you can still see things from them in your feed via other people, and they can still follow you again later or message you etc.
  • You can follow an account from your own instance (which is useful for sites like friendica which don't provide a public follow link) by using the https://organicdesign.pub/authorize_interaction?acct=user@instance URL format.
  • Currently there is no way to unfollow remote users; you have to block them and then unblock them again (you can see your blocked and muted contacts in the lists option)

Related Articles[edit]

Fediverse bots[edit]

Mastodon tools & resources[edit]

Mastodon plugins[edit]

See also[edit]

LimeSurvey[edit]

Posted by Nad on 24 September 2018 at 17:58
This post has the following tags: Libre software
LimeSurvey is a PHP application aimed at creating and managing surveys and their result sets. It's a great open source replacement for Google Forms which is nothing more than yet another sinister way Google uses to get access to more of people's private information.

Creating surveys[edit]

Surveys can be created, or existing ones modified, from the main page. Surveys are divided into two main aspects in LimeSurvey, the settings and the structure, which are available as the main tabs on the left sidebar of the survey creation screen. Settings are where the form is given a title, description, welcome and thank you message etc. Structure is the actual content of the survey in the form of groups of questions, where each group is effectively a separate page of questions in the final survey. Each question by default expects just a plain text answer, but there are many other answer types available such as numerical, multi-choice, dropdown lists, checkboxes, dates or even file uploads.

You need to add a question group before you can add any questions. You can add all the groups at once and then start adding the questions, or just start with the first group. You can edit the details of any group, or of any question within a group, at any time by clicking on it in the left sidebar when the structure tab is selected. The right panel of the screen is for editing whatever is selected in the left sidebar, and the top bar of the right panel contains the buttons related to it, for saving the changes etc.

When you're editing or creating a question, the main panel on the right is divided into two sides. The left half is the title, description and text of the question itself, and the right half is the answer part, where the answer type (type of input) is specified. After you've entered the question text and selected the input type, you save and close the question. If the question type was a list type, you can edit the list option texts with the "edit answer options" button on the top bar. The top bar is also where you can preview the question you've just made to see if it looks right, or even preview the whole group or survey. Previews open in another tab.

Running surveys[edit]

After you're happy with the survey, you go back to the settings tab in the left sidebar and go to the "Publication and access" option where you can specify various final details about the survey such as how long it's available for and whether it's public or restricted. Finally you click "Activate this survey" which locks it from further changes, gives you the link to your survey and makes it available to the participants.

You can see the data that's been collected at any time from the "Notifications and data" option in the settings tab of the sidebar. You can use the "Export" button in the top bar to export the results as a spreadsheet for final analysis, or, to display the results in table format on screen, click the "Responses" dropdown button in the top bar, select "Responses and statistics" and then "Display responses" from the top bar.

See also[edit]

Red Matrix[edit]

Posted by Nad on 23 June 2013 at 17:11
This post has the following tags: Libre software
Red is kind of like a decentralised social network (along the lines of identi.ca, Friendica, and Diaspora), but they've thrown away the rule book. Red has no concept of "people" or "friends" or "social". Red is a means of creating channels which can communicate with each other and to allow other channels permission to do things (or not). These channels can look like people and they can look like friends and they can be social.

They can also look like a great many other things - forums, groups, clubs, online websites, photo archives and blogs, wikis, corporate and small business websites, etc. They are just channels - with permissions that extend far beyond a single website. You can make them into whatever you wish them to be. You can associate web resources and files to these channels or stick with basic communications. There are no inherent limits. There is no central authority telling you what you can and cannot do. Any filtering that happens is by your choice. Any setting of permissions is your choice and yours alone.

You aren't tied to a single hub/website. If your own site gets shut down due to hardware or management issues or political pressure, the communication layer allows you to pop up anywhere on the Internet and resume communicating with your friends, by inserting a thumb drive containing your vital identity details or importing your account from another server.

Your resources can be access controlled to allow or deny any person or group you wish - and these permissions work across the Red network no matter what provider hosts the actual content. Red "magic-auth" allows anybody from any Red site to be identified before allowing them to see your private photos, files, web-pages, profiles, conversations, whatever. To do this, you only login once to your own home hub. Everything else is, well - magic.

Red is free and open source and provided by volunteers who believe in freedom and despise corporations which think that privacy extortion is a business model. The name is derived from Spanish "la red" - e.g. "the network".

Our installation[edit]

We have a test installation running at red.organicdesign.co.nz. The installation process was a simple standard LAMP application installation: just git clone the code-base and add a few rewrite rules as follows - they recommend using their .htaccess, but we have this functionality disabled and added the following rules to our global Apache configuration instead.

RewriteCond %{HTTP_HOST} ^red\.
RewriteCond /var/www/domains/red%{REQUEST_FILENAME} !-f
RewriteCond /var/www/domains/red%{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /red/index.php?q=$1 [E=REMOTE_USER:%{HTTP:Authorization},L,QSA]
RewriteCond %{HTTP_HOST} ^red\.
RewriteRule (.*) /red/$1 [L]

When I first installed it, the installer pointed out a number of environment options that needed fixing, which was quite straightforward, but for some reason the application wouldn't run, so I left it for a couple of months and then did a git pull to update the code-base and tried again. Something must have been fixed, because now our test installation is up and running :-)

Path bug[edit]

For some reason our installation had a problem whereby any pages in sub-paths such as /profile/foo or /help/intro wouldn't load their CSS or JS, as the base-url setting would become a sub-directory instead of the root of the domain. I isolated this to the setting of $path on line 634 of boot.php, which was basing its value on $_SERVER['SCRIPT_NAME'] to test if the installation was running in a sub-directory. I think perhaps it should be testing $_SERVER['SCRIPT_FILENAME'] instead, but I'm not sure. I've contacted the lead developer with this info and just forced $path to an empty string for our installation, since we're not in a sub-directory.

Articles about Red[edit]

See also[edit]

Nextcloud[edit]

Posted by Nad on 18 February 2018 at 00:06
This post has the following tags: Libre software
Nextcloud is a kind of personal Dropbox solution which is completely free and open source. There are many add-on applications available such as calendar, tasks, chat and collaborative file editing.

Installation[edit]

LOOL.jpg
This is our procedure for installing Nextcloud and LibreOffice Online on a Debian-based server. LibreOffice has included a component that allows it to be served over HTTP since version 5.3, but to use it you need to integrate it with a cloud file system that supports it. Nextcloud, which is a brilliant groupware suite in its own right, supports LibreOffice Online and integrates perfectly with it.

I'm following the instructions created by Collabora and Nextcloud from here, and more specifically the Nginx variation here. I'm documenting here my specific configuration to include the LetsEncrypt and other specific configuration aspects that are out of the scope of those instructions so that we have a more easily reproducible procedure.

I'm using the office.organicdesign.host domain here which you'll need to change for your own purposes.

Set up the server[edit]

Bring the machine up to date and install the following dependencies.

apt install git net-tools apt-transport-https locales-all letsencrypt \
            mariadb-server redis-server nginx python3-certbot-nginx \
            php-fpm php-mysql php-redis php-zip php-gmp php-curl php-mbstring \
            php-xml php-apcu php-imagick php-gd php-intl php-bcmath

Configure the web-server and SSL certificates[edit]

In the Nginx configuration for this site, add a basic block for handling plain HTTP requests as follows. This will allow the LetsEncrypt domain validation requests through, but all other requests will be bounced to their respective HTTPS counterparts.

server {
	listen 80;
	listen [::]:80;
	server_name ~^;
	rewrite ^/\.well-known $uri last;
	return 301 https://$server_name$request_uri;
}


We can now install LetsEncrypt.

cd /var/www
git clone https://github.com/certbot/certbot.git letsencrypt


Then run it to make our certificates; after it has successfully created them, add the command to crontab to be run daily.

letsencrypt/letsencrypt-auto certonly -q --keep --renew-with-new-domains --expand --webroot -w /var/www --agree-tos \
    --email "admin@organicdesign.host" -d office.organicdesign.host


Now create /var/www/nginx.ssl.conf with the following content, which will be included from all SSL blocks. You'll need to replace the certificate paths with the ones that LetsEncrypt created for you. This block uses a set of secure ciphers suggested by SSLlabs, see SSL for more details.

ssl on;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA256:EDH+aRSA:EECDH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS; # SSLlabs
ssl_prefer_server_ciphers on;
ssl_session_timeout 5m;
ssl_session_cache shared:SSL:10m;
ssl_dhparam /var/www/dhparams.pem;
ssl_certificate /etc/letsencrypt/live/office.organicdesign.host/fullchain.pem;
ssl_client_certificate /etc/letsencrypt/live/office.organicdesign.host/chain.pem;
ssl_certificate_key /etc/letsencrypt/live/office.organicdesign.host/privkey.pem;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;


Don't forget to create the strong Diffie-Hellman parameters which we referred to in nginx.ssl.conf.

openssl dhparam -out /var/www/dhparams.pem 2048

Install and configure Nextcloud[edit]

Nextcloud is a "drop-box" style web-application which is completely open-source so you can install it on your own server, and it has built-in integration for working with LibreOffice Online. Nextcloud is PHP so first download the source and unpack it into /var/www/nextcloud, ensure it's accessible by www-data, and then add the Nginx configuration recommended here. In this configuration we need to adjust the domain names and delete the port 80 block since we have an existing one described above to handle LetsEncrypt domain validation requests. Also remove the SSL lines and replace them with an include of the nginx.ssl.conf we made above.

A database and user will need to be created and then you can run through the install by going to the files.organicdesign.host domain. After you've successfully installed Nextcloud, go to the admin updater to check for any problems or optimisations and upgrade to the latest stable version.
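The database and user can be created with something like the following MariaDB commands (the database name, user and password here are just examples):

mysql -e "CREATE DATABASE nextcloud"
mysql -e "GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud'@'localhost' IDENTIFIED BY 'CHANGE_ME'"
mysql -e "FLUSH PRIVILEGES"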

  • I changed the URL in config/config.php to https
  • I noticed the install had used the wrong DB user once so may need editing in config.php
  • I removed the upstream block and used unix:/run/php/php7.0-fpm.sock directly for the fastcgi_pass parameter

Install LibreOffice Online[edit]

We now need to add a reverse-proxy block into our Nginx configuration, you can use the block from the instructions here (see below for Docker instructions). Adjust the server_name parameter to the domain you're using and replace the SSL directives with an include of the nginx.ssl.conf we created above.

Note: If you're using the same domain for both Nextcloud and LOOL, you need to put the LOOL Nginx server block contents inside the Nextcloud block, see this for example.

From the Docker image[edit]

By far the simplest method is to use Docker. Run through the Docker installation for details, which basically involves simply pulling and running it with the following syntax.

docker pull collabora/code
docker run -t -d -p 127.0.0.1:9980:9980 -e 'aliasgroup1=https://office.organicdesign.nz:443' --restart unless-stopped --cap-add MKNOD collabora/code

Using the Debian package[edit]

Although using Docker is by far the simplest method, Docker is heavy and you may prefer to install loolwsd from a native Debian package to reduce dependencies and have it running in the native environment. Most of this is just taken directly from the start script in the Docker image source.

echo "deb https://collaboraoffice.com/repos/CollaboraOnline/CODE /" >> /etc/apt/sources.list.d/collabora.list
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6CCEA47B2281732DF5D504D00C54D189F4BA284D
apt update
apt install loolwsd code-brand


If you want to add dictionaries for various languages

apt install collaboraoffice5.3-dict* collaboraofficebasis5.3*
mkdir -p /usr/share/hunspell
mkdir -p /usr/share/hyphen
mkdir -p /usr/share/mythes
mkdir -p /opt/lool/systemplate/usr/share/hyphen
for i in `find /opt/collaboraoffice5.3/share/extensions/ -name hyph*.dic`;do cp $i /opt/lool/systemplate/usr/share/hyphen;done
for i in `find /opt/collaboraoffice5.3/share/extensions/ -name hyph*.dic`;do cp $i /usr/share/hyphen;done
cp /opt/collaboraoffice5.3/share/extensions/dict-en/en_US.* /usr/share/hunspell
cp /opt/collaboraoffice5.3/share/extensions/dict-en/en_GB.* /usr/share/hunspell
cp /opt/collaboraoffice5.3/share/extensions/dict-pt-BR/pt_BR.* /usr/share/hunspell
apt remove --purge collaboraoffice5.3-dict*
rm -rf /var/lib/apt/lists/*


This is needed to fix a domain resolving bug:

rm /opt/lool/systemplate/etc/resolv.conf
ln -s /etc/resolv.conf /opt/lool/systemplate/etc/resolv.conf


Copy the LetsEncrypt certs (replace with your own cert path):

cp /etc/letsencrypt/live/office.organicdesign.host/privkey.pem /etc/loolwsd/key.pem
cp /etc/letsencrypt/live/office.organicdesign.host/cert.pem /etc/loolwsd/cert.pem
cp /etc/letsencrypt/live/office.organicdesign.host/chain.pem /etc/loolwsd/ca-chain.cert.pem
chown lool:lool /etc/loolwsd/*.pem


Change the host configuration settings from localhost to your Nextcloud domain (you can just edit loolwsd.xml directly if you prefer):

perl -pi -e "s/localhost<\/host>/files.organicdesign.host<\/host>/g" /etc/loolwsd/loolwsd.xml


Then finally run the daemon as the lool user in the background (you may also like to make a @reboot crontab entry for it as well).

sudo -u lool loolwsd --version --o:sys_template_path=/opt/lool/systemplate --o:lo_template_path=/opt/collaboraoffice5.3 \
    --o:child_root_path=/opt/lool/child-roots --o:file_server_root_path=/usr/share/loolwsd &

Finishing up[edit]

Now you can enable the Collabora Online application in your Nextcloud from settings/apps, then go to Collabora Online in the administration section of settings and set the URL of your application to https://office.organicdesign.host. Now you should be ready to test out creating and editing some office documents in your files!

Mail: You can configure and test your site's mailout settings from settings/administration/additional settings. Set your email to something external to the server first so that the test messages are a proper test, since sometimes you may have settings that work for local addresses but not for external ones. For example, our server setup works for external addresses if encrypted and authenticated SMTP is used, but that only works for connections coming in from the outside; since Nextcloud is on the same host, unencrypted SMTP connections on port 25 should be used instead.

Registration: After that, you may want to install the registration add-on so that users can register themselves. You can configure it from the same place as the mail settings and define a default group and whether the accounts need admin approval - if so, the account is initially disabled and needs to be enabled by an admin.

Upgrading[edit]

Upgrading Nextcloud can easily be done by simply following the instructions in the site when an upgrade is due. But it can be a good idea to opt to use the CLI updater when prompted. To do this, open a shell and navigate to the Nextcloud installation location, then check for upgrades:

sudo -u www-data php occ update:check


If there are any updates to do, run the updater as follows, and select "Y" when it asks if you'd also like to run occ update.

sudo -u www-data php updater/updater.phar

You should repeat the process of checking for upgrades after this, as sometimes, if the installation is quite old, upgrading leads to further available upgrades.


If there are any problems, try running it again; it may have removed a problem extension and succeed on the second run. After successfully running the upgrade, turn maintenance mode off:

sudo -u www-data php occ maintenance:mode --off

To upgrade Collabora, simply stop the Docker container (and be sure to stop any other containers using port 9980), pull the collabora/code repo, and if a new image version was retrieved, delete the old container and image, and start the new one with the docker run command shown above in the installation. For some reason it sometimes takes an hour or so before you can start editing documents with it in Nextcloud, so if it's not working after the upgrade, come back to it after a couple of hours and check then.
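In practice the upgrade looks something like this sketch, reusing the docker run command from the installation above (the container and image IDs are placeholders):

docker ps | grep collabora/code          # find the running container's ID
docker stop <CONTAINER_ID>
docker pull collabora/code
# if a newer image was pulled, remove the old container and image, then start a fresh one
docker rm <CONTAINER_ID>
docker rmi <OLD_IMAGE_ID>
docker run -t -d -p 127.0.0.1:9980:9980 -e 'aliasgroup1=https://office.organicdesign.nz:443' --restart unless-stopped --cap-add MKNOD collabora/code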

Changing domain name[edit]

To change the domain name of a Nextcloud installation which is running Collabora, there are five places that need to be updated (a rough command sketch follows the list):

  • The sites SSL certificates
  • The web-server configuration
  • Nextcloud's config.php file
  • Remove the collabora/code container and run a new one using the new domain parameter
  • Set the new Collabora URL in the Nextcloud site configuration.
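As a rough sketch only (the new domain and container IDs are placeholders, and your certificate and config paths may differ), the five steps map to something like:

# 1. issue a certificate for the new domain
letsencrypt/letsencrypt-auto certonly -q --keep --expand --webroot -w /var/www --agree-tos \
    --email "admin@organicdesign.host" -d office.newdomain.example
# 2. update server_name and the certificate paths in the Nginx config, then reload
service nginx reload
# 3. update the domain in Nextcloud's config/config.php (trusted_domains, overwrite.cli.url)
# 4. replace the Collabora container with one that uses the new domain
docker rm -f <OLD_CONTAINER_ID>
docker run -t -d -p 127.0.0.1:9980:9980 -e 'aliasgroup1=https://office.newdomain.example:443' \
    --restart unless-stopped --cap-add MKNOD collabora/code
# 5. set the new Collabora URL under the Collabora Online section of the Nextcloud settings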

Developing a custom add-on[edit]

I'd like to make an add-on which caters for some of the things we need to do but that we've not found available within the current selection of addons. The idea is to be able to create custom record types and instances of those records. Creating a new record type would involve defining a list of fields and their data-types (which in turn requires a list of such types and their input/output contexts) and the design of a form for creating or modifying instances of those records.

  • The ability for the public to create and edit their own instances of records would allow for information gathering like Google Forms
  • Searching and reporting would be necessary and they could be exported to CSV or spreadsheet
  • Deck might be a good existing addon to base it on since it already has boards, stacks and cards, this could map for example to record-types, record-instances and fields

Troubleshooting[edit]

Check in the "office" item in the "administration" sidebar of your Nextcloud's "settings". It tells you there if the server-side Collabora is accessible. You can also manually check the info response from /hosting/capabilities.

Tips & Tricks[edit]

  • Always leave a document by closing it, rather than just closing the browser or tab as ghost users can cause problems
  • Only add registered people as participants in calendar events; an edit won't save if there are unregistered participants added
  • It seems that it's best to use cron for background maintenance rather than Ajax updates (just run php -f .../cron.php as the web server user every 15min)
  • You can get users' email addresses from the accounts table, which contains a JSON string of all the account info, with the original display name in the key column
  • Maintenance can be done from shell, use sudo -u www-data php occ to see all the commands and options

Changing default preferences[edit]

The online version of Libre Office has a very simplified user interface with very few adjustable options, but since it's using the actual Libre Office code internally those options can still be set from within the server configuration (this method does not work if you're running LOOL from the Docker image).

The configuration file containing the preferences is /etc/coolwsd/coolkitconfig.xcu which is a very simplified version of the desktop Libre Office configuration file usually found in ~/.config/libreoffice/4/user/registrymodifications.xcu. We can find the preferences we want to adjust in the desktop version of the file and copy those same entries across to the LOOL version.

Note that if you're running the docker container rather than a native installation, you can edit files in the container using the docker cp command, e.g.

docker cp <CONTAINER_ID>:/etc/coolwsd/coolkitconfig.xcu coolkitconfig.xcu
nano coolkitconfig.xcu
docker cp coolkitconfig.xcu <CONTAINER_ID>:/etc/coolwsd/coolkitconfig.xcu

For example, I wanted to disable the auto-correct functionality that fixed capitalisation of the first two letters of words. In the desktop version when I disable this functionality from the interface the following entry in the configuration is updated:

<item oor:path="/org.openoffice.Office.Common/AutoCorrect"><prop oor:name="TwoCapitalsAtStart" oor:op="fuse"><value>false</value></prop></item>

I then added this row to the /etc/coolwsd/coolkitconfig.xcu file, and after restarting the service, LOOL recognised the new preference and disabled the functionality properly.

Using cell protection[edit]

You may have noticed in the spreadsheet that the "cell protection" tab is still available in the "format cells" dialog box, but you can't use it because the "protect sheet" option is not available in the "tools" menu.

You can however download your sheet, enable the cell protection using the offline version of Libre Office, and then upload it again, and you'll find that the cell protection works as it should. To modify or un-protect a protected cell, you'll need to go back to the offline version though.

Custom CSS[edit]

I like to enable the "Custom CSS" app, which gives you the ability to add your own CSS rules in the theming section of settings. Here are my custom rules:

/* Highlight the background of the current day in the calendar */
#fullcalendar td.fc-today {
  background-color: #FFFFDE;
}

/* Make the notifications more obvious */
div#notification {
  background-color: #FFFFDE;
  font-size: 200%;
  padding: 20px 40px;
  font-weight: bold;
  margin-top: 100px;
  border-radius: 10px;
  border: 1px solid black;
}

Sharing a Nextcloud calendar with Google calendar[edit]

The intuitive way of sharing a Nextcloud calendar using the Webdav link does not work with Google calendar (or Thunderbird for that matter). But generating an export link using the following procedure works in both cases, although Google is extremely slow to synchronise external calendars, taking 1-2 days.

  • Click the share link of one of your calendars (e.g. the OD one) that you want to share
  • This opens a share link option, click the + to the right of it
  • Then you'll see a ... menu
  • Go into that and click copy "subscription link"
Nextcloud calendar sharing.png

The link will now be in the clipboard ready for pasting. This link is not the usual Dav link and the data that is returned by it is acceptable to Google and Thunderbird calendars.


To use the link to retrieve your calendar into a Google calendar:

  • In the side menu go to "other calendars"
  • Click +
  • Select "from URL"
  • And paste your link in

Errors and issues[edit]

Memcache \OC\Memcache\APCu not available[edit]

This error started showing up for the execution of cron.php. It can be mitigated by enabling APCu for the call by adding --define apc.enable_cli=1 to the php command in the crontab.
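For example, with Nextcloud installed in /var/www/nextcloud as above, the 15-minute cron entry from the tips section ends up looking something like:

*/15 * * * * www-data php --define apc.enable_cli=1 -f /var/www/nextcloud/cron.php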

See also[edit]

Diaspora[edit]

Posted by Nad on 1 January 2011 at 10:12
This post has the following tags: Libre software
Diaspora lets you sort your connections into groups called aspects. Unique to Diaspora, aspects ensure that your photos, stories and jokes are shared only with the people you intend.

You own your pictures, and you shouldn’t have to give that up just to share them. You maintain ownership of everything you share on Diaspora, giving you full control over how it's distributed.

Diaspora makes sharing clean and easy – and this goes for privacy too. Inherently private, Diaspora doesn’t make you wade through pages of settings and options just to keep your profile secure.

See also[edit]

OpenCobalt[edit]

Posted by Nad on 13 July 2008 at 06:34
This post has the following tags: Libre software
The Croquet Project is an international effort to promote the continued development of Croquet, a free software platform and a P2P network operating system for developing and delivering deeply collaborative multi-user online applications. Croquet was specifically designed to enable the creation and low-cost deployment of large scale metaverses.

Implemented in Squeak (An open source community developed implementation of Smalltalk), Croquet features a network architecture that supports communication, collaboration, resource sharing, and synchronous computation among multiple users. It also provides a flexible framework in which most user interface concepts can be rapidly prototyped and deployed to create powerful and highly collaborative multi-user 2D and 3D applications and simulations. Applications created with the Croquet Software Developer's Kit (SDK) can be used to support highly scalable collaborative data visualization, virtual learning and problem solving environments, 3D wikis, online gaming environments (MMORPGs), and privately maintained/interconnected multiuser virtual environments.

Cobalt[edit]

Cobalt (download page) is an emerging multi-institutional community software development effort to deploy an open source production-grade metaverse browser/toolkit application built using the Croquet SDK. Cobalt was made available under the Croquet license as a pre-alpha build in March 2008.

Open Cobalt virtual machine[edit]

The Open Cobalt virtual machine is essentially the same as the Squeak virtual machine. It acts as an interface between Squeak code and the microprocessor. It's written in Slang, a functional subset of Smalltalk that can be translated into standard C. Squeak uses the C language as a cross-platform equivalent of assembly language. Since Slang is a subset of Smalltalk, the Squeak virtual machine can be edited and debugged by running it in Squeak itself. The virtual machine can also be extended with plug-ins that are written in either C or Slang. These are used in Squeak for such things as playing media files and for supporting Squeak's built-in public key encryption abilities.

Squeak[edit]

See the Squeak article for more.

Our plans for a 3D environment[edit]

We think the ultimate interface is the Geoscope, and Croquet perfectly fits the requirements that the Geoscope idea has for a 3D environment, such as being OO, prototype-based, P2P, self-contained, changeable from within etc.

To begin using a new environment in our system, we must first incorporate its installation into our packages. Next we must develop a means of having the textual content available from both the wiki and the 3D environment. Also, having snapshots of the 3D environment maintained in the wiki would be good.

Running on Ubuntu[edit]

I downloaded the source from the download page, unpacked it and ran the Croquet.sh file from a shell window (so I would see any output messages).

First it failed to find the audio, asking for OpenAL, which I downloaded the source for, unpacked, configured and make/make install'd. It still failed to find audio.

Second when I drag the home page (cobalt ball) or any other demo onto the workspace, it freezes saying "waiting for connection" forever. The shell shows a message saying that libuuid.so.1 was not found, but this library is definitely installed.

  • Seems to be that it expects the file to be in /usr/local/lib not just /lib, but symlinking to either of the existing versions gives a new error of libuuid.so.1: wrong ELF class: ELFCLASS64.

To check out[edit]

  • Emotions, rules and actions tree for sims

See also[edit]

Distributed TiddlyWiki[edit]

Posted by Infomaniac on 17 November 2011 at 03:15
This post has the following tags: Libre software
Legacy: This article describes a concept that has been superseded in the course of ongoing development on the Organic Design wiki. Please do not develop this any further or base work on this concept; it is only useful as a historic record of work done. You may find a link to the currently used concept or function in this article; if not, you can contact the author to find out what has taken the place of this legacy item.
A distributed wiki that might work

Frustrated with the progress of *Diaspora, I kept coming back to an idea: why can't a TiddlyWiki be distributed via BitTorrent? I've been wanting to start a blog that is serverless and I thought there must be a way using existing technologies.

The foundation of my blog would be some implementation of TiddlyWiki, probably mGSD (previously known as MonkeyGTD or D3 ("D-Cubed")), which I tried before and found to be quite impressive.

TiddlyWiki is essentially an index html page (it uses the Single Page Application architecture) that is loaded with ingenious JavaScripts that load flat data files, called tiddlers. As I understand it, every time a new page is created, it is stored in a new tiddler, and every time a page is edited, a new tiddler file is created that, rather than replacing the previous version, supersedes it. (The application only needs create access, not modify or delete.) Thus, each page has a version history.

It is the index, or base file and its collection of tiddlers that need to be distributed.

Distribution[edit]

GitHub?[edit]

My first idea was to distribute this collection of files using GitHub, which nicely deals with files added to the collection. But one must have a GitHub account and set up crypto keys to use it - not really easy for the marginally-technical person. Worse, after I upload or sync my files to GitHub, in order for anyone to read my wonderful blog/wiki, they too must set up a GitHub account, which is really expecting too much of the average reader. Not only that, but this wikiblog will only be found by search engines as a project directory on GitHub, with no easy way for someone to discover what it's all about.

According to Git is the next Unix:

Git was originally not a version control system; it was designed to be the infrastructure so that someone else could build one on top. And they did; nowadays there are more than 100 git-* commands installed along with git. It's scary and confusing and weird, but what that means is git is a platform. It's a new set of nouns and verbs that we never had before. Having new nouns and verbs means we can invent entirely new things that we previously couldn't do.

Git is a new kind of filesystem, and it's faster than any filesystem I've ever seen: git checkout is faster than cp -a. It even has fsck.

Git stores revision history, and it stores it in less space than any system I've ever seen or heard of. Often, in less space than the original objects themselves!

Git uses rsync-style hash authentication on everything, as well as a new "tree of hashes" arrangement I haven't seen before, to enforce security and reliability in amazing ways that make the idea of "guaranteed identical every time" not something to strive for, but something that's always irrevocably built in.

Git names everything using globally unique identifiers that nobody else will ever accidentally use, so that being distributed is suddenly trivial.

apenwarr

Github offers gists, issues, and a wiki, though it's not clear that these are part of the DRCS (they don't seem to ever be copied to the local file system with the source); maybe they are stored only on the website. In any case, Git can be used as a distributed datastore for a wiki. Some people have already thought of the idea:

Git-wiki[edit]

is a wiki that relies on git to keep pages' history and on Sinatra (a Ruby DSL for quickly creating web applications with minimal effort) to serve them.

I wrote git-wiki as a quick and dirty hack, mostly to play with Sinatra. It turned out that Sinatra is an awesome little web framework and that this hack isn't as useless as I first thought since I now use it daily.

However, it is definitely not feature rich and will probably never be because I mostly use it as a web frontend for git, ls and vim.

If you want history, search, etc. you should look at other people's forks.
sr

  • most active fork appears to be olelo

Cappuccino Github Issues[edit]

On Cappuccino demos is a project called GitHub Issues :

Cappuccino front-end for GitHub Issues. Available as a website and as a NativeHost desktop app, available in-app as a download.
  • If I understand correctly, the app synchronizes with Github issues, which I believe is a distributed version control filesystem. The essential features of editing, commenting, and tagging documents are all there. hmmm...
  • Github gists are a simple way to share snippets and pastes with others. All gists are git repositories, so they are automatically versioned, forkable and usable as a git repository.

BitTorrent[edit]

Far more people know how to use a bittorrent client - at least for downloading - than know how to use GitHub. A very small subset of bittorrent users know how to create a torrent file and upload it to a tracker on a torrent index site. Not only that, but trackers and torrent files are quickly being replaced by trackerless sites using magnet links, and many bittorrent clients can now distribute and search for magnet links on a DHT (and, for the Vuze client, a special messaging network called PEX).

The advantage of magnet links is that there is no .torrent file to download; it is simply a URL specifying the hash of the file and its title. Indexing sites that use magnet links don't have to store .torrent files and use less bandwidth. Even better, magnet links can be stored in the Mainline DHT, and all major bittorrent clients can search the distributed database directly, without the need for an indexing site like TPB. Their only downside is that the links are long and ugly, and not easy to read or share.

The average web surfer or blogger does not understand all this, and the amount of maintenance to run a site this way is significant. And merely making a torrent file available does not a site make. Unless one knows where to look, readers will never find it, never download the torrent, and never read the site.

Also, since a torrent is a static file that describes static content, it is not possible to add new tiddlers to an existing torrent - this changes the hash of the file collection, and necessitates creating a whole new torrent, and somehow revoking the old version. This problem also applies to magnet links, since they are a hash of the entire distribution, not its individual files.

This means that it would be necessary to create a new .torrent file or magnet link for every single tiddler, and somehow associate it with the rest of the dynamic distribution (which is not a challenge for GitHub) -- perhaps using a common namespace in the magnet links sent to the DHT. But using a plain bittorrent client won't do; the reader would continually have to find a way to download the newest tiddlers to keep their mirror of the site up-to-date.

There has to be an easier way. To begin with, what is needed is browser integration with Bittorrent.

Vuze[edit]

Vuze (formerly known as Azureus) is essentially a bittorrent client with a built-in browser. I don't know the specifics of its data structure, but it is essentially a multi-page site whose content indexes videos and their metadata, and then efficiently loads videos (and apparently pre-fetches previews on the current content page). Vuze by default uses an incompatible DHT, but a plugin is available that integrates it with the Mainline DHT. It can also use magnet links. However, Vuze is proprietary, and although it is possible to upload videos, it is not clear (to me) how one goes about creating html content for the Vuze network, if that is even possible.

But the concept shows that it is possible to create a distributed site, or at least, distributed video content that is indexed in the DHT, with existing technology.

Opera[edit]

Opera is another proprietary browser known for its speed - almost as fast as Google Chrome. It's innovative in a quirky sort of way; for example, it has desktop widgets that are familiar to Mac and Windows 7 users. It does not support all the nifty plugins and extensions people have come to expect on Firefox, but the more important ones, like NotScript, are available, and there are many cool extensions, for example Facebook integration and real-time site translation. Opera also includes a mail client - lack of integrated email support is a common complaint from TiddlyWiki users using it for GTD.

Another nicety: online bookmark synchronisation, accessible from anywhere.

Version 9 introduced a bittorrent client. At first, most people laughed at the idea, unable to imagine any possible use for a built-in bittorrent client. But downloading a torrent really is transparent and hassle-free for those who are still bittorrent newbies. It is even possible to integrate Opera with Transmission. Opera can also use magnet links, although some websurfing is often required to get them working. It also claims to be faster and use less memory on pages that are heavy on JavaScript - a plus for our JavaScript-based TiddlyWiki.

Lastly, Opera Unite is a built-in personal webserver that lets the user host content, share files, create and share photo albums, chat with live messenger and webcam, and even stream a media library. Unite includes a community proxy that forwards your subdomain to your Unite server, or you can use a dynamic DNS plugin instead. Pretty attractive and powerful, if you don't mind the proprietary technology.

At the end of the day, Opera has almost everything needed to create a personal, self-hosted site and seamlessly download bittorrent files. It seems easy enough to host a TiddlyWiki using Unite, but this is not distributed; it depends on Opera's proxy service.

What is missing is a way to manage new tiddlers, calculate their hashes, create magnet links, and upload them into the Mainline DHT. Opera apparently has no mechanism to create new torrents at all.

  • A utility script called Magnet Catcher is available for Opera (more below). This could be extremely useful in creating a distributed TiddlyWiki, because although it does rely on .torrent files, it makes friendly magnet links that eliminate the need to download the .torrent files, making it possible to (manually) replace torrent links with magnet links in local content.
  • Unite-tracker aims to be a Bittorrent tracker service for Opera Unite.
    • Support for serving/tracking existing .torrent files
    • Support for creating .torrent files from folders/individual files
    • Can track any torrent that decides to use this service's announce URL, or can be restricted to only track .torrents that it serves up.
    • Provides a list of all peers connected to the tracker, and some information about them.
    • Tracker supports a scrape URL for gathering information about the tracked torrents.
    • Update check, with direct link to latest version (supports both stable/unstable release types)
    • Wiki documentation was last updated in 2009.

If Unite-tracker works with the current version of Opera, this is almost all the nuts and bolts needed to close the upload-download cycle of distributed web content, as long as an accessible page (primarily the base index of the TiddlyWiki) provides a link that references the index generated by the tracker.

However, Opera is ultimately proprietary. What is needed are open-source solutions that can be maintained and improved...

Firefox Plugins[edit]

Magnet Catcher identifies torrent links on a page and automatically creates and adds a magnet link next to the torrent link. Simply click the magnet icon to get the magnet link.

Magnet Catcher also does away with the need to click on a torrent description in a search result page in order to download the torrent. (Apparently it fetches the metadata from the torrent file in the background). The magnet links are displayed directly on the browse pages next to the torrent titles.

Magnet Catcher works on almost any web site.

Resources and Addons To Make BitTorrent Magnet Life Easier

  • Mainline DHT Plugin for Vuze
  • Magnet Tracker - a handy script that scans a web page looking for torrent hashes. The script then displays a window in the bottom right of a compatible browser window which allows the user to download a Magnet link. Magnet Tracker supports many of the main torrent sites and even offers functionality on Wikileaks in response to the site’s adoption of the technology last year (a minimal sketch of this hash-scanning idea follows the list).
  • Magnet Catcher - Continuing on the theme, Magnet Catcher strips the concept of adding Magnet links to a page right back. This script adds Magnet links straight to the main search page of a site containing torrent hashes, with no need to click through to the torrent details page.
  • Magnet Link Generator - If you already know the hash value of the material you want to download from BitTorrent, this basic webpage will convert it into a Magnet link.
  • Mgnet.me – The Magnet URI shortening service. One of the downsides to Magnet links is that they can be very long and therefore difficult to share.
    • Crucially they often have too many characters to be shared via Twitter, they are not clickable in IM apps like GTalk and MSN, and can be unfriendly on the eye. The Mgnet.me service changes all that.
    • Introduced earlier this year, Mgnet.me is a shortening service, much like Bit.ly or TinyURL, designed specifically to convert Magnet URIs into shorter and more manageable links. It also provides HTML code and a feature to post a newly shortened Magnet link directly to Twitter.
    • Depends on centralised service.

All of the above tools and plugins suggest that it is currently possible to create a distributed wiki with very little new code and integration work.

Bittorrent Streaming - the future[edit]

Sneak peek: BitTorrent expands live streaming tests - BitTorrent Live is a whole new P2P protocol for distributing live streamed data across the internet without the need for infrastructure, and with a minimum of latency. Its inventor wrote the new protocol from scratch because, he said, the BitTorrent protocol itself simply introduces too much latency to be a viable live streaming solution. The tests have so far (as of Oct 2011) been restricted to “simple pre-recorded content loops to test latency and audio/visual sync.”

I tried it, and either it didn't work on Ubuntu, or I did not know how to use it properly. That's because there is no documentation.

This is likely to be commercial closed-source. Still, it's interesting.

Mechanism for updating tiddler index via RSS subscription[edit]

A Vuze plugin called Distributed Database Trusted Feed can act as a publisher or a subscriber to content.

As a publisher it takes a source of content (either a local file or a webpage) and makes a copy of this available as a torrent. It also makes a descriptor to this available for you to share with other people so they can subscribe to the content and get their own copy of it. The descriptor includes a public key so that when subscribers download the content they are safe in assuming that it came from the publisher. The plugin periodically scans the shared resource and will make a newer copy available to subscribers if it changes.

As a subscriber it takes a publication descriptor (for example a magnet link to the publish descriptor created by the publisher) and periodically downloads the latest content associated with it. It makes the content available via a local http port so the subscriber can easily consume it.

  • It remains to be seen whether this technology exists for other bittorrent clients.
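As a rough illustration of the subscriber side described above (not the plugin's actual interface - the port and path here are invented, and Node 18+ is assumed for the built-in fetch): a small script that periodically pulls the content the plugin exposes on its local HTTP port and keeps a local copy when it changes.

import { createHash } from "node:crypto";
import { writeFile } from "node:fs/promises";

// Hypothetical local endpoint; the real plugin's port and path may differ.
const FEED_URL = "http://127.0.0.1:8888/feed/content.html";
let lastDigest = "";

async function poll() {
  const body = await (await fetch(FEED_URL)).text();
  const digest = createHash("sha1").update(body).digest("hex");
  if (digest !== lastDigest) {
    lastDigest = digest;
    await writeFile("mirror.html", body); // keep an up-to-date local mirror
    console.log("content updated:", digest);
  }
}

setInterval(poll, 60_000); // check once a minute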


Tribler distributed channel subscriptions (3/01/2010)

Tribler released a beta of its Bittorrent client with a new feature that the researchers behind the project have dubbed P2P moderation. The idea in a nutshell is that users can aggregate channels and content and distribute them through the DHT. From the official announcement:

"In Tribler V5.2 every user can start their own "Channel" to publish torrents. When people like your torrents you become popular and essentially become the owner of an Internet TV channel. You can moderate this RSS-like stream of torrents. This feature is designed to stop the flow of spam in P2P bittorrent, without the requirement of any server."

Channels can be pre-populated with an existing RSS feed, or personally aggregated by manually adding torrent files. The client lists a number of popular channels and also offers the option to search for channels.

However, the search seems to be restricted to the actual channel name, which makes it impossible to find a channel by searching for the content you're looking for. Users also can't add any description, tags or artwork to their channels. Add to this the fact that I didn't even find an easy way to rename a channel, and you'll see why this is still a pretty experimental feature.

The idea itself of course isn't really new: The original eDonkey client already included the ability to publish collections of files, and Vuze users have been able to publish distributed feeds through the Distributed Database Trusted Feed plug-in and the RSS Feed Generator plug-in since 2008.

Nodewiki[edit]

Nodewiki is wiki software created using Node.js. It uses Redis for data storage, talking to it via the Node Redis client. Page markup is written using Showdown, a Markdown implementation written in JavaScript.
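I haven't looked at Nodewiki's actual code, but the stack it describes is easy to picture - something along these lines (the key names and page names here are my own invention, and it assumes an ES module so top-level await works):

import { createClient } from "redis"; // Node Redis client
import showdown from "showdown";      // Markdown -> HTML converter

const redis = createClient();         // defaults to localhost:6379
const converter = new showdown.Converter();

// Store a page's Markdown source under a simple key.
async function savePage(name: string, markdown: string) {
  await redis.set(`wiki:page:${name}`, markdown);
}

// Fetch the source and render it to HTML with Showdown.
async function renderPage(name: string): Promise<string> {
  const markdown = (await redis.get(`wiki:page:${name}`)) ?? "*page not found*";
  return converter.makeHtml(markdown);
}

await redis.connect();
await savePage("HomePage", "# Hello\nThis page lives in **redis**.");
console.log(await renderPage("HomePage"));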

fanout.js[edit]

fanout.js - a simple and robust fanout pubsub messaging server for node.js

See also[edit]

Vuze[edit]

Transmission[edit]

Posted by Nad on 10 September 2008 at 11:35
This post has the following tags: Libre software
Transmission is the default P2P torrent downloading application that comes with Ubuntu.

Starting and stopping on schedule[edit]

To run transmission from the crontab (/etc/crontab), it must be associated with a display as in the following example. The first line starts transmission at 2 am, and the second stops it at 8 am. Edit the crontab with sudo nano /etc/crontab, and add the following lines (replace username with your Linux username).

0 2 * * * username transmission --display=:0
0 8 * * * username killall -9 transmission

Now press CTRL + O and then hit return to save your changes to the file.

Quit option removed[edit]

The --quit option was removed because the preferred way to run Transmission in the background is as a daemon. See their wiki for more information.

Running on a server[edit]

transmission-daemon is a daemon-based Transmission session that can be controlled via RPC commands from transmission's web interface or transmission-remote. transmission-remote is a remote control utility for Transmission and transmission-daemon. By default, transmission-remote connects to the Transmission session at localhost:9091. Other sessions can be controlled by specifying a different host and/or port.
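To get a feel for the RPC interface that transmission-remote and the web UI use, here is a small sketch that lists torrents through the daemon's JSON endpoint on the default localhost:9091, assuming no RPC authentication is configured (and Node 18+ for the built-in fetch):

// Transmission's RPC handshake: the first request is rejected with 409 and an
// X-Transmission-Session-Id header, which must be echoed back on later requests.
const RPC_URL = "http://localhost:9091/transmission/rpc";

async function rpc(method: string, args: object = {}, sessionId = ""): Promise<any> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "X-Transmission-Session-Id": sessionId },
    body: JSON.stringify({ method, arguments: args }),
  });
  if (res.status === 409) {
    // Retry once with the session id the daemon handed back.
    return rpc(method, args, res.headers.get("X-Transmission-Session-Id") ?? "");
  }
  return res.json();
}

const reply = await rpc("torrent-get", { fields: ["id", "name", "percentDone"] });
for (const t of reply.arguments.torrents) {
  console.log(`${t.id}  ${t.name}  ${(t.percentDone * 100).toFixed(1)}%`);
}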

See also[edit]

Tonika[edit]

Posted by Infomaniac on 26 June 2011 at 16:13
This post has the following tags: Libre software
Organic security: A (digital) social network, which (by design) restricts direct communication to pairs of users who are friends, possesses many of the security properties (privacy, anonymity, deniability, resilience to denial-of-service attacks, etc.) that human societies implement organically in daily life. This is the only known decentralized network design that allows open membership while being robust against a long list of distributed network attacks. We call a digital system with such design an organic network and the security that it attains for its users — organic security. Organic networks are extremely desirable in the current Internet climate, however they are hard to realize because they lack long-distance calling. Tonika resolves just this issue.

Long-distance calling: At its core, Tonika is a routing algorithm for organic networks that implements long-distance calling: establishing indirect communication between non-friend users. Tonika is robust (low-latency, high-throughput connectivity is achieved in the presence of significant link failures), incentive-friendly (nodes work on behalf of others as much as others work for them), efficient (the effective global throughput is close to optimal for the network's bandwidth and topology constraints) and real-time concurrent (all of the above are achieved in a low-latency, real-time manner in the presence of millions of communicating parties).

Some application areas: Internet (bandwidth) neutrality. Freedom, no-censoring and no-bias of speech on the Internet. Scalable open Internet access in all countries. User ownership of data and history in social applications. Cooperative cloud computing without administration. Etc.

Robust: more robust than most other p2p networks, forming a strong defence against the Sybil attack.

Installing on Debian/Ubuntu[edit]

Haven't been able to install it so far - it needs to be compiled from source, and it's written in the obscure "Go" language which is not noob-friendly. I installed Go from these instructions, which resulted in a fatal error but seemed to pass the execution test anyway, so I carried on with the Tonika installation by checking it out from source and following the instructions in its README file. It gave many errors with path and environment variable problems; I tried fixing the first few and then decided it's too alpha to get into just yet and bailed.

External links[edit]

See also[edit]