The Hidden Cost of the Raspberry Pi (and other “cheap” SBCs)

The Raspberry Pi and other small single board computers (SBCs) have really taken off in the past few years, driven by the burgeoning wave of development, both commercial and (mainly) hobbyist, in the Internet of Things (IoT) arena.

Now, the Raspberry Pi (I am focusing on the RPi here because it kicked off the whole shebang in a big way; small SBCs existed before then, but they were not as widely available or used) was never intended to be an IoT board. It was originally intended to be used to teach programming to children. The success of this original project (with over 5 million, yes that is 5,000,000, sold) has not only spawned a myriad of projects but also a whole bunch of clones and similar devices looking to capitalize on the project’s success.

With the hobbyist community getting hold of these devices and putting them into various projects, one has to question their true cost. The boards themselves, for those who do not know, cost US$25 or US$35 depending on the revision. However, you also need to add an SD card (standard or micro, depending on the revision), a power supply, a case (enclosure) and, if needed, a USB wireless dongle, and you are looking at getting towards US$100. That is not as cheap as it sounds, and it is for a basic headless configuration.
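To put some rough numbers on that (these are purely illustrative ballpark figures, prices vary by region and retailer):

Board (Model B)         US$35
microSD card            US$10
Power supply            US$10
Case                    US$10
USB Wi-Fi dongle        US$10
Shipping and sundries   US$10-20
Rough total             US$85-95

None of these extras is expensive on its own, but together they can easily double or triple the headline price of the board.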

The other side to this is the environmental cost. With all these devices floating around (remember, there are 5 million RPis alone) that will at some point in their lives end up being thrown out, mostly into landfill, leaching chemicals and other materials over time, the picture is not an environmentally friendly one. What causes this? Upgrades to newer models, migrations to other platforms, or even simple loss of interest; the result is the same.

Now don’t get me wrong, I am not saying these systems are all wasted, or all an issue. Many interesting projects and products are developed from them, not to mention the education people get from developing on and for these systems. What I am saying is that their use should be more specialized, reserved for where the processing power is actually required, or used to aggregate the data (through a technology such as MQTT), cache it and forward it to a more powerful management system (home server, anyone?).
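As a rough sketch of the aggregation idea (and it is only a sketch: the broker address, topic layout and cache file here are my own assumptions, not a finished design), a few lines of Python with the paho-mqtt library running on the home server could subscribe to whatever the sensor nodes publish, cache the readings locally and forward them on to a management system:

# aggregator.py - subscribe to sensor topics, cache and re-publish
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # hypothetical topic layout: home/<node>/sensor/<reading>
    client.subscribe("home/+/sensor/#")

def on_message(client, userdata, msg):
    line = "%s %s" % (msg.topic, msg.payload.decode())
    with open("/var/cache/sensors.log", "a") as cache:
        cache.write(line + "\n")          # local cache
    # forward to the management side under a different prefix
    client.publish("mgmt/" + msg.topic, msg.payload)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("192.168.1.10", 1883, 60)  # hypothetical broker address
client.loop_forever()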

Further to this, the idea here merges nicely with my move to containers (Docker) and my continuing work with virtual machines. If we take the services the RPi runs for each function and put them into containers, each container can then sync, either through MQTT or directly through the application’s services, with a microcontroller which carries out the actual functions.

Why is this more efficient? Because the microcontroller only needs to be dumb: it either reads the data on an interface and reports it to the server, or turns an interface on or off (or perhaps “writes” a PWM value) to perform a function. The microcontroller does not need to be replaced or changed when the server is changed or upgraded, and it can even be re-tasked to do something else without being reprogrammed, by only changing the functions and code on the mother controller node.
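To give an idea of just how dumb the node can be, here is what the microcontroller end might look like in MicroPython with its bundled umqtt.simple client (the pin number, topics and broker address are, again, assumptions for illustration, and the duty() call assumes an ESP8266-style PWM API):

# node.py - a deliberately dumb MQTT node
import time
from machine import Pin, PWM
from umqtt.simple import MQTTClient

out = PWM(Pin(2))  # hypothetical output pin

def on_command(topic, msg):
    # the node's only job: apply whatever PWM value the server sends
    out.duty(int(msg))

client = MQTTClient("node-01", "192.168.1.10")  # hypothetical broker
client.set_callback(on_command)
client.connect()
client.subscribe(b"home/node-01/set")

while True:
    client.check_msg()  # handle any pending command from the server
    # report the current state back for monitoring
    client.publish(b"home/node-01/state", str(out.duty()).encode())
    time.sleep(5)

All of the actual logic lives on the server side, so re-tasking the node is just a matter of changing which topics the server publishes to and what it does with the reports.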

Much more efficient and effective. It does, however, have the downside of an extra failure point, so some simple smarts on the microcontroller would be a good idea, allowing it to keep functioning without the mother controller in the event of a failure. The MQTT controls are agnostic, though, so we can work with that, at least for monitoring.

Opinions?

Justin

The Home Server Conundrum

Servers have been in the home for just as long as they have been in business, but for the most part they have been confined to home labs and to the homes of systems admins and the more serious hobbyists.

However, with more and more devices entering the modern “connected” home, it is time to once again ask: is it time for the server to move into the home? Some companies have been making inroads and pushing their products into the home market segment for a while now, most notably Microsoft and its partners with the “Windows Home Server” systems.

Further to this, modern Network Attached Storage (NAS) devices are becoming more and more powerful, leading not only to their manufacturers publishing their own software for the devices, but to thriving communities growing up around them and implementing their own software on them; Synology and the SynoCommunity, for example.

These devices are, however, still limited to running specially packaged software, and in many cases are missing features found on other systems. I know this is often by design, as one manufacturer does not want their “killer app” on a competitor’s system.

Specifically, what I am thinking of with the above statement is some of the features of the Windows Home Server and Essentials Server from Microsoft, as many homes are “Microsoft” shops. Yet many homes also have one or more Apple devices (here I am thinking specifically of iPads and iPhones), and given the limited bandwidth and data allowances available to most people, an Apple Caching Server would also be of benefit.

Now, sure, you could run these on multiple servers, or even on existing hardware that you have around the house, but then you have multiple devices running and chewing up power, which in this day and age of ever-increasing electricity bills and the purported environmental costs of power is less than ideal.

These issues could be at least partly alleviated by the use of enterprise-level technologies such as virtualisation and containerisation; however, these are well beyond the management skills of the average home user to implement and maintain. Not to mention that some companies (I am looking at you here, Apple) do not allow their software to run on “generic” hardware, at least within the terms of the licensing agreement, nor do they offer a way to do this legally by purchasing a licence.

Virtualisation also allows extra “machines” to run, such as a Sophos UTM for security and management on the network.

Home servers are also going to become more and more important as a bridge or conduit for Internet of Things products to gain access to the internet. Now, sure, the products could talk directly back to internet-based servers, and in many cases this will be fine, provided they can respond locally and, where required, cache their own data when the connection to the main servers is lost, whether through the servers themselves being down or the internet connection in general.

However, what I expect to develop over the longer term is more of a hybrid approach, with a server in the home acting as a local system providing local access to functions and data caching, whilst syncing and reporting to an internet-based system for out-of-house control. I suggest this because many people do not have the ability to manage an externally accessible server, so it is more secure to use a professionally hosted one that then talks to the local one over a secure connection.

But more on that in another article, as we are talking about the home server here. So why did I bring it up? Containerisation: many of these devices will want to run their own “server” software or similar, and the easiest way to manage this is going to be through containerisation of the services on a platform such as Docker. This is especially true now that Docker commands and the like are coming to Windows Server systems, as it will provide a basically agnostic method and language to set up and maintain the services.
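As a concrete, if hypothetical, example of what that could look like, an MQTT broker for the household’s IoT devices could be stood up on the home server with a single command, here using the eclipse-mosquitto image from the Docker Hub:

docker run -d --name mqtt -p 1883:1883 eclipse-mosquitto

Swapping, upgrading or moving that service then becomes a matter of managing the container rather than rebuilding a machine, which is exactly the kind of hands-off maintenance a home server needs.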

This also brings with it questions about moving house, and the on-boarding of devices from one tenant or owner of the property to another. Does the server become a piece of house equipment, staying with the property when you move out? Do you create an “image” for the new occupier to run on their device to configure it to manage all the local devices? Or do you run two servers: a personal one that moves with you, and a smaller one that runs all the “smarts” of the house, which then links to your server and presents the devices to your equipment? And what about the switching gear, especially if your devices use PoE(+) for power? So many questions, but these are for another day.

For all this to work, however, we need not only to work all these issues out; for regular users, the user interface to these systems, and the user experience, is going to be a major deciding factor. That, and we need a set of standards so that users can change the UI or controller and still have all the devices work as one would expect.

So far, for the most part, the current systems have done an admirable job of this, but they are still a little too “techie” for the average user, and will need to improve.

There is a lot of potential for the home server in the coming years, and I believe it is becoming more and more necessary to have one, but there is still a lot of work to do before they become a ubiquitous device.

Using Docker Behind a Proxy

I have started learning about containerisation, ultimately with a view to deploying it in a production environment for some of the services at my larger clients. Testing and developing this, however, is made more difficult by a proxy mandated by those who control the WAN and access to the greater internet.

Consequently, when I was attempting to pull images and files from the Docker Hub, I was getting errors.

Now, I could use environment variables, but as this is a test machine on my laptop, it is not always going to be behind a proxy (it is most of the time, just not always).

Consequently, I wanted to enable the proxy variable in the Docker configuration file instead. Fortunately, both the file and the line to edit are easy to find.

For my test machine, which is running Ubuntu Server 14.04, it is in the following location:

/etc/default/docker

So you want to edit it (remembering to use sudo) with the following command:

sudo nano /etc/default/docker

In this file there is a commented-out line beginning with:

#export http_proxy="http://127.0.0.1:3128/"

Simply remove the # from the start of the line, and replace http://127.0.0.1:3128/ with your proxy details (http://serveraddress:portnumber/).
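For example, assuming a (purely hypothetical) proxy at proxy.example.com on port 3128, the line would end up looking like this:

export http_proxy="http://proxy.example.com:3128/"

Note the straight quotes and the equals sign; the shell will choke on “smart” quotes if your editor inserts them.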

Save the file [Ctrl+o]. Alternatively, you can save the file upon exit; the system will prompt you.

If it refuses to save to the location, it is most likely due to a lack of permissions; you did use sudo when you opened nano, didn’t you?

Exit nano [Ctrl+x].

Now restart the Docker service:

sudo service docker.io restart

(again, notice the sudo; on Ubuntu 14.04 the service is named docker.io rather than docker)

Now you can pull down images, assuming you got your settings right.
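A quick way to confirm the proxy setting has taken effect is to pull a small image, such as hello-world, from the Docker Hub:

sudo docker pull hello-world

If that completes without timing out, the proxy is working.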

Enjoy