Ubuntu 16.04 Server LTS: Generation 2 Hyper-V VM Install

So you have downloaded Ubuntu 16.04 and noticed that it supports EFI, yet when you try to boot from the ISO, you are greeted with a message stating that the machine does not detect an EFI-capable disk.


Luckily this is an easy fix, as it is simply Secure Boot that Ubuntu and Hyper-V are having an argument over.

With your VM turned off, open up the settings page and navigate to the “Firmware” menu. As you can see in the first image below, “Secure Boot” is enabled (checked). To fix this, simply uncheck it as per the second image below, then click “Apply” and “OK”.
Upon doing this and restarting your virtual machine, you will be presented with the boot menu from the disk, allowing you to continue on your way.
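
If you would rather script the change than click through the GUI, the Hyper-V PowerShell module can do the same thing. A minimal sketch, run in an elevated PowerShell session on the host, assuming a VM named “Ubuntu1604” (substitute your own VM name):

# Disable Secure Boot on a Generation 2 VM (the VM must be turned off first)
Set-VMFirmware -VMName "Ubuntu1604" -EnableSecureBoot Off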

Have Fun

Justin

I am NOT an electronics hobbyist

People call me many things, some polite, some not, some to my face and I am sure, some behind my back. One thing I have been accused of being, and that I am most certainly not, is an electronics hobbyist.

Certainly I use electronics, and my extremely limited electronics knowledge, in many of my projects, but I am not interested in the electronics for the sake of the electronics; in fact I cannot think of much of less interest to me. Whilst I can understand the point of building electronics just to test something and increase your knowledge, this is simply not me. I learn what I need to know to complete a project.

This has come about because one of my clients is utilising Arduinos in an educational setting, and on numerous occasions I have been asked to help out due to my knowledge surrounding them. Now this is fine whilst they are doing some basic functions, you know, hello world kind of things, but so far, for example, I have had no need to learn how to control or manage servos, so when that comes up, I am of little use.

What I DO build with electronics are things that I cannot purchase off the shelf. Yes, I know it's lazy, but like so many others I am time poor; I only build things that I have to build to achieve an outcome that I have decided I need. Often this is with the goal of some kind of automation, or of reporting on certain states, to save time on tasks I would otherwise have to do by hand.

One example of this is the Particle Photon and electronic scales I am working on for the measurement and ultimately the reporting of the weight of a container of hydrochloric acid that is attached to the pool. The automated systems we have in place around the pool control pH, chlorine-to-salt conversion (through the use of ORP) and temperature on the solar controller. What they do not do, however, is report on the weight of the remaining hydrochloric acid, meaning constant manual checking of this one component. This project is simply to use a Particle Photon (or perhaps ultimately an ESP8266) to read the weight from a set of scales and report it back, and then generate a push message or e-mail when the weight drops to a certain percentage (or percentages) of the original (minus the approximate weight of the container, obviously). This reduces my need to check the system.

What this does not make me is an electronics hobbyist; it makes me a maker, or perhaps an assembler, cobbling bits of off-the-shelf hardware and code together to make a task work. A true electronics hobbyist would design the circuits, test them and go for far greater efficiency than I am trying to achieve, as the Photon is most definitely overkill for the task at hand in this case, and perhaps an ESP8266 is as well. I do not know, and I do not care; I am after a working “product” at the end that can achieve my rather simple goals.

As I said, I am not an electronics hobbyist.

Justin

The Hidden Cost of the Raspberry Pi (and other “cheap” SBCs)

The Raspberry Pi and other small single board computers have really taken off in the past few years, especially with the burgeoning wave of development, commercial but mainly hobbyist, in the Internet of Things (IoT) arena.

Now, the Raspberry Pi (I am focusing on the RPi here because it kicked off the whole shebang in a big way; small SBCs existed before then, but they were not as widely available or used) was never intended to be an IoT board; it was originally intended to be used to teach programming to children. The success of this original project (with over 5 million, yes that is 5,000,000, sold) has not only spawned a myriad of projects but a whole bunch of clones and similar devices looking to capitalise on the success of the project.

With the hobbyist community getting hold of these devices and putting them into various projects, one has to question the cost of these devices. The boards, for those who do not know, cost US$25 or US$35 depending on the revision; however you also need to add an SD card (either standard or micro depending on revision), a power supply, a case (enclosure) and, if needed, a USB wireless dongle, and you are looking at getting towards US$100. Not as cheap as it sounds, and that's in a basic headless configuration.

The other side to this is the environmental cost. With all these devices (remember there are 5 million RPis alone) floating around that will at some point in their lives end up being thrown out, mostly into landfill, it is not overly environmentally friendly, with all those electronics leaching chemicals and other materials over time. What causes this? Upgrades to newer models, migrations to other platforms, or even loss of interest; the result is the same.

Now don’t get me wrong, I am not saying these systems are all wasted, or all an issue. Many interesting projects and products are developed from them, not to mention the education that people get from developing on and for these systems. What I am saying is that their use should be more specialised, either to where the processing power is actually required, or to aggregate the data (through a technology such as MQTT), cache it and forward it to a more powerful management system (home server, anyone?).

Further to this, the idea here merges nicely with my move to containers (Docker) and my continuing work with virtual machines. We can take the services the RPi runs for each function, put them into a container, and have that container sync, through either MQTT or the application's own services, to a micro controller which then carries out the functions.

Why is this more efficient? Because the micro controller only needs to be dumb: it needs to either read the data on the interface and report it to the server, or turn an interface on or off (or perhaps “write” a PWM value) to perform a function. This micro controller does not need to be replaced or changed when changing or upgrading the server, and can even be re-tasked to do something else without reprogramming the controller, by changing only the functions and code on the mother controller node.

Much more efficient and effective. It does, however, have the downside of an extra failure point, so some simple smarts on the micro controller would be a good idea to allow it to function without the mother controller in the event of a failure; but the MQTT controls are agnostic, so we can work with that, at least for monitoring.
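
To make the idea concrete, here is a minimal sketch of the publish/subscribe pattern using the mosquitto command-line clients; the broker address and topic names are made up for illustration, and on a real micro controller an MQTT client library would do the same job.

# Dumb node: publish a sensor reading to the broker
mosquitto_pub -h 192.168.1.10 -t home/pool/acid-weight -m "12.4"

# Server/container side: subscribe and act on the readings
mosquitto_sub -h 192.168.1.10 -t home/pool/acid-weight

# Server tells the node to switch an output on
mosquitto_pub -h 192.168.1.10 -t home/pool/pump/set -m "ON"

Because the topics are the contract between the two ends, the server side can be rebuilt, containerised or moved without touching the node.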

Opinions?

Justin

The Home Server Conundrum

Servers have been in the home for just as long as they have been in businesses, but for the most part they have been confined to home labs, the homes of systems admins, and the more serious hobbyists.

However, with more and more devices entering the modern “connected” home, it is time to once again ask: is it time for the server to move into the home? Some companies have been making inroads and pushing their products into the home market segment for a while, most notably Microsoft and their partners with the “Windows Home Server” systems.

Further to this, modern Network Attached Storage (NAS) devices are becoming more and more powerful, leading not only to their manufacturers publishing their own software for the devices, but to thriving communities growing up around them and implementing their own software on them; Synology and the SynoCommunity, for example.

These devices are still, however, limited to running specially packaged software, and in many cases are missing features from other systems. I know this is often by design, as one manufacturer does not want their “killer app” on a competitor's system.

Specifically, what I am thinking of with the above statement is some of the features of the Windows Home Server and Essentials Server from Microsoft. Many homes are “Microsoft” shops, yet many homes also have one or more Apple devices (here I am thinking specifically of iPads/iPhones), and given the limited bandwidth and data transfer available to most people, an Apple Caching Server would be of benefit.

Now sure, you could run these on multiple servers, or even on existing hardware that you have around the house, but then you have multiple devices running and chewing up power, which in this day and age of ever increasing electricity bills and the purported environmental costs of power is less than ideal.

These issues could at least be partly alleviated by the use of enterprise-level technologies such as virtualisation and containerisation; however these are well beyond the management skills of the average home user to implement and manage. Not to mention that some companies (I am looking at you here, Apple) do not allow their software to run on “generic” hardware, at least within the terms of the licencing agreement, nor do they offer a way to do this legally by purchasing a licence.

Virtualisation also allows extra “machines” to run, such as Sophos UTM for security and management on the network.

Home servers are also going to become more and more important as a bridge or conduit for Internet of Things products to gain access to the internet. Now sure, the products could talk directly back to the servers, and in many cases this will be fine if they can respond locally and, where required, cache their own data in the case of a loss of connection to the main servers, whether through the servers themselves or the internet connection in general being down.

However what I expect to develop over a longer period is more of a hybrid approach, with a server in the home acting as a local system providing local access to functions and data caching, whilst syncing and reporting to an internet based system for out of house control. I suggest this as many people do not have the ability to manage an externally accessible server, so it is more secure to use a professionally hosted one that then talks to the local one over a secure connection.

But more on that in another article, as we are talking about the home server here. So why did I bring it up? Containerisation; many of these devices will want to run their own “server” software or similar, and the easiest way to manage this is going to be through containerisation of the services on a platform such as Docker. This is especially true now that Docker commands and the like are coming to Windows Server systems, providing a basically agnostic method and language to set up and maintain the services.

This also brings with it questions about moving houses, and the on-boarding of devices from one tenant or owner of the property to another. Does the server become a piece of house equipment, staying with the property when you move out? Do you create an “image” for the new occupier to run on their device to configure it to manage all the local devices? Do you run two servers, a personal one that moves with you and a smaller one that runs all the “smarts” of the house, which then links to your server and presents the devices to your equipment? What about switching gear, especially if your devices use PoE(+) for power supply? So many questions, but these are for another day.

For all this to work, however, we need not only to work all these issues out; for regular users, the user interface to these systems and the user experience are going to be major deciding factors. We also need a bunch of standards, so that users can change the UI/controller and still have all the devices work as one would expect.

So far, for the most part, the current systems have done an admirable job of this, but they are still a little too “techie” for the average user, and will need to improve.

There is a lot of potential for the home server in the coming years, and I believe it is becoming more and more necessary to have one, but there is still a lot of work to do before they become a ubiquitous device.

Access a Cisco Switch via USB Console

It may be that you want to use a USB cable, or it may be that, just like me, you forgot your USB-to-serial adapter, and now you're faced with connecting to a Cisco switch with a USB cable rather than the serial cable, on OS X.

So how do we go about this? With Windows we could simply look up the port number in Device Manager; OS X does not use this reference, instead referring to the device as a TTY USB modem.

First we need to look up the device, which sits with the other devices in the folder /dev/; we also want to limit the listing to USB-type devices. Open Terminal and type the following command;

ls -ltr /dev/*usb*

This will list all devices in the /dev/ directory (the devices directory) whose names contain the phrase usb, with full details, sorted so that the most recently modified device (and therefore most likely the device we are looking for) appears last.

Your device will show up as something such as

tty.usbmodem.12a1

Now we have the path to the device, we need to open a console using it. In OS X the console utility screen is built in, so let's open a session with this utility and a baud rate of 9600, which most devices will happily handle. To do this type;

screen /dev/tty.usbmodem.12a1 9600

What this command says is: open screen on device /dev/tty.usbmodem.12a1 utilising a 9600 baud rate. No settings for stop bits etc. are input, and you can utilise other baud rates if needed.

Your terminal will now connect to the console of the Cisco device (to end the session later, press Ctrl+A then K). This should also work for any other device that utilises a USB chipset to communicate via serial emulation.

Justin

The Case of the Hijacked Internet Explorer (IE) Default Browser Message

I recently had a case of a hijacked Default Browser message (the one that asks you to set the browser as default) in Internet Explorer (IE) 11 on a Windows 8.1 machine. Now that is not to say that it cannot happen to other versions of Windows, Internet Explorer or even other browsers, but this fix will clear the Internet Explorer issue.

As with many of these things, the cause of this is malware, and the user doing, or rather installing or running, something they shouldn't have (what they wanted the software for was perfectly OK, it's just that they got stung by the malware).

Anyway the issue presented like this;

The Hijacked page, remember do not click on any links

IMPORTANT NOTE: Now first things first. DO NOT click on any of the links in the page. It is also important to note that even if Internet Explorer is the default browser, or you have told it not to bother you, it will still appear.

Now the first step in this is understanding what has happened, which in this case is that the ieframe.dll file has been hijacked, either through modification or replacement (which indicates that the program would have had to go through UAC with the user OK'ing the change). Specifically, it seems that the page is being redirected, but I cannot confirm this, as it was more important to fix the issue than to find out the technical reasons why.

Nonetheless, the first step is to run a malware cleaner; specifically I use Malwarebytes, and I did a cleanup of the system with CCleaner for good measure. It is important to note that this is just to clean up other things that the malware may have left behind; it does not fix this problem.

As this problem resides in what is a reasonably well protected file, the best way to fix the issue is with Microsoft’s built-in System File Checker (SFC) tool.

It is actually rather simple to fix this error;

Open a Command Prompt window as Administrator

Open an Administrative Command Prompt

Once you are in the command prompt type;

sfc /scannow

Type sfc /scannow

This tool will now run and verify that the files Microsoft has put into the system are the correct files; if they are not, having been replaced or otherwise modified, it will replace them with the original file. This process may take some time depending on the hardware you are running it on.

SFC Running – This may take a while

Once complete, you need to restart the PC, and the SFC tool tells you as much

SFC has completed its task, now it wants you to reboot your PC
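
As a side note, if you want to see exactly what SFC found and repaired, its entries can be filtered out of the CBS log with the following command (this writes the results to a text file on your desktop):

findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log > "%userprofile%\Desktop\sfcdetails.txt"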

Restart your PC and the offending window will be replaced with the default Microsoft one. Now, as I said before, the malware seems to override/overwrite the setting telling Internet Explorer not to display the defaultbrowser.htm tab (either because it is the default, or because you have told it not to check). Because that setting was tampered with, it will display the default browser page; to clear this, either simply tell it not to display the page again, or go through the set-as-default process.

Enjoy

Justin

Using Docker Behind a Proxy

I have started learning about containerisation, ultimately with a view to deploying it in a production environment for some of the services at my larger clients. Testing and developing this, however, is made more difficult by the use of a proxy mandated by those who control the WAN and access to the greater internet.

Consequently when I was attempting to pull images and files from the Docker Hub I was getting errors.

Now I could use environment variables, but as this is a test machine and is on my laptop it is not always going to be behind a proxy (it is most of the time, just not always).

Consequently I wanted to add or enable the proxy variable in the Docker configuration file. Fortunately it is easy to find both the file and the line to edit.

For my test machine which is running Ubuntu Server 14.04 it is in the following location

/etc/default/docker

So you want to edit it (remembering to use sudo) with the following command

sudo nano /etc/default/docker

In this file there is a commented-out line beginning with

#export http_proxy="http://127.0.0.1:3128/"

Simply remove the # from the start of the line, and replace http://127.0.0.1:3128/ with your proxy details (http://serveraddress:portnumber/).
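
For example, with a hypothetical proxy at proxy.internal on port 8080, the finished line would look like this:

export http_proxy="http://proxy.internal:8080/"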

Save the file [Ctrl+O]. Alternatively you can save the file upon exit; the system will prompt you.

If it refuses to save to the location, it is most likely due to lack of permissions, you did sudo when you opened nano didn’t you?

Exit Nano [Ctrl+x]

Now restart the docker process

sudo service docker.io restart (again notice the sudo)

Now you can pull down images, if you got your settings right.
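
A quick way to confirm the proxy settings are working is to pull a small test image such as hello-world:

sudo docker pull hello-world

If the image downloads, Docker is reaching the Hub through your proxy.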

Enjoy

Secure that Synology

I have recently started moving long-term storage and other bulk data off the high-powered servers I maintain at home (for working on virtual machines and other projects) and on to two Synology NAS devices, specifically a DS-1815+ for my parents and a DS-2415+ for myself. Both of these have had the official 4GB RAM upgrade installed to give each 6GB of RAM, and each has only half of its bays currently populated with 8TB Seagate Archive HDDs (so 4 in the DS-1815+, giving a total of ~15TB formatted allowing for two-disk redundancy, and 6 in the DS-2415+, giving a formatted capacity of ~30TB, again with two-disk redundancy).

What I needed to do, however, is secure them as best as possible without affecting the way either I or my parents use the devices. I have also moved several “low resource” tasks directly to the NAS to remove them from the server; what this will allow me to do is turn the power-hog of a server off when I do not need it for work, saving money. I will take you through each of these items as I get the time, but basically the following have been moved:

  • File Storage
  • PLEX Server
  • Time Machine
  • BitTorrent (BT) Sync
  • SickBeard
  • SabNZBd
  • CouchPotato
  • Crashplan

To reduce the likelihood of anything causing issues with the Synology devices, and to secure them as best as possible as outlined above there are a few things that can be done.

01. Keep your NAS up to date
Your NAS, like any computer (as that is essentially what a NAS is, a specialised computer), will from time to time have security issues identified and patched, as well as new features published, and as such it should be kept up to date. Like most computer systems these days, the Synology has an option to have this automatically taken care of for you by the operating system (in the case of the Synology it is called Disk Station Manager or DSM), and it is a simple process to set up.

Open Control Panel

Control Panel on Desktop

Control Panel in Start Menu

Click “Update & Restore” in the “System” menu or, if you already have the Control Panel open, scroll down to “Update & Restore” in the left hand menu and single click it.

Basic Control Panel
(Click “Advanced Mode” to change)

Advanced Control Panel
(Click “Basic Mode” to change)

Select “Update Settings”

Open Update Settings

Select either “Newest DSM and all updates” (my preference) or “Important Updates Only”, check the “Check for DSM updates automatically” checkbox and select the settings that suit you. I personally use and recommend “Install newest DSM update automatically”, although you may want to choose “Install Important DSM Updates Automatically”; however I would caution against using “Download DSM updates but let me choose whether to install them”. This setting may be appropriate in some situations, such as hosting business data and wanting to let an update sit there for a couple of days to give the community time to vet it first, but on the whole I have not had issues with the first option and as such continue to use it. Remember though, even though it's a NAS, you should still have a backup.

Update Settings Details

Click OK to save and commit the changes

02. Update Packages Automatically
Updating your packages on the device is just as important as keeping the device itself up to date. Fortunately Synology have once again made this a simple process.

Open Package Centre

Package Centre – Desktop

Package Centre – Start Menu

If you have not used “Package Centre” before you may need to agree to the terms and conditions

Terms and Conditions Screen
Click “OK”

Click on Settings

 

Package Centre – Settings Selection

Select “Auto Updates”


Select Auto Updates (Highlighted)

 

Ensure that “Update packages automatically” is selected. I usually use the “All Packages” option, however in a few cases I have had to use individual selections due to the updating of some components breaking others.

Ensure that “Update packages automatically” is selected

Click OK to save and update the settings

03. Install Antivirus
A NAS, like any network (and internet) connected device, will from time to time face threats that could allow a virus or other malware, such as the SynoLocker that was going around a while back, on to the system. To combat this I highly suggest you install an antivirus solution such as the Antivirus Essentials package from Synology. I do not know how this compares to the McAfee solution also available (which is only a trial and requires payment), as I refuse to use McAfee as a point of principle, nor do I know its strike rate versus what you would get from a “normal” PC installation of an antivirus, but it never hurts to have it there anyway. I would also recommend setting up the antivirus to run a scan on a regular basis, just as the one on your desktop, laptop or tablet should. Doing so is reasonably simple.

Open Package Centre

Package Centre – Desktop

Package Centre – Start Menu

If you have not used “Package Centre” before you may need to agree to the terms and conditions

Terms and Conditions Screen
Click “OK”

 

On the menu on the left hand side of the Package Centre, select “Security”.

Opening page of Package Centre with Security highlighted

 

On the Security packages page you will see the “Antivirus Essentials” package (by default packages are displayed in alphabetical order). Under “Antivirus Essentials” click “Install”, and wait for the install to proceed.

Package Centre Security Packages with Install “Antivirus Essentials” highlighted
Installing “Antivirus Essentials”

Upon successfully completing the installation you will get notifications in the top right corner and in the notification centre, and the button below the “Antivirus Essentials” logo will change from “Install” to “Open”. Click “Open” to open the Antivirus Essentials package.

Install complete – the “Install” button is now an “Open” button; click this to open the package

Upon opening the package you will be presented with the default screen; on the left hand side select “Settings”.

“Antivirus Essentials” start screen with “Settings” highlighted

Under the “Update” section, ensure the “Update virus definition before scanning” checkbox is checked (enabled); we do not want to scan with old definitions now, do we? That would be silly.

Select “Update virus definitions before scanning”

In the left hand column select “Scheduled Scan”. You will most likely be presented with a blank schedule, as by default no scans are scheduled; to create one, click “Create”, which is at the top left of the right hand section of the page.

By default there are no scheduled scans

The next settings are really up to you, but I normally select “Full Scan” unless there is a compelling reason not to. Fill in the “Date” and “Time” fields with your desired settings, click OK and the item is created in the schedule.

My Weekday Schedule
My Weekend Schedule
My Complete Schedule

Close the app. All done.

Please note you do not have to have just one scheduled scan; as above, I personally use two: one at midday on each weekday, as in theory I am not home using the NAS because I am out visiting clients, and one at 3 AM on weekends. I could do 3 AM daily, but I have other tasks that start at 2 AM (specifically the DSM update check) and I want to reduce the likelihood of one conflicting with the other.

04. Configure and use the Synology Device Firewall
Whilst your NAS may not be directly on the internet, more commonly than not you will be exposing some services, and whilst your router will provide a little protection, it is better to create explicit rules on the device itself to protect it from attacks.

Personally, I use three rules to keep it simple. The first rule allows full access from the local network, although ultimately, once I have things settled down at each site, I tend to tighten it to allow only the protocols and ports required for that site. The second rule allows traffic from Australia and the UK (as my brother is currently residing in the UK), and the final rule blocks everything else.

To do this follow these instructions

Open Control Panel

Control Panel on Desktop
Control Panel in Start Menu

The next step depends on your view of the Control Panel: if you are in “Basic Mode” you will need to click on “Advanced Mode” in the upper right hand corner of the “Control Panel”; if you are already in advanced mode, continue to the next step.

Basic Control Panel – “Advanced Mode” highlighted

In “Advanced Mode” you simply click the “Security” item.


Advanced Mode – “Security” Highlighted

Opening this Security section drops you into the “Security” subpage, as such you will need to select “Firewall” from the tabs at the top

Security Page - "Firewall" tab highlighted
Security Page – “Firewall” tab highlighted

In the firewall tab there are, by default, no rules, so you will need to create them; you do this by clicking “Create” in the upper left corner of the right hand page section.

Home page of the “Firewall” tab

Once you have clicked create you will be shown a basic menu for creating the rule structure

Basic Rule creation window

Now, while clicking OK in the above window would create a rule, it would create a rule allowing all traffic, from every device, everywhere, to all services that you currently have or may create in future on the NAS device (assuming the appropriate NAT forwards are in place, anyway). So we need to not click OK, and instead modify some settings to make it useful.

Firstly we are going to create a network rule to allow all access from the local network, so we are going to change the “Source IP” setting to “Specific IP” (yes, even though we are allowing a whole network). Now this seems to be a non-issue, as I have never had local access denied to the device, but as we will be putting a deny-all rule in later it's better to be safe than sorry, as in the future this may change.

Create Firewall Rule based on IP (Range)

After changing Source IP to the “Specific IP” radio button, click the now active “Select” button to the right of it; this will bring up the IP input menu. Once the input box below has popped up, change the radio button option to either “Subnet” if you want to allow the whole subnet, or “IP Range” if you want just a range of IPs. I have used the subnet option, putting the network address in the “IP address” field (in my case 172.16.1.0) and the subnet mask in the “Subnet mask/Prefix length” field (in my case 255.255.255.128, i.e. a /25 covering 172.16.1.0 to 172.16.1.127). If you want to use the “IP range” option, put the first (lower) IP address of the range in the “From” field and the last (higher) IP address in the “To” field. Click OK to save your input data, and “OK” once again to save your rule.

Dialogue Box for IP input

Clicking on “Create” again, we will now create a regional allow rule for the regions/countries you want; the first part of this is changing the “Source IP” field so that the “Region” radio button is selected.

Create firewall rule based on region

Once it is selected, again click the “Select” button; this will bring up an input box requesting you to select regions, in my case Australia and Great Britain. Click “OK” once the regions are selected to save your selection, and “OK” once again to save your rule.

Selecting Australia
Selecting Great Britain

Now, creating a region-based filter may seem a bit over the top to some, considering I travel often, but it's not that hard to change: when I or another family member travels overseas and requires access to the NAS, I simply add the areas being visited to the allow rule.

Finally we want to create an explicit deny rule. What this rule does is deny any access to the NAS that does not match the rules above it. Due to the way this works, it should always be the last (bottom) rule on the firewall, as everything will match it, so any rules below it will not be processed. Creating this rule is even easier than the others. Once again click the “Create” button, bringing up the popup. We want to leave both “Ports” and “Source IP” set to “All”, but change “Action” to “Deny”, then click “OK” to save the rule.

Creating an Explicit Deny for Security

Below is a screenshot of these three basic rules and how they appear in the firewall control panel; make sure yours are in this order so they work correctly.

Firewall Rules In Order

05. Disable or remove unused applications and services
As with any device, the more programs, features and services you put on it, the more potential places there are for people, or malware, to gain access to the system. The simple solution: if you do not need it, remove it; if you only need it sporadically, only enable it when you need it.

Don’t get me wrong, Synology and third parties have some great features available for the devices, Photo Station, Plex and Cloud Station just to name a few, but if you do not need them, do not install them; you can always add them later if you need them. If you have them installed and no longer use them, remove them; again, you can re-add them later (make sure you have a backup of the data if you remove them).

To see what you have running is a simple matter, as outlined below.

Open Package Centre

Package Centre – Desktop
Package Centre – Start Menu

Depending on how it's set up you may end up seeing the “Recommended” section or the “Installed” section; if you see “Recommended”, simply select “Installed” in the left hand menu to see what is installed.

Installed Packages

To remove a package that is no longer needed, simply click on the package to open it.

Opened Package Menu

Click on the “Action” menu and select “Uninstall”

Uninstall Package

You will get a series of two popups: the first checking that you're sure you want to uninstall the package, the second telling you it has been uninstalled.

Confirm Uninstall of Package

Package has now been successfully removed

You will then be taken back to the “Installed” display, now missing the package you have removed


Packages, however, are not the only risk; the same, and to a large extent a greater, risk comes from services such as SSH and Telnet (the latter of which should not be used, PERIOD). SSH, for example, is a prime candidate for being left enabled when you do not need it. Again, do not get me wrong, it's a great tool and I use SSH all the time on several of my machines, but if I do not need it I turn it off; one less avenue open for attack.

To disable Telnet or SSH, which are the ones commonly left open, do the following.

Open Control Panel

Control Panel on Desktop
Control Panel in Start Menu

The next step depends on your view of the Control Panel: if you are in “Basic Mode” you will need to click on “Advanced Mode” in the upper right hand corner of the “Control Panel”; if you are already in advanced mode, continue to the next step.

Basic Control Panel – “Advanced Mode” Highlighted

In “Advanced Mode” you simply click the “Terminal & SNMP” item.

Click "Terminal & SNMP"
Click “Terminal & SNMP”

The Terminal control window will open, and the settings will either be on (checked) or off (unchecked); simply uncheck them if you do not need them running.

SSH Enabled
SSH Disabled

Click Apply and you're done.

When you remove a service you no longer need, also remember to remove any port forwards for it from your router; there is no need to leave those ports open if you don't have to.

06. Disable or remove unused accounts
What has been said for applications and services also goes for users; in fact users are somewhat more important. If an account is not required, remove it. Do not get me wrong, I am not saying use a single account for everyone, far from it; in fact I ensure everyone has their own account for access where required, so that people are accountable for their actions. But if the account is no longer required, and will not be in the near future, I will remove it; if it is going to be needed in the short term, I will disable it and enable it when I need it again.

I perhaps take my security a little too far, as I have separate accounts for administration and for normal user access to the data, a setup that I highly recommend for security. I also, however, have a third account for the AFP share which is presented for Time Machine on the Apple devices. I have done this to reduce the chance of anyone getting access to the full system backups, as there is more in the backups than there is in the data synced between the other folders. I do, however, ensure that the standard admin and guest accounts are disabled; as these accounts are standard, it is safe to assume that hackers know about them, and they are therefore a security risk.

It is important to note that by default DSM (Disk Station Manager – the Synology operating system) does not allow you to remove these accounts, probably for operational reasons in the case of the guest account. In the case of the admin account I suspect this is due to how the factory reset works, specifically the fact that a reset via the reset button re-enables the account and resets the password to a factory default.

It is also important to note that the password for the built-in admin account is the one used by BOTH the admin and root SSH accounts on the device, and that disabling the account in the web interface does not seem to disable the account on the system itself, as SSH access still works.
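
You can check this for yourself from any machine with an SSH client; the address below is just a placeholder for your NAS's IP:

ssh root@192.168.1.20

If the login is accepted with the admin password while the admin account is disabled in DSM, you are seeing the behaviour described above (remember to disable SSH again afterwards if you do not need it).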

Now, the smart people at Synology have disabled the guest account by default, meaning you have to enable it for it to be a security issue. However, we may need to disable or remove another account, and Synology have thankfully made this quite easy to achieve;

Open Control Panel

Control Panel on Desktop
Control Panel in Start Menu

Click “Users” in the “System” menu or, if you already have the Control Panel open, scroll down to “Users” in the left hand menu and single click it.

Basic Control Panel
(Click “Advanced Mode” to change)
Advanced Control Panel
(Click “Basic Mode” to change)

Select “User” and click to open; this will change the window to display the “User” menu. Once this has happened, select the user to disable, in this case “admin”, and click the “Edit” button.

Admin User selected, Click the Edit Button

This will in turn pop up a menu to edit the user, at the bottom of which is the option to disable the user; we want to disable the user immediately in this case (the scheduled disabling of the account would be used, for example, for a contractor or other person who you want to have access to the files on the device for only a limited time). Once the options are set, hit OK to disable the user.

Disabling the admin user immediately

This will drop you back to the user management screen; however, the admin account will now be disabled.

Disabled admin account

Now that's all good, but what if you want to remove an account? Well, that again is simple.

Select the user you want to remove and hit the “Delete” button

User selected and the “Delete” button highlighted

This will bring up a confirmation box confirming you want to do this, and telling you that the user's home folder will be removed and is unrecoverable (hope you have a backup :D).

Confirm that you want to remove the user and its associated home folder (if applicable)

Now that's done, it will again drop you back to the user management screen; the account, however, is gone.

Account Removed

That's it; users are disabled and/or removed.

07. Install and use a SSL Certificate
Whilst an SSL certificate strictly speaking does not add to the security of the device directly, it does help secure (through encryption) people's interactions with the device, specifically those interactions over SSL-supporting mediums such as HTTP (the web interface); the most important of these interactions is the transmission of credentials to the system to gain access to it. Encryption does this by obfuscating data that would otherwise be sent in plain text. This does not mean, however, that you can use a weak password (and that includes using the same password for multiple services).

Now there are several ways to do this, depending on how you want to go. There is the self-signed certificate, which will secure the device, but you will get warnings about the certificate not being from a recognised source unless you add it to the trusted sources on your systems (this can be done via GPOs, System Profiles etc. on large corporate systems, and can even be done through scripting on systems you cannot manage centrally; you could of course use the old chestnut of installing it manually on every system, but who wants to do that?).

If you do not want to go through the process of dealing with a self-signed certificate, you can get a certificate signed by an external certificate authority, which means there is no need to install the signing certificate manually, as the roots are included on most operating systems by default. The complication here is that there are multiple types of certificate available from certification authorities as well: you are able to get a single-domain certificate, a multiple-domain certificate (UCC), or a wildcard certificate that protects all the sub-domains of a certain root domain.

As you want to protect more and more (sub)domains, the cost of the certificate goes up, with wildcard certificates in particular becoming very expensive to purchase and maintain. I will let you decide which way you want to go and work out the particulars, but I myself and my clients all use externally signed wildcard domain certificates. The steps for installing and setting up a certificate are basically the same no matter which way you go.

What may be different, however, is whether you need the certificate chain. As I use GoDaddy certificates and want to present all my certificates to the client browser as a certification chain, to maintain the integrity of the process I do need to put the entire chain into the system (it is also part of getting a higher ranking in the SSL security tests, but more on that later). You can also let the browser find the chain certificates itself, but this presents another possible attack vector to the system; or you may not need a chain at all, depending on the external provider, as is the case for most self-signed certificates.

To do this follow these instructions

Open Control Panel

Control Panel on Desktop
Control Panel in Start Menu

The next step depends on your view of the Control Panel: if you are in “Basic Mode” you will need to click on “Advanced Mode” in the upper right hand corner of the “Control Panel”; if you are already in advanced mode, continue to the next step.

Basic Control Panel – “Advanced Mode” Highlighted

In “Advanced Mode” you simply click the “Security” item.


Advanced Mode – “Security” Highlighted

Opening this Security section drops you into the “Security” sub-page, as such you will need to select “Certificate” from the tabs at the top

Security Page – “Certificate” tab highlighted

Now that you are in the certificate menu, you will notice there is already a certificate on the device. This is a self-signed certificate, and if you want to use that, you could simply export it (using the “Export” button), import it into the trusted certificate store on your machine(s) as discussed above, and be done with it. However, I am going to show you how to add a third-party certificate. At the top of the tab you will notice two buttons, one saying “Create certificate”, the other saying “Import certificate”. If you need to create a certificate signing request (CSR) you can do it through the Create menu (you can also create a custom self-signed certificate, renew a self-signed certificate, or even sign a CSR from another source, allowing you to use the Synology as the Certificate Authority). Again, I am going to skip over this and go straight on with the import of the certificate, therefore we want to hit the “Import certificate” button.

Security Page – “Import certificate” button highlighted

This opens an import page which contains three fields: the certificate itself, the intermediate certificates (if any) and the private key. Both the certificate and the private key are required, and how you get the private key depends on how you generated the certificate; if there are questions on how to get the private key, ask in the comments and I will try to help out. Personally I use a program called XCA for my key/certificate management, and I find it works very well.
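
As one common example, if you generated your key and CSR with OpenSSL, the private key is the .key file produced by a command along these lines (the file names here are just placeholders):

openssl req -new -newkey rsa:2048 -nodes -keyout myserver.key -out myserver.csr

The .csr file is what you send to the certification authority; the .key file stays with you, and is what the import form is asking for.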

 

Blank Import Form

Once the fields are filled in by browsing and selecting the appropriate files, you need to simply select “OK” and the device will now import the certificates

Completed Import Form

Once this has been completed, the screen will drop back to the normal certificate home screen, but your certificate is now installed.

Home Page with Third Party Certificates

Congratulations, you have installed your certificate. However, installing an SSL certificate by itself is only part of the puzzle; you actually need to configure the device and its services to make use of the certificate, and ideally to redirect all requests for plain old unencrypted and insecure HTTP to your new encrypted and (hopefully) secure HTTPS implementation. Enabling HSTS is also a good idea, so we will turn that on as well.

Thankfully, Synology have made that very easy by putting all the checkbox options for this on the one page. From within the open “Certificate” tab, in the left panel you want to select “Network”.

Certificates Home with Network Highlighted

Clicking on this will drop you to the “Network” home page, where we need to select the “DSM Settings” tab

DSM Settings Tab Highlighted

Now that the DSM settings tab is open, you will notice there is an option to enable HTTPS connections, and several sub-options that need to be checked.

DSM Settings homepage – notice that HTTPS and its options are disabled

Check the “Enable HTTPS connection” option, and the two unchecked sub-options (Automatic Redirection and HSTS) and click “Apply”

With HTTPS and its options enabled, click “Apply”

You now have a device using SSL encryption and HSTS, ensuring the best possible chance of keeping that data safe. It is important to note, however, that with HSTS enabled (HSTS being a method by which the server tells the browser to communicate with it only over HTTPS until a pre-determined time is reached), we also need to ensure that we keep a valid certificate on the device and update it PRIOR to the old one expiring. If this is not done, you risk losing access to the device unless you reset the browser or override its default behaviour of refusing to connect over an expired certificate.
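
If you want to confirm HSTS is actually being sent, you can look for the Strict-Transport-Security header in the response from any machine with curl; the hostname below is a placeholder, and 5001 is the default DSM HTTPS port:

curl -skI https://nas.example.com:5001/ | grep -i strict-transport-security

(-s is silent mode, -k allows a self-signed certificate, and -I requests the headers only.)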

08. Enable Device Auto-Block Lockouts
DSM has a feature, as part of the firewalling system, to automatically block clients that fail to log in a given number of times within a given period, locking them out for a set period.

To this end I have the following settings set for security
Attempts: 3
Time Period: 60 Minutes
Block Expires: 1 Day

To configure this you will have to be in the “Advanced” mode of the Control Panel: if you are in “Basic Mode” you will need to click on “Advanced Mode” in the upper right hand corner of the “Control Panel”; if you are already in advanced mode, continue to the next step.

Basic Control Panel – “Advanced Mode” Highlighted

In “Advanced Mode” you simply click the “Security” item.


Advanced Mode – “Security” Highlighted

Opening this Security section drops you into the “Security” sub page, as such you will need to select “Auto Block” from the tabs at the top

Security Page – “Auto Block” tab highlighted

In the Auto Block tab the details have already been filled in, although the “Enable auto block” and “Enable block expiration” checkboxes are unchecked.

Check “Enable auto block”, which will allow you to change the “Login attempts” and “Within (minutes)” fields; then, if you want to enable block expiration (which I highly recommend), check the “Enable block expiration” checkbox, which will in turn enable the “Unblock after (days)” field.

Auto Block Page (Default Settings)
Auto Block set to my settings

As stated above and as in the screenshot my settings are

Attempts: 3
Time Period: 60 Minutes
Block Expires: 1 Day

Once the settings are to your liking, click “Apply”. Auto Block is now enabled.

NOTE: Unless you want to get banned from your own device, I strongly suggest you enable ban expiration. You can get around a ban by changing IPs, but blocking does cause issues when used in conjunction with reverse proxies (which will be explained in a later article), as all EXTERNAL requests are coming from the one IP.

There you have it, a “basic” secured Synology. I have included a couple of tips below, but if there are any comments or questions, please leave them in the comments below.

Justin

Useful Tip
Create and assign permissions to groups, not users. Coming from a systems administration point of view, if you need to add permissions to something, create a group if an appropriate one does not already exist, and assign that group permissions to the item or object, even if the group only has one user. This will allow you to move people into and out of groups in the future to assign them permissions, without having to look up and assign permissions to each user. This is how I was taught to do it in large installations (admittedly using directory services for corporate authentication); it has saved me much time and effort in the past and I continue to do it now.

Useful Tip #2
Again, this may be me taking it a little far, but I hide all the shares on the device unless they are truly public. Like many people, we occasionally have guests on our WiFi, and I simply want to hide the data from them (what they cannot see, they will not, for the most part, try to access). This is simply done by checking the “Hide this shared folder in My Network Places” checkbox. I also check the “Hide sub-folders and files from users without permissions” checkbox for good measure; this will stop files and folders showing up in the directory listing to users without permissions to that folder, if you use those features.

Smarter Sprinkler System

Irrigation (sprinkler) systems have come a long way since their inception, and even further since the advent of modern electronics, and with the modern internet and the beginnings of the Internet of Things (IoT) revolution they are getting smarter and able to do more. One example of this is that where a “modern” controller can tell if it is raining, or has rained in the past period, through the use of a rain gauge, IoT devices such as the OpenSprinkler can now use forecast weather from the internet to make a decision about watering. Linking this with things such as moisture sensor data can make these systems even smarter. There is, however, one thing that seems to be missing: the “smart” solenoid.

I am not a gardener by choice per se, but more by necessity; wanting to take more control of the food my family and I eat requires growing our own, which, whilst easy in some respects, does chew up a lot of time.

Solenoids themselves are quite simple devices. They use a magnetic coil to retract a metal (normally iron) core against a spring (which opposes the coil, so the solenoid returns to “rest” when the electrical current is no longer applied) to open or close a gate; if the gate is closed, water does not pass, open the gate, and the water flows through. Nice and simple.

What is not so simple, however, is the current requirement to run an entire cable pair to each solenoid. Yes, there are theoretical ways of doing n+1 (n being the number of solenoids), but in general it's one solenoid, one cable (pair).

Now, with cheaper, smarter, more capable electronics, what is to stop us moving the “smarts” that for so long have been integrated into the controller on to the solenoid itself? You could then program it over the cloud; an RTC would allow it to turn on/off on a schedule; a hard link to a moisture sensor could allow it to turn on if the soil gets too dry; and cloud computing, or a local weather station, could stop it watering if it has rained or rain is predicted within the next allocated period, say 6 hours.

That gives you more smarts than most old control boards are capable of, and almost as much as modern ones.

But what if, now we have this connected to the cloud, we could group them, in one or more groups, to control when things are watered? Got tomatoes that need watering twice a day but are at opposite ends of the garden? Not a problem, just create a group of two or more solenoids to control, put in the times and off you go. What about 3 areas? Just add another solenoid; 4 areas, and so on and so forth.

But we are talking the cloud here… it's all seeing, all knowing. You could in theory control not only based upon groups, but upon plant types: if you could TELL the system that you were growing tomatoes, you could tell it how much water you want to give them, and when. If you wanted to, you could even attach a flow meter to measure the amount of water delivered, rather than basing it on the arbitrary value of time, where the pressure and therefore the amount of water could vary; with a flow meter you KNOW how much has been delivered.

What I am thinking of is a bit like your LIFX bulbs, but for solenoids. What about data? Well, that is easy; you can do it through standard 802.11 wireless, or how about XBee back to a controller station, or even a three-wire cable to tap into, using an addressing system such as I2C. In the end it does not matter so much how it works technically, so long as I can walk up, plug in the power (or power & data), connect it to the water piping, program it how I want and boom, it works.

Ah well we can all dream

Project: HomeMadeMonitor

Yes, yes, I know I should finish other projects first, but I have yet another electronics project I want to start playing with, and this post is more a reminder of what I was aiming to achieve than anything else.

As I have been falling further and further down the self-sufficiency/home-made produce rabbit hole, I have noticed that I need more and more data for certain things, so that I can then act upon it, or better yet, have an automated system act upon it for me.

The two things I am thinking of are specifically the (dry) curing of meats such as salamis and the aging of cheese. Both of these rely on maintaining the right temperature and humidity within a certain range for a prolonged period of time, ranging from weeks to years depending on what you're wanting to achieve. Now, in the past this has been what I have been designing SafeDuino to do, and whilst that does other things in addition to monitoring temperature and humidity, some of those more advanced features are not needed by the monitoring and control software.

Whilst it may be more beneficial to “recycle” code from the SafeDuino project, I am finding myself drawn more and more to the Particle Photon system for these basic requirements, as not only is it generally cheaper than the Arduinos (Freetronics EtherTen and EtherMega) I have been using, but it comes with built-in WiFi. Conversely, I then have to supply power to the device, either through a parasitic connection to something else, a dedicated connection, or a solar/wind and battery solution, whereas the EtherTen and EtherMega boards (with the use of a PoE module) can pull power from the PoE switching equipment I have in the house. A wired connection is always going to be more reliable and faster anyway, although the speed is irrelevant when you are doing so little data transfer. Using wireless also means I need to ensure that I have wireless access at each point on the property where I want a monitor, which is easier said than done when we have a large property, much larger than any one or two wireless access points can handle, although this is due mainly to the construction materials of the buildings on the property and the distances involved.

This leaves me wondering how to achieve this. Given the design of what I want to achieve, being able to control one or more chambers through one or more devices, I am thinking I need to build it to make extensive use of 1-Wire and I2C technologies, ideally linking back to a Particle Photon (it has considerably more processing power than an Arduino) and then back to a server; similar to what they have done with BrewPi. But this means, as I said above, power supply issues due to the lack of a PoE connector, and connectivity issues…

Anyway I will think on it further, but I will most likely end up using the Particle Photon.

What I need to achieve, however, is rather simple: as I said above, monitoring one or multiple “chambers” from a single root node (the Particle Photon) and reporting the data back to a server, where it can be further processed and acted upon if need be. These “chambers” can be joined or independent, and may or may not have one or more common components (think of a combo fridge/freezer where the compressor is common to both).

Each “chamber” must be capable of having unique settings (within reason; you cannot expect it to keep one chamber at 30 degrees Celsius and the other at 10, it's just not going to happen if you have common components) and the system must be able to work out how to handle this.

I guess it is time to start designing (and finishing off other projects)

Have Fun

Justin
