The Old Backup Regime

After I purchased the NAS box to place at home for my work data (there is a separate one for family data; they do, however, back up to each other, but I will cover that in another post), I decommissioned my old Windows Server 2008 R2 box.

This box, however, did a multitude of things that were controlled via scheduled tasks and scripts, which I have now moved to the Synology. Chief amongst these was the backup of several websites, “just for when” something goes wrong.

There were several bits of software involved in this task; these were (are);

  • wget (Windows version) – command line utility for downloading files; whilst there are other options, this was quick and simple, exactly what I needed
  • FTPSync (CyberKiko) – a great little piece of software; it can display a GUI showing sync progress, which is useful for troubleshooting, or run in a silent mode with no GUI. It utilises simple ini text files for configuration (it encrypts the password), making it easy to set up, and it has many options for doing so
  • DeleteMe (CyberKiko) – simple file removal tool; give it a folder (it can have multiple set up) and a maximum age for the files in that folder, and it will remove anything older than that.
  • 7Zip (command line version) – command line zip archive creation utility; what more is there to say?
  • Custom PHP DB Export Scripts – custom PHP scripts that pull the database(s) out of MySQL and zip them up. These were originally run with a CRON job, but I found it easier to use wget to hit a trigger file when I wanted the backup created, then pull the backup file itself, then hit a delete trigger (see the sketch just below)
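To give a rough idea, that trigger dance boils down to three wget calls, something like the following (the URLs, script names and the key parameter are placeholders, not my real ones):

rem Ask the site to generate a fresh database dump (trigger script name is hypothetical)
wget -q -O NUL "https://www.example.com/backup/trigger.php?key=SECRET"
rem Pull the dump the trigger just created
wget -q -O db-backup.zip "https://www.example.com/backup/db-backup.zip"
rem Tell the site to delete the dump so it is not left lying around on the web server
wget -q -O NUL "https://www.example.com/backup/delete.php?key=SECRET"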

That’s it for the software I use, but what about the backup process itself? For each of the sites I need to back up, the custom PHP scripts were configured on the server. Then a custom batch file containing a bunch of commands (or should that be a batch of commands?) downloads and archives the files.

The batch file had the following segments in it to achieve the end goal (a trimmed sketch follows the list);

  1.  Check if a backup is still running from a previous attempt (utilises a blank text file that is created at the start of the script and removed at the end)
    1. If it is running, skip the script and go to the end of the file
    2. If a backup job is not running, create the file, locking out other jobs.
  2. Run cleanup of old files
  3. If a backup directory for today already exists (most likely due to a failed backup job), remove it and create a new one
  4. Start logging output to a log file
  5. Start Repeating Process (Repeats once for each site that is being backed up)
    1. Generate Database Backup
    2. Retrieve Database Backup
    3. Remove Database Backup from the web server (via the delete trigger)
    4. Rename Database Backup File
    5. Move Database Backup File to Storage Location
    6. Sync (utilising FTPSync) the site’s directories
    7. Remove Existing zipped backup file of the site’s files and directories if it exists
    8. Zip the folder structure and files for the website, putting the ZIP file in the long term storage folder
  6. Copy Backup Complete information to log file
  7. Remove Process Lock File
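Stripped right back, the skeleton for a single site looks something like the sketch below. The paths, site name and tool locations are placeholders rather than my real ones, the DeleteMe and FTPSync calls are stand-ins (their real switches and profiles come from their ini files), and the middle block simply repeats once per site.

@echo off
rem 1. Bail out if a previous run is still going, otherwise create the lock file
if exist "D:\Backups\backup.lock" goto :end
type nul > "D:\Backups\backup.lock"

rem 2. Clean up old backups (DeleteMe reads the folders and maximum ages from its own ini)
"C:\Tools\DeleteMe\DeleteMe.exe"

rem 3. Recreate today's working folder if a failed job left one behind
if exist "D:\Backups\Work" rmdir /s /q "D:\Backups\Work"
mkdir "D:\Backups\Work"

rem 4. Start logging
echo Backup started %date% %time% >> "D:\Backups\backup.log"

rem 5. Per-site block (repeated once for each site)
rem Database: trigger, fetch and delete exactly as in the wget example earlier, then stamp and store it
wget -q -O "D:\Backups\Work\db-backup.zip" "https://www.example.com/backup/db-backup.zip"
rem (%date% is locale dependent; slashes are swapped for dashes so the file name is valid)
move /y "D:\Backups\Work\db-backup.zip" "D:\Backups\Store\example.com-db-%date:/=-%.zip"

rem Files: FTPSync the site's directories down (silent mode, profile and switches live in its ini),
rem then remove the old ZIP and re-zip the synced folder into long term storage
"C:\Tools\FTPSync\FTPSync.exe"
if exist "D:\Backups\Store\example.com-files.zip" del "D:\Backups\Store\example.com-files.zip"
"C:\Program Files\7-Zip\7za.exe" a "D:\Backups\Store\example.com-files.zip" "D:\Sites\example.com\*"

rem 6 and 7. Note completion in the log and release the lock
echo Backup finished %date% %time% >> "D:\Backups\backup.log"
del "D:\Backups\backup.lock"
:end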

To download the batch files, click here

Reasonably simple; to add a new site, copy and paste a previous block, update a few details and off you go.

Now I realise that some of this is perhaps not the best or most secure way to achieve the goal (specifically how I was handling the database), but it was quick, easy and it worked. I could also have made the whole process more efficient by using config files and a for loop, but, well, I didn’t.

Have Fun

Justin

 

Oh Shit!!! The Data is Gone…. Or Is It?

Yep, I screwed up. I made assumptions, didn’t double and triple check things, and made a mess of something I was working on, professionally no less. I did fix it, but it was a stupid screw-up nonetheless. The irony is not lost on me, given how regularly I harp on about backups to everyone.

The other day I ended up with one of those Oh!! SHIT moments. I was migrating an older 2012 R2 file server to 2016, and whilst I was doing this I decided to kick the old server that was due to go back to the leasing company out of the Failover Cluster it was in. As standard, I paused the node, removed all the references from it and then hit the evict function to evict it from the cluster, and it was from this that it all went to shit; doing this whilst in the middle of migrations was the first part of my mistake. What happened? The Failover Cluster borked itself and crashed on the remaining servers, and it would not restart; to this day I have not got it to restart.

After spending an hour or so trying to get the cluster to restart, I relented and went to the backups to restore the offending server. Hitting the backups, I went to the server I wanted to restore and noticed that it was only 16GB. WTF!!!! The server should be several TB in size (it is a file server after all).

Upon further investigation it seems I had been misreading the backup reports. The old server, which has the same name on the old Hyper-V cluster as the new server does on the new implementation, was not getting backed up; it was the new one that was. I misread the report and assumed it was backing up the old server, mistake number 0 (this had been happening for the 6 weeks before the failure), and the old restore points, being beyond our retention limit, were gone. OK, I will hit the long-term off-site backups; it might take a while but the data is safe. Well, it was not, or so it seemed: the other technician at the off-site location had removed the off-site backups for the file server from the primary site. Why? Because they were taking up too much space on that site’s primary backup disk (the storage at each site is partitioned to provide on-site backup for that site, with the second partition being the off-site backup for the other site).

Damn, so this copy of the data is the only one.

OK, so I killed the cluster server that everything was on and, using the old evicted node, rebuilt a single-node “cluster”, mounted the CSV, mounted the VHDX, and everything appeared as it should. Whoo hoo, access to the data... well, not so fast there, buddy.

After moving some data, an error popped up stating that the data was inaccessible. OK, no problem, loss of a single file is not a real issue. Then it popped up again, then again... the second Oh! Shit! moment within several hours.

2017-02-02 - Dedupe Error

I recovered and moved the data I could access, leaving me purely with the data I couldn’t. I tried chkdsk and other tools, and after several hours I took a break from it, needing to clear my mind.

Coming back to it later I looked at the error, looked at what was happening, and recalled seeing an article on another blog about Data Deduplication corrupting files on Server 2016. With this I began wondering if it had affected Server 2012 R2, and then the lightning struck: deduplication. This process leaves redirects in place and essentially keeps a database of file data that it links to for the deduplication. The server the VHDX was mounted on did not know about the deduplication, the database, or how to access it.

Up until now I had only mounted the data VHDX. Now I rebuilt the server utilising the original operating system VHDX to run it, and let it install the new devices and boot.

Upon the server booting I opened a file I could not access before, and it instantly popped onto my screen. Problem solved.

Note to remember: if you are doing data recovery or trying to copy data from a VHDX (or other disk, virtual or physical) that was part of a deduplicated file server, you need to do it from that server due to the deduplication database. You may be able to import the database to another server; I really have no idea, and I am not going to try to find out.
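As a side note, if you want to confirm up front that you are in this situation rather than looking at genuine corruption, you can query one of the “inaccessible” files from an elevated command prompt on the machine the disk is mounted on; the path below is just a placeholder.

rem A deduplicated file comes back with reparse point data (the dedup reparse tag),
rem while a normal file simply reports that it is not a reparse point
fsutil reparsepoint query "D:\Data\SomeFolder\example.docx"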

The Hidden Cost of the Raspberry Pi (and other “cheap” SBCs)

The Raspberry Pi and other small single board computers have really taken off in the past few years, especially with the burgeoning wave of development, both commercial and, mainly, hobbyist, in the Internet of Things (IoT) arena.

Now, the Raspberry Pi (I am focusing on the RPi here because it kicked off the whole shebang in a big way; small SBCs existed before then, but they were not as widely available or used) was never intended to be an IoT board; it was originally intended to be used to teach programming to children. The success of this original project (with over 5 million, yes that is 5,000,000, sold) has not only spawned a myriad of projects but also a whole bunch of clones and similar devices looking to capitalise on its success.

With the hobbyist community getting hold of these devices and putting them into various projects, one has to question their real cost. The boards, for those who do not know, cost US$25 or US$35 depending on the revision; however, once you add an SD card (either standard or micro, depending on revision), a power supply, a case (enclosure) and, if needed, a USB wireless dongle, you are looking at getting towards US$100, not as cheap as it sounds, and that’s in a basic headless configuration.

The other side to this is the environmental cost. With all these devices (remember there are 5 million RPis alone) floating around that will at some point in their lives end up being thrown out, mostly into landfill, it is not overly cost effective environmentally, with all those electronics leaching chemicals and other materials over time. Whatever the cause, upgrades to newer models, migrations to other platforms, or even loss of interest, the result is the same.

Now don’t get me wrong, I am not saying these systems are all wasted, or all an issue. Many interesting projects and products are developed from them, not to mention the education that people get from developing on and for these systems. What I am saying is that their use should be more specialised to where the processing power is actually required, or used to aggregate the data (done through a technology such as MQTT), cache it and forward it to a more powerful management system (home server, anyone?).

Further to this, the idea merges nicely with my move to containers (Docker) and my continuing work with virtual machines: take the services the RPi runs for each function and put them into a container, with that container syncing, either through MQTT or directly through the application's services, to a microcontroller which then carries out the functions.

Why is this more efficient? Because the microcontroller only needs to be dumb: it either reads the data on an interface and reports it to the server, or turns an interface on or off (or perhaps “writes” a PWM value) to perform a function. The microcontroller does not need to be replaced or changed when changing or upgrading the server, and can even be re-tasked to do something else without reprogramming it, only changing the functions and code on the mother controller node.

Much more efficient and effective. It does, however, have the downfall of an extra failure point, so some simple smarts on the microcontroller would be a good idea to allow it to function without the mother controller in the event of a failure; but the MQTT controls are agnostic, so we can work with that, at least for monitoring.

Opinions?

Justin

The Home Server Conundrum

Servers have been in the home for just as long as they have been in businesses, but for the most part they have been confined to home labs, the homes of systems admins, and the more serious hobbyists.

However, with more and more devices entering the modern “connected” home, it is time to once again ask: is it time for the server to move into the home? Some companies are, and have been, starting to make inroads and push their products into the home market segment, most notably Microsoft and their partners with the “Windows Home Server” systems.

Further to this, modern Network Attached Storage (NAS) devices are becoming more and more powerful, leading not only to their manufacturers publishing their own software for the devices, but also to thriving communities growing up around them and implementing their own software on them; Synology and the SynoCommunity, for example.

These devices are still, however, limited to running specially packaged software, and in many cases are missing features from other systems. I know this is often by design, as one manufacturer does not want their “killer app” on a competitor's system.

Specifically, what I am thinking of with the above statement is some of the features of the Windows Home Server and Essentials Server from Microsoft, as many homes are “Microsoft” shops; yet many homes also have one or more Apple devices (here I am thinking specifically of iPads/iPhones), and given the limited bandwidth and data transfer available to most people, an Apple Caching Server would be of benefit.

Now, sure, you could run these on multiple servers, or even on existing hardware that you have around the house, but then you have multiple devices running and chewing up power, which, in this day and age of ever-increasing electricity bills and the purported environmental costs of power, is less than ideal.

These issues could be at least partly alleviated by the use of enterprise-level technologies such as virtualisation and containerisation; however, these are well beyond the skills of the average home user to implement and manage. Not to mention that some companies (I am looking at you here, Apple) do not allow their software to run on “generic” hardware, at least within the terms of the licensing agreement, nor do they offer a way to do this legally by purchasing a licence.

Virtualisation also allows extra “machines” to run, such as a Sophos UTM for security and management on the network.

Home servers are also going to become more and more important, acting as a bridge or conduit for Internet of Things products to gain access to the internet. Now, sure, the products could talk directly back to servers on the internet, and in many cases this will be fine if they can respond locally and, where required, cache their own data in the case of a loss of connection to the main servers, whether through the servers themselves or the internet connection in general being down.

However, what I expect to develop over a longer period is more of a hybrid approach, with a server in the home acting as a local system providing local access to functions and data caching, whilst syncing and reporting to an internet-based system for out-of-house control. I suggest this as many people do not have the ability to manage an externally accessible server, so it is more secure to use a professionally hosted one that talks to the local one over a secure connection.

But more on that in another article, as we are talking about the home server here. So why did I bring it up? Containerisation: many of these devices will want to run their own “server” software or similar, and the easiest way to manage this is going to be through containerisation of the services on a platform such as Docker. This is especially true now that Docker commands and the like are coming to Windows Server systems; it will provide a basically agnostic method and language to set up and maintain the services.

This also brings with it questions about moving house, and the on-boarding of devices from one tenant or owner of the property to the next. Does the server become a piece of house equipment, staying with the property when you move out? Do you create an “image” for the new occupier to run on their device to configure it to manage all the local devices? Or do you run two servers: a personal one that moves with you, and a smaller one that runs all the “smarts” of the house, which then links to your server and presents the devices to your equipment? What about switching gear, especially if your devices use PoE(+) for power? So many questions, but these are for another day.

For all this to work, however, we not only need to work all these issues out; for regular users, the user interface to these systems and the user experience are going to be a major deciding factor. That, and we need a bunch of standards so that users can change the UI/controller and still have all the devices work as one would expect.

So far, for the most part, the current systems have done an admirable job of this, but they are still a little too “techie” for the average user, and will need to improve.

There is a lot of potential for the home server in the coming years, and I believe it is becoming more and more necessary to have one, but there is still a lot of work to do before they become a ubiquitous device.

Access a Cisco Switch via USB Console

It may be that you want to use a USB cable, or it may be that, just like me, you forgot your USB-to-serial adapter, and now you're faced with connecting to a Cisco switch with a USB cable rather than the serial cable, on OS X.

Well, how do we go about this? With Windows we could simply look up the COM port number in Device Manager; OS X does not use this reference, instead referring to the device as a TTY USB modem.

First we need to look up the device, which sits with the other devices in the folder /dev/; we also want to limit the listing to USB devices, so we are going to filter the command to that. Open Terminal and type the following command;

ls -ltr /dev/*usb*

This will list all devices in the /dev/ directory (the devices directory) whose names contain the phrase usb, with all their information, sorted so the most recently modified device (and therefore most likely the device we are looking for) appears last.

Your device will show up as something such as

tty.usbmodem.12a1

Now that we have the path to the device, we need to open a console session using it. In OS X the console utility screen is built in, so let's open the device utilising this utility and a baud rate of 9600, which most devices will happily handle. To do this, type;

screen /dev/tty.usbmodem.12a1 9600

What this command says is: open screen on the device /dev/tty.usbmodem.12a1 utilising a 9600 baud rate. No settings for stop bits etc. are input, and you can utilise other baud rates if needed.

Your terminal will now connect to the console of the Cisco device. This should, however, also work for any other device that utilises a USB chipset to communicate via serial emulation.

Justin

The Case of the Hijacked Internet Explorer (IE) Default Browser Message

I recently had a case of a hijacked Default Browser message (the one that asks you to set the browser as default) in Internet Explorer (IE) 11 on a Windows 8.1 machine. Now that is not to say that it cannot happen to other versions of Windows, Internet Explorer or even other browsers, but this fix will clear the Internet Explorer issue.

As with many of these things, the cause is malware, and the user doing, or rather installing or running, something they shouldn’t have (what they wanted the software for was perfectly OK; it's just that they got stung by the malware).

Anyway the issue presented like this;

The Hijacked page, remember do not click on any links

IMPORTANT NOTE: Now first things first. DO NOT click on any of the links in the page. It is also important to note that even if Internet Explorer is the default browser, or you have told it not to bother you, it will still appear.

Now, the first step in this is understanding what has happened, which in this case is that the ieframe.dll file has been hijacked, either through modification or replacement (which indicates that the program would have had to go through UAC with the user OK’ing the change). Specifically, it seems that the page is being redirected, but I cannot confirm this, as it was more important to fix the issue than to find out the technical reasons why.

Nonetheless, the first step is to run a malware cleaner; specifically, I use Malwarebytes, and I did a cleanup of the system with CCleaner for good measure. It is important to note that this is just to clean up other things the malware may have left behind; it does not fix this problem.

As this problem resides in what is a reasonably well protected file, the best way to fix the issue is with Microsoft’s built-in System File Checker (SFC) tool.

It is actually rather simple to fix this error;

Open a Command Prompt window as Administrator

Open an Administrative Command Prompt

Once you are in the command prompt type;

sfc /scannow

Type sfc /scannow

This tool will now run and verify that the files Microsoft has put into the system are the correct files; if they are not, having been replaced or otherwise modified, it will restore the original file. This process may take some time depending on the hardware you are running it on.

SFC Running – This may take a while

Once complete, you need to restart the PC, and the SFC tool tells you as much.

SFC has completed its task; now it wants you to reboot your PC

Restart your PC and the offending window will now be replaced with the default Microsoft one. Now, as I said before, the malware seems to override/overwrite the setting telling Internet Explorer not to display the defaultbrowser.htm tab (either because it is the default, or because you have told it not to check). That carries over here: because the setting was tampered with by the malware, it will display the default browser page; to clear this you either simply tell it not to display the page again, or go through the set-as-default process.

Enjoy

Justin
