Oh Shit!!! The Data is Gone…. Or Is It?

Yep, I screwed up. I made assumptions, didn’t double- and triple-check things, and made a mess of something I was working on, professionally no less. I did fix it, but it was a stupid screw up all the same. The irony is not lost on me, given how regularly I harp on about backups to everyone.

The other day I ended up with one of those Oh!! SHIT moments. I was migrating an older 2012 R2 file server to 2016, and while I was doing this I decided to kick the old server, which was due to go back to the leasing company, out of the Failover Cluster it was in. As standard, I paused the node, removed all the references to it, and then hit the evict function to remove it from the cluster. This is where it all went to shit; doing this in the middle of a migration was the first part of my mistake. What happened is the Failover Cluster borked itself and crashed on the remaining servers, and it would not restart; to this day I have not got it to restart.

After spending an hour or so trying to get the cluster to restart, I relented and went to the backups to restore the offending server. Hitting the backups, I go to the server I want to restore and notice that it’s only 16GB. WTF!!!! The server should be several TB in size (it is a file server, after all).

Upon further investigation, it seems I had been misreading the backup reports. The old server, which had the same name on the old Hyper-V cluster as the new server does on the new implementation, was not getting backed up; it was the new one that was. I misread the report and assumed that it was backing up the old server, mistake number 0 (this had been happening for the 6 weeks before the backup failure), and the old restore points, being past our retention limit, were gone. OK, I’ll hit the long-term off-site backups; it might take a while, but the data is safe. Well, it was not, or so it seemed: the other technician at the off-site location had removed the off-site backups for the file server from the primary site. Why? Because they were taking up too much space on that site’s primary backup disk (the storage at each site is partitioned to provide on-site backup for that site, with the second partition being the off-site backup for the other site).

Damn, so this copy of the data is the only one.

OK, so I killed the cluster server that everything was on, and using the old evicted node I rebuilt a single-node “cluster”, mounted the CSV, mounted the VHDX, and everything appeared as it should. Whoo hoo, access to the data! Well, not so fast there, buddy.

After moving some data, an error popped up stating that the data was inaccessible. OK, no problem, the loss of a single file is not a real issue. Then it popped up again, then again… the second Oh! Shit! moment within several hours.

2017-02-02 - Dedupe Error

I recovered and moved the data I could access, leaving me purely with the data I couldn’t. I tried chkdsk and other tools, and after several hours I took a break from it, needing to clear my mind.

Coming back to it later, I looked at the error, looked at what was happening, and recalled seeing an article on another blog about Data Deduplication corrupting files on Server 2016. With this I began wondering if it had affected Server 2012 R2, and then the lightning struck: deduplication. This process leaves reparse points in place and essentially keeps a database of the file data that it links to for the deduplication. The server the VHDX was mounted on did not know about the deduplication, the database, or how to access it.

Up until now I had only mounted the data VHDX. Now I rebuilt the server utilising the original operating system VHDX to run it. I let it install the new devices and boot.

Upon the server booting, I opened a file I could not access before, and it instantly popped onto my screen. Problem solved.

Note to remember: if you are doing data recovery or trying to copy data from a VHDX (or other disk, virtual or physical) that was part of a deduplicated file server, you need to do it from that server, due to the deduplication database. You may be able to import the database to another server; I really have no idea, and I am not going to try to find out.
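As a side note, if you find yourself in this situation, you can check the state of a mounted volume from PowerShell before copying anything off it. Here is a rough sketch, assuming the Data Deduplication feature is installed on the server doing the recovery and the volume is mounted as E: (the drive letter is just an example):

```powershell
# Requires the Data Deduplication feature on the server doing the recovery
Import-Module Deduplication

# Is the mounted volume (example letter) a deduplicated volume?
Get-DedupVolume -Volume 'E:'

# How much of the data is still held as dedup chunks rather than full files?
Get-DedupStatus -Volume 'E:'

# Optionally rehydrate everything back to full files before copying
# (make sure the volume has enough free space to hold the expanded data first)
Start-DedupJob -Volume 'E:' -Type Unoptimization
```

The unoptimization job can take a long time on a multi-TB volume, so for a quick rescue it may be faster to just copy the data from the original server as described above.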

Ubuntu 16.04 Server LTS: Generation 2 Hyper-V VM Install

So you have downloaded Ubuntu 16.04 and noticed it supports EFI, yet when you try to boot from the ISO, you are greeted with a message stating that the machine does not detect it as an EFI-capable disk, as shown below


Luckily this is an easy fix, as it is simply Secure Boot that Ubuntu and Hyper-V are having an argument over.

Turn off your VM, then open up the settings page and navigate to the “Firmware” menu. As you can see in the first image below, “Secure Boot” is enabled (checked). To fix this, simply uncheck it as per the second image below, then click “Apply” and “OK”.
Upon doing this and restarting your virtual machine, you will be presented with the boot menu from the disk, allowing you to continue on your way
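For those who prefer PowerShell over the GUI, the same change can be made with the Hyper-V module’s Set-VMFirmware cmdlet. A quick sketch, where the VM name is just an example:

```powershell
# The VM must be powered off before changing firmware settings; the name is an example
Stop-VM -Name 'Ubuntu1604'

# Disable Secure Boot on the Generation 2 VM
Set-VMFirmware -VMName 'Ubuntu1604' -EnableSecureBoot Off

# Boot it back up and the ISO should now be offered as an EFI boot device
Start-VM -Name 'Ubuntu1604'
```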

Have Fun


The Home Server Conundrum

Servers have been in the home for just as long as they have been in businesses, but for the most part they have been confined to home labs and to the homes of systems admins and the more serious hobbyists.

However, with more and more devices entering the modern “connected” home, it is time to once again ask: is it time for the server to move into the home? Some companies have been making inroads and pushing their products into the home market segment for a while, most notably Microsoft and its partners with the “Windows Home Server” systems.

Further to this, modern Network Attached Storage (NAS) devices are becoming more and more powerful, leading not only to their manufacturers publishing their own software for the devices, but to thriving communities growing up around them and implementing their own software on them; Synology and the SynoCommunity, for example.

These devices are still, however, limited to running specially packaged software, and in many cases are missing features found on other systems. I know this is often by design, as one manufacturer does not want their “killer app” on a competitor’s system.

Specifically, what I am thinking of with the above statement is some of the features of Microsoft’s Windows Home Server and Essentials Server, as many homes are “Microsoft” shops; yet many homes also have one or more Apple devices (here I am thinking specifically of iPads and iPhones), and given the limited bandwidth and data allowances available to most people, an Apple Caching Server would be of benefit.

Now sure, you could run these on multiple servers, or even on existing hardware that you have around the house, but then you have multiple devices running and chewing up power, which in this day and age of ever-increasing electricity bills and the purported environmental costs of power is less than ideal.

These issues could at least be partly alleviated by the use of enterprise-level technologies such as virtualisation and containerisation; however, these are well beyond the management skills of the average home user to implement and manage. Not to mention that some companies (I am looking at you here, Apple) do not allow their software to run on “generic” hardware, at least within the terms of the licensing agreement, nor do they offer a way to do this legally by purchasing a licence.

Virtualisation also allows extra “machines” to run, such as a Sophos UTM for security and management on the network.

Home servers are also going to become more and more important as a bridge or conduit for Internet of Things products to gain access to the internet. Now sure, the products could talk directly back to the servers, and in many cases this will be fine, provided they can respond locally and, where required, cache their own data when the connection to the main servers is lost, whether through the servers themselves or the internet connection in general being down.

However, what I expect to develop over a longer period is more of a hybrid approach: a server in the home acting as a local system providing local access to functions and data caching, whilst syncing and reporting to an internet-based system for out-of-house control. I suggest this as many people do not have the ability to manage an externally accessible server, so it is more secure to use a professionally hosted one that then talks to the local one over a secure connection.

But more on that in another article, as we are talking about the home server here. So why did I bring it up? Containerisation: many of these devices will want to run their own “server” software or similar, and the easiest way to manage this is going to be through containerisation of the services on a platform such as Docker. This is especially true now that Docker commands and the like are coming to Windows Server, providing a basically agnostic method and language to set up and maintain the services.

This also brings with it questions about moving house, and the on-boarding of devices from one tenant or owner of the property to another. Does the server become a piece of house equipment, staying with the property when you move out? Do you create an “image” for the new occupier to run on their device to configure it to manage all the local devices? Or do you again run two servers: a personal one that moves with you, and a smaller one that runs all the “smarts” of the house, links to your server, and presents the devices to your equipment? And what about the switching gear, especially if your devices use PoE(+) for power? So many questions, but these are for another day.

For all this to work, however, we not only need to work all these issues out; for regular users, the user interface to these systems, and the user experience, is going to be a major deciding factor. That, and we need a set of standards so that users can change the UI/controller and still have all their devices work as one would expect.

So far, for the most part, the current systems have done an admirable job of this, but they are still a little too “techie” for the average user, and will need to improve.

There is a lot of potential for the home server in the coming years, and I believe it is becoming more and more necessary to have one, but there is still a lot of work to do before they become a ubiquitous device.

The Case of the Hijacked Internet Explorer (IE) Default Browser Message

I recently had a case of a hijacked Default Browser message (the one that asks you to set the browser as default) in Internet Explorer (IE) 11 on a Windows 8.1 machine. That is not to say it cannot happen on other versions of Windows, Internet Explorer, or even other browsers, but this fix will clear the Internet Explorer issue.

As with many of these things, the cause is malware, and the user installing or running something they shouldn’t have (what they wanted the software for was perfectly OK; it’s just that they got stung by the malware).

Anyway the issue presented like this;

The Hijacked page, remember do not click on any links

IMPORTANT NOTE: First things first: DO NOT click on any of the links in the page. It is also important to note that this page will appear even if Internet Explorer is already the default browser, or you have told it not to bother you.

Now the first step is understanding what has happened, which in this case is that the iframe.dll file has been hijacked, either through modification or replacement (which indicates that the program would have had to get through UAC with the user OK’ing the change). Specifically, it seems that the page is being redirected, but I cannot confirm this, as it was more important to fix the issue than to find out the technical reasons why.

Nonetheless, the first step is to run a malware cleaner; I use Malwarebytes, and I did a cleanup of the system with CCleaner for good measure. It is important to note that this just cleans up other things the malware may have left behind; it does not fix this problem.

As this problem resides in what is a reasonably well protected file, the best way to fix the issue is with Microsoft’s built-in System File Checker (SFC) tool.

It is actually rather simple to fix this error;

Open a Command Prompt window as Administrator

Open an Administrative Command Prompt

Once you are in the command prompt type;

sfc /scannow

Type sfc /scannow

This tool will now run and verify the files Microsoft has put into the system, validating that they are the correct files; if they are not, having been replaced or otherwise modified, it will replace them with the original file. This process may take some time depending on the hardware you are running it on.

SFC Running - This may take a while

Once complete, you need to restart the PC, and the SFC tool tells you as much

SFC has completed its task, now it wants you to reboot your PC

Restart your PC and the offending window will be replaced with the default Microsoft one. Now, as I said before, the malware seems to override the setting telling Internet Explorer not to display the defaultbrowser.htm tab (either because it is the default, or because you have told it not to check). Because that setting was tampered with by the malware, IE will still display the default browser page; to clear this, you either simply tell it not to display the page, or go through the set-as-default process.
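As an aside, if you want to see exactly what SFC found and repaired, the details are written to the CBS log with an [SR] tag. A quick way to pull just those lines out, run from an administrative PowerShell prompt (the desktop output path is just a suggestion):

```powershell
# SFC logs its results to the CBS log; extract just the [SR] entries for review
Select-String -Path "$env:windir\Logs\CBS\CBS.log" -Pattern '\[SR\]' |
    Select-Object -ExpandProperty Line |
    Out-File "$env:userprofile\Desktop\sfcdetails.txt"
```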



Enabling Data Deduplication on Server 2012 R2

Data deduplication (or dedupe for short) is a process whereby the system responsible for deduplication scans the files in one or more specific locations for duplicates, and where duplicates are found, replaces the duplicate data with a reference to the “original” data. In essence this is a data compression technique designed to save space by reducing the data actually stored, as well as aiming to provide single-instance storage (storing only one copy of the data, no matter how many places it is located in).

The way this is achieved depends on the system used; it can be done at the block level, the file level, or other levels, depending on the system and how it is implemented.
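As a toy illustration of the idea at the file level, the PowerShell sketch below (the share path is just an example) hashes every file in a folder and groups identical content together. The real Windows feature works on sub-file chunks, but the principle of “hash the data, keep one copy, reference it everywhere else” is the same:

```powershell
# Hash every file under the target folder (example path)
$files = Get-ChildItem -Path 'D:\Shares' -Recurse -File |
    Get-FileHash -Algorithm SHA256

# Group files whose content hashes match; every copy beyond the
# first in each group is a candidate for deduplication
$duplicates = $files | Group-Object -Property Hash |
    Where-Object { $_.Count -gt 1 }

foreach ($group in $duplicates) {
    # Potential saving is (copies - 1) x file size
    $size = (Get-Item $group.Group[0].Path).Length
    $saving = ($group.Count - 1) * $size
    Write-Output ("{0} copies of the same content, ~{1:N0} bytes reclaimable" -f $group.Count, $saving)
}
```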

What we are going to do in this article is enable deduplication on a Windows Server 2012 R2 server. Keep in mind this changes data and could quite possibly cause data damage or loss, so make sure you have a working backup BEFORE continuing.

Firstly we need to access the server that you are planning to configure deduplication on, I will leave it up to you how you achieve that. Once you have access to the server we can begin.

On the server open “Server Manager” if it is not already open


Open the “Add Roles and Features” wizard; if it gives you the default splash page, simply click “Next” (and I suggest telling it to skip that page in future by use of the checkbox). Once we are on the “Installation Type” page, select “Role-based or feature-based installation” and click “Next”


In the “Server Selection” page select the server you want to install the service on (commonly the one you’re using), then click “Next”



Next up is the “Server Roles” page; here is where the configuration changes need to take place. In the right-hand list of checkboxes (titled “Roles”) scroll down till you see “File And Storage Services”, then open “File and iSCSI Services”, then further down the page check the “Data Deduplication” checkbox. Click “Next”, accepting any additional features it wants to install.


In the “Features” page simply click “Next”


On the “Confirmation” page check you are installing what is required and click “Install”


Wait for the system to install, exit the installer control panel, and restart if your server requires it.

Upon completion of the install and any tasks associated with the installation re-open “Server Manager” and in the left hand column select “File and Storage Services”


This will change the screen in “Server Manager” to a three column layout, in the middle column select “Volumes”


With the volumes now displaying in the right hand of the three columns, right click on the volume you want to configure deduplication on and select “Configure Data Deduplication”


This will bring up the “Deduplication Settings” screen for the volume you right-clicked on. Unless Data Deduplication has been configured before, “Data deduplication” will be set to “Disabled”.


As I am configuring this on a file server, I am going to select the “General purpose file server” option, and leave the rest as defaults. I am then going to click on the “Set Deduplication Schedule” button


The “Deduplication Schedule” screen will now open. I suggest checking the “Enable background optimization” checkbox, as this allows the server to optimise data in the background. I also elected to create schedules to allow for more aggressive use of system resources: the first allows it to run after most people have left for the day and before the server’s scheduled backup; the second allows it to run all weekend, again stopping for backups. Please note that these settings are SYSTEM settings and apply to all data deduplication jobs on the system; they are not unique to each individual deduplication job

Click “Apply” on the “Deduplication Schedule” screen, then “Apply” on the “Deduplication Settings” screen. This will drop you back to the “File and Storage Services > Volumes” screen, and you are now done; data deduplication is configured.
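For completeness, the whole GUI process above can also be done with the deduplication cmdlets. A sketch follows; the volume letter, schedule name, and times are examples to adjust for your environment:

```powershell
# Install the Data Deduplication role service
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on the target volume; Default is the
# general purpose file server usage type
Enable-DedupVolume -Volume 'D:' -UsageType Default

# Optional: a weeknight throughput schedule (name and times are examples)
New-DedupSchedule -Name 'NightlyOptimisation' -Type Optimization `
    -Start '18:30' -DurationHours 6 `
    -Days Monday,Tuesday,Wednesday,Thursday,Friday

# Kick off an optimisation job immediately and check progress
Start-DedupJob -Volume 'D:' -Type Optimization
Get-DedupStatus -Volume 'D:'
```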

Have fun, and don’t forget that backup


Using Internet Information Services (IIS) to Redirect HTTP to HTTPS on a Web Application Proxy (WAP) Server

For those of you who do not know, Microsoft’s Web Application Proxy (WAP) is a reverse HTTPS proxy used for redirecting HTTPS requests from multiple incoming domains (or subdomains) to internal servers. It does not, however, handle HTTP at any point, which is a failing in itself; I mean, it would not be hard to add an option that, when enabled, redirects HTTP to HTTPS itself, rather than requiring a workaround. Come on Microsoft, stay on the ball here. But I digress.

As I stated, the main issue here is that the WAP itself does not redirect an HTTP request to the equivalent HTTPS address. I have played with multiple possible solutions for this, including a Linux server running Apache 2 and using PHP to read the requested URL and redirect it to the HTTPS equivalent. None of these, however, have the simple elegance of this solution, which puts the HTTP to HTTPS redirect on the same box as the WAP system itself.

First of all, you need to log into the WAP server and install the Internet Information Services role. Once done, open the management console and you should get a window similar to the one below.


Now navigate to the required server by clicking on it, and on the right hand side click “Get New Web Platform Components”.


This will open a new web browser window as shown below; when it does, simply select “Free Download”. If you have issues with not being able to download the file due to a security warning, see the earlier blog post here on how to enable downloads. Download and install the software via your chosen method.


Once it is installed a new page will appear, this is the main splash page of the Web Platform Installer


Using the search box (which at the time of writing, using Web Platform Installer 5.0, is in the top right-hand corner), search for the word “Rewrite”. This will display a “URL Rewrite” result with the version number appended to the end (2.0 at the time of writing this article). Click the “Add” button to the right of the highlighted “URL Rewrite” line.


This will change the text on the button to “Remove” and activate the “Install” button at the lower right of the screen; click the “Install” button.


Clicking this install button will bring up a licensing page, click the “I Accept” button (assuming of course you do accept the T’s & C’s)


You will then get an install progress page


This will change to a completed page after it is done, so click the “Finish” button in the lower right-hand corner


This will drop you back to the same original splash screen of the Web Platform Installer; click “Exit”


You will now need to close and re-open the IIS Manager and reselect the server you were working on. You should now see two new options: the first is “Web Platform Installer”, which we do not need to concern ourselves with any further; the second is “URL Rewrite”.


Double click on “URL Rewrite” and open up the URL Rewrite management console, on the right hand side of this console in the “Actions” pane, click “Add Rule”.


This opens up a box of possible rewrite rules, what we want to create is an “Inbound Rule” as our requests are coming into the server from an external source. Select “Blank Rule” and click the “OK” button


In the new page that opens, in the “Name” field, type the name that you want to give the rule. I use and suggest “HTTP to HTTPS Redirect”, as it tells you exactly what the rule does at a glance.


In the next section, “Match URL” set “Requested URL” to “Matches the Pattern” (default), “Using” to “Regular Expressions” (default) and most importantly “Pattern” to “(.*)” (without the quotes). I suggest you take this opportunity to test the pattern matching.

15-NewRule-Regex Match

In the “Conditions” section, ensure that the “Logical grouping” is set to “Match All” (default) and click the “Add” button.


In the new box that appears, enter the following: in the “Condition input” field, type “{HTTPS}” (again without the quotes, and yes, those are curly braces, not brackets). Change the “Check if input string” dropdown to “Matches the Pattern”, in the “Pattern” box below type “^OFF$” (again, no quotes), and leave “Ignore case” checked. With this one I do not suggest testing the pattern, as even though this system works fine for me, the test ALWAYS fails. Click the “OK” button (mine is not highlighted here as I had already clicked it and had to re-open the box)


This will take you back to the new rule screen, check the conditions match as shown and then we can move on.


This is the part where we tell it what to do when the previous conditions match. In the “Action” pane, change the “Action type” to “Redirect” and set the “Redirect URL” to “https://{HTTP_HOST}/{R:1}” (again, they are curly braces and of course no quotes). You can choose whether “Append query string” is checked, but I highly recommend leaving it checked: if someone has emailed out a URL with a query on it but without the protocol header (http:// and https:// being the ones we are concerned about), we want the query string appended to the end of the redirected URL so they end up where they intended to be. Finally, set the “Redirect type” dropdown to “Permanent (301)” (default).
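For reference, the rule built in the GUI above is stored in the site’s web.config, and should end up looking something like the fragment below. This is a sketch of what the GUI writes, so verify it against your own file rather than pasting it blindly:

```xml
<system.webServer>
  <rewrite>
    <rules>
      <rule name="HTTP to HTTPS Redirect" stopProcessing="true">
        <!-- Match any requested URL and capture it as {R:1} -->
        <match url="(.*)" />
        <!-- Only fire when the request did not arrive over HTTPS -->
        <conditions logicalGrouping="MatchAll">
          <add input="{HTTPS}" pattern="^OFF$" />
        </conditions>
        <!-- 301 redirect to the same host and path over HTTPS -->
        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}"
                appendQueryString="true" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```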


Restart the server service for good measure, and there you have it: HTTP being redirected to HTTPS on (in theory at least) the same server. Ensure that ports 80 (HTTP) and 443 (HTTPS) are forwarded from your router to the server, and that the firewalls (and any other intermediaries) on both the router and the server are set to allow the traffic as required

Enjoy and as always have fun