The VM’s Faulty Pagefile

Recently I was having an issue with one of my virtual machines, specifically the one I use for accounting purposes. Each time I booted the VM I would get an error stating that there was a problem with the paging file. Me being me, I ignored it and carried on with my task, with a case of “it's only my stuff, I will deal with it later”, and put the issue out of my mind. However, I then started to get errors in the software I was using. Thinking “oh great, what now?” I looked at a few things and then came back to this error, which, as it turns out, was the cause of the symptoms I was seeing in the software package I was using.

Windows created a temporary paging file on your computer because of a problem that occurred with your paging file configuration when you started your computer. The total paging file size for all disk drives may be somewhat larger than the size you specified.

Now, what does this mean? It can mean one of several things. Most commonly, especially on a VM, it means that the disk is full to the point where Windows cannot create the paging file when it starts up.

Luckily this is a simple fix, which I am not going to take you through here, as the way it is done is entirely dependent on your particular hypervisor (most commonly Hyper-V, VMware, Parallels or VirtualBox). This does, however, assume you have the space free to increase the size of the virtual disk; if you do not, you will need to clear some files off the VM to make space.

Ok, but what if you have plenty of space? Well, there are two options off the top of my head that may work: one is removing and then recreating the paging file, as it may have become corrupted, especially if the system had an unclean shutdown (powered off instantly, such as in a power failure); the other is to run a system file check to clean up any errors.

Whilst neither is hard to do, both can take some time to complete. I would suggest starting with the system file scan as it is the easier of the two and the more comprehensive, but both options are outlined below.

System File Scan

To do this you need to open an administrative command prompt

Open an Administrative Command Prompt

Once you are in the command prompt, type:

sfc /scannow

Type sfc /scannow

This tool will now run and verify the files Microsoft has put into the system, validating that they are the correct files; if they are not, having been replaced or otherwise modified, it will replace them with the original file. This process may take some time depending on the hardware you are running it on.

SFC Running - This may take a while

Once complete, you need to restart the PC, and the SFC tool tells you as much.

SFC has completed its task, now it wants you to reboot your PC

Restart your PC and see if the issue has been resolved; if not, you may try to manually delete and recreate the pagefile as outlined below.
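
If you want to review exactly what SFC found and changed, the details are written to the CBS log. A quick way to pull out just the SFC entries (this filtering approach comes straight from Microsoft's documentation; the output location on the desktop is simply my suggestion) is to run the following in the same administrative command prompt:

findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log > "%userprofile%\Desktop\sfcdetails.txt"

This drops a text file on your desktop containing only the system file repair entries, which you can then read in Notepad at your leisure.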

 

Manual Removal and Recreation of the Pagefile

 

Having logged into your system with an account that has administrative rights (or otherwise authorised yourself for administrative access to the system panel), you need to open the system properties display. Alternatively, if the dialogue box with the warning pops up, click OK and the pagefile controls will open, allowing you to skip the first step.

 

    1. If the paging file settings display is not already open, open the virtual memory settings by right-clicking on Computer → Properties → Advanced System Settings → click the Advanced tab → under Performance, click Settings → go to the Advanced tab → finally, under the Virtual Memory section, click the Change button.

    2. Uncheck the Automatically manage paging file size for all drives checkbox.

    3. Set a “Custom size” for the paging file on the C: drive: 0MB initial, 0MB maximum.

    4. Click OK, close all dialog boxes, and restart your computer.

    5. After logging in again, delete the file C:\pagefile.sys
        a) To do this, you may need to change your folder settings so you can see it first. Open a window of your C: drive and click Organize at the top, then Folder and Search Options.
        b) Click the View tab, and make sure Show hidden files, folders and drives is turned on, and that Hide protected system files is not checked.
        c) Click OK and go back to your C: drive, find pagefile.sys and delete it.

    6. Now go back to the virtual memory settings (see step 1 above) and set the paging file for the C: drive to System managed size, and then make sure the Automatically manage paging file size for all drives checkbox is checked.

    7. Click OK, close all dialog boxes, and restart your computer.
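
As an aside, the uncheck/zero/delete dance above can also be done from an administrative command prompt using wmic, which saves some clicking around. This is a sketch of the equivalent commands rather than a tested one-liner, so treat it accordingly:

rem Turn off automatic pagefile management, then remove the C: pagefile entry
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" delete

Reboot, delete C:\pagefile.sys if it is still present, and then hand control back to Windows:

wmic computersystem where name="%computername%" set AutomaticManagedPagefile=True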

LFTP and the Stuck Login

I have been working recently on a new backup management system that utilises the Synology and its ability to schedule tasks. Whilst I am ultimately working on a program written in Go to manage multiple backup configurations utilising multiple backup protocols, I have been playing with the underlying software and protocols outside this program. One such piece of software is LFTP. This software allows for the transfer of files utilising FTP, FTPS, SFTP, HTTP, HTTPS and other protocols; the FTP family are the ones that are important for the software I am writing and, most importantly, LFTP supports mirroring with the FTP series of protocols.

Whilst I am writing this software I still wanted to get backups of the system running. To this end I was testing the LFTP commands, and I hit an issue where the system would simply not connect to the server, yet the regular FTP client works fine.

Firstly, we have to understand that LFTP does not connect to the server until the first command is issued; in the case of the example below, this was ls. Once this command is issued LFTP attempts to connect to and log in to the server, and this is where the issue happens: LFTP just hangs at “Logging in”.

user@server backupfolder$ lftp -u username,password ftp.hostname.com
lftp username@ftp.hostname.com:~> ls
`ls' at 0 [Logging in...]

To work out what the issue was I had to do a little research, and it comes down to the fact that LFTP wants to default to secure connections. In and of itself that is not a bad thing, in fact it is a good thing, but many FTP servers are yet to implement FTPS (FTP over SSL/TLS), and as such we end up with a hang at login. There are, however, two ways to fix this.

The first way to fix this is to turn off SSL for this connection only, which is done through the modified connect command of

lftp -e "set ftp:ssl-allow=off;" -u username,password ftp.hostname.com

This is best if you are dealing with predominantly secure sites. However, as I said, most FTP servers are still utilising the older insecure FTP protocol, at which point it may be more beneficial to change the LFTP configuration to default to insecure mode (and then enable SSL where needed for the secure connections; it depends on which you have more of). To do this we need to edit the LFTP config file, as follows.

Utilising your favourite text editor (vi, nano or whatever, it matters not), open the config file at /etc/lftp.conf

At some point in the file (I suggest at the end) put the following line

set ftp:ssl-allow false

Save your configuration and the defaulting to secure is turned off; your LFTP connection should now work.
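
And since the whole point of this exercise was backups, once the login hang is sorted the same setting can be used in a one-shot mirror job, which is the sort of command I am scheduling on the Synology. A sketch with placeholder paths and credentials (-R mirrors local files up to the server; drop it to pull down instead):

lftp -e "set ftp:ssl-allow=off; mirror -R /volume1/backups /remote/backups; quit" -u username,password ftp.hostname.com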

Have Fun

Justin

The Home Server Conundrum

Servers have been in the home for just as long as they have been in businesses, but for the most part they have been confined to home labs, the homes of systems admins, and the more serious hobbyists.

However, with more and more devices entering the modern “connected” home, it is time to once again ask: is it time for the server to move into the home? Some companies are, and have been, making inroads and pushing their products into the home market segment, most notably Microsoft and their partners with the “Windows Home Server” systems.

Further to this, modern Network Attached Storage (NAS) devices are becoming more and more powerful, leading not only to their manufacturers publishing their own software for the devices, but to thriving communities growing up around them and implementing their own software on them; Synology and the SynoCommunity, for example.

These devices are, however, still limited to running specially packaged software, and in many cases are missing the features of other systems. I know this is often by design, as one manufacturer does not want their “killer app” on a competitor's system.

Specifically, what I am thinking of with the above statement is some of the features of the Windows Home Server and Essentials Server from Microsoft. Many homes are “Microsoft” shops, yet many homes also have one or more Apple devices (here I am thinking specifically of iPads and iPhones), and given the limited bandwidth and data allowances available to most people, an Apple Caching Server would be of benefit.

Now sure, you could run these on multiple servers, or even on existing hardware that you have around the house, but then you have multiple devices running and chewing up power, which in this day and age of ever increasing electricity bills and the purported environmental costs of power is less than ideal.

These issues could at least be partly alleviated by the use of enterprise level technologies such as virtualisation and containerisation; however, these are well beyond the management skills of the average home user to implement and maintain. Not to mention that some companies (I am looking at you here, Apple) do not allow their software to run on “generic” hardware, at least within the terms of the licensing agreement, nor do they offer a way to do this legally by purchasing a licence.

Virtualisation also allows extra “machines” to run, such as a Sophos UTM for security and management on the network.

Home servers are also going to become more and more important as a bridge or conduit for Internet of Things products to gain access to the internet. Now sure, the products could talk directly back to their vendors' servers, and in many cases this will be fine, provided they can respond locally and, where required, cache their own data when the connection to the main servers is lost, whether through the servers themselves or the internet connection in general being down.

However what I expect to develop over a longer period is more of a hybrid approach, with a server in the home acting as a local system providing local access to functions and data caching, whilst syncing and reporting to an internet based system for out of house control. I suggest this as many people do not have the ability to manage an externally accessible server, so it is more secure to use a professionally hosted one that then talks to the local one over a secure connection.

But more on that in another article, as we are talking about the home server here. So why did I bring it up? Containerisation: many of these devices will want to run their own “server” software or similar, and the easiest way to manage this is going to be through containerisation of the services on a platform such as Docker. This is especially true now that Docker commands and the like are coming to Windows Server systems, as it will provide a basically agnostic method and language to set up and maintain the services.
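
To give a flavour of what that might look like, here is a sketch of running a hypothetical device-bridge service as a container; the image name, port and paths are made up for illustration, but the pattern of a restart policy plus a mapped data directory is the point:

docker run -d --name device-bridge --restart unless-stopped -p 8080:8080 -v /srv/device-bridge:/data example/device-bridge:latest

One command, and much the same command whether the host is a NAS, a Linux box or, increasingly, a Windows Server system.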

This also brings with it questions about moving house, and the on-boarding of devices from one tenant or owner of the property to another. Does the server become a piece of house equipment, staying with the property when you move out? Do you create an “image” for the new occupier to run on their device to configure it to manage all the local devices? Do you again run two servers: a personal one that moves with you, and a smaller one that runs all the “smarts” of the house, linking to your server and presenting the devices to your equipment? What about switching gear, especially if your devices use PoE(+) for power? So many questions, but these are for another day.

For all this to work, however, we not only need to work all these issues out; for regular users, the user interface and user experience of these systems are going to be a major deciding factor. That, and we need a bunch of standards so that users can change the UI/controller and still have all the devices work as one would expect.

So far, for the most part, the current systems have done an admirable job of this, but they are still a little too “techie” for the average user, and will need to improve.

There is a lot of potential for the home server in the coming years, and I believe it is becoming more and more necessary to have one, but there is still a lot of work to do before they become a ubiquitous device.

Enabling Data Deduplication on Server 2012 R2


Data deduplication (or dedupe for short) is a process by which the system responsible for the deduplication scans the files in one or more specific locations for duplicates, and where duplicates are found it replaces all the duplicate data with a reference to the “original” data. This is in essence a data compression technique designed to save space by reducing the data actually stored, as well as aiming to provide single-instance data storage (storing only one copy of the data, no matter how many places it is located in).

The way this is achieved depends on the system used; it can be done at block level, file level or other levels, again depending on the system and how it is implemented.

What we are going to do in this article is enable deduplication on a Windows Server 2012 R2 server. Keep in mind this changes data and could quite possibly cause data damage or loss, so make sure you have a working backup BEFORE continuing.

Firstly we need to access the server that you are planning to configure deduplication on; I will leave it up to you how you achieve that. Once you have access to the server we can begin.

On the server open “Server Manager” if it is not already open

2014-09-19-01-ServerManager

If it gives you the default splash page, simply click “Next” (and I suggest telling it to skip that page in future by use of the checkbox). Once we are on the “Installation Type” page we need to select “Role-based or feature-based installation” and click “Next”.

2014-09-19-02-AddRoleorFeature

In the “Server Selection” page select the server you want to install the service on (commonly the one you're using), and click “Next”.

2014-09-19-03-SelectServer

 

Next up is the “Server Roles” page; here is where the configuration changes need to take place. In the right hand list of checkboxes (titled “Roles”) scroll down till you see “File And Storage Services”, then open “File and iSCSI Services”, then further down the page check the “Data Deduplication” checkbox. Click “Next”, accepting any additional features it wants to install.

2014-09-19-04-SelectService

In the “Features” page simply click “Next”

2014-09-19-05-IgnoreFeatures

On the “Confirmation” page check you are installing what is required and click “Install”

2014-09-19-06-Install

Wait for the system to install, and exit the installer control panel, restart if your server requires it.

Upon completion of the install and any tasks associated with the installation re-open “Server Manager” and in the left hand column select “File and Storage Services”

2014-09-19-07-ServerManager

This will change the screen in “Server Manager” to a three column layout, in the middle column select “Volumes”

2014-09-19-08-ServerVolumes

With the volumes now displaying in the right hand of the three columns, right click on the volume you want to configure deduplication on and select “Configure Data Deduplication”

2014-09-19-09-ServerVolumesRightClick

This will bring up the “Deduplication Settings” screen for the volume you right clicked on. Unless data deduplication has been configured before, the “Data deduplication” setting will be “Disabled”.

2014-09-19-10-DuplicationSettings-Initial

As I am configuring this on a file server, I am going to select the “General purpose file server” option, and leave the rest as defaults. I am then going to click on the “Set Deduplication Schedule” button.

2014-09-19-11-DuplicationSettings-Enable

The “Deduplication Schedule” screen will now open. I suggest checking the “Enable background optimization” checkbox, as this will allow the server to optimise data in the background. I also elected to create schedules to allow for more aggressive use of system resources: the first allows it to run after most people have left for the day and before the server's scheduled backup, and the second allows it to run all weekend, but again stops for backups. Please note that these settings are SYSTEM settings and apply to all data deduplication jobs on the system; they are not unique to each individual deduplication job.

Click “Apply” on the “Deduplication Schedule” screen, and then “Apply” on the “Deduplication Settings” screen. This will drop you back to the “File and Storage Services > Volumes” screen, and you are now done; data deduplication is configured.
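
For what it is worth, the same result can be had from an elevated PowerShell prompt. A minimal sketch, assuming the volume you want deduplicated is D: (the Dedup cmdlets ship with the Data Deduplication feature on 2012 R2):

Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "D:" -UsageType Default
Start-DedupJob -Volume "D:" -Type Optimization
Get-DedupStatus

The last two lines kick off a manual optimisation job and then report the savings, which is a handy way to confirm everything is working without waiting for the schedule to come around.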

Have fun, and don’t forget that backup

Justin

Using Internet Information Services (IIS) to Redirect HTTP to HTTPS on a Web Application Proxy (WAP) Server

For those of you who do not know, Microsoft's Web Application Proxy (WAP) is a reverse HTTPS proxy used for redirecting HTTPS requests from multiple incoming domains (or subdomains) to internal servers. It does not, however, handle HTTP at any point, which is a failing in itself; I mean, it would not be hard to add a part of the system where, if enabled, it redirects HTTP to HTTPS itself, rather than requiring a workaround. Come on Microsoft, stay on the ball here. But I digress.

As I stated, the main issue here is that the WAP itself does not redirect an HTTP request to the equivalent HTTPS address. I have played with multiple possible solutions for this, including a Linux server running Apache 2 using PHP to read the requested URL and redirect it to the HTTPS equivalent. None of these, however, have the simple elegance of this solution, which puts the HTTP to HTTPS redirect on the same box as the WAP system itself.

First of all you need to log into the WAP server and install the Internet Information Services role. Once done open the management console and you should get a window similar to below.

01-OpenIISManager

Now navigate to the required server by clicking on it, and on the right hand side click “Get New Web Platform Components”.

02-GetNewWebPlatformComponents

This will open a new web browser window as shown below; when it does, simply select “Free Download”. If you have issues with not being able to download the file due to a security warning, see the earlier blog here for how to enable the downloads. Download and install the software via your chosen method.

03-FreeDownload

Once it is installed a new page will appear, this is the main splash page of the Web Platform Installer

04-WebPlatformInstaller5.0HomeScreen

Using the search box (which at the time of writing, using Web Platform Installer 5.0, is in the top right hand corner) search for the word “Rewrite”. This will display a “URL Rewrite” result with the version number appended to the end (which at the time of writing this article is 2.0). Click the “Add” button to the right of the highlighted “URL Rewrite” line.

05-URLRewriteAdd

This will change the text on the button to “Remove” and activate the “Install” button in the lower right of the screen; click the install button.

06-URLRewriteInstall

Clicking this install button will bring up a licensing page, click the “I Accept” button (assuming of course you do accept the T’s & C’s)

07-LicenceAcceptance

You will then get an install progress page

08-RewriteInstallProcess

Which will change to a completed page after it is done, so click the “Finish” button in the lower right hand corner

09-RewriteInstallFinish

This will drop you back to the same original splash screen of the Web Platform Installer; click “Exit”.

10-WPI-Finish

You will now need to close and re-open the IIS Manager and reselect the server you were working on. You should now see two new options: the first is “Web Platform Installer”, which we do not need to concern ourselves with any further; the second is “URL Rewrite”.

11-IISManager-NewModule

Double click on “URL Rewrite” and open up the URL Rewrite management console, on the right hand side of this console in the “Actions” pane, click “Add Rule”.

12-AddRewriteRule

This opens up a box of possible rewrite rules. What we want to create is an “Inbound Rule”, as our requests are coming into the server from an external source. Select “Blank Rule” and click the “OK” button.

13-NewRule-BlankRule(Inbound)

In the new page that opens, in the “Name” field type the name that you want to give the rule. I use and suggest HTTP to HTTPS Redirect, as this tells you exactly what it does at a glance.

14-NewRule-NameRule

In the next section, “Match URL” set “Requested URL” to “Matches the Pattern” (default), “Using” to “Regular Expressions” (default) and most importantly “Pattern” to “(.*)” (without the quotes). I suggest you take this opportunity to test the pattern matching.

15-NewRule-Regex Match

In the “Conditions” section, ensure that the “Logical grouping” is set to “Match All” (default) and click the “Add” button.

16.01-NewRule-AddCondition

In the new box that appears, enter the following: in the “Condition input” field type “{HTTPS}” (again without the quotes, and yes, those are curly braces, not brackets); change the “Check if input string” dropdown to “Matches the Pattern”; in the “Pattern” box below type “^OFF$” (again, no quotes); and “Ignore case” should be checked. With this one I do not suggest testing the pattern, as even though this system works fine for me, this test ALWAYS fails. Click the “OK” button (mine is not highlighted here as I had already clicked it away and had to re-open the box).

16.02-NewRule-ConditionSettings

This will take you back to the new rule screen, check the conditions match as shown and then we can move on.

16.03NewRule-ConditionComplete

This is the part where we tell it what we want to do when the previous conditions match. In the “Action” pane change the “Action type” to “Redirect”, and set the “Redirect URL” to “https://{HTTP_HOST}/{R:1}” (again, those are curly braces and of course no quotes). You can select whether “Append query string” is checked or not, but I highly recommend leaving it checked: if someone has emailed out a URL with a query string on it, but not put in the protocol headers (http:// and https:// being the ones we are concerned about), we want the query string appended to the end of the redirected URL so they end up where they intended to be. Finally, make the “Redirect type” dropdown read “Permanent (301)” (default).

17-NewRule-ActionConfiguration
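
If you prefer to check (or keep a copy of) the result rather than trust the GUI, the rule the above steps create is written into the site's web.config. It should end up looking something like the below; this is a sketch based on the settings described above, so verify it against what the wizard actually wrote:

<rewrite>
  <rules>
    <rule name="HTTP to HTTPS Redirect" stopProcessing="true">
      <match url="(.*)" />
      <conditions logicalGrouping="MatchAll">
        <add input="{HTTPS}" pattern="^OFF$" ignoreCase="true" />
      </conditions>
      <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" appendQueryString="true" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>

This block lives inside the <system.webServer> element of the site's web.config.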

Restart the server service for good measure, and there you have it: HTTP being redirected to HTTPS, on the same server as the WAP itself. Ensure that you have ports 80 (HTTP) and 443 (HTTPS) forwarded from your router to the server, and the firewalls (and any other intermediaries) on both the router and the server set to allow the traffic as required.
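
If the Windows Firewall on the server is the thing in the way, a quick sketch of opening the two ports from an administrative command prompt (the rule names are just my labels):

netsh advfirewall firewall add rule name="HTTP In" dir=in action=allow protocol=TCP localport=80
netsh advfirewall firewall add rule name="HTTPS In" dir=in action=allow protocol=TCP localport=443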

Enjoy and as always have fun

Justin

EMCO Remote Shutdown and Setting Windows 8(.1) Remote Registry by Group Policy Object (GPO)

As I have mentioned in a previous blog post, several clients have been using this software for several years with their fleets of Windows 7 desktops, with great success. This however changed when, during testing for the Windows 8.1 deployment, we found that it does not work for 8/8.1; this is due to the Remote Registry service no longer being enabled by default.

2014-08-11-RemoteRegistry-00-DisabledRegistry

Now, rather than updating the machines manually or changing the service status in the image, I wanted to start this service via Group Policy, as this ensures that all devices turn it on, and when I or someone else creates a new image in future it is one less thing to do. It turns out this is easier to do than I thought it would be.

First you need to open up “Group Policy Management” and find the policy you want to edit by expanding the appropriate trees (or create a new policy within the right scope), then right click on it and select “Edit”. This is a computer policy, so if, like me, you limit your GPOs to work on only users OR computers (best practice), then make sure you select a computer enabled policy.

2014-08-11-RemoteRegistry-01-GPEDIT

 

Once you have opened the “Group Policy Management Editor” then you will need to navigate the tree (in the left hand column) to “Computer Configuration” > “Policies” > “Windows Settings” > “Security Settings” > “System Services” and then in the right hand column search out “Remote Registry“, double click on this to open the “Remote Registry Properties” box.

2014-08-11-RemoteRegistry-03-EditPolicy

In this box, select the “Define this policy setting” checkbox, which will in turn enable the options below it; you then simply want to change the “Select service startup mode” radio buttons to “Automatic”.

Now, after a group policy update (which can be forced on individual machines via “gpupdate /force”, without the quotes) and a reboot, the machines will have the “Remote Registry” service started and running.
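
If you want to spot-check a machine without walking over to it, you can query the service state remotely from an administrative command prompt. A sketch, with the machine name as a placeholder:

sc \\PCNAME query RemoteRegistry

The STATE line of the output should read RUNNING once the policy has applied and the machine has rebooted.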

2014-08-11-RemoteRegistry-04-RegistryEnabled

 

Justin
