The VM’s Faulty Pagefile

Recently I was having an issue with one of my Virtual Machines, specifically the one I use for accounting purposes. Each time I booted the VM I would get an error stating that there was a problem with the paging file. Me being me, I ignored it and continued on with my task, with a case of "it's only my stuff, I will deal with it later", and put the issue out of my mind. However, I then started to get errors in the software I was using. Thinking "oh great, what now", I looked at a few things and then came back to this error, which, as it turns out, was the cause of the symptoms I was seeing in the software package I was using.

"Windows created a temporary paging file on your computer because of a problem that occurred with your paging file configuration when you started your computer. The total paging file size for all disk drives may be somewhat larger than the size you specified."

Now what does this mean? Well, it can mean one of several things. Most commonly, especially on a VM, it means that the disk is full to the point where Windows cannot create the paging file when it starts up.

Luckily this is a simple fix, which I am not going to take you through here, as the way it is done is entirely dependent on your particular hypervisor (most commonly Hyper-V, VMware, Parallels or VirtualBox). This does, however, assume you have the free space to increase the size of the virtual disk; if you do not, you will need to clear some files off the VM to make space.

Ok, but what if you have plenty of space? Well, there are two options off the top of my head that may work: one is removing and then recreating the paging file, as it may have become corrupted, especially if the system had an unclean shutdown (powered off instantly, such as in a power failure); the other is to run a system file check to clean up any errors.

Whilst neither is hard to do, both can take some time to complete. I would suggest starting with the system file scan as it is the easier of the two and the more comprehensive, but both options are outlined below.

System File Scan

To do this you need to open an administrative command prompt


Once you are in the Command Prompt, type:

sfc /scannow


This tool will now run and verify the files Microsoft has put into the system to validate they are the correct files; if they are not, and have been replaced or otherwise modified, it will replace them with the original versions. The process may take some time depending on the hardware you are running it on.


Once complete, you need to restart the PC, and the SFC tool tells you as much


Restart your PC and see if the issue has been resolved; if not, you can try manually deleting and recreating the pagefile as outlined below.

Manual Removal and Recreation of the Pagefile

 

Having logged into your system with an account that has administrative rights (or otherwise authorised yourself for administrative access to the System control panel), you need to open the Virtual Memory settings. If the dialogue box with the warning pops up, clicking OK will open the pagefile controls directly, allowing you to skip the first step below. (A rough command-line equivalent of the whole process is also sketched after the list.)

    1. If the Virtual Memory settings dialog is not already open, open it by right-clicking on Computer → Properties → Advanced System Settings → the Advanced tab → under Performance, click Settings → the Advanced tab → under Virtual Memory, click the Change button.

    2. Uncheck the "Automatically manage paging file size for all drives" checkbox.

    3. Set a "Custom size" for the paging file on the C: drive: 0MB initial, 0MB maximum.

    4. Click OK, close all dialog boxes, and restart your computer.

    5. After logging in again, delete the file C:\pagefile.sys
        a) To do this, you may need to change your folder settings so you can see it first. Open a window of your C: drive and click Organize at the top, then Folder and Search Options.
        b) Click the View tab, make sure "Show hidden files, folders and drives" is turned on, and that "Hide protected operating system files" is not checked.
        c) Click OK, go back to your C: drive, find pagefile.sys and delete it.

    6. Go back to the Virtual Memory settings (see step 1 above) and set the paging file for the C: drive to "System managed size", then make sure the "Automatically manage paging file size for all drives" checkbox is checked again.

    7. Click OK, close all dialog boxes, and restart your computer.
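
If you prefer the command line, below is a rough sketch of the same process using wmic from an administrative Command Prompt. This is not the exact procedure I used (I used the dialog boxes above) and I have not tested it on every Windows version, so treat it as a starting point only.

REM stop Windows automatically managing the paging file (same as unticking the checkbox)
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False

REM remove the paging file entry for the C: drive, then reboot and delete C:\pagefile.sys
wmic pagefileset where name="C:\\pagefile.sys" delete

REM after the reboot and the manual delete, hand management back to Windows and reboot again
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=True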

LFTP and the Stuck Login

I have recently been working on a new backup management system that utilises the Synology and its ability to schedule tasks. Whilst I am ultimately working on a program written in Go to manage multiple backup configurations utilising multiple backup protocols, I have been playing with the underlying software and protocols outside of this program. One such piece of software is LFTP, which allows for the transfer of files utilising FTP, FTPS, SFTP, HTTP, HTTPS and other protocols. The first few are the ones that are important for the software I am writing, and most importantly LFTP supports mirroring with the FTP family of protocols.

Whilst I am writing this software I still wanted to get backups of the system running. To this end I was testing the LFTP commands, and I hit an issue where LFTP would simply not connect to the server, yet the regular FTP client works fine.

Firstly, we have to understand that LFTP does not connect to the server until the first command is issued; in the case of the example below, this was ls. Once this command is issued, LFTP attempts to connect to and log in to the server, and this is where the issue happens: LFTP just hangs at "Logging in".

user@server backupfolder$ lftp -u username,password ftp.hostname.com
lftp username@ftp.hostname.com:~> ls
`ls' at 0 [Logging in...]

To work out what the issue was I had to do a little research, and it comes down to the fact that LFTP wants to default to secure connections. In and of itself that is not a bad thing, in fact it is a good thing, but many FTP servers are yet to implement FTPS and as such we end up with a hang at login. There are, however, two ways to fix this.

The first way to fix this is to turn off SSL for this connection only, which is done through the modified connect command of

lftp -e "set ftp:ssl-allow=off;" -u username,password ftp.hostname.com

This is best if you are dealing with predominantly secure sites. However, as I said, most FTP servers are still utilising the older insecure FTP protocol, at which point it may be more beneficial to change the LFTP configuration to default to insecure mode (and then enable security as needed for the secure connections; it depends on which you have more of). To do this we need to edit the LFTP config file, as follows.

Utilising your favorite text editor (vi, nano or whatever, it matters not), open the config file, which is at /etc/lftp.conf

At some point in the file (I suggest at the end) put the following line

set ftp:ssl-allow false

Save your configuration and the default of requiring a secure connection is turned off; your LFTP connection should now work.
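
For what it is worth, since mirroring is the reason I am using LFTP in the first place, here is a rough example of the sort of one-shot mirror command I have been testing. The hostname, credentials and paths are placeholders, and you can drop the ssl-allow setting if your server does support FTPS:

# mirror a remote folder into a local backup folder, then exit (all names below are placeholders)
lftp -e "set ftp:ssl-allow off; mirror --verbose /public_html /volume1/backups/sitename; quit" -u username,password ftp.hostname.com

Add the -R (reverse) flag to the mirror command if you want to push local files up to the server instead of pulling them down.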

Have Fun

Justin

The Old Backup Regime

After I purchased the NAS box to place at home for my work data (there is a separate one for family data; they do, however, back up to each other, but I will cover that in another post) I decommissioned my old Windows Server 2008 R2 box.

This box, however, did do a multitude of things that were controlled via scheduled tasks and scripts, which I have now moved to the Synology. Chief amongst these was the backup of several websites, just for when something goes wrong.

There were several bits of software in the implementation of this task, these were (are):

  • wget (Windows version) – Command line utility for downloading files; whilst there are other options, this was quick and simple, exactly what I needed
  • FTPSync (CyberKiko) – A great little piece of software; it can display a GUI showing sync progress, which is useful for troubleshooting, or run in a silent mode with no GUI. It utilises simple INI text files for configuration (it encrypts the password), making it easy to configure, and it has many options for doing this configuration
  • DeleteMe (CyberKiko) – Simple file removal tool; give it a folder (it can have multiple set up) and a maximum age for the files in that folder, and it will remove anything older than that
  • 7-Zip (command line version) – Command line zip archive creation utility, what more is there to say?
  • Custom PHP DB export scripts – Custom PHP scripts that pull the database(s) out of MySQL and zip them up. These were originally run with a CRON job, but I found it easier to use wget to pull a trigger file when I wanted the backup; the backup was then created, then I pull the backup file itself, then pull a delete trigger

That’s it for the software I use, but what about the backup process itself? For each of the sites I needed to back up, the custom PHP scripts were configured on the server. Then a custom batch file containing a bunch of commands (or should that be a batch of commands?) was used to download and archive the files.

The batch file had the following segments in it to achieve the end goal (a rough sketch of the lock file logic is shown after the list):

  1.  Check if backup is still running from previous attempt (Utilizes blank text file that is created at start of script and then removed at end)
    1. If it is running, skip the script and go to the end of the file
    2. If a backup job is not running, create the file locking out other jobs.
  2. Run cleanup of old files
  3. If an existing backup directory for today exists (due to a failed backup job most likely), remove it and create a new one
  4. Start logging output to a log file
  5. Start Repeating Process (Repeats once for each site that is being backed up)
    1. Generate Database Backup
    2. Retrieve Database Backup
    3. Remove Database Backup from the web server (via the delete trigger)
    4. Rename Database Backup File
    5. Move Database Backup File to Storage Location
    6. Sync (utilizing FTPSync) the sites directories
    7. Remove Existing zipped backup file of the site’s files and directories if it exists
    8. Zip the folder structure and files for the website and put the ZIP file in the long term storage folder
  6. Copy Backup Complete information to log file
  7. Remove Process Lock File
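
To give you an idea of the shape of it, here is a heavily stripped-down sketch of that lock file and per-site logic. The paths, URLs and site names are placeholders (the real scripts are in the download below), so treat it as an illustration rather than the actual file:

@echo off
REM 1. Bail out if a previous backup is still running (lock file present), otherwise create the lock
if exist C:\Backups\backup.lock goto end
echo locked > C:\Backups\backup.lock

REM 4. Log the start of the run
echo Backup started %date% %time% >> C:\Backups\backup.log

REM 5. Per-site block (repeated once per site): trigger the database dump on the server,
REM    fetch it, trigger the server-side cleanup, then archive the synced site files
wget -q http://www.example.com/backup/trigger.php -O nul
wget -q http://www.example.com/backup/dbdump.zip -O C:\Backups\example-db.zip
wget -q http://www.example.com/backup/delete.php -O nul
7za.exe a C:\Backups\example-files.zip C:\Backups\Sync\example\*

REM 7. Remove the lock so the next scheduled run can proceed
del C:\Backups\backup.lock

:end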

To download the batch files, click here

Reasonably simple: to add a new site, copy and paste a previous one, update a few details and off you go.

Now I realize that some of this is perhaps not the best or most secure way to achieve the goal (specifically how I was handling the database) but it was quick, easy and it worked. I could have also made the whole process more efficient by using a config file and a for loop, but, well, I didn't.

Have Fun

Justin

 

Oh Shit!!! The Data is Gone…. Or Is It?

Yep, I screwed up. I made assumptions, didn't double and triple check things, and made a mess of something I was working on, professionally no less. I did fix it, but it was a stupid screw-up nonetheless. The irony is not lost on me, given how I harp on about backups regularly to everyone.

The other day I ended up with one of those "Oh!! SHIT" moments. I was migrating an older 2012 R2 file server to 2016, and whilst I was doing this I decided to kick the old server, which was due to go back to the leasing company, out of the Failover Cluster it was in. As standard I paused the node, removed all the references from it, and then hit the evict function to evict it from the cluster, and it was from this that it all went to shit. Doing this whilst in the middle of the migrations was the first part of my mistake. What happened was that the Failover Cluster borked itself and crashed on the remaining servers, and it would not restart; to this day I have not got it to restart.

After spending an hour or so trying to get the cluster to restart, I relented and went to the backups to restore the offending server. Hitting the backups, I go to the server I want to restore and notice that it's only 16GB. WTF!!!! The server should be several TB in size (it is a file server after all).

Upon further investigation it seems that I was misreading the backup reports: the old server, which has the same name on the old Hyper-V cluster as the new server does on the new implementation, was not getting backed up, the new one was. I misread the report and assumed that it was backing up the old server, mistake number 0 (this had been happening for the 6 weeks before the backup failure), and the old restore points, being older than our retention limit, were gone. Ok, I will hit the long term off-site backups; it might take a while but the data is safe. Well, it was not, or so it seems: the other technician at the offsite location had removed the offsite backups for the file server from the primary site. Why? Because they were taking up too much space on that site's primary backup disk (the storage at each site is partitioned to provide onsite backup for that site, with the second partition being the offsite backup for the other site).

Damn, so this copy of the data is the only one.

Ok, so I killed the cluster server that everything was on, and using the old evicted node I rebuilt a single node "cluster", mounted the CSV, mounted the VHDX, and everything appeared as it should. Whoo hoo, access to the data! Well, not so fast there buddy.

After moving some data, an error popped up stating that the data was inaccessible. Ok, no problem, the loss of a single file is not a real issue. Then it popped up again, then again... the second "Oh! Shit!" moment within several hours.

2017-02-02 - Dedupe Error

I recovered and moved the data I could access leaving me purely with data I couldn’t. I tried chkdsk and other tools and after several hours I took a break from it, needing to clear my mind.

Coming back to it later I looked at the error, looked at what was happening, and recalled seeing an article on another blog about Data Deduplication corrupting files on Server 2016. With this I began wondering if it had affected Server 2012 R2, and then the lightning struck: deduplication. This process leaves redirects in place and essentially has a database of files that it links to for the deduplication. The server the VHDX was mounted on did not know about the deduplication, the database, or how to access it.

Up until now I had only mounted the data VHDX. Now I rebuilt the VM utilising the original operating system VHDX to run the server, and let it install the new devices and boot.

Upon the server booting I opened a file I could not access before, and it instantly popped onto my screen. Problem Solved

Note to remember: if you are doing data recovery or trying to copy data from a VHDX (or another disk, virtual or physical) that was part of a deduplicated file server, you need to do it from that server due to the deduplication database. You may be able to import the database to another server, I really have no idea, and I am not going to try to find out.

Unzip Multiple Zip Files on OSX from Command Line

I recently had a need to unzip a whole bunch of zip files at work containing new client RADIUS certificates, to be installed on the clients due to the deprecation of the SHA1 algorithm for security reasons by the software vendors (Microsoft and Apple in this case).

These zip files each contained one useful certificate file (a .pfx containing the required certificate and the new certificate chain) and a bunch of other files that are only applicable in certain situations, which I needed to remove once I had extracted the contents of the zip archive. I consequently used a simple multiple-step process utilizing the power of the terminal prompt/command line to achieve this.

Firstly, if you are needing to do this, I am assuming the files are all easily accessible. To make it easier, let's make a directory to house all the initial zip files and put the files in there; this makes the cleanup so much easier later.

Once this is achieved we can utilise the terminal prompt to make the rest of the process easier. I do recommend you put the files in their own directory, as the following command sequence will unzip ALL zip archives (or rather, it will attempt to unzip anything with a .zip extension) in the directory, and will delete them if you do that part of the process.

Open Terminal (type Terminal into Spotlight (Command + Space Bar), or find it in the Applications/Utilities folder)

In terminal do the following

[code language="bash"]# go to the containing folder
cd /Users/jpsimmonds/Downloads/AAAA-Certs

# Unzip all the zip files in the directory (the escape "\" stops the shell expanding the wildcard, so unzip processes each archive itself)
unzip \*.zip

# Remove all zip files - to change the file types to remove, change the "zip" portion of the command
rm -f *.zip[/code]

Nice and easy. The files are now extracted and the initial zips (and other files, if you ran the delete command on extra extensions) are removed, leaving you with just the files that you require.

Installing a non-Windows Secure Boot capable EFI Virtual Machine in Hyper-V

So you have downloaded an operating system installation disk (Ubuntu 16.04.2 is used in this instructional) and noticed that it supports EFI, yet when you try to boot from the ISO you are greeted with a message stating that the machine does not detect it as a valid Secure Boot capable disk; as shown below, it states that "The image's hash and certificate are not allowed".

Luckily this is an easy fix, as it is simply Secure Boot that Ubuntu and Hyper-V are having an argument over, specifically the validity of the Secure Boot certificate.

Check out the video I have created showing you how to do this, or alternatively keep reading below for instructions and more details.

 

 

With your VM turned off, open up the settings page and navigate to the "Security" menu (Server 2016). As you can see in the image below, "Secure Boot" is enabled (checked) and the template is set to "Microsoft Windows". What this effectively does is limit the Secure Boot function to working only with an appropriately signed Microsoft Windows boot system.

To fix this, there are two options, and which one you use depends on the operating system you are trying to install. Preferably we want to keep the benefits of Secure Boot, so the best option, if it works for your operating system, is to simply change the template to "Microsoft UEFI Certificate Authority". This opens up the Secure Boot option to work with a greater range of appropriately signed boot systems, as against the Microsoft Windows one exclusively. The settings for this are shown below.

Click Apply and this will hopefully now work; you can check this by running the virtual machine.

Upon booting your virtual machine, you will now be presented with the boot menu from the disk, allowing you to continue on your way

 

If this change of the CA template for Secure Boot does not work, however, you may need to disable Secure Boot entirely.

To achieve this, go back to the "Security" menu and simply uncheck Secure Boot as per the image below, click Apply and it should now work.

 

 

Have Fun

Justin
