Oh Shit!!! The Data is Gone…. Or Is It?

Yep, I screwed up. I made assumptions, didn’t double- and triple-check things, and made a mess of something I was working on, professionally no less. I did fix it, but it was a stupid screw-up nonetheless. The irony is not lost on me, given how regularly I harp on about backups to everyone.

The other day I ended up with one of those Oh!! SHIT moments. I was migrating an older 2012 R2 file server to 2016, and whilst I was doing this I decided to kick the old server, which was due to go back to the leasing company, out of the Failover Cluster it was in. As standard, I paused the node, removed all the references to it, and then hit the evict function to evict it from the cluster, and it was from this that it all went to shit; doing this whilst in the middle of a migration was the first part of my mistake. The Failover Cluster borked itself and crashed on the remaining servers, and it would not restart; to this day I have not got it to restart.

After spending an hour or so trying to get the cluster to restart, I relented and went to the backups to restore the offending server. Hitting the backups, I went to the server I wanted to restore and noticed that it was only 16GB. WTF!!!! The server should be several TB in size (it is a file server, after all).

Upon further investigation, it turned out I had been misreading the backup reports. The old server has the same name on the old Hyper-V cluster as the new server does on the new implementation, and it was the new server being backed up, not the old one. I misread the report and assumed it was backing up the old server, mistake number 0 (this had been happening for the 6 weeks before the backup failure), and the old restore points, being past our retention limit, were gone. OK, I thought, I will hit the long-term off-site backups; it might take a while, but the data is safe. Well, it was not, or so it seemed: the other technician at the off-site location had removed the off-site backups of the file server from the primary site. Why? Because they were taking up too much space on that site’s primary backup disk (the storage at each site is partitioned to provide on-site backup for that site, with the second partition being the off-site backup for the other site).

Damn, so this copy of the data is the only one.

Ok, so I killed the cluster server that everything was on, and using the old evicted node I rebuilt a single-node “cluster”, mounted the CSV, mounted the VHDX, and everything appeared as it should. Whoo hoo, access to the data! Well, not so fast there, buddy.

After moving some data, an error popped up stating that the data was inaccessible. Ok, no problem, the loss of a single file is not a real issue. Then it popped up again, then again… the second Oh! Shit! moment within several hours.

2017-02-02 - Dedupe Error

I recovered and moved the data I could access, leaving me with only the data I couldn’t. I tried chkdsk and other tools, and after several hours I took a break from it, needing to clear my mind.

Coming back to it later I looked at the error, looked at what was happening, and recalled seeing an article on another blog about Data Deduplication corrupting files on Server 2016. With this I began wondering if it had affected Server 2012 R2, and then the lightning struck: deduplication. This process leaves redirects (reparse points) in place and essentially keeps a database of file chunks that it links to for the deduplication. The server the VHDX was mounted on did not know about the deduplication, the database, or how to access it.

Up until now I had only mounted the data VHDX. Now I rebuilt the server utilising the original operating system VHDX to run it. I let it install the new devices and boot.

Upon the server booting I opened a file I could not access before, and it instantly popped onto my screen. Problem solved.

Note to remember: if you are doing data recovery or trying to copy data from a VHDX (or any other disk, virtual or physical) that was part of a deduplicated file server, you need to do it from that server, due to the deduplication database. You may be able to import the database to another server; I really have no idea, and I am not going to try to find out.
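That said, if you ever find yourself in this spot without the original server, one avenue that may be worth exploring (I have not tested it, so treat the whole thing as an assumption) is that the dedup chunk store lives on the volume itself, so installing the Data Deduplication feature on whichever server you mount the disk on may be enough for it to follow the reparse points. A rough PowerShell sketch, with hypothetical paths:

# Install the dedup feature so the server understands the reparse points (assumption: this is sufficient)
Install-WindowsFeature -Name FS-Data-Deduplication
# Mount the recovered VHDX (hypothetical path)
Mount-VHD -Path 'D:\Recovery\FileServer-Data.vhdx'
# Confirm the volume is seen as deduplicated
Get-DedupStatus
# Rehydrate an individual file back to its full size (hypothetical path)
Expand-DedupFile -Path 'E:\Shares\Finance\Important.xlsx'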

Ubuntu 16.04 Server LTS: Generation 2 Hyper-V VM Install

So you have downloaded Ubuntu 16.04 and noticed it supports EFI, yet when you try to boot from the ISO, you are greeted with a message stating that the machine does not detect it as an EFI-capable disk, as shown below.


Luckily this is an easy fix, as it is simply Secure Boot that Ubuntu and Hyper-V are having an argument over.

Turn off your VM, then open up the settings page and navigate to the “Firmware” menu. As you can see in the first image below, “Secure Boot” is enabled (checked). To fix this, simply uncheck it as per the second image below, then click “Apply” and “OK”.
Upon doing this and restarting your virtual machine, you will now be presented with the boot menu from the disk, allowing you to continue on your way
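As an aside, if you prefer PowerShell, the same change can be made from the Hyper-V host while the VM is off (a quick sketch; “Ubuntu1604” is a hypothetical VM name, so use your own):

# Disable Secure Boot on a Generation 2 VM
Set-VMFirmware -VMName 'Ubuntu1604' -EnableSecureBoot Off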

Have Fun

Justin

The Home Server Conundrum

Servers have been in the home for just as long as they have been in businesses, but for the most part they have been confined to home labs, the homes of systems admins, and the more serious hobbyists.

However, with more and more devices entering the modern “connected” home, it is time to once again ask: is it time for the server to move into the home? Some companies have been making inroads and pushing their products into the home market segment for a while, most notably Microsoft and their partners with the “Windows Home Server” systems.

Further to this, modern Network Attached Storage (NAS) devices are becoming more and more powerful, leading not only to their manufacturers publishing their own software for the devices, but to thriving communities growing up around them and implementing their own software on them; Synology and the SynoCommunity, for example.

These devices are still, however, limited to running specially packaged software, and in many cases are missing features from other systems. I know this is often by design, as one manufacturer does not want their “killer app” on a competitor’s system.

Specifically, what I am thinking of with the above statement is some of the features of the Windows Home Server and Essentials Server from Microsoft, as many homes are “Microsoft” shops. Yet many homes also have one or more Apple devices (here I am thinking specifically of iPads/iPhones), and given the limited bandwidth and data transfer available to most people, an Apple Caching Server would be of benefit.

Now sure, you could run these on multiple servers, or even existing hardware that you have around the house, but then you have multiple devices running and chewing up power, which in this day and age of ever-increasing electricity bills and the purported environmental costs of power is less than ideal.

These issues could at least be partly alleviated by the use of enterprise-level technologies such as virtualisation and containerisation; however, these are well beyond the management skills of the average home user to implement and manage. Not to mention that some companies (I am looking at you here, Apple) do not allow their software to run on “generic” hardware, at least within the terms of the licencing agreement, nor do they offer a way to do this legally by purchasing a licence.

Virtualisation also allows extra “machines” to run, such as Sophos UTM for security and management on the network.

Home servers are also going to become more and more important, acting as a bridge or conduit for Internet of Things products to gain access to the internet. Now sure, the products could talk directly back to the servers, and in many cases this will be fine if they can respond locally and, where required, cache their own data in the case of a loss of connection to the main servers, whether through the servers themselves or the internet connection in general being down.

However, what I expect to develop over a longer period is more of a hybrid approach, with a server in the home acting as a local system providing local access to functions and data caching, whilst syncing and reporting to an internet-based system for out-of-house control. I suggest this as many people do not have the ability to manage an externally accessible server, so it is more secure to use a professionally hosted one that then talks to the local one over a secure connection.

But more on that in another article, as we are talking about the home server here. So why did I bring it up? Containerisation: many of these devices will want to run their own “server” software or similar, and the easiest way to manage this is going to be through containerisation of the services on a platform such as Docker. This is especially true now that Docker commands and the like are coming to Windows Server systems, as it will provide a basically agnostic method and language to set up and maintain the services.

This also brings with it questions about moving house, and the on-boarding of devices from one tenant or owner of the property to another. Does the server become a piece of house equipment, staying with the property when you move out? Do you create an “image” for the new occupier to run on their own device to configure it to manage all the local devices? Do you again run two servers: a personal one that moves with you, and a smaller one that runs all the “smarts” of the house, which then links to your server and presents the devices to your equipment? What about switching gear, especially if your devices use PoE(+) for power? So many questions, but these are for another day.

For all this to work, however, we not only need to work all these issues out; for regular users, the user interface to these systems, and the user experience, is going to be a major deciding factor. That, and we need a set of standards so that users can change the UI/controller and still have all the devices work as one would expect.

So far, for the most part, the current systems have done an admirable job of this, but they are still a little too “techie” for the average user, and will need to improve.

There is a lot of potential for the home server in the coming years, and I believe it is becoming more and more necessary to have one, but there is still a lot of work to do before they become a ubiquitous device.

The Case of the Hijacked Internet Explorer (IE) Default Browser Message

I recently had a case of a hijacked Default Browser message (the one that asks you to set the browser as default) in Internet Explorer (IE) 11 on a Windows 8.1 machine. Now that is not to say that it cannot happen to other versions of Windows, Internet Explorer or even other browsers, but this fix will clear the Internet Explorer issue.

As with many of these things, the cause of this is malware, and the user installing or running something they shouldn’t have (what they wanted the software for was perfectly OK; it’s just that they got stung by the malware).

Anyway, the issue presented like this:

The Hijacked page, remember do not click on any links

IMPORTANT NOTE: Now first things first. DO NOT click on any of the links in the page. It is also important to note that even if Internet Explorer is the default browser, or you have told it not to bother you, it will still appear.

Now the first step in this is understanding what has happened, which in this case is that the iframe.dll file has been hijacked, either through modification or replacement (which indicates that the program would have had to go through UAC, with the user OK’ing the change). Specifically, it seems that the page is being redirected, but I cannot confirm this, as it was more important to fix the issue than to find out the technical reasons why.

Nonetheless, the first step is to run a malware cleaner; I use Malwarebytes, and I did a cleanup of the system with CCleaner for good measure. It is important to note that this is just to clean up other things the malware may have left behind; it does not fix this particular problem.

As this problem resides in what is a reasonably well protected file, the best way to fix the issue is with Microsoft’s built-in System File Checker (SFC) tool.

It is actually rather simple to fix this error:

Open a Command Prompt window as Administrator

Open an Administrative Command Prompt

Once you are in the command prompt, type:

sfc /scannow

Type sfc /scannow

This tool will now run and verify that the files Microsoft has put into the system are the correct files; if they are not, having been replaced or otherwise modified, it will replace them with the original files. This process may take some time depending on the hardware you are running it on.

SFC Running – This may take a while
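As an aside, if you want to see exactly what SFC found and repaired, the details are written to the CBS log. A quick way to pull out just the scan entries (run from the same administrative prompt) is:

findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log > "%userprofile%\Desktop\sfcdetails.txt"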

Once complete, you need to restart the PC, and the SFC tool tells you as much

SFC has completed its task, now it wants you to reboot your PC

Restart your PC and the offending window will now be replaced with the default Microsoft one. Now, as I said before, the malware seems to override/overwrite the setting telling Internet Explorer not to display the defaultbrowser.htm tab (either because it is the default, or because you have told it not to check). That continues here: because the setting was tampered with by the malware, IE will still display the default browser page. To clear this, you simply need to tell it not to display the page, or go through the set-as-default process.

Enjoy

Justin

Enabling Data Deduplication on Server 2012 R2


Data deduplication (or dedupe for short) is a process whereby the system responsible for the deduplication scans the files in one or more specific locations for duplicates and, where duplicates are found, replaces the duplicate data with a reference to the “original” data. In essence, this is a data compression technique designed to save space by reducing the data actually stored, aiming to provide single-instance data storage (storing only one copy of the data, no matter how many places it is located in).

The way this is achieved depends on the system used; it can be done at the block level, file level, or other levels, again depending on the system and how it is implemented.

In this article we are going to enable deduplication on a Windows Server 2012 R2 server. Keep in mind that this changes data and could quite possibly cause data damage or loss, so make sure you have a working backup BEFORE continuing.

Firstly, we need to access the server you are planning to configure deduplication on; I will leave it up to you how you achieve that. Once you have access to the server, we can begin.

On the server open “Server Manager” if it is not already open

2014-09-19-01-ServerManager

From the “Manage” menu, select “Add Roles and Features”. If it gives you the default splash page, simply click “Next” (and I suggest telling it to skip that page in future by use of the checkbox). Once we are in the “Installation Type” page, we need to select “Role-based or feature-based installation” and click “Next”.

2014-09-19-02-AddRoleorFeature

On the “Server Selection” page, select the server you want to install the service on (commonly the one you’re using), and click “Next”.

2014-09-19-03-SelectServer


Next up is the “Server Roles” page; here is where the configuration changes need to take place. In the right-hand list of checkboxes (titled “Roles”), scroll down till you see “File and Storage Services”, then open “File and iSCSI Services”, and further down the page check the “Data Deduplication” checkbox. Click “Next”, accepting any additional features it wants to install.

2014-09-19-04-SelectService

In the “Features” page simply click “Next”

2014-09-19-05-IgnoreFeatures

On the “Confirmation” page check you are installing what is required and click “Install”

2014-09-19-06-Install

Wait for the system to install, then exit the installer, restarting if your server requires it.

Upon completion of the install and any tasks associated with it, re-open “Server Manager” and in the left-hand column select “File and Storage Services”.

2014-09-19-07-ServerManager

This will change the screen in “Server Manager” to a three-column layout; in the middle column, select “Volumes”.

2014-09-19-08-ServerVolumes

With the volumes now displayed in the right-hand of the three columns, right-click on the volume you want to configure deduplication on and select “Configure Data Deduplication”.

2014-09-19-09-ServerVolumesRightClick

This will bring up the “Deduplication Settings” screen for the volume you right-clicked on. Unless Data Deduplication has been configured before, the “Data deduplication” dropdown will read “Disabled”.

2014-09-19-10-DuplicationSettings-Initial

As I am configuring this on a file server, I am going to select the “General purpose file server” option, and leave the rest as defaults. I am then going to click on the “Set Deduplication Schedule” button.

2014-09-19-11-DuplicationSettings-Enable

The “Deduplication Schedule” screen will now open. I suggest checking the “Enable background optimization” checkbox, as this will allow the server to optimise data in the background. I also elected to create schedules to allow for more aggressive use of system resources: the first one runs after most people have left for the day and before the server’s scheduled backup, and the second one runs all weekend, again stopping for backups. Please note that these settings are SYSTEM settings that apply to all data deduplication jobs on the system; they are not unique to each individual deduplication job.

Click “Apply” on the “Deduplication Schedule” screen, and then “Apply” on the “Deduplication Settings” screen. This will drop you back to the “File and Storage Services > Volumes” screen, and you are now done; data deduplication is configured.
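For what it is worth, the same job can be done entirely from PowerShell. A quick sketch, assuming the volume in question is E: (adjust to suit):

# Install the Data Deduplication feature
Install-WindowsFeature -Name FS-Data-Deduplication
# Enable dedupe on the volume; Default corresponds to the general purpose file server option
Enable-DedupVolume -Volume 'E:' -UsageType Default
# Optionally kick off an optimisation job now rather than waiting for the schedule
Start-DedupJob -Volume 'E:' -Type Optimization
# Check progress and space savings
Get-DedupStatus -Volume 'E:'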

Have fun, and don’t forget that backup

Justin

Using Internet Information Services (IIS) to Redirect HTTP to HTTPS on a Web Application Proxy (WAP) Server

For those of you who do not know, Microsoft’s Web Application Proxy (WAP) is a reverse HTTPS proxy used for directing HTTPS requests from multiple incoming domains (or subdomains) to internal servers. It does not, however, handle HTTP at any point, which is a failing in itself; it would not be hard to add an option where, if enabled, it redirects HTTP to HTTPS itself rather than forcing a workaround. Come on Microsoft, stay on the ball here. But I digress.

As I stated, the main issue here is that WAP itself will not redirect an HTTP request to the equivalent HTTPS address. I have played with multiple possible solutions for this, including a Linux server running Apache 2 and using PHP to read the requested URL and redirect it to the HTTPS equivalent. None of these, however, has the simple elegance of this solution, which runs the HTTP to HTTPS redirect on the same box as the WAP system itself.

First of all you need to log into the WAP server and install the Internet Information Services (IIS) role. Once done, open the IIS management console and you should get a window similar to the one below.

01-OpenIISManager
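(If you would rather install the role from PowerShell, the one-liner below should do it; the -IncludeManagementTools switch simply saves installing the console separately.)

Install-WindowsFeature -Name Web-Server -IncludeManagementTools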

Now navigate to the required server by clicking on it, and on the right-hand side click “Get New Web Platform Components”.

02-GetNewWebPlatformComponents

This will open a new web browser window as shown below; when it does, simply select “Free Download”. If you have issues with not being able to download the file due to a security warning, see the earlier blog post here on how to enable downloads. Download and install the software via your chosen method.

03-FreeDownload

Once it is installed, a new page will appear; this is the main splash page of the Web Platform Installer.

04-WebPlatformInstaller5.0HomeScreen

Using the search box (which at the time of writing, using Web Platform Installer 5.0, is in the top right-hand corner), search for the word “Rewrite”. This will display a “URL Rewrite” result with the version number appended to the end (2.0 at the time of writing this article); click the “Add” button to the right of the highlighted “URL Rewrite” line.

05-URLRewriteAdd

This will change the text on the button to “Remove” and activate the “Install” button in the lower right of the screen; click the install button.

06-URLRewriteInstall

Clicking this install button will bring up a licensing page; click the “I Accept” button (assuming, of course, you do accept the T’s & C’s).

07-LicenceAcceptance

You will then get an install progress page

08-RewriteInstallProcess

This will change to a completed page after it is done, so click the “Finish” button in the lower right-hand corner.

09-RewriteInstallFinish

This will drop you back to the original splash screen of the Web Platform Installer; click “Exit”.

10-WPI-Finish

You will now need to close and re-open the IIS Manager and reselect the server you were working on. You should now see two new options: the first is “Web Platform Installer”, which we do not need to concern ourselves with any further; the second is “URL Rewrite”.

11-IISManager-NewModule

Double-click on “URL Rewrite” to open the URL Rewrite management console, and on the right-hand side of this console, in the “Actions” pane, click “Add Rule”.

12-AddRewriteRule

This opens up a box of possible rewrite rules. What we want to create is an “Inbound Rule”, as our requests are coming into the server from an external source. Select “Blank Rule” and click the “OK” button.

13-NewRule-BlankRule(Inbound)

In the new page that opens, in the “Name” field type the name that you want to give the rule. I use and suggest HTTP to HTTPS Redirect, as this tells you exactly what it does at a glance.

14-NewRule-NameRule

In the next section, “Match URL”, set “Requested URL” to “Matches the Pattern” (default), “Using” to “Regular Expressions” (default) and, most importantly, “Pattern” to “(.*)” (without the quotes). I suggest you take this opportunity to test the pattern matching.

15-NewRule-Regex Match

In the “Conditions” section, ensure that the “Logical grouping” is set to “Match All” (default) and click the “Add” button.

16.01-NewRule-AddCondition

In the new box that appears, enter the following: in the “Condition input” field type “{HTTPS}” (again without the quotes, and yes, those are curly braces, not brackets). Change the “Check if input string” dropdown to “Matches the Pattern”, in the “Pattern” box below type “^OFF$” (again, no quotes), and leave “Ignore case” checked. With this one I do not suggest testing the pattern; even though this system works fine for me, the test ALWAYS fails. Click the “OK” button (mine is not highlighted here as I had already clicked it away and had to re-open the box).

16.02-NewRule-ConditionSettings

This will take you back to the new rule screen; check the conditions match as shown, and then we can move on.

16.03NewRule-ConditionComplete

This is the part where we tell it what we want to do when it matches the previous conditions. In the “Action” pane, change the “Action type” to “Redirect” and set the “Redirect URL” to “https://{HTTP_HOST}/{R:1}” (again, they are curly braces, and of course no quotes). You can choose whether “Append query string” is checked or not, but I highly recommend leaving it checked: if someone has emailed out a URL with a query string on it but not included the protocol headers (http:// and https:// being the ones we are concerned about), we want the query string appended to the end of the redirected URL so they end up where they intended to be. Finally, make sure the “Redirect type” dropdown reads “Permanent (301)” (default).

17-NewRule-ActionConfiguration
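For reference, once the rule is applied, IIS writes it into the site’s web.config; it should end up looking something like the below (a sketch based on the settings above, so your rule name may differ):

<system.webServer>
  <rewrite>
    <rules>
      <rule name="HTTP to HTTPS Redirect">
        <match url="(.*)" />
        <conditions logicalGrouping="MatchAll">
          <add input="{HTTPS}" pattern="^OFF$" />
        </conditions>
        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>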

Restart the server service for good measure, and there you have it: HTTP being redirected to HTTPS on the same server as the WAP itself. Ensure that you have ports 80 (HTTP) and 443 (HTTPS) forwarded from your router to the server, and the firewalls (and any other intermediaries) on both the router and the server set to allow the traffic as required.

Enjoy and as always have fun

Justin

Fixing a Corrupt Active Directory Database

Recently I was contacted by a colleague who was having issues with an Active Directory database. Whilst there is nothing unusual in this colleague contacting me for help, or vice-versa, this issue was beyond the norm.

What he reported to me was that the primary domain controller (PDC) and secondary domain controller (SDC) on this site had out-of-sync databases. This came to the fore as he was adding new devices (through WDSUtil) to be imaged: they appeared on the SDC but not on the PDC. The predominant symptom was that a machine would be imaged and get the correct name from the SDC, which was also acting as the Windows Deployment Services (WDS) server, but it would not bind to the domain, as there was no account for it on the PDC.

Upon further investigation (over the phone at this point) we discovered that the two domain controllers were out of sync and the tombstone lifetime had expired; fixing this allowed for a partial sync, as outlined below:

On PDC
PDC==>SDC – Success
SDC==>PDC – Fail

On SDC
PDC ==>SDC – Success
SDC==>PDC – Success

These tests were run from the “Active Directory Sites and Services” tool on the domain controllers as shown above.
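As an aside, if you prefer the command line for these checks, repadmin will give you the same picture:

repadmin /replsummary
repadmin /showrepl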

Looking at the error logs showed AD Domain Services error 1988, stating:

Active Directory Domain Services Replication encountered the existence of objects in the following partition that have been deleted from the local domain controllers (DCs) Active Directory Domain Services database. Not all direct or transitive replication partners replicated in the deletion before the tombstone lifetime number of days passed. Objects that have been deleted and garbage collected from an Active Directory Domain Services partition but still exist in the writable partitions of other DCs in the same domain, or read-only partitions of global catalog servers in other domains in the forest are known as “lingering objects”

It did also give a whole bunch of sensitive information (hence I will not publish it) identifying the object that was causing it. Looking for the cause of the error, I came across repadmin (the AD replication admin command-line tool) and its lingering object removal: repadmin /removelingeringobjects ServerWithLingeringObjects CleanServerGUID NamespaceContainingLingeringObject. I ran this, then ran the replication tests again, and got the same results.
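To make that command less abstract, a filled-in example (with entirely hypothetical server names, GUID and domain, so substitute your own) would look something like the below; the GUID is the DSA object GUID of the known-good DC, which repadmin /showrepl prints at the top of its output.

repadmin /showrepl GOODDC01.corp.example.com
repadmin /removelingeringobjects BADDC01.corp.example.com 8a7b6c5d-1234-4e5f-9a0b-c1d2e3f4a5b6 "DC=corp,DC=example,DC=com"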

So, figuring I had nothing to lose, I deleted the object that was referenced in the error, which in my case was a user, and tried the replication again. This time I got an error stating that “An internal error occurred”. Great, what next? Looking at the error logs again (on the PDC, as by this time I was pretty sure it was the PDC that was causing the issues) I found error 467, meaning a corrupt database…. Oh SHIT… OK, not that bad really, but still.

I decided that I would try to repair the database directly rather than using Directory Services Restore Mode on the server (as I only had remote access). I stopped the Active Directory Domain Services service in the Services Manager (services.msc) and, knowing that the AD database is a JET database stored in C:\Windows\NTDS (NTDS stands for NT Directory Services), I copied the file ntds.dit (the AD database itself) to the desktop twice (two different file names: one to work on, one to back up).

So once I had the two files, I ran a verify on the database with esentutl /g C:\Users\<USER>\Desktop\ntds.dit. The results came back that the database was in fact corrupt, so I ran the repair with esentutl /p C:\Users\<USER>\Desktop\ntds.dit. I then moved the fixed file back to C:\Windows\NTDS, restarted the Active Directory Domain Services service in the Services Manager (services.msc), and ran the replication tests again; they all passed.
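Pulled together, the whole sequence from an administrative command prompt looks roughly like this (paths as per my case, so adjust to suit, and leave the backup copy untouched):

rem Stop AD DS so the database is not in use (/y confirms stopping dependent services)
net stop ntds /y
rem Take a working copy and a backup copy of the database
copy C:\Windows\NTDS\ntds.dit C:\Users\<USER>\Desktop\ntds.dit
copy C:\Windows\NTDS\ntds.dit C:\Users\<USER>\Desktop\ntds-backup.dit
rem Verify the working copy, then repair it
esentutl /g C:\Users\<USER>\Desktop\ntds.dit
esentutl /p C:\Users\<USER>\Desktop\ntds.dit
rem Put the repaired copy back and restart AD DS
copy /y C:\Users\<USER>\Desktop\ntds.dit C:\Windows\NTDS\ntds.dit
net start ntds

For completeness, Microsoft’s guidance is to follow an esentutl /p repair with a semantic database analysis in ntdsutil, which is worth doing if you have the access.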

Crisis averted, and I am now owed a good bottle of Scotch Whisky

This was all done over a remote session so it is possible

Justin

EMCO Remote Shutdown and Setting Windows 8(.1) Remote Registry by Group Policy Object (GPO)

As I have mentioned in a previous blog post, several clients have been using this software with their fleets of Windows 7 desktops for several years with great success. This changed, however, when testing during the Windows 8.1 deployment: we found that it does not work on 8/8.1, due to the Remote Registry service no longer being enabled by default.

2014-08-11-RemoteRegistry-00-DisabledRegistry

Now, rather than updating the machines manually or changing the service status in the image, I wanted to start this service via Group Policy, as this ensures that all devices turn it on, and when I or someone else creates a new image in future, it is one less thing to do. It turns out this is easier to do than I thought it would be.

First you need to open up “Group Policy Management”, find the policy you want to edit by expanding the appropriate trees (or create a new policy within the right scope), right-click on it and select “Edit”. This is a computer policy, so if, like me, you limit your GPOs to work on only users OR computers (best practice), then make sure you select a computer-enabled policy.

2014-08-11-RemoteRegistry-01-GPEDIT


Once you have opened the “Group Policy Management Editor”, you will need to navigate the tree (in the left-hand column) to “Computer Configuration” > “Policies” > “Windows Settings” > “Security Settings” > “System Services”, and then in the right-hand column seek out “Remote Registry”; double-click on this to open the “Remote Registry Properties” box.

2014-08-11-RemoteRegistry-03-EditPolicy

In this box, select the “Define this policy setting” checkbox, which will in turn enable the options below it; you then simply want to change the “Select service startup mode” radio button to “Automatic”.

Now, after a group policy update (which can be forced on individual machines via “gpupdate /force”, without the quotes) and a reboot, the machines will have the “Remote Registry” service started and running.

2014-08-11-RemoteRegistry-04-RegistryEnabled
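If you want to spot-check a machine from the command line rather than the Services console, these two commands will confirm the running state and the startup type respectively:

sc query RemoteRegistry
sc qc RemoteRegistry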


Justin

Internet Explorer Cannot Download a File on Server 2012 R2

So you have just set up a new Server 2012 (R2) server, and gone to download that file you need for the next step, only to be shown a nasty message stating that you cannot do that, as file downloads have been disabled.

NoFileDownload

Well, the good thing to know is that it’s an easy fix: simply open up “Internet Options”, go to the “Security” tab, select the “Internet” zone and click the “Custom level…” button.

InternetOptions-SecurityTab-CustomLevel

This opens up a “Security Settings – Internet Zone” window. In the main section of the window, scroll down to where it says “Downloads”, then to the subsection “File download” (as of this writing, the setting is just above halfway down the options list), and simply change it from “Disable” to “Enable”. Click “OK”, drop back to the main screen, and retry that download.

EnableDownloads

If you get a warning, as shown below, simply OK it and continue on

Warning
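As an aside, if you would rather script this change (across several servers, say), the same switch lives in the registry. A PowerShell sketch for the current user, where zone 3 is the Internet zone and value 1803 is the “File download” setting (0 = enable, 3 = disable):

Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3' -Name '1803' -Value 0 -Type DWord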

Have fun

Justin

EMCO Remote Shutdown

Remote Shutdown from EMCO Software is a great piece of software for helping to manage fleets of Windows-based PCs in large environments. It uses Wake on LAN to start PCs up at a certain time, certainly nothing fancy in that, but it can also force log-offs and shutdowns on a schedule. It does this through some clever use of facilities already built into Windows, but little used. I would certainly rate this software highly and recommend its use to anyone with large fleets to manage.

I have used it at several clients to manage their fleets, starting up the PCs before workers get there and shutting them down after they leave. This stops people leaving machines on out of sheer forgetfulness (if one needs to be left on, we can exclude it from shutdown for the required time, but people do need to tell us), and we use it as part of our environmental program to minimise power wastage. People who are logged in and using a PC when the shutdown or log-off instruction comes in can cancel it themselves (we do not want to stop people working, now do we).

This has helped with several things. As we have a specified lunch period at each site, I shut the PCs down 10 minutes after it has started (to allow updates to install) and restart them 5 minutes before people are due back at work. We have a much greater patch ratio now than before this was in place. The schedules at the sites are simple: start at 0800, shut down at 1345, restart at 1455, shut down some PCs at 1610, shut down all PCs at 2000.

In addition to this, when one of the clients had an environmental audit (they were chasing a 6-star environmental rating), the auditor was impressed with the technology, and it aided them in gaining their 6-star rating.

All in all I am very impressed with the EMCO solution and highly recommend it
