Smarter Sprinkler System

Irrigation (sprinkler) systems have come a long way since their inception, and even further since the advent of modern electronics; with the modern Internet and the beginnings of the Internet of Things (IoT) revolution they are getting smarter and are able to do more. One example: where a “modern” controller can tell if it is raining, or has rained recently, through the use of a rain gauge, IoT devices such as the OpenSprinkler can now use forecast weather from the internet to make a decision about watering. Linking this with things such as moisture sensor data can make these systems even smarter. There is however one thing that seems to be missing: the “smart” solenoid.

I am not a gardener by choice per se, but more by necessity; wanting to take more control of the food my family and I eat requires growing our own, which whilst easy in some respects, does chew up a lot of time.

Solenoids themselves are quite simple devices. They use a magnetic coil to retract a metal (normally iron) core against a spring (which opposes the coil, so the solenoid returns to “rest” when the electrical current is no longer applied) to open or close a gate. If the gate is closed, water does not pass; open the gate, and the water flows through. Nice and simple.

What is not so simple, however, is the current requirement to run an entire cable pair to each solenoid. Yes, there are ways of theoretically doing n+1 wires (n being the number of solenoids), but in general it is one solenoid, one cable (pair).

Now with cheaper, smarter, more capable electronics, what is to stop us moving the “smarts” that for so long have been integrated into the controller onto the solenoid itself? You could then program it over the cloud; an RTC would allow it to turn on/off on a schedule; a hard link to a moisture sensor could allow it to turn on if the soil gets too dry; and cloud computing, or a local weather station, could stop it watering if it has rained or is predicted to rain within the next allocated period, say 6 hours.
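The decision logic above could be sketched in a few lines. This is only an illustration of the idea; the threshold values, the sensor inputs and the `should_water` function itself are all assumptions of mine, not any real device's firmware.

```python
from datetime import datetime, time

# A sketch of the decision logic a "smart" solenoid could run locally.
# All thresholds and inputs here are illustrative assumptions, not a real API.

def should_water(now: datetime,
                 schedule: list[tuple[time, time]],
                 soil_moisture: float,       # 0.0 (bone dry) to 1.0 (saturated)
                 rain_forecast_mm: float,    # forecast rain in the next 6 hours
                 dry_threshold: float = 0.3,
                 rain_cutoff_mm: float = 2.0) -> bool:
    """Open the valve when scheduled or when the soil is too dry,
    but never when meaningful rain has fallen or is expected."""
    if rain_forecast_mm >= rain_cutoff_mm:
        return False                         # rain expected, skip watering
    if soil_moisture < dry_threshold:
        return True                          # soil too dry, water regardless
    return any(start <= now.time() <= end for start, end in schedule)

# Example: scheduled morning watering, damp soil, no rain forecast
print(should_water(datetime(2016, 1, 1, 6, 30),
                   [(time(6, 0), time(7, 0))], 0.5, 0.0))  # True
```

The nice part is that this fits comfortably on even the cheapest microcontroller, with the cloud only needed to push schedule and forecast updates.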

That gives you more smarts than most old control boards are capable of, and almost as much as modern ones.

But what if, now we have this connected to the cloud, we could group them, in one or more groups, to control when things are watered? Got tomatoes that need watering twice a day but are at opposite ends of the garden? Not a problem, just create a group of two or more solenoids, put in the times and off you go. What about 3 areas? Just add another solenoid; 4 areas, and so on and so forth.

But we are talking the cloud here… it’s all seeing, all knowing. You could in theory control based not only upon groups, but upon plant types: if you could TELL the system that you were growing tomatoes, you could tell it how much water you want to give them, and when. If you wanted to you could even attach a flow meter to measure the amount of water delivered, rather than basing it on the arbitrary value of time, where the pressure and therefore the amount of water could vary; with a flow meter you KNOW how much has been delivered.
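Stopping on delivered volume rather than elapsed time is a simple calculation. The pulses-per-litre constant below is a made-up example; a real hall-effect turbine flow meter quotes its own calibration factor on its datasheet.

```python
# Sketch: stop watering on delivered volume rather than elapsed time.
# PULSES_PER_LITRE is a hypothetical calibration constant; real flow
# meters specify their own pulses-per-litre (K) factor.

PULSES_PER_LITRE = 450

def litres_delivered(pulse_count: int) -> float:
    """Convert a raw pulse count from the flow meter into litres."""
    return pulse_count / PULSES_PER_LITRE

def watering_done(pulse_count: int, target_litres: float) -> bool:
    """Close the valve once the target volume has been delivered."""
    return litres_delivered(pulse_count) >= target_litres

print(litres_delivered(900))    # 2.0
print(watering_done(900, 2.0))  # True
```

Because the meter measures what actually flowed, pressure variations simply change how long the valve stays open, not how much water the plants get.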

What I am thinking is a bit like your LIFX bulbs, but for solenoids. What about data? Well, that is easy: you can do it through standard 802.11 wireless, or how about XBee back to a controller station, or even a three-wire cable to tap into, using an addressing system such as I2C. In the end it does not matter so much how it works technically, so long as I can walk up, plug in the power (or power & data), connect to the water piping, program it how I want and boom, it works.

Ah well, we can all dream.

Project: HomeMadeMonitor

Yes, yes, I know I should finish other projects first, but I have yet another electronics project I want to start playing with, and this post is more a reminder of what I was aiming to achieve than anything else.

As I have been falling further and further down the self-sufficient/home-made produce rabbit hole, I have noticed that I need more and more data for certain things, so that I can then act upon it, or better yet, have an automated system act upon it for me.

The two things I am thinking of specifically are the (dry) curing of meats such as salamis and the aging of cheese. Both of these rely on maintaining the right temperature and humidity within a certain range for a prolonged period of time, ranging from weeks to years depending on what you are trying to achieve. Now, in the past this has been what I have been designing SafeDuino to do, and whilst that does other things in addition to monitoring temperature and humidity, some of those more advanced features are not needed by the monitoring and control software.

Whilst it may be more beneficial to “recycle” code from the SafeDuino project, I am finding myself drawn more and more to the Particle Photon system for these basic requirements, as not only is it generally cheaper than the Arduinos (Freetronics EtherTen and EtherMega) I have been using, but it comes with built-in WiFi. Conversely, I then have to supply power to the device, either through a parasitic connection to something else, a dedicated connection, or a solar/wind & battery solution, whereas the EtherTen and EtherMega boards (with the use of a PoE module) can pull power from the PoE switching equipment I have in the house. A wired connection is always going to be more reliable and faster anyway, although the speed is irrelevant when you are doing so little data transfer. Using wireless also means I need to ensure that I have wireless access at each point on the property where I want a monitor, which is easier said than done when we have a large property, much larger than any one or two wireless access points can handle, although this is due mainly to the construction materials of the buildings on the property and the distances involved.

This leaves me wondering how to achieve this. Given the design of what I want, being able to control one or more chambers through one or more devices, I am thinking I need to build it to make extensive use of 1-Wire and I2C technologies, ideally linking back to a Particle Photon (it has considerably more processing power than an Arduino) and then back to a server. Similar to what they have done with BrewPi, but as I said above this means power supply issues due to the lack of a PoE connector, and connectivity issues…

Anyway I will think on it further, but I will most likely end up using the Particle Photon.

What I need to achieve however is rather simple: as I said above, monitoring one or multiple “chambers” from a single root node (the Particle Photon) and reporting the data back to a server, where it can be further processed and acted upon if need be. These “chambers” can be joined or independent, and may or may not have one or more common components (think of a combo fridge/freezer where the compressor is common to both).

Each “chamber” must be capable of having unique settings (within reason; you cannot expect it to keep one chamber at 30 degrees Celsius and the other at 10, it is just not going to happen if you have common components) and the system must be able to work out how to handle this.
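The shared-component case can be sketched quite simply: run the common compressor if any chamber is above its setpoint, and gate each chamber with its own damper or valve. This is a toy illustration; the chamber names, setpoints and hysteresis value are all hypothetical, not SafeDuino code.

```python
# Sketch: two chambers sharing one compressor. The compressor runs if any
# chamber is warmer than its setpoint plus hysteresis; chambers already at
# temperature keep their damper closed. All values here are illustrative.

def compressor_on(temps: dict[str, float],
                  setpoints: dict[str, float],
                  hysteresis: float = 0.5) -> bool:
    """Run the shared compressor if any chamber needs cooling."""
    return any(temps[c] > setpoints[c] + hysteresis for c in temps)

def open_dampers(temps: dict[str, float],
                 setpoints: dict[str, float],
                 hysteresis: float = 0.5) -> list[str]:
    """Only chambers that actually need cooling get cold air."""
    return [c for c in temps if temps[c] > setpoints[c] + hysteresis]

temps = {"salami": 14.2, "cheese": 11.0}
setpoints = {"salami": 13.0, "cheese": 11.0}
print(compressor_on(temps, setpoints))  # True (salami chamber too warm)
print(open_dampers(temps, setpoints))   # ['salami']
```

This also makes the “within reason” caveat concrete: the dampers can only divide what the one compressor produces, so wildly different setpoints simply are not achievable.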

I guess it is time to start designing (and finishing off other projects)

Have Fun


Activate Me!!!!

With recent and ongoing research and statistics showing that activation locks on phones and tablets have in many cases reduced the theft of those devices, is it time to consider this same technology for other devices?

I would argue that the technology is ready and suitable for use in the wild, that there are certainly technologies out there doing similar things, and that it is time this technology was installed on other devices. I mean, think about it: just about every modern device has a computer of some sort in it, everything from the brand new smart TV to the washing machine. With the rise of the Internet of Things (IoT) these devices are also becoming connected to the wider world; in fact many of the “smart” gadgets such as TVs, BluRay players and PVRs already have internet access. It is on these devices that I would advocate we start installing activation lock technology, as not only are they the main target of thieves, but their inherent desire to be connected to the internet would allow you to put one into a lost/stolen mode, so that when it next connects, it locks out. Obviously you would want this to display a message and some basic information, so it can be returned to the owner.

Further to this, having an independent clearing house, so all your devices can be managed under a single log-in, would be ideal, as you could then lock one or more devices with one control interaction rather than having to go to each manufacturer's or partner group's page to block devices. As time goes on this could then be rolled out to more and more devices, thereby making it harder and harder for thieves to steal anything with electronics and the activation tech in it.

Further to this, things such as device encryption could then be built in using this tech. Again, similar to what is already available on phones and tablets, this would allow devices that store data (almost all of them these days, in some form or another) to be securely erased (wipe the encryption key, wipe access to the data) to prevent identity theft and other malicious use of private data (you do have an ENCRYPTED backup though, right?).

This definitely will not happen overnight, if at all, due to the competing methodologies, standards and the companies' unwillingness to work together to make a standard. I can dream though, can't I?

OSX 10.10 802.1x Profiles

DISCLAIMER: Playing with system configuration data and removing files is dangerous and presents a risk to your system. Only attempt this fix at your own risk; any consequences are on your head.

Recently I have had to start replacing a number of certificates used for wireless authentication on a RADIUS/802.1X authenticated wireless network at a number of clients, and for the most part it has gone smoothly (but this does not make for a good blog post now does it). There have however been issues with a number of OS X based devices, and more specifically devices that have gone through a number of in place upgrades since the system profile was installed.

These systems have all had a number of in-place upgrades over the years, some from as far back as OS X 10.6 given their age, and as such there are now issues removing these 802.1X profiles.

To understand why this is happening, a little background on how the profiles were managed previously and are managed now is in order.

In 10.6 and prior an 802.1X profile was added (+) or removed (-) through the 802.1X tab in the Advanced settings on the interface (in this case WiFi/Airport)


In 10.7 and later these buttons have been removed


With 10.7, a new System Preferences pane was added to manage these profiles, called simply “Profiles”.


Now whilst this is not an issue in most cases, a profile installed before the upgrade does not appear in the Profiles pane; and unless a profile has been added since the upgrade, the Profiles pane does not show in System Preferences at all.

This leaves us with a profile we cannot remove, due to the lack of buttons in the 802.1X tab on the interface, and no Profiles pane accessible (due to no registered profiles) in System Preferences.



So how do we remove it? Through the venerable and all-powerful command line interface (Terminal).

First you need to know the location of the system configuration profiles, which is the directory /Library/Preferences/SystemConfiguration.

Now this is where I can only guide you. I did this operation in the opposite order to what is outlined here; I did the second part first, and it did not remove the profile, so I do not know whether that part is required to remove the profile. Try the first removal before removing the other two files.

The profile information seems to be stored in the file within the system configuration directory, so to remove it we want to run the following command

sudo rm /Library/Preferences/SystemConfiguration/

This will prompt you for a password if you have not authenticated to sudo recently (it has a timeout of 5 minutes). Enter your password, hit enter and it will remove the file. Now reboot OS X (yes, this is required) and the profile SHOULD be removed.


NOTE: Adrian Stevenson left a comment on the 13th of October 2015 stating that the above file is the only one required to remove the profile. Based upon this, the information below is not relevant to solving this issue; I have however left it in so the article still contains all its original information.

Further to this, Kevin posted in the comments on the 27th of January 2016 confirming that on Mavericks only the first removal is required.


However, if it is not removed: as I said above, I had removed two other files prior to removing that file. Specifically these are the following files;


These files were located via use of the grep command to search for the keyword “802” inside the files in the SystemConfiguration directory. The command to locate these is as follows;

grep "802" /Library/Preferences/SystemConfiguration/*

NOTE: Notice the lack of sudo; we are only reading information here, not writing, so there is no need for sudo.

It is however worth noting that because the keyword “802” matches all references to 802 (well, der), and wireless itself, as well as other communications protocols, have 802 numbers by which they can be referenced (i.e. 802.11 is wireless), it will find references to those protocols as well. So removing all files where this occurs may, and most likely will, remove configurations for other 802-series protocols/standards where they are referenced by their 802 identifiers inside the configuration files. On the laptop I did this testing on, removing these files removed ALL wireless connection details, and although this may not be a great concern in some cases, it may cause issues in others.

Anyway, if the removal of the first file and the subsequent reboot did not work, removing all three files should fix the issue (we want to remove the original file again to ensure no references have been generated in the new file):


sudo rm /Library/Preferences/SystemConfiguration/
sudo rm /Library/Preferences/SystemConfiguration/NetworkInterfaces.plist
sudo rm /Library/Preferences/SystemConfiguration/

Reboot, and the profile should now be removed.


Let me know if it works for you in the comments


Creating a USB to DC socket power cable (for an Iridium 9555)


As part of my travel kit I generally take a satellite phone for emergency communication, either outbound, or more commonly inbound from work colleagues. With my old one having died, I replaced it with an Iridium Model 9555, and whilst the phone is new and a current model, it works along the same lines as phones from the early to mid-2000s. As with those phones, it uses a wall transformer wired to a standard DC power plug (3mm barrel diameter in this case). This leads to a plethora of adapters to keep the thing charged around the world, and adds to the things that I need to carry with me, and as you may have seen from my other posts I am trying to travel with less, not more.

The Phone in Question

Standard Chargers & Adapters for the Iridium 9555

Having a look at the chargers in an effort to see if I could eliminate them and use a charger that I already carry, a little measurement and testing showed the chargers output 6V DC at 850mA, in a tip-positive/barrel-negative configuration. I found it rather interesting that it uses 6V DC, as it is very close to the USB voltage of 5V.

USB chargers these days are rather ubiquitous, with most putting out 5V DC at around 1 amp, going up to 2.1 amps for tablet devices. Given this, a USB charger should be able to power and charge the phone; however, depending on the tolerances of the charger's output and the phone's required input, it may accept a straight input from the USB charger, or the voltage may need to be boosted through a boost converter. Either way it means I can do away with all the adapters and simply use the USB chargers I already carry.

Boost converters such as the LM2577 and XL6009 based modules from eBay are capable of this, but first I want to see if I can charge the phone without one. Either way I need to make the same cable; if I need to add the converter later I can simply cut the cable and insert it in the middle.
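As a back-of-envelope check that a boost stage is even feasible, the ideal (lossless) boost converter equations give the duty cycle and input current involved. The efficiency figure is an assumption; real LM2577/XL6009 modules will draw a little more input current than the ideal case.

```python
# Back-of-envelope check on boosting 5V USB to the 6V the phone wants,
# using the ideal boost converter relationship Vout = Vin / (1 - D).

def boost_duty_cycle(v_in: float, v_out: float) -> float:
    """Duty cycle D needed to boost v_in up to v_out (ideal converter)."""
    return 1 - v_in / v_out

def input_current(v_in: float, v_out: float, i_out: float,
                  efficiency: float = 1.0) -> float:
    """Input current drawn from the USB charger for a given output load."""
    return (v_out * i_out) / (v_in * efficiency)

print(round(boost_duty_cycle(5.0, 6.0), 3))     # 0.167
print(round(input_current(5.0, 6.0, 0.85), 2))  # 1.02
```

So at the full 850mA the phone's charger supplies, the USB side would need to source roughly 1 amp, comfortably within a 2.1 amp tablet charger even after converter losses.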

Now to start: only basic tools are needed to complete this project; wire strippers, cutting tools, and a soldering iron (with solder). I used several different cutting tools, but you can use whatever you want.

Tools (Soldering Iron, Solder, Wire Strippers, Scissors & Scalpel)

I also use a liquid to help the soldering process, called Bakers Soldering Fluid. I cannot recommend this stuff highly enough; it is simply fantastic, and you do not need much of it, so a bottle lasts for ages.

Bakers Soldering Fluid

As for the parts I used, these are shown below. They include a USB cable with a width of 3.3mm across the cable insulation, which is less than the 4mm inner diameter of the hole in the DC power plug's shroud. It is important to note that the “donor” cable is a USB to Micro-USB cable, as all the full size cables (i.e. those without mini or micro connectors on one end) were too wide to go into the plug's shroud. The DC power plug itself has a 3mm barrel; beyond that, any plug should work. Also two pieces of heatshrink are used, one 4mm and the other 4.5mm. With this you can then create a USB to DC adapter.

USB Cable, Heat Shrink & DC Power Plug

The first step is to remove the desired head of the USB cable, in my case this is the Micro-USB head

Cutting the head off the Cable

Cable without Micro-USB Connector

After cutting the connector off the cable, the next step is stripping the outer insulation off the cable, exposing the contents inside.

Stripping the Cable

This exposes the two types of shielding that protect the inner conductors from EMI that could induce data errors. This shielding causes some problems, which I will go into below; it is messy and causes some extra work, and it is not necessary for our purposes.

The shielding on the USB cable

This shielding, as per the USB specification, is meant to be grounded to the chassis ground, which is not the same as the signal ground on the conductor inside the shielding. The shielding serves no purpose here, as DC jacks have only the two connections and no shield ground as there is in other connectors. Worse, it can create a ground loop if it sits directly against the bare metal of the barrel connection, to which the inner ground (negative) conductor is to be connected, as these cables are commonly supported by the extension of this barrel connector. As such, the shielding needs to be removed as much as possible. To do this we first strip it back and expose the insulated inner conductors.

Outer wire shield pulled back, exposing the inner foil shield (Note how many wires are on the paper under the shield, this can be messy)

Both shields pulled back, exposing the inner conductors

To make this easier I take the shields and twist them together, much like twisting bare conductors together before tinning them, to make it a cleaner job.

Twisted up foil & wire shielding

Once it is twisted up, cut it off. For this I use some sharp scissors, although it could be done with most cutting devices; I just like these scissors.

Removing the shielding

Once this is done, I slip the shroud from the DC plug over the cable. Although this could be done earlier or later in the process, I find this is the best time, as the mess from the shielding has gone, and the heatshrink has not yet gone on, which would expand the cable diameter and make it harder to get the shroud on.

Shroud placed on the cable

Sliding the shroud down the cable and out of the way for now, I cut heat shrink just big enough to cover the end of the cable and ensure the last of the shielding, which is hard to remove, will not short against the barrel plug.

Cut heat shrink showing the size and the exposed remnants of the shielding

Once the heat shrink is cut, thread it over the cable so it covers the remnants of the shielding, and shrink it into place.

Heatshrink in place over the cabling, protecting the remnants of the shielding

Shrinking the heat shrink in place over the shielding remnants

Now that we have dealt with the shielding remnants, we need to get rid of the two data cables. USB cables (those prior to USB 3) use four conductors: two for power (the red and black), and two for data (green and white). Now whilst I could simply remove the data cables by cutting them off flush, I am too paranoid about shorting out the conductors and damaging the cable itself, or worse, one of the devices connected to the cable ends. To deal with this I trimmed the cables back short (about 4mm in length) and then folded them back over the heatshrink.

Showing the four USB conductors

Trimming the Data Cables

Folding the data conductors back over the heatshrink

Now I have the issue of holding them there; given my desire to do my utmost to prevent possible shorting and damage to devices, I am going to heat shrink them down in place.

Heat Shrink cut to cover the Data Connectors

Heat shrink Over Conductors

Once this is in place and shrunk down, the next step is to bare the other two conductors. At this point I twist the strands of each conductor together and tin them. Given how small the conductors are on my cable, I simply used my fingernails to strip the wires.

Stripped and tinned conductors

It is then simply a matter of putting the conductors through the holes on the inner section of the DC plug, with the positive going to the tip and the negative going to the barrel. Once they are soldered securely in place, trim the conductors as close as possible to the soldered joints; this is to minimise interference when sliding and screwing the shroud into position.

Positive and Negative conductors soldered in place

Trimmed Conductors

Once this is done, slide the shroud over the cable and you are done

Completed cable

I have tested this cable on my Iridium 9555 with an Apple USB charger and it works fine; the charger gets warm, as one would expect, but no warmer than when charging any other phone. I have also tried it on other USB chargers and so far they have all worked fine.

Enjoy and as always do this at your own risk


Enabling Data Deduplication on Server 2012 R2

Data deduplication (or dedupe for short) is a process whereby the system responsible for the deduplication scans the files in one or more specific locations for duplicates, and where duplicates are found it replaces all the duplicate data with a reference to the “original” data. This is in essence a data compression technique designed to save space by reducing the data actually stored, as well as aiming to provide single-instance data storage (storing only one copy of the data, no matter how many places it is located in).

The way this is achieved depends on the system used; it can be done at block level, file level or other levels, again depending on the system and how it is implemented.
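The block-level idea can be shown with a toy example: split the data into blocks, store each unique block once, and keep only hash references elsewhere. This is a simplification for illustration; Windows Server dedupe actually uses variable-size chunking with larger chunks, not the tiny fixed blocks used here.

```python
import hashlib

# Toy illustration of block-level deduplication: split data into fixed-size
# blocks, store each unique block once, and keep a list of hash references.
# BLOCK_SIZE is deliberately tiny so the example is easy to follow.

BLOCK_SIZE = 4

def dedupe(data: bytes, store: dict[str, bytes]) -> list[str]:
    """Return hash references for `data`, storing each unique block once."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # only stored if not seen before
        refs.append(digest)
    return refs

store: dict[str, bytes] = {}
refs = dedupe(b"ABCDABCDABCD", store)
print(len(refs), len(store))  # 3 1  (three references, one stored block)
```

Reconstructing the file is just the reverse lookup: walk the reference list and concatenate the stored blocks, which is why dedupe is transparent to anything reading the files.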

In this article we are going to enable deduplication on a Windows Server 2012 R2 server. Keep in mind this changes data and could quite possibly cause data damage or loss, so make sure you have a working backup BEFORE continuing.

Firstly we need to access the server that you are planning to configure deduplication on, I will leave it up to you how you achieve that. Once you have access to the server we can begin.

On the server open “Server Manager” if it is not already open


If it gives you the default splash page, simply click “Next” (and I suggest telling it to skip that page in future by use of the checkbox). Once we are on the “Installation Type” page we need to select “Role-based or feature-based installation” and click “Next”.


On the “Server Selection” page select the server you want to install the service on (commonly the one you are using), then click “Next”.



Next up is the “Server Roles” page; here is where the configuration changes need to take place. In the right-hand list of checkboxes (titled “Roles”) scroll down until you see “File And Storage Services”, then open “File and iSCSI Services”, then further down the page check the “Data Deduplication” checkbox. Click “Next”, accepting any additional features it wants to install.


In the “Features” page simply click “Next”


On the “Confirmation” page check you are installing what is required and click “Install”


Wait for the system to install, then exit the installer panel, restarting if your server requires it.

Upon completion of the install and any tasks associated with the installation re-open “Server Manager” and in the left hand column select “File and Storage Services”


This will change the screen in “Server Manager” to a three-column layout; in the middle column select “Volumes”.


With the volumes now displayed in the right hand of the three columns, right-click on the volume you want to configure deduplication on and select “Configure Data Deduplication”.


This will bring up the “Deduplication Settings” screen for the volume you right-clicked on. Unless Data Deduplication has been configured before, the “Data deduplication” setting will be “Disabled”.


As I am configuring this on a file server, I am going to select the “General purpose file server” option and leave the rest as defaults. I am then going to click on the “Set Deduplication Schedule” button.


The “Deduplication Schedule” screen will now open. I suggest checking the “Enable background optimization” checkbox, as this will allow the server to optimise data in the background. I also elected to create schedules to allow for more aggressive use of system resources: the first allows it to run after most people have left for the day and before the server's scheduled backup; the second allows it to run all weekend, again stopping for backups. Please note that these settings are SYSTEM settings and apply to all data deduplication jobs on the system; they are not unique to each individual deduplication job.

Click “Apply” on the “Deduplication Schedule” screen, and then “Apply” on the “Deduplication Settings” screen. This will drop you back to the “File and Storage Services > Volumes” screen, and you are now done; data deduplication is configured.

Have fun, and don’t forget that backup


Slimming Down (of my power adapters)

Recently I have been trying to reduce what I carry, and where I cannot eliminate something I am trying to reduce its size and weight. To this end I have been fascinated by the new FINsix DART power charger. Now there is plenty of material available out on the web about this device, but the run-down is that it was first shown at CES this year and it is a small yet powerful 65 Watt laptop charger. This is made possible by the use of VHF (Very High Frequency) switching, delivering smaller packets of energy per switch (on/off), thereby saving energy and allowing the components to be smaller as they do not have to deal with as much energy at one time, but this means very little to most people.

What this essentially boils down to is a much smaller, sleeker and less bulky charger, supplying the same amount of power to your laptop (or other device).



Looking at this device, the size and weight reduction it offers are most certainly a good thing, and provide what I am looking for, that is not to say however there are not issues with this device.

First and foremost is the device's 65 Watts of power (which, as most laptops charge at between 16V and 19V, would indicate an amperage throughput somewhere in the neighbourhood of 3.4 to 4.1 amps). This is not enough for my current laptop, which uses 165 Watts, but I am not too concerned about that as I am replacing it early next year. It is not even powerful enough for the replacement, which by comparison to my current power-hungry beast uses only 85 Watts. This power gap I am sure will be narrowed with time, and I will undoubtedly be able to get one to power my laptop soon enough.
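The amperage figures are just power divided by voltage, which is easy to sanity-check:

```python
# Quick check of the amperage range: 65 W across the common laptop
# charging voltages of 16-19 V (current = power / voltage).

def amps(watts: float, volts: float) -> float:
    return watts / volts

print(round(amps(65, 19), 2))  # 3.42  (at 19 V)
print(round(amps(65, 16), 2))  # 4.06  (at 16 V)
```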

What is a little harder to deal with is the power plug pins. Yes, I know it seems minor, and perhaps it is to others, but the bags I use for work are nice leather bags from Saddleback Leather, Pad and Quill or Kent (which I grab depends on how and what I am doing), and all of these bags, apart from being more expensive than most, have leather interiors, which the pins can scratch up and damage.

Many US adapters, including the one from Apple, have a solution to this problem: retractable pins. Whilst this works for the “straight” American-style pins, I have yet to see one for Australian-style pins; check out the pictures below to see what I mean.





To this end, FINsix could at least for the US make the plugs retract, but this may not be possible due to the design.

So what's the fix, I hear you ask? Simple: a cap, same as that pen in your bag now. A simple cap over the pins, in the same colour as your device, that you can place over the pins when it's not in use. I do have another idea, but I am unable to draw it currently; I will give it a shot on paper at some point in the near future and do another post about it.


Have Fun






Office 365 Exchange Rules Allowance & Execution of Scripts Error

I have recently been migrating my work, personal and a few other domains, including clients', to new Microsoft Office 365 tenants. Nothing new to report here; however, due to having many rules in place on some of the mailboxes, I have needed to raise the rules allowance on several of them from the default 64KB to a higher amount (in my case I chose 256KB, as this is the maximum allowed). Now this was always a simple task on a locally hosted Exchange, done through PowerShell, and it turns out it can still be done this way on Office 365, but with a few added steps to connect to the PowerShell system on the Office 365 Outlook server.

Firstly you want to open up PowerShell and input the command

$O365Creds = Get-Credential

This tells PowerShell to ask for a credential via the familiar login popup box.


This credential box DOES NOT check the credentials' validity; it simply grabs them and puts them in a variable called $O365Creds.

Now we are going to actually connect to the system so we can start doing something

$O365Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $O365Creds -Authentication Basic -AllowRedirection

That command opens the session; if you have the wrong credentials you will get an error stating

New-PSSession : [] Connecting to remote server failed with the following error message :Access is denied.

If this happens simply re-run the credential request command, then re-run the session creation command

Once successful you will connect to the PowerShell server, with this you will most likely get a warning stating the following:

WARNING: Your connection has been redirected to the following URI:

This is due to the “-AllowRedirection” switch in the session creation command. The switch allows a single URL to redirect you to one of multiple PowerShell servers within a cluster; it is needed because Office 365 is a cluster and there is no way of knowing in advance which PowerShell servers have sessions available. Consequently the above warning is to be expected and is not a concern.

Both these are shown below;


Now we want to import the remote session (and consequently the command information) into our local PowerShell session. To do this we run the command below; a progress bar will display in PowerShell while this is happening.

Import-PSSession $O365Session


When doing this you may get an error if it is your first time, or if the Execution Policy settings have been changed:

Import-Module : There were errors in loading the format data file:
Microsoft.PowerShell, , <<File Path>> cannot be loaded because the execution of scripts is disabled on this system.


To get past this error you need to open an elevated PowerShell window (you DO NOT have to close the original PowerShell window) and run the following command

Set-ExecutionPolicy Unrestricted

PowerShell will give you a warning stating that this may potentially expose your system to security risks. Either enter Y (for yes) and hit Enter, or simply hit Enter (as Y is the default) if you are happy with this.


Once this is done re-run the PowerShell import command, it should now succeed

Now we can run the command to increase the amount of space for the rules, which is a simple one-liner:

set-mailbox -identity mailbox@domain.tld -RulesQuota 256kb

This command expands the rules quota of mailbox mailbox@domain.tld to 256KB (the maximum available).

This command will not show any confirmation; it will simply drop to the next prompt. Input any other rules upgrades (or any other commands you want to run remotely) and then log off.
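If you want to confirm the change took effect, you can read the quota back with Get-Mailbox (a quick sketch; mailbox@domain.tld is the same placeholder address as above):

```powershell
# Read back the rules quota to confirm it now shows 256KB
Get-Mailbox -Identity mailbox@domain.tld | Format-List Name,RulesQuota
```
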

To do this simply run;

Remove-PSSession $O365Session

This will again show no confirmation and will simply drop you to the next command prompt
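Putting it all together, the whole session looks something like this (a sketch only: the connection URI is the standard public Office 365 endpoint, and mailbox@domain.tld is a placeholder):

```powershell
# Prompt for Office 365 admin credentials (not validated until the connection is made)
$O365Creds = Get-Credential

# Open a remote session to the Office 365 Exchange PowerShell endpoint
$O365Session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri https://outlook.office365.com/powershell-liveid/ `
    -Credential $O365Creds -Authentication Basic -AllowRedirection

# Pull the remote Exchange cmdlets into the local session
Import-PSSession $O365Session

# Raise the rules quota to the 256KB maximum
Set-Mailbox -Identity mailbox@domain.tld -RulesQuota 256kb

# Tidy up when done
Remove-PSSession $O365Session
```
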

Have Fun


Using Internet Information Services (IIS) to Redirect HTTP to HTTPS on a Web Application Proxy (WAP) Server

For those of you who do not know, Microsoft’s Web Application Proxy (WAP) is a reverse HTTPS proxy used for redirecting HTTPS requests from multiple incoming domains (or subdomains) to internal servers. It does not, however, handle HTTP at any point, which is a failing in itself. I mean, it would not be hard to add an option that, if enabled, redirects HTTP to HTTPS itself, rather than having to use a workaround. Come on Microsoft, stay on the ball here. But I digress.

As I stated, the main issue is that WAP does not itself redirect an HTTP request to the equivalent HTTPS address. I have played with multiple possible solutions for this, including a Linux server running Apache 2 using PHP to read the requested URL and redirect it to the HTTPS equivalent. None of these, however, have the simple elegance of this solution, which puts the HTTP to HTTPS redirect on the same box as the WAP system itself.

First of all you need to log into the WAP server and install the Internet Information Services role. Once done open the management console and you should get a window similar to below.


Now navigate to the required server by clicking on it, and on the right hand side click “Get New Web Platform Components”.


This will open a new web browser window as shown below; when it does, simply select “Free Download”. If you have issues with not being able to download the file due to a security warning, see the earlier blog here on how to enable the downloads. Download and install the software via your chosen method.


Once it is installed, a new page will appear; this is the main splash page of the Web Platform Installer.


Using the search box (which at the time of writing, using Web Platform Installer 5.0, is in the top right hand corner) search for the word “Rewrite”. This will display a “URL Rewrite” result with the version number appended to the end (2.0 at the time of writing). Click the “Add” button to the right of the highlighted “URL Rewrite” line.


This will change the text on the button to “Remove” and activate the “Install” button to the lower right of the screen; click the install button.


Clicking this install button will bring up a licensing page; click the “I Accept” button (assuming, of course, you do accept the Ts & Cs).


You will then get an install progress page


This will change to a completed page when it is done, so click the “Finish” button in the lower right hand corner.


This will drop you back to the same original splash screen of the Web Platform Installer; click “Exit”.


You will now need to close and re-open the IIS Manager and reselect the server you were working on. You should now see two new options: the first is “Web Platform Installer”, which we do not need to concern ourselves with any further; the second is “URL Rewrite”.


Double click on “URL Rewrite” and open up the URL Rewrite management console, on the right hand side of this console in the “Actions” pane, click “Add Rule”.


This opens up a box of possible rewrite rules; what we want to create is an “Inbound Rule”, as our requests are coming into the server from an external source. Select “Blank Rule” and click the “OK” button.


In the new page that opens, in the “Name” field, type the name you want to give the rule. I use and suggest “HTTP to HTTPS Redirect”, as this tells you exactly what it does at a glance.


In the next section, “Match URL” set “Requested URL” to “Matches the Pattern” (default), “Using” to “Regular Expressions” (default) and most importantly “Pattern” to “(.*)” (without the quotes). I suggest you take this opportunity to test the pattern matching.


In the “Conditions” section, ensure that the “Logical grouping” is set to “Match All” (default) and click the “Add” button.


In the new box that appears, enter the following: in the “Condition input” field type “{HTTPS}” (again without the quotes, and yes, those are curly braces, not brackets). Change the “Check if input string” dropdown to “Matches the Pattern”, in the “Pattern” box below type “^OFF$” (again, no quotes), and ensure “Ignore case” is checked. With this one I do not suggest testing the pattern, as even though the rule works fine for me, the test ALWAYS fails. Click the “OK” button (mine is not highlighted here as I had already clicked it away and had to re-open the box).


This will take you back to the new rule screen, check the conditions match as shown and then we can move on.


This is the part where we tell it what to do when the previous conditions match. In the Action pane, change the “Action type” to “Redirect” and set the “Redirect URL” to “https://{HTTP_HOST}/{R:1}” (again, those are curly braces, and of course no quotes). You can select whether “Append query string” is checked or not, but I highly recommend leaving it checked: if someone has emailed out a URL with a query string on it, but not put in the protocol header (http:// and https:// being the ones we are concerned with), we want the query string appended to the end of the redirected URL so they end up where they intended to be. Finally, make sure the “Redirect type” dropdown reads “Permanent (301)” (the default).
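For reference, the rule the GUI builds is stored in the site’s web.config; the steps above should produce something along these lines (a sketch, using the rule name I suggested):

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Match every request, but only act when HTTPS is off -->
        <rule name="HTTP to HTTPS Redirect" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAll">
            <add input="{HTTPS}" pattern="^OFF$" ignoreCase="true" />
          </conditions>
          <!-- 301 redirect to the same host and path over HTTPS -->
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}"
                  appendQueryString="true" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```
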


Restart the server service for good measure, and there you have it: HTTP being redirected to HTTPS on the same server as the WAP itself. Ensure that you have ports 80 (HTTP) and 443 (HTTPS) forwarded from your router to the server, and that the firewalls (and any other intermediaries) on both the router and the server are set to allow the traffic as required.
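If you prefer to script the server-side firewall part, something like this should open the two ports in Windows Firewall (a sketch: the display names are my own choice; run it in an elevated PowerShell window):

```powershell
# Allow inbound HTTP (80) so the redirect rule can be hit at all
New-NetFirewallRule -DisplayName "WAP HTTP in" -Direction Inbound `
    -Protocol TCP -LocalPort 80 -Action Allow

# Allow inbound HTTPS (443) for the WAP itself
New-NetFirewallRule -DisplayName "WAP HTTPS in" -Direction Inbound `
    -Protocol TCP -LocalPort 443 -Action Allow
```
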

Enjoy and as always have fun


Fixing a Corrupt Active Directory Database

Recently I was contacted by a colleague who was having issues with an Active Directory database. Whilst there is nothing unusual in this colleague contacting me for help or vice-versa, this issue was beyond the norm.

What he had reported to me was that there were issues with the primary domain controller (PDC) and secondary domain controller (SDC) on this site having out-of-sync databases. This came to the fore as he was adding new devices (through WDSUtil) to be imaged: they appeared on the SDC but not on the PDC. The issue this caused was predominantly that the machine would be imaged and get the correct name from the SDC (which was also acting as the Windows Deployment Services (WDS) server), but it would not bind to the domain, as there was no account for it on the PDC.

Upon further investigation (over the phone at this point) we discovered that the two domain controllers were out of sync and the tombstone lifetime had expired. Fixing this problem allowed for a partial sync, as outlined below;

PDC==>SDC – Success
SDC==>PDC – Fail

PDC ==>SDC – Success
SDC==>PDC – Success

These tests were run from the “Active Directory Sites and Services” tool on the domain controllers as shown above.

Looking at the error logs, they showed AD Domain Services error 1988 and a message stating

Active Directory Domain Services Replication encountered the existence of objects in the following partition that have been deleted from the local domain controllers (DCs) Active Directory Domain Services database. Not all direct or transitive replication partners replicated in the deletion before the tombstone lifetime number of days passed. Objects that have been deleted and garbage collected from an Active Directory Domain Services partition but still exist in the writable partitions of other DCs in the same domain, or read-only partitions of global catalog servers in other domains in the forest are known as “lingering objects”

It also gave a whole bunch of sensitive information (hence I will not publish it) identifying the object that was causing it. Looking for a fix for the error I came across repadmin (the AD replication admin command line tool): repadmin /removelingeringobjects ServerWithLingeringObjects CleanServerGUID NamespaceContainingLingeringObject. I ran this, then ran the replication tests again and got the same results.

So, figuring I had nothing to lose, I deleted the object that was referenced in the error, which in my case was a user, and tried the replication again. This time I got an error stating that “An internal error occurred”. Great, what next? Looking at the error logs again (on the PDC, as by this time I was pretty sure it was the PDC causing the issues) I found error 467, meaning a corrupt database…. Oh SHIT… OK, not that bad really, but still.

I decided that I would try to repair the database directly rather than booting into DSRM (Directory Services Restore Mode) on the server (as I only had remote access). I stopped the Active Directory Domain Services service in the Services Manager (services.msc) and, knowing that the AD database is a JET database stored in C:\Windows\NTDS (NTDS stands for NT Directory Services), I copied the file ntds.dit (the AD database itself) to the desktop twice (two different file names: one to work on, one as a backup).

Once I had the two files I ran a verify on the database with esentutl /g C:\Users\<USER>\Desktop\ntds.dit; the results came back that the database was in fact corrupt, so I ran the repair with esentutl /p C:\Users\<USER>\Desktop\ntds.dit. I then moved the fixed file back to C:\Windows\NTDS, restarted the Active Directory Domain Services service in the Services Manager (services.msc) and ran the replication tests again; they all passed.
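For anyone wanting to repeat this, the rough sequence was as follows (a sketch: NTDS is the AD DS service name on recent Windows Server versions, and the desktop paths are simply where I chose to work):

```powershell
# Stop AD DS so the database file is released (-Force also stops dependent services)
Stop-Service NTDS -Force

# Take a working copy and a backup copy of the database
Copy-Item C:\Windows\NTDS\ntds.dit C:\Users\<USER>\Desktop\ntds.dit
Copy-Item C:\Windows\NTDS\ntds.dit C:\Users\<USER>\Desktop\ntds-backup.dit

# Verify integrity, then repair the working copy
esentutl /g C:\Users\<USER>\Desktop\ntds.dit
esentutl /p C:\Users\<USER>\Desktop\ntds.dit

# Put the repaired file back and restart AD DS
Copy-Item C:\Users\<USER>\Desktop\ntds.dit C:\Windows\NTDS\ntds.dit -Force
Start-Service NTDS
```
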

Crisis averted, and I am now owed a good bottle of Scotch Whisky

This was all done over a remote session, so it is possible.