LFTP and the Stuck Login

I have recently been working on a new backup management system that utilises the Synology and its ability to schedule tasks. Whilst I am ultimately writing a program in Go to manage multiple backup configurations across multiple backup protocols, I have also been playing with the underlying software and protocols outside of that program. One such piece of software is LFTP, which allows the transfer of files using FTP, FTPS, SFTP, HTTP, HTTPS and other protocols. The FTP family is the one that matters for the software I am writing, and most importantly LFTP supports mirroring with those protocols.

Whilst I write this software I still wanted to get backups of the system running, so I was testing the LFTP commands and hit an issue where LFTP would simply not connect to the server, yet the regular FTP client worked fine.

Firstly, we have to understand that LFTP does not connect to the server until the first command is issued; in the example below, that command was ls. Once this command is issued, LFTP attempts to connect and log in to the server, and this is where the issue appears: LFTP just hangs at “Logging in”.

user@server backupfolder$ lftp -u username,password ftp.hostname.com
lftp username@ftp.hostname.com:~> ls
`ls' at 0 [Logging in...]

To work out what the issue was I had to do a little research, and it comes down to the fact that LFTP defaults to secure connections. That in and of itself is not a bad thing, in fact it is a good thing, but many FTP servers are yet to implement the FTPS/SFTP protocols, and as such we end up with a hang at login. There are, however, two ways to fix this.

The first way to fix this is to turn off SSL for this connection only, which is done through the modified connect command of

lftp -e "set ftp:ssl-allow=off;" -u username,password ftp.hostname.com

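Because mirroring is what I am ultimately after, the same setting can be combined into a one-shot mirror command. This is just a sketch of that idea; the remote path, local path and hostname below are placeholders only:

lftp -e "set ftp:ssl-allow=off; mirror /public_html /volume1/backups/site; quit" -u username,password ftp.hostname.com
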
This is best if you are dealing with predominantly secure sites. However, as I said, many FTP servers are still using the older insecure FTP protocol, at which point it may be more beneficial to change the LFTP configuration to default to insecure mode (and then enable SSL when needed for the secure connections; it depends on which you have more of). To do this we need to edit the LFTP config file, as follows.

Utilising your favorite text editor (vi, nano or whatever, it matters not), open the config file at /etc/lftp.conf

At some point in the file (I suggest at the end) put the following line

set ftp:ssl-allow false

Save the file and the default of requiring a secure connection is turned off, and your LFTP connection should work.

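If you would rather not open an editor at all, a single command will append that line for you; this sketch assumes the stock config location and that you have root (or sudo) access:

echo "set ftp:ssl-allow false" | sudo tee -a /etc/lftp.conf
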
Have Fun

Justin

The Old Backup Regime

After I purchased the NAS box to place at home for my work data (there is a separate one for family data; they do, however, back up to each other, but I will cover that in another post) I decommissioned my old Windows Server 2008 R2 box.

This box, however, did a multitude of things that were controlled via scheduled tasks and scripts, which I have now moved to the Synology. Chief amongst these was the backup of several websites, just for when something goes wrong.

There were several bits of software involved in the implementation of this task; these were (are):

  • wget (Windows Version) – Command line utility for downloading files, whilst there are other options, this was quick and simple, exactly what I needed
  • FTPSync (CyberKiko) – a great little piece of software. It can display a GUI showing sync progress, which is useful for troubleshooting, or run in a silent mode with no GUI. It uses simple INI text files for configuration (it encrypts the password), making it easy to configure, and it has plenty of options for doing that configuration
  • DeleteMe (CyberKiko) – Simple file removal tool, give it a folder (it can have multiple set up) and a maximum age of the files in that folder and it will remove anything older than that.
  • 7Zip (Command Line Version) – Command Line zip archive creation utility, what more is there to say
  • Custom PHP DB Export Scripts – custom PHP scripts that pull the database(s) out of MySQL and zip them up. These were originally run from a cron job, but I found it easier to use wget to pull a trigger file when I wanted the backup created, then pull the dump file itself, then pull a delete trigger (this flow appears in the sketch after the process list below)

That’s it for the software I used, but what about the backup process itself? For each of the sites I need to back up, the custom PHP scripts were configured on the server. Then a custom batch file containing a bunch of commands (or should that be a batch of commands?) downloads and archives the files.

The batch file had the following segments in it to achieve the end goal (a rough sketch of the flow follows the list):

  1.  Check if a backup is still running from a previous attempt (uses a blank lock file that is created at the start of the script and removed at the end)
    1. If it is running, skip the script and go to the end of the file
    2. If a backup job is not running, create the file locking out other jobs.
  2. Run cleanup of old files
  3. If an existing backup directory for today exists (due to a failed backup job most likely), remove it and create a new one
  4. Start logging output to a log file
  5. Start Repeating Process (Repeats once for each site that is being backed up)
    1. Generate Database Backup
    2. Retrieve Database Backup
    3. Remove Database Backup from the server (via the delete trigger)
    4. Rename Database Backup File
    5. Move Database Backup File to Storage Location
    6. Sync (utilizing FTPSync) the sites directories
    7. Remove Existing zipped backup file of the site’s files and directories if it exists
    8. Zip the folder structure and files for the website and put the ZIP file in the long term storage folder
  6. Copy Backup Complete information to log file
  7. Remove Process Lock File

To download the batch files, click here

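For reference, below is a rough sketch of that flow as a shell script, roughly how it might look now that the job lives on the Synology rather than as a Windows batch file. The paths, site names and URLs are placeholders rather than the originals, and it assumes wget, lftp and 7z are available on the box:

[code language="bash"]
#!/bin/sh
# Sketch only: paths, site names and URLs below are placeholders.

BACKUP_ROOT=/volume1/backups/websites
DAILY_DIR="$BACKUP_ROOT/daily"
ARCHIVE_DIR="$BACKUP_ROOT/archives"
LOCK_FILE="$BACKUP_ROOT/backup.lock"
TODAY=$(date +%Y-%m-%d)
WORK_DIR="$DAILY_DIR/$TODAY"
LOG_FILE="$BACKUP_ROOT/logs/$TODAY.log"

# 1. If a previous run is still going (lock file present) bail out,
#    otherwise create the lock to keep other jobs out
[ -f "$LOCK_FILE" ] && exit 0
touch "$LOCK_FILE"
mkdir -p "$DAILY_DIR" "$ARCHIVE_DIR" "$BACKUP_ROOT/logs"

# 2. Clean up dated backup folders older than the retention window (30 days here)
find "$DAILY_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +

# 3. Remove any half-finished folder for today (a failed earlier run) and start fresh
rm -rf "$WORK_DIR"
mkdir -p "$WORK_DIR"

# 4./5. Everything below is logged, and the block repeats once per site
for SITE in site1.example.com site2.example.com; do
    {
        # 5.1 Trigger the remote PHP script that dumps and zips the database
        wget -q -O /dev/null "http://$SITE/backup/trigger.php"
        # 5.2-5.5 Fetch the dump into today's folder (renamed as we go),
        #         then tell the server to delete its copy
        wget -q -O "$WORK_DIR/$SITE-db-$TODAY.zip" "http://$SITE/backup/db.zip"
        wget -q -O /dev/null "http://$SITE/backup/delete.php"

        # 5.6 Mirror the site's files (lftp standing in for FTPSync here)
        lftp -e "set ftp:ssl-allow=off; mirror / $WORK_DIR/$SITE-files; quit" \
             -u username,password "ftp.$SITE"

        # 5.7-5.8 Replace the long term zip of the site's files and folders
        rm -f "$ARCHIVE_DIR/$SITE.zip"
        7z a "$ARCHIVE_DIR/$SITE.zip" "$WORK_DIR/$SITE-files"
    } >> "$LOG_FILE" 2>&1
done

# 6./7. Note completion and release the lock
echo "Backup complete: $(date)" >> "$LOG_FILE"
rm -f "$LOCK_FILE"
[/code]
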
Reasonably simple: to add a new site, copy and paste a previous one, update a few details and off you go.

Now, I realize that some of this is perhaps not the best or most secure way to achieve the goal (specifically how I was handling the database), but it was quick, easy and it worked. I could also have made the whole process more efficient by using config files and a for loop, but, well, I didn’t.

Have Fun

Justin


Oh Shit!!! The Data is Gone…. Or Is It?

Yep, I screwed up. I made assumptions, didn’t double and triple check things, and made a mess of something I was working on, professionally no less. I did fix it, but it was a stupid screw-up nonetheless. The irony is not lost on me, given how regularly I harp on about backups to everyone.

The other day I ended up with one of those Oh!! SHIT moments. I was migrating an older 2012 R2 file server to 2016, and whilst I was doing this I decided to kick the old server, which was due to go back to the leasing company, out of the Failover Cluster it was in. As standard, I paused the node, removed all the references from it, and then hit the evict function to evict it from the cluster, and it was from this that it all went to shit; doing this whilst doing migrations was the first part of my mistake. What happened was that the Failover Cluster borked itself and crashed on the remaining servers, and it would not restart; to this day I have not got it to restart.

After spending an hour or so trying to get the cluster to restart, I relented and went to the backups to restore the offending server. Hitting the backups, I went to the server I wanted to restore and noticed that it was only 16GB. WTF!!!! The server should be several TB in size (it is a file server after all).

Upon further investigation, it seems that I had been misreading the backup reports: the old server, which has the same name on the old Hyper-V cluster as the new server does on the new implementation, was not getting backed up; the new one was. I misread the report and assumed it was backing up the old server, mistake number 0 (this had been happening for the six weeks before the failure), and the old restore points, being past our retention limit, were gone. OK, I will hit the long-term off-site backups; it might take a while, but the data is safe. Well, it was not, or so it seemed: the other technician at the offsite location had removed the offsite backups for the file server from the primary site. Why? Because they were taking up too much space on that site’s primary backup disk (the storage at each site is partitioned to provide onsite backup for that site, with the second partition being the offsite backup for the other site).

Damn, so this copy of the data is the only one.

OK, so I killed the cluster server that everything was on, and using the old evicted node I rebuilt a single-node “cluster”, mounted the CSV, mounted the VHDX, and everything appeared as it should. Woo hoo, access to the data! Well, not so fast there, buddy.

After moving some data, an error popped up stating that the data was inaccessible. OK, no problem, the loss of a single file is not a real issue. Then it popped up again, then again… the second Oh! Shit! moment within several hours.

2017-02-02 - Dedupe Error

I recovered and moved the data I could access leaving me purely with data I couldn’t. I tried chkdsk and other tools and after several hours I took a break from it, needing to clear my mind.

Coming back to it later I looked at the error, looked at what was happening, and recalled seeing an article on another blog about Data Deduplication corrupting files on Server 2016. With this I began wondering if it also affected Server 2012 R2, and then lightning struck: deduplication. This process leaves redirects in place and essentially keeps a database of file data that it links to for the deduplication. The server the VHDX was mounted on did not know about the deduplication, the database, or how to access it.

Up until now I had only mounted the data VHDX. Now I rebuilt the server utilising the original operating system VHDX, and let it install the new devices and boot.

Upon the server booting, I opened a file I could not access before, and it instantly popped onto my screen. Problem solved.

Note to remember: if you are doing data recovery or trying to copy data from a VHDX (or any other disk, virtual or physical) that was part of a deduplicated file server, you need to do it from that server because of the deduplication database. You may be able to import the database to another server; I really have no idea, and I am not going to try to find out.

Unzip Multiple Zip Files on OSX from Command Line

I recently needed to unzip a whole bunch of zip files at work containing new client RADIUS certificates to be installed on the clients, due to the deprecation of the SHA-1 algorithm for security reasons by the software vendors (Microsoft and Apple in this case).

Each of these zip files contained one useful certificate file (a .pfx containing the required certificate and the new certificate chain) and a bunch of other files that are only applicable in certain situations, which I needed to remove once the files had been extracted from the archive. I consequently used a simple multi-step process utilizing the power of the terminal prompt/command line to achieve this.

Firstly, if you need to do this, I am assuming the files are all easily accessible. To make it easier, let’s make a directory to house all the initial zip files and put the files in there; this makes the cleanup so much easier later.

Once this is achieved we can utilise the terminal prompt to make the rest of the process easier. I do recommend you put the files in their own directory, as the following command sequence will unzip ALL zip archive files (or rather, it will attempt to unzip anything with a .zip extension) in the directory, and will delete them if you run that part of the process.

Open Terminal (type Terminal into Spotlight, Command + Space Bar, or find it in the Applications/Utilities folder)

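If the zip files are still sitting loose in your Downloads folder, a couple of commands along these lines will gather them into their own directory first (the folder name and paths here are just an example):

[code language="bash"]
# make a folder for the zips and move them all into it
mkdir -p /Users/jpsimmonds/Downloads/AAAA-Certs
mv /Users/jpsimmonds/Downloads/*.zip /Users/jpsimmonds/Downloads/AAAA-Certs/
[/code]
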
In Terminal, do the following:

[code language="bash"]
# go to the containing folder
cd /Users/jpsimmonds/Downloads/AAAA-Certs

# unzip all the files in the directory (the escape "\" is used to stop wildcard expansion)
unzip \*.zip

# remove all zip files - to change the file types to remove, change the "zip" portion of the command
rm -f *.zip
[/code]

Nice and easy: the files are now extracted, and the initial zips (and other files, if you ran the delete command on extra extensions) are removed, leaving you with just the files that you require.

Ubuntu 16.04 Server LTS: Generation 2 Hyper-V VM Install

So you have downloaded Ubuntu 16.04 and noticed it supports EFI, yet when you try to boot from the ISO you are greeted with a message stating that the machine does not detect it as an EFI-capable disk, as shown below.


Luckily this is an easy fix, as it is simply Secure Boot that Ubuntu and Hyper-V are having an argument over.

Turn off your VM, open up the settings page and navigate to the “Firmware” menu. As you can see in the first image below, “Secure Boot” is enabled (checked). To fix this, simply uncheck it as per the second image below, then click “Apply” and “OK”.

Upon doing this and restarting your virtual machine, you will be presented with the boot menu from the disk, allowing you to continue on your way.

Have Fun

Justin

I am NOT an electronics hobbyist

People call me many things, some polite, some not, some to my face and I am sure, some behind my back. One thing I have been accused of being, and that I am most certainly not, is an electronics hobbyist.

Certainly I use electronics, and my extremely limited electronics knowledge, in many of my projects, but I am certainly not interested in the electronics for the sake of the electronics; in fact, I cannot think of much of less interest to me. Whilst I can understand the point of building electronics to test and build something so you can increase your knowledge, this is simply not me. I learn what I need to know to complete a project.

This has come about because one client is utilising Arduinos in an educational setting, and on numerous occasions I have been asked to help out due to my knowledge surrounding them. Now, this is fine whilst they are doing some basic functions, you know, hello-world kind of things, but so far, for example, I have had no need to learn how to control or manage servos, so when that comes up I am of little use.

What I DO build with electronics are things that I cannot purchase off the shelf. Yes, I know it’s lazy, but like so many others I am time poor; I only build things that I have to build to achieve an outcome that I have decided I need. Often this is with the goal of some kind of automation, or of reporting on certain states, to conserve time on otherwise wasted tasks that I would have to do.

One example of this is the Particle Photon and electronic scales I am working on for the measurement, and ultimately the reporting, of the weight of a container of hydrochloric acid that is attached to the pool. The automated systems we have in place around the pool control pH, chlorine-to-salt conversion (through the use of ORP) and, on the solar controller, temperature. What they do not do, however, is tell me and report on the weight of the remaining hydrochloric acid, meaning constant checking of this one component. This project simply uses a Particle Photon (or perhaps ultimately an ESP8266) to read the weight from a set of scales and report it back, and then generate either a push message or an e-mail when the weight drops to a certain percentage (or percentages) of the original (minus the approximate weight of the container, obviously). This reduces my need to check the system.

What this does not make me is an electronics hobbyist; it makes me a maker, or perhaps an assembler, cobbling bits of off-the-shelf hardware and code together to make a task work. A true electronics hobbyist would design the circuits, test them, and go for far greater efficiency than I am trying to achieve, as the Photon is most definitely overkill for the task at hand in this case, and perhaps an ESP8266 is as well. I do not know, I do not care; I am after a working “product” at the end that can achieve my rather simple goals.

As I said, I am not an electronics hobbyist.

Justin