Web Developers Knowledge Center
  PR Market Web Developers, Inc. "We make it happen, always and on time"

Migrating a Linux Server

Jan 26


Migrating a Linux Server From the Command Line Stage 3

Other scenarios

In the previous article in this series we discussed using rsync to do a live migration of your system from one Linux server to another. 

We looked at preparing the destination environment to be similar to the source server, then set up an rsync exclude file, then performed the sync.

It’s not always practical to perform a live sync the way we outlined it, or you may want to migrate just a couple applications instead of the entire system. 

Let’s look at your options in those cases.

Remember that this process requires that rsync be installed on both the origin and destination servers.

The package name is usually "rsync". Use your package manager to install it if you get a "not found" response from running:

which rsync
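That check can also be scripted. A minimal sketch; the package-manager commands in the message are the usual ones, so adjust them for your distribution:

```shell
# have_cmd: succeeds if the named command is on the PATH
have_cmd() { command -v "$1" >/dev/null 2>&1; }

if have_cmd rsync; then
  echo "rsync found at $(command -v rsync)"
else
  echo "rsync not found; install it, e.g. 'apt-get install rsync' or 'yum install rsync'"
fi
```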

Syncing with the server inactive

A live sync has the advantage of minimizing downtime during the migration, but it can require multiple runs of rsync to complete if you have a lot of files that change frequently. After a pass or two with rsync on a live server it can be practical to perform the final sync while your origin server is not running at all.

It’s also possible that you may be unable to boot the source system for reasons unrelated to the installed software. In that case you’d also need to be able to access your files while the server is dormant.

For real (non-virtual) servers you’d copy files while keeping the server down by booting the machine from a rescue disk (usually a live CD distribution), mounting the file system, then performing the final sync from there. Fortunately there are ways to simulate that for virtual servers.

An approach many virtual server providers take is to provide an option to boot your server using a temporary server instance. It acts as a virtual rescue disk, allowing you to mount your server’s file system while the system isn’t running. For PRMWD Cloud Servers it’s called "rescue mode".

Rescue mode

For a Cloud Server you can put the instance into rescue mode via the Cloud Control Panel. For more information on rescue mode see this article.

Once the server is in rescue mode remember that the SSH key for the host will have changed, so you’ll probably need to delete the server’s key from your "~/.ssh/known_hosts" file or equivalent.

Once you’re logged into the instance in rescue mode you should be able to mount your server’s file system and then proceed with the sync.

Determine your file system’s device by running:

fdisk -l

Look for the device that matches the size of your disk. It should be the second disk listed, usually "/dev/sda1", "/dev/sdb1", or "/dev/xvdb1".

We’ll use /dev/xvdb1 for our example. To mount that filesystem onto the /mnt/origin directory, run:

mkdir /mnt/origin

mount /dev/xvdb1 /mnt/origin

Note that rescue mode has a time limit of 90 minutes, after which your server will reboot into normal mode. If your final sync takes longer than that you may need to put the instance back into rescue mode and run the rsync command again to pick up where things left off. If you still run into trouble with the time limit, talk to our support staff and they can assist you.

Rsyncing from a mount point

After booting your server into rescue mode and mounting your server’s file system to a mount point like "/mnt/origin" you will need to adjust your rsync command accordingly.

First make sure you create an exclude file and make any changes necessary for your environment, as explained in the "Rsync excludes" section of the article on live server migration. Don’t include the mount point in the paths; list them as they would appear in your regular file system.

An example exclude file would look just like one used for a live migration:
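As a hedged illustration, a typical exclude file for a full-system migration contains entries like the following. These are common entries, not a definitive list; tailor them to your own system as described in the live-migration article:

```
/boot
/dev/
/proc/
/sys/
/run/
/tmp/
/mnt/
/etc/fstab
/etc/hostname
/etc/network/
/etc/udev/
/lib/modules/
```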

Next we’ll set up our rsync command so it takes the mount points of the file systems on both servers into account. The rsync command with the origin server in rescue mode would be:

sudo rsync -e 'ssh -p 30000' -azPx --delete-after --exclude-from="/mnt/origin/home/demo/exclude.txt" /mnt/origin/ root@

Note the trailing "/" on the origin directory. Including the slash at the end makes sure rsync treats the origin and destination directories as the same relative locations, so don’t leave that part out. Otherwise you might end up with your files getting copied into a new subdirectory on the destination instead of sent to their proper locations.

As a bonus, with that trailing slash on the directories rsync will treat the exclude file list as relative to the source directory. That’s why we don’t need to change the exclude file to account for the mount point.

Both servers in rescue mode

If you’re being extra careful and have both the origin and the destination in rescue mode you would change the destination directory too. With the destination server mounted at "/mnt/destination" the rsync command would look like:

sudo rsync -e 'ssh -p 30000' -azPx --delete-after --exclude-from="/mnt/origin/home/demo/exclude.txt" /mnt/origin/ root@

Once the final sync is done you can boot up the destination server and run your tests.

Per-package approaches

It may be impractical to migrate your entire server, or you may only have a couple services you need to bring over. In that case you can migrate the system on a per-package basis. It might be a little more work than running a full sync but it could be faster overall.

In general this approach would require installing the requisite package on the destination server, then copying its configuration and data files from the origin server to the appropriate place on the destination. Once you’re done, start or restart the service on the destination and test to make sure everything is in its place.

Once you’ve completed the copies, you may need to tweak any aspects of the system you had changed on the origin server. If you created a logrotate config for the service (or changed it) you’ll need to copy that over. If you had a cron job set up for the service, that would also need to be migrated.

A couple examples follow to illustrate the approach.

Web servers

If you’re migrating a web server you’ll need to make sure you bring over your configuration files (including virtual host definitions) as well as the files used by your website.

If you’ve been keeping your web files in a user’s home directory, make sure you have that user created on the destination server. If the user name is "demo" and the web files are all in the directory "public_html" you can run an rsync command similar to the following:

sudo rsync -e 'ssh -p 30000' -azPx --delete-after ~demo/public_html root@

We left out the "--exclude-from" flag because we wouldn’t usually need to exclude any files from this sync.
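Before running the sync it’s also worth checking that the account exists on the destination. A small sketch, using the "demo" user from the example above:

```shell
# Verify the web user exists on the destination before copying its home files
user=demo    # the example user; substitute your own
if id "$user" >/dev/null 2>&1; then
  echo "user $user exists"
else
  echo "user $user missing: create it first, e.g. 'useradd -m $user'"
fi
```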

The configuration directory for your web server may vary by distribution, particularly for Apache. Ubuntu and Debian use "/etc/apache2", CentOS and other Red Hat-based distributions use "/etc/httpd", and so on. So first, find your config directory.

Once you have the configuration directory identified run an rsync command similar to the above but copying the configuration directory instead. If you’re running nginx this might look like:

sudo rsync -e 'ssh -p 30000' -azPx --delete-after /etc/nginx root@

If you’re using PHP you may also need to bring over any changes you made to your php.ini.

After that restart your web server and run it through some tests.


Databases

A similar approach works with a database service. Install the database on the destination server, making sure you get as close to the version running on the origin as you can.

Bring the database service down and identify where its configuration and data files are kept. For MySQL the configuration files are usually in /etc/mysql and the databases themselves are in /var/lib/mysql.

It’s easier to do this with two rsync commands, one for the config and one for the databases. For a MySQL installation the commands might look like this:

sudo rsync -e 'ssh -p 30000' -azPx --delete-after /etc/mysql root@

sudo rsync -e 'ssh -p 30000' -azPx --delete-after /var/lib/mysql root@

Next check to make sure there aren’t other changes you made on the origin server related to the database (like cron jobs or logrotate configuration). 

Then start or restart the database service on the destination and get to testing.

Speeding up rsync

If you find that your migration is taking a long time there may be further measures you can take to cut down on the work rsync has to do; see our article on speeding up rsync for details.


By now you should have your system migrated to another server and be enjoying the results. Congratulations! But do remember to test thoroughly before going into production with the new server. Surprises are bad (on a server, anyway).

Jan 26


Migrating a Linux Server From the Command Line Stage 1

Server Migration

Migrating your data from one Linux server to another is only a simple affair if you’ve been running a simple server. If you have a lot of interdependent services or a highly customized setup then recreating your environment from scratch is an involved process. It gets less complex if you can copy over just the files you need without worrying about overwriting system files specific to the new server.

So that’s what we’re going to do here. We’ll look at how to prepare for a migration and what tools will make the job go easier.

Full migration versus package migration

The first choice you need to make is whether you want to migrate the whole server, configuration and all, or if you can get away with just copying over the data for a couple services.

In this article we look at the process for a full migration. If you know you want to copy more than just a few data files this is the most straightforward approach.

If you prefer a per-package approach you may want to look at the third article in this series for advice.

Prepping the new server

Start by confirming that the destination server is accessible via SSH from the origin server. You’ll also need to enable root logins via SSH on the destination server (in the /etc/ssh/sshd_config file) so rsync will be able to replace system and application files.
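Enabling root logins means setting PermitRootLogin to yes in sshd_config and then restarting the ssh service. The sketch below works on a copy of the file under /tmp so it can be dry-run safely; on the real destination you would edit /etc/ssh/sshd_config itself:

```shell
# Work on a copy of sshd_config (fall back to a stub if it doesn't exist)
cp /etc/ssh/sshd_config /tmp/sshd_config.demo 2>/dev/null || \
  printf '#PermitRootLogin prohibit-password\n' > /tmp/sshd_config.demo

# Uncomment/override any existing PermitRootLogin line, or append one
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /tmp/sshd_config.demo
grep -q '^PermitRootLogin yes' /tmp/sshd_config.demo || \
  echo 'PermitRootLogin yes' >> /tmp/sshd_config.demo

grep '^PermitRootLogin' /tmp/sshd_config.demo
```

Remember to set PermitRootLogin back to its previous value once the migration is finished.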

Check that rsync is installed on both the original server and the destination server (the package name is usually "rsync"). Running the command "which rsync" should let you know if it’s installed where you can run it.

If you’re performing a full migration it’s much more likely to go smoothly if the destination server is as similar to the original server as possible. That includes the distribution used, the system architecture, and the kernel version.


Make sure you’re running the same distribution on each server. Try to match the version of the distribution as well. The location of system files isn’t always consistent across different distributions, and sometimes when a distribution releases a new version they move some files around. If you do a straight copy without matching the distribution you may wind up with an unstable server.

If you want to combine your server migration with a distribution upgrade it’s safer to complete the migration before proceeding with the upgrade.


Next make sure both servers are using the same architecture. You can check the architecture on Linux with the "uname -a" command:

$ uname -a

Linux demo #8 SMP Mon Sep 20 15:54:33 UTC 2010 x86_64 Quad-Core AMD Opteron(tm) Processor 2374 HE AuthenticAMD GNU/Linux

After the date (which ends in "UTC 2010" above) you’ll see a code representing your system’s architecture. In this case "x86_64" means it’s an x86 system running a 64-bit architecture. If you instead see "i686" for the architecture, that means your system is 32-bit.

If the architectures don’t match the copied programs won’t run. Software compiled for 32-bit will generally not work well on a 64-bit system, and vice-versa. If the architectures don’t match you’ll have to migrate on a per-package basis instead.
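For a scripted check, "uname -m" prints just the architecture field, which makes comparing the two servers easy:

```shell
# Print only the machine architecture (x86_64, i686, aarch64, ...)
arch=$(uname -m)
echo "architecture: $arch"

case "$arch" in
  x86_64|aarch64) echo "64-bit" ;;
  i386|i486|i586|i686) echo "32-bit" ;;
  *) echo "other/unknown" ;;
esac
```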

Kernel version

Try to use the same kernel version on both servers. Sometimes a new kernel will add or change features, so a different kernel can throw a monkey-wrench into the process.

You can check the kernel version by running "uname -a" like we did above; the kernel version is listed right after the hostname in the output.

It’s generally not a good idea to copy kernels between servers. If you compile or install your own kernel (as opposed to using one provided by your hosting service) it’s safer to perform that process manually on the destination server.


Finally, try to match the versions of any software that’s already installed on the destination to what you’re running on the original server. The easiest way to make sure both systems are running the same versions of any common packages is to run an update through your package manager before the migration.

We’ll be replacing most software anyway so some version variance shouldn’t actually make much difference. We are migrating a server though, and the paramount concern for any server is stability. We want to leave nothing to chance.

Optimizing before copying

The new server generally won’t need a lot of the temporary files applications can leave lying around. The more stuff we have on the original server, the longer it will take to get everything onto the destination server.

Much of what goes on behind the scenes when you resize a virtual server is similar to what we’ll do when we use rsync to copy from one server to the other. That means a lot of the tips in our article about speeding up rsync will apply here.

In a nutshell, remove any temporary or cache files you don’t need or add their directories to the exclude file (explained below). Check the sizes of your log files and, if you can, archive or delete older logs.
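To see what’s worth cleaning up before the copy, a quick sketch: list files over 10 MB under /var/log. The 10 MB threshold is arbitrary, so adjust it to taste:

```shell
# big_files DIR SIZE: list files under DIR larger than SIZE (find(1) syntax)
big_files() { find "$1" -type f -size "+$2" -exec ls -lh {} \; 2>/dev/null; }

# Candidates for cleanup before the migration:
big_files /var/log 10M || true
```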


You’ve compared the origin and destination servers to each other and prepared your file systems for the copy. Now it’s time to make a choice:

If you’d like to migrate using a script to handle much of the heavy lifting, proceed to our article on a script-assisted migration.

If you would like to handle the syncing yourself, running rsync manually, head to our article on migrating with rsync to start the servers syncing.

Jan 26


Intended Audience

This article is intended for system administrators of at least an intermediate skill level in Windows Server 2012 operations and administration.


If you were hoping to launch a nifty IIS Web Farm using Microsoft's Web Farm Framework in IIS8, there is some not-so-happy news: It doesn't work! Microsoft says that they are not abandoning the WFF technology, but so far, the lack of updates, including the ability to function within IIS8, is not really promising.

So, can you still utilize the awesome new Windows Server 2012 while simultaneously running a fault-tolerant web farm? Yes, you most definitely can! Most of what you will find on TechNet or other related forums on this topic will guide you through using Web Deploy in conjunction with DFS on a third "Content" server, and this usually involves bringing Active Directory into the equation as well.

Well, what about those of you who are watching the budget and don't want to spin up a whole new server simply to store the common configuration to be deployed amongst the various web farm nodes? And what about those of you who want to keep your web deployment simple, without the further complications introduced by dealing with Active Directory? Fear not! Below I have highlighted how you can use Web Deploy and PowerShell scripts to keep your web content in sync while managing it from a single "Master" server. It is not quite as quick to implement or as GUI-friendly as WFF, but it uses official Microsoft technology and keeps your web content synced!


To get started, you will need to create a new user account with the same username and password on each server in the farm. This account will then need to be made a member of the local "Administrators" group on each server, and on the primary server this account needs to be granted the "Log on as a batch job" right. To grant it, navigate to Administrative Tools -> Local Security Policy -> Local Policies -> User Rights Assignment. For this exercise we are using the credentials below (obviously, you should always select a password that is much more secure):

Username: SyncMan

Password: P@ss1234

Next, on each of your secondary cloud servers, you will want to create a Windows Firewall rule to allow ALL TRAFFIC from the primary server (Master).

On the Master server only, create a common directory for storing the Web Deploy templates. For example, create a directory like C:\WebSync. Next, on the Master server only, open a PowerShell window and execute Set-ExecutionPolicy Unrestricted. When prompted, type Y and hit ENTER.

Lastly for preparation, for simplicity in your scripts, you will want to modify your Hosts file (located at C:\Windows\system32\drivers\etc\hosts) to include an entry for each node, matching its internal IP address to an easy host name, such as WEB2.
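For example, with two secondary nodes the added Hosts entries might look like this (the internal IP addresses shown are hypothetical placeholders; use your own):

```
10.180.4.21    WEB2
10.180.4.22    WEB3
```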

Web Deploy

To use Web Deploy 3.0 you will need to install it on each server in the farm; the installer is available from Microsoft's download site.

The Scripts

Now we are going to create a couple of scripts on the Master server to be run by the Scheduled Task (we will create this later). The first script is a simple batch script. Simply open a new Notepad file and place the following in it:

powershell.exe -command C:\WebSync\WebDeploySync.ps1

We will now save this file as "WDSync.bat" in the C:\WebSync folder. As I am sure you can guess from the contents of the batch file, the next script we will be creating is a PowerShell script (this type of script has the .ps1 extension). In a new Notepad file, enter the following lines:

Add-PSSnapin WDeploySnapin3.0

New-WDPublishSettings -ComputerName [MasterServerName] -AgentType MSDepSvc -FileName c:\WebSync\[MasterServerName].publishsettings

New-WDPublishSettings -ComputerName [SecondaryServerName] -AgentType MSDepSvc -FileName c:\WebSync\[SecondaryServerName].publishsettings -UserID SyncMan -Password P@ss1234

Sync-WDServer -SourcePublishSettings c:\WebSync\[MasterServerName].publishsettings -DestinationPublishSettings c:\WebSync\[SecondaryServerName].publishsettings

**NOTE** The above code reflects a 2-node setup. If you wish to have more secondary nodes, you will need to add another "New-WDPublishSettings -ComputerName [SecondaryServerName]..." line for each secondary server, and you will then need to add a new "Sync-WDServer..." line that syncs the primary server to each subsequent secondary server.

For the above code, you will save the file with the name "WebDeploySync.ps1" in the C:\WebSync folder.

Schedule The Task

Now that the scripts are in place and all the prep work has been completed, you need to set up a Scheduled Task to run the scripts at a semi-constant rate to ensure that your web content stays synced across the nodes. This task only needs to be set up on the Master server. When setting it up, we are going to run it with the SyncMan credentials that we specified earlier, and allow the task to run even when the user is not logged on. We will make this a Daily task that runs every 1 minute for a duration of 1 day. This schedule ensures that it will run indefinitely at a 1-minute interval, as 1 minute is the shortest available interval. To access the Task Scheduler, navigate to START -> Administrative Tools -> Task Scheduler.

Once in the Task Scheduler, highlight "Task Scheduler Library" in the left column. From this point, click on "Create Task..." in the right-hand Actions pane.

On the General tab of the Create Task box, enter a descriptive Name for the task, enter the SyncMan credentials by using the "Change User or Group..." button, and then change the radio button selection to "Run whether user is logged on or not". Lastly for the General tab, in the Configure for: drop-down list at the bottom, select Windows Server 2012.

Your General tab should look like this:

On the Triggers tab, click the New... button. In the New Trigger box, select the Daily radio button and choose a start time 5 or 10 minutes in the future. Ensure the Recur every: box says "1". In the Advanced settings section, check the box that says Repeat task every:, and manually type in "1 minutes", leaving the for a duration of: box set to 1 day. (Note: "1 minutes" is not a typo; make sure you leave "minutes" plural.) Your New Trigger box should look like this:

Click OK on the New Trigger box. Now click on the Actions tab.

On the Actions tab, click the New... button. In the New Action box, leave the Action: as Start a program, and in the Program/script: field, type in C:\WebSync\WDSync.bat. Your Action should look like this:

Click OK on the New Action box. On the Conditions tab, make sure to un-check all boxes, so that it looks like this:

Lastly, on the Settings tab, check the box for Allow task to be run on demand, and leave all other check boxes cleared. In the If the task is already running, then the following rule applies: drop-down list, select Run a new instance in parallel. The Settings tab should look like this:

You can now click OK on the Create Task box. To ensure that the task runs, you will need to click on Enable All Tasks History in the Actions pane on the right side of the Task Scheduler. Once your task starts running, you can highlight it and click on the History tab to ensure that it is running regularly every minute:


Now that everything is all set up and running, if it was done correctly, you should be able to test this by making a change on the Primary server, and ensuring that it shows up within IIS on the secondary server(s). Likewise, you should be able to make a change on the secondary server(s) in IIS or in the directories controlled by IIS, and you will notice that your change will get overwritten in a minute or less.

I hope that this has been helpful for those of you trying to implement a web farm in Server 2012 without deploying Active Directory and having to buy an additional server!
