Archive for November, 2011

What is this Cloud all about

November 24, 2011

The term cloud has been used in various forms over the last few years. Akamai had it with their cached storage, then came Google and Microsoft with their huge geographically dispersed data centres, and along came Facebook.

I’ve never really been a proponent of using the word cloud, as I never felt that the technology had advanced to such a stage. In the last 2 years, what cloud meant was that you pay for a “cloud” instance and then use it. This is what hosting companies have been doing for the last 5 years via virtual private servers (VPS). The difference between a VPS and a cloud instance is that in the cloud you are charged for everything you use, whereas a VPS usually imposes a flat monthly charge.

This to me is just server virtualization, not a true cloud: there is still server administration that I need to take care of, which is not really what I should be doing.

Recently, after getting my hands dirty with Microsoft Azure and the new System Center 2012 and Windows Server 8 offerings, I’m beginning to feel that Microsoft really does know the direction they are going with the cloud. It is no longer server virtualization; it lets you host your app and scale your app. It is no longer about doing Windows updates, restarts, worrying about the correct installation of components, or securing the server. It is just about my app, and how to make it run smoothly.


This, I feel, is the cloud: me as an app owner, managing my app, making it perform as well as it can, and leaving the rest to a system that knows very well what it is doing.

Categories: Azure

Eject and Close CD Tray command line utilities

November 2, 2011

I’ve been searching around the internet for a decent command line utility to eject or close the CD tray but couldn’t seem to find any, so I decided to write my own command line versions using Visual Studio 2010 and .NET 4.

Note that the actual code does not utilize any .NET 4 features, so you can easily take the same code and compile it against .NET 1 or .NET 2.

The project is currently hosted on CodePlex and can be found at

Categories: General, Visual Studio

Building resilient IT systems – IIS

November 2, 2011


Another main component of Windows systems is Internet Information Services, aka IIS. This is what runs the web applications we are so familiar with today.


Previously, with IIS 6 and below, there was no easy way to synchronize the IIS metabase between 2 or more servers. The only way to do it was to install the IIS metabase editor and copy and paste between the source and destination servers.


With the introduction of IIS 7 and shared configuration, it is now possible to share the configuration between web servers so that IIS on all the servers stays in sync and has the exact same configuration (assuming that the folder structure and user permissions are the same on all the web servers).


You can use the file share method to point the other web servers to the primary web server; however, this means that if the primary server goes down, the IIS configuration on the other web servers may not be reflected correctly either. One quick way to work around this is to use DFSR to replicate the IIS configuration folder across the servers and have IIS point to a local path instead.
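As a rough illustration, the redirection to the shared configuration folder lives in redirection.config under %windir%\system32\inetsrv\config. With the DFSR workaround it would point at the locally replicated folder rather than a UNC path. The path below is a placeholder, and the credential attributes that IIS Manager normally writes for a UNC share are omitted:

```xml
<!-- %windir%\system32\inetsrv\config\redirection.config (illustrative) -->
<configuration>
  <configurationRedirection enabled="true">
    <!-- A local folder kept in sync by DFSR,
         instead of something like \\primaryserver\IISConfig -->
    <properties path="C:\IISSharedConfig" />
  </configurationRedirection>
</configuration>
```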


How to build a resilient IIS (7.0 and above)

To set up shared configuration, you can refer to this article or this video guide. In fact, there are a whole lot of guides that can easily be found, but in general, setting up shared configuration is extremely simple.


An alternative to using DFSR can be found here, where offline files are used to ensure the files are always available. Do note that in the comments there are mentions that DFSR might be a better option than offline files.

Once you have shared configuration up and running, you will have multiple servers hosting the same IIS content. However, this is purely hosting; you still need a load balancing appliance or application to perform the load balancing.


To solve this, you can either use Windows Network Load Balancing or a hardware load balancer box from the likes of F5, NetApp or Radware (in no particular order).
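As a sketch of the Windows NLB route (assuming Windows Server 2008 R2 or later, where the NetworkLoadBalancingClusters PowerShell module is available; the interface names, node name and IP below are placeholders):

```powershell
# Create an NLB cluster on the first web server (placeholder names/IPs)
Import-Module NetworkLoadBalancingClusters
New-NlbCluster -InterfaceName "Ethernet" -ClusterName "WebFarm" -ClusterPrimaryIP 192.168.1.100

# Join a second web server to the same cluster
Add-NlbClusterNode -NewNodeName "WEB02" -NewNodeInterface "Ethernet"
```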

This link explains how to use Microsoft Application Request Routing, and in addition, the article provides 2 links near the end which teach you how to integrate IIS with a hardware or software load balancer.


The benefit of a load balancer appliance is that they generally offer something known as global load balancing: you can have your servers all over the world, and the load balancer has the capability to balance your requests across all of them.

Building resilient IT systems – File Replication

November 2, 2011

Some history about the File Replication found in Windows Servers

Since the days of Windows Server 2000, Microsoft has been providing a distributed file system as an add-on to its server systems.

This service, known as the File Replication Service (FRS), basically detects changed files and copies each entire file to all the servers associated with the FRS.


In Windows Server 2003 R2, Microsoft shipped an updated version of FRS known as Distributed File System Replication (DFSR), which detects the changes in files and copies only those changes to the other servers, using a technology called Remote Differential Compression (RDC). It also gives you the ability to schedule network usage.


One major caveat to using this technology is that it only works with closed files. It is not a suitable candidate for files which are constantly open, e.g. database files.


What is the purpose of DFS and DFSR

Distributed File System, aka DFS, is the technology whereby you access a share via a domain name. So instead of the traditional \\server\fileshare, you type \\domain\fileshare.
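To make the difference concrete, here is what mapping the two styles of path looks like from a client (the server, domain and share names are placeholders):

```bat
rem Traditional path, tied to a specific server
net use X: \\fileserver01\public

rem Domain-based DFS path: clients hit the namespace, not a particular server
net use Y: \\contoso.com\public
```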

There are several purposes to DFS:

  1. Load balance usage: There is a limit to how many shares can be created and maintained by a single server; with DFS you can effectively load balance across multiple servers to increase the number of available connections.
  2. Reduce network bandwidth (2003 R2 and above): Since only the changes are compressed and sent, it drastically reduces the amount of data being transferred. Do note, though, that this comes at the expense of transmission speed and CPU clock cycles (Windows first needs to find the changes and then compress them before sending them over the pipe).
  3. Added redundancy: Although DFS works with as little as one server, adding an additional server effectively provides a basic form of redundancy to guard against single server failure. The cost is that the storage requirement of any file is doubled or tripled depending on the number of servers participating in the share.
  4. Virtualize shares: Since you can effectively remap any share to a main domain share, you are able to create your own share directory structure and DFS will automatically map it to the appropriate file share for you.
  5. Ease of adding new hosts: Utilizing both DFS and DFSR, adding and synchronizing a new host is done with just a few clicks. DFSR will automatically synchronize the file contents for you in the background.
  6. Ease of recovery: Should a host go down unexpectedly, once it is back up, DFSR will automatically synchronize the changes to it.

The main role of DFSR is purely to replicate the data for the various DFS shares. One of the main things to note is that although remote differential compression is good in a WAN setup, if your servers are all connected over a LAN (at least 1 Gbps), it becomes a hindrance and you will do better to turn it off. The steps can be found here.


Guides to setup DFS

There are already detailed steps for setting up DFS and DFSR for the various Windows Server editions on TechNet; just click on the server edition you are using to get the guide.

Windows Server 2008

Windows Server 2003 R2

Windows Server 2000


Important links you should take a look at before using DFS

TechNet: How DFS Works

DFS Whitepaper