This is part two of a three-part case study of a recent network rebuild I carried out. For part one, click here: http://blog.sembee.co.uk/post/Case-Study-2-Part-1-Network-Rebuild-Intro-and-Workstations.aspx
Servers
Now to the interesting bit.
The server design had been in my head for months, and then got completely redesigned after the client decided to go with my suggestion of replicating the data off site.
What we had was two HP ML350s, an old IBM server and an HP desktop acting as the BES server.
What we ended up with is three DL380s, two on site, one in the datacentre.
All three DL380s are running VMware vSphere 4.1.
VM1 - two Windows VMs, a DC and a SQL database server, plus a Linux-based firewall.
VM2 - three VMs: a DC, Exchange 2010 and an application server.
VM3 (in the data centre) - a DC, Exchange 2010 and a SQL database server, plus a Linux-based firewall.
As we are going to replicate Exchange data using a Database Availability Group, we needed to use Windows Server 2008 Enterprise edition, as a DAG relies on failover clustering, which the Standard edition doesn't include. As Enterprise edition licensing allows multiple installations of Windows on one physical machine, I decided to split the functions up into dedicated servers.
Furthermore, with more and more software products using SQL, and the client already using SQL for an internal task, a dedicated SQL server made sense.
All three servers lived on the same network for a week, before the third server went off to the data centre.
Data Replication
For real-time replication of the file structure, the network uses the version of DFS Replication (DFSR) built into Windows Server 2008 R2. This works very well.
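To give a flavour of what the replication group looks like, here is a rough sketch using the DFSR PowerShell cmdlets. Those cmdlets only arrived in later versions of Windows Server; on 2008 R2 the same thing is set up through the DFS Management console, and the server, folder and path names below are just placeholders:

    # Create a replication group covering the office file server and the data centre server
    New-DfsReplicationGroup -GroupName "CompanyData"
    New-DfsReplicatedFolder -GroupName "CompanyData" -FolderName "Data"
    Add-DfsrMember -GroupName "CompanyData" -ComputerName "FS-OFFICE","FS-DATACENTRE"

    # Two-way connection between the office and the data centre
    Add-DfsrConnection -GroupName "CompanyData" -SourceComputerName "FS-OFFICE" -DestinationComputerName "FS-DATACENTRE"

    # Tell each member where the replicated folder lives locally; the office copy seeds the initial sync
    Set-DfsrMembership -GroupName "CompanyData" -FolderName "Data" -ComputerName "FS-OFFICE" -ContentPath "D:\Data" -PrimaryMember $true
    Set-DfsrMembership -GroupName "CompanyData" -FolderName "Data" -ComputerName "FS-DATACENTRE" -ContentPath "D:\Data"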
For replication of Exchange data, a DAG is used for the mailbox databases, with native Public Folder replication handling the public folders.
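For anyone wanting to replicate the setup, the DAG itself only takes a handful of Exchange Management Shell commands. The server, database and witness details below are placeholders rather than the client's real names:

    # Create the DAG, using a file share witness on another server
    New-DatabaseAvailabilityGroup -Name "DAG1" -WitnessServer "APP01" -WitnessDirectory "C:\DAG1FSW"

    # Add the office and data centre mailbox servers to the DAG
    Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "EXCH-OFFICE"
    Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "EXCH-DC"

    # Seed a copy of the office mailbox database on to the data centre server
    Add-MailboxDatabaseCopy -Identity "Mailbox Database 1" -MailboxServer "EXCH-DC"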
For SQL, replication is mainly in the form of a backup, which is copied to the data centre server shortly after it completes. Nothing the client does requires live replication of the SQL data.
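As a rough illustration of that scheduled backup-and-copy job, something along these lines does the work; the server, database and share names are made up for the example:

    # Back up the database locally, then copy the backup file to the data centre server
    sqlcmd -S SQL01 -Q "BACKUP DATABASE [ClientApp] TO DISK = 'D:\Backups\ClientApp.bak' WITH INIT"
    robocopy "D:\Backups" "\\DC-SQL01\Backups" ClientApp.bak /Z /R:3 /W:30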
Exchange
As an Exchange MVP, I considered the design of the Exchange part of the platform particularly important, and everything has worked as I expected.
The server that lives in the data centre is the only one that is exposed to the internet. All email comes in and leaves through that server. This provides a number of key benefits.
- In the event of the loss of the main office, all email continues to arrive at a server that is under our control. We don't have to worry about email bouncing or being lost.
- The dependency on the ISP at the main office is also removed, which I discuss further in part three, Networking.
- Spam filtering is being done on the faster bandwidth available in the data centre.
- I have also pointed OWA and Outlook Anywhere traffic at the data centre server, not only for speed reasons, but also so that if we have to use a backup internet connection, the clients don't have to be touched. This does mean that all inter-server traffic goes over the WAN connection (a sketch of the relevant settings follows this list).
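Here is a rough sketch of those settings from the Exchange Management Shell; the host name and server name are placeholders for the real ones:

    # Point the external URLs and Outlook Anywhere at the data centre CAS
    Set-OwaVirtualDirectory "EXCH-DC\owa (Default Web Site)" -ExternalUrl "https://mail.example.com/owa"
    Set-WebServicesVirtualDirectory "EXCH-DC\EWS (Default Web Site)" -ExternalUrl "https://mail.example.com/EWS/Exchange.asmx"
    Enable-OutlookAnywhere -Server "EXCH-DC" -ExternalHostname "mail.example.com" -ClientAuthenticationMethod Basic -SSLOffloading $false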
An RPC Client Access Array is configured as outlook.example.local, pointing at the local CAS server but allowing for an easy change in the event of a complete failure.
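Again, a quick sketch of how that is done; the AD site name is a placeholder:

    # Create the CAS array for the main office AD site, then point the mailbox database at it
    New-ClientAccessArray -Name "outlook.example.local" -Fqdn "outlook.example.local" -Site "MainOffice"
    Set-MailboxDatabase "Mailbox Database 1" -RpcClientAccessServer "outlook.example.local"
    # outlook.example.local is an internal DNS A record pointing at the local CAS;
    # in a failure, only that DNS record needs to change.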
We also updated the BlackBerry Enterprise Server from a very old version 4.0 to BES Express 5.0.2. This is installed on the application server, with its database on the SQL server.
Other Bits
WSUS - there are two WSUS servers in place. The workstations point at the server in their office, while the laptops point at a child WSUS on the Exchange server in the data centre. This means the laptops can pull their updates straight from Microsoft, whereas the desktops pull theirs from the local WSUS server, which saves bandwidth.
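The split is handled with two Group Policies, one per machine type, setting the standard Windows Update policy values. Purely as an illustration (the server name is a placeholder), the desktop policy boils down to values like these:

    # Policy values set by the desktop GPO (the laptop GPO points at the data centre WSUS instead)
    $key = 'HKLM:\Software\Policies\Microsoft\Windows\WindowsUpdate'
    New-Item -Path $key, "$key\AU" -Force | Out-Null
    New-ItemProperty -Path $key -Name WUServer -Value 'http://wsus.office.local' -Force | Out-Null
    New-ItemProperty -Path $key -Name WUStatusServer -Value 'http://wsus.office.local' -Force | Out-Null
    New-ItemProperty -Path "$key\AU" -Name UseWUServer -Value 1 -PropertyType DWord -Force | Out-Null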
As we had to use Windows Server Enterprise edition, which allows up to four virtual machines per licence, the server in the data centre had a spare. I used it to build a web server and installed SmarterStats on it, which can only be accessed from the internal network. This meant the client was able to change their public web site hosting arrangement and save money there.
SmarterStats also allows OWA usage to be tracked.
For backups, we dropped the tapes and Backup Exec and switched to two Iomega network attached storage drives, with the backup job controlled by BackupAssist. The drives are swapped each day, but are used for archive purposes only; for full-scale recovery, the copy in the data centre would be used. Shadow Copies are also enabled to provide an additional level of protection.
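Shadow Copies are configured per volume. As a hypothetical example for a D: data volume (the drive letter and quota are made up), the underlying commands look something like this, with the snapshot schedule itself set through the volume's Shadow Copies tab:

    # Allocate up to 10% of the volume for shadow copy storage on D:
    vssadmin Add ShadowStorage /For=D: /On=D: /MaxSize=10%
    # Take a snapshot on demand (normally this runs from the built-in schedule)
    vssadmin Create Shadow /For=D: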
The VMware platform is managed by a vCenter server installed on the application server, with monitoring provided by Veeam's monitoring application.
Remote access to the site is available via LogMeIn, Remote Desktop Gateway and VPN. There is also the option of accessing network resources from their BlackBerry handsets. This came in very handy when I couldn't remember a password while in the data centre and needed to look it up in the password database (Secret Server from Thycotic), which has a mobile interface.
Server Conclusion
In effect, the client now has their own mixed cloud and on-site implementation, except that they aren't sharing anything with anyone else. Data is replicated off site in real time. Traffic from the internet comes in through a static location that is secure and fast. The client has something close to a complete business continuity plan, for far less than they would ever have expected to pay.
Part Three - Network is here: http://blog.sembee.co.uk/post/Case-Study-2-Part-3-Network-Rebuild-Networking.aspx