Microsoft Exchange and Remote Desktop Services Specialists


Microsoft Exchange Server and
Blackberry Enterprise Server news, views and fixes.



The blog has been very quiet for a few months, and even my forum posting level has dropped. That is because I have moved house.

I am now located in the Thames Valley, midway between Newbury (home of Vodafone) and Reading (home of Microsoft UK). It took me a little while to settle in, as I moved from a very small one-bedroom flat (apartment) into a three-bedroom detached house. Although oddly my stuff appears to have grown to fill the space!

I now have a real office instead of a corner of the lounge, a real kitchen instead of another corner of the lounge, and a lounge without a kitchen or my office in it. I also have a garden, which is something of a shock as I am so not green-fingered.

Anyway, all good fun, and I am now back up to speed with client work.


Introduction of a New CAS Server Causes Certificate Prompts

An increasingly common issue is a certificate prompt seen by Outlook 2007 and higher clients following the introduction of additional CAS servers, or of new multiple-role servers holding the CAS role.

While this has been an issue for some time and is well known to those running a multiple-server environment, the number of forum postings about it has probably grown as single Exchange 2007 servers reach end of life and people migrate to Exchange 2010.

The cause of this is usually autodiscover. 

What is Happening

CAS servers have a value called "AutoDiscoverServiceInternalUri". This is published into the domain as a Service Connection Point (SCP) and is queried by Outlook 2007 and higher as part of the internal autodiscover process. It tells the client where to connect for the account information.
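You can see the SCP records Outlook finds with a quick directory query. This is an illustrative sketch rather than part of the normal toolset - the GUID is the keyword the CAS role stamps on its Autodiscover SCP objects in the configuration partition:

```powershell
# Search the configuration partition for Autodiscover SCP records,
# the same records Outlook queries during internal autodiscover.
$rootDse  = [ADSI]"LDAP://RootDSE"
$config   = [ADSI]("LDAP://" + $rootDse.configurationNamingContext)
$searcher = New-Object System.DirectoryServices.DirectorySearcher($config)
$searcher.Filter = "(&(objectClass=serviceConnectionPoint)(keywords=77378F46-2C66-4aa9-A6A6-3E7A48B19596))"

# Each result's serviceBindingInformation is an AutoDiscoverServiceInternalUri
$searcher.FindAll() | ForEach-Object { $_.Properties["servicebindinginformation"] }
```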

If you have multiple CAS servers, each of them publishes its own SCP record to the domain, and the client may end up connecting to any of them.

This command will show you the name and the value set on all Client Access Servers in the org:

Get-ClientAccessServer | Select-Object Name,AutoDiscoverServiceInternalUri

The Resolution

There are two resolutions to this issue, depending on your setup, and future plans. 


  1. The simple fix is to bring forward the introduction of the trusted SSL certificate and get it installed on the new server. The value of "AutoDiscoverServiceInternalUri" should match one of the host names on the SSL certificate. Remember that most SSL providers will not issue multiple certificates with the same names on them, so you may have to get a new certificate issued to cover all servers with the CAS role. 
  2. Set the value of AutoDiscoverServiceInternalUri to be the same on all CAS servers. If this is a specific server name, rather than a generic name, then you will need to change that value on all servers if you remove that server from production. Alternatively, you could use a generic name, ensure that it resolves internally on your network to the IP address of a CAS server, and set all CAS servers to use that value. Then, when the servers are changed, all you need to do is update the DNS. If you have clients on your internal network which are not members of the domain, you may well have already configured this. 
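For the second option, the change can be made in one pass from the Exchange Management Shell. This is a sketch - autodiscover.example.com is a placeholder for whatever generic name appears on your certificate:

```powershell
# Point every CAS server at the same generic autodiscover name.
# (autodiscover.example.com is a placeholder - use a name on your certificate.)
Get-ClientAccessServer | Set-ClientAccessServer `
    -AutoDiscoverServiceInternalUri "https://autodiscover.example.com/Autodiscover/Autodiscover.xml"

# Verify the result on all CAS servers
Get-ClientAccessServer | Select-Object Name,AutoDiscoverServiceInternalUri
```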

Multiple AD Sites

If you have your CAS servers in multiple AD sites, then you may well have to consider using site scope to control which server the clients will connect to. There are other factors in deciding whether this is the best approach, and this Technet article explains how to use Site Scope:
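As a sketch of what site scoping looks like, each CAS server can be limited to the AD sites whose clients should use it - the server and site names here are placeholders:

```powershell
# Outlook clients in Site-A and Site-B will prefer the SCP record
# published by CAS01; clients in Site-C will prefer CAS02.
# (Server and site names are placeholders.)
Set-ClientAccessServer -Identity "CAS01" -AutoDiscoverSiteScope "Site-A","Site-B"
Set-ClientAccessServer -Identity "CAS02" -AutoDiscoverSiteScope "Site-C"
```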

CAS Array

This is not related to the Exchange 2010 CAS Array function, and you shouldn't use the CAS array host name for this. The CAS array doesn't use HTTPS and also shouldn't be resolvable from outside. 

Blackberry "Buyer's Remorse" Screen

Does someone at RIM have a sense of humour, I ask myself?

While playing around with a couple of Blackberry devices that belong to a client, I went through the common list of Blackberry diagnostic codes to see if they worked on an OS6 device (they do). 

When I came to the one for the Voice and Data use (BUYR), I had a surprise when the additional information was labelled "Buyer's Remorse". See the screenshot below. 

This is from my own 9700 that I have upgraded to OS 6. I only use it for data; it doesn't have a voice subscription. 

Wondering if this was an OS 6 thing, I checked another device. This was a brand new 9780. 

Slightly different OS versions ( on the 9780, versus on the 9700), but this time there was no label on the sections. Therefore it would appear to be a 9700-only thing. A curious way to label that information - perhaps an indication of how addictive the Blackberry can be - it isn't known as the Crackberry for no reason!

Case Study 2 Part 3 - Network Rebuild - Networking

This is part three of a three part posting of a recent case study.

Part 1 - Part 2


With all the changes we had to look at the networking. 

Internet Access

With the server in the data centre, the issue of bandwidth over the WAN connection became critical. 

Therefore the client upgraded their line to a 2Mb SDSL line, although due to the distance from the exchange, we only get about 1.5Mb. 

A second internet connection was also brought in. This is a basic connection which will be used for backup purposes only. In the meantime we have put a wireless connection on to it for use as a guest wireless. No connection to the production network. In the event of a failure of the SDSL line, a cable will be moved to use the backup connection. Not completely automated, but for this client, good enough. 

The servers in the data centre are connected to the production network via a site-to-site IPsec VPN. This VPN is managed by pfSense, which sits in a virtual machine. Using the VMware virtual switches, the internal servers are isolated from the internet. 

As I wrote in part 2 about the servers, all traffic between the two servers and traffic from the internet goes across the VPN. What this means is that if the primary SDSL link is dropped, then all I have to do is reconfigure the VPN to use the backup connection. No need to make any DNS changes, and data remains under our control. 

All three internet connections - the SDSL, the ADSL backup and the data centre - are covered by OpenDNS to provide a first line of protection against nasties, and also to stop staff from browsing to sites they shouldn't. For the guest wireless, the settings are stricter, so that the link cannot be abused. 

Internal Network

A production wireless network was also introduced, using two access points that cover most of the building. This gives freedom in where to locate printers and other networking hardware. 

We also used the Windows 7 excuse to remove the last desktop printers, so the only printers left are networked. One exception: an HP Deskjet 4 which had recently been serviced was reprieved, and a JetDirect card picked up off eBay for £20 put it back in action as a network printer. 

When I did the original network I implemented a dual-speed network, where all workstations are connected to a 10/100 switch with a gigabit uplink to a gigabit switch. This was retained. A further switch was put in between the router from the ISP and the software firewall, which allows a machine to be connected outside the firewall. 

An APC UPS with a built-in network card was also retained. It has more than enough capacity for the two servers, and with the APC network tool installed on all the virtual servers, it will shut them down gracefully. 

Network Documentation

The network is documented live through OneNote. An Office 2010 licence has been used on one of the domain controllers, which allows access to OneNote, and of course this is replicated live. As changes are made, they can be quickly updated in OneNote. So while the network documentation isn't in any kind of formal, well-written format, it is kept in such a way that the network could be rebuilt from it. 

Did everything go to plan?

Given the size of the job, and the massive change that went through, things went quite smoothly. 

That said, one of the servers was dead on arrival, BT took a while to install the SDSL line, and then more time was needed to get the backup ADSL line to run at a decent speed. 

Printer publishing didn't work correctly, I had to completely redo group policy, the VPN didn't work initially for the clients, and I completely forgot about expiring passwords for the roaming users (it's been a while since I ran a large laptop fleet). Drive mappings initially worked when they felt like it. 

However overall the client is very pleased with what they have. 


At the end of 2010, the client's location had access problems due to the weather. However, the replacement network configuration allowed all staff with computers at home to work from home, connecting via the Remote Desktop Gateway. 

The future

Now this work has been done, we can look ahead. 

With complete control over the entire platform server and workstation side, internal applications can be developed easily. An internal web application is already under development, and I have told the web developer to develop for Internet Explorer 9. It is my intention to implement the new IE 9 jump lists. A Blackberry interface is also under development, as this can be accessed via the BES Express that has been installed. The new Blackberry Playbook is being looked at with some interest. 

This new deployment provides a firm platform for some time to come, while significantly increasing the productivity of the end users. 

Project Conclusion

By making use of VPN technology and the server located in the cloud, we have removed the dependency on any one ISP. This plays a key part in business continuity, and in the day-to-day use of remote access by the mobile workers. It also means that as new internet technologies, such as Fibre to the Cabinet, become available, they can be implemented with very little disruption to the business. 

Crucially though, by using technologies native to Windows and Exchange, the complexity of the network has not increased very much. There is very little proprietary technology in the network, so there is no vendor dependency other than Microsoft and VMware.

By using virtual machines, we have removed most of the hardware dependency, so in the event of a significant problem, replacement servers could be sourced from pretty much any vendor. 

Finally, it just works. Since it went live in late September 2010, it has not presented any major problems. The business just gets on with what it does. 

Case Study 2 Part 2 - Network Rebuild - Servers

This is part two of a three part case study of a recent network rebuild I carried out. For part one - click here: 


Now to the interesting bit. 

The server design was in my head for months, and then got completely redesigned following the client wanting to go with my suggestion of replicating the data off site. 

What we had was two HP ML350s, an old IBM server and an HP desktop acting as the BES server. 

What we ended up with is three DL380s: two on site, one in the data centre. 

All three DL380s are running VMware vSphere 4.1. 

VM1 - two Windows VMs (a DC and a SQL database server) plus a Linux-based firewall. 

VM2 - three VMs: a DC, an Exchange 2010 server and an application server. 

VM3 (in the data centre) - a DC, an Exchange 2010 server and a SQL database server, plus a Linux-based firewall.

As we are going to replicate Exchange data using a Database Availability Group, we needed to use Windows 2008 Enterprise edition. As an Enterprise edition licence allows multiple virtual installations of Windows on one physical machine, I decided to split the functions into dedicated servers. 

Furthermore, with more and more software products using SQL, and the client using SQL for an internal task, a dedicated SQL server was used. 

All three servers lived on the same network for a week, before the third server went off to the data centre. 

Data Replication

For real-time replication of the file structure, the network uses the latest version of DFS Replication, built into Windows 2008 R2. This works very well. 

For replication of Exchange data, a DAG is used for mailbox data, and native Public Folder replication for the public folders. 
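The DAG setup itself is only a few commands in the Exchange Management Shell. A rough sketch, where the server, database and witness names are placeholders for this network's own:

```powershell
# Create the DAG with a file share witness, add both mailbox servers,
# then seed a copy of the database to the data centre server.
New-DatabaseAvailabilityGroup -Name "DAG1" -WitnessServer "FS01" -WitnessDirectory "C:\DAG1"
Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "EX01"
Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "EX02"
Add-MailboxDatabaseCopy -Identity "DB1" -MailboxServer "EX02"
```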

For SQL, this is mainly in the form of a backup, which is replicated to the data centre server shortly afterwards. Nothing the client does requires live replication of the SQL data. 


Being an Exchange MVP, I regarded the design of the Exchange part of the platform as quite important, and everything has worked as I expected. 

The server that lives in the data centre is the only one that is exposed to the internet. All email comes in and leaves through that server. This provides a number of key benefits. 

  • In the event of a loss of the main office, all email is coming in to a server that is under our control. We don't have to worry about email bouncing or being lost. 
  • The dependency on the ISP at the main office is also removed, which I discuss further in part 3 networking. 
  • Spam filtering is being done on the faster bandwidth available in the data centre.
  • I have also pointed OWA and Outlook Anywhere traffic at the data centre server, not only for speed reasons but also so that if we have to use a backup internet connection, the clients don't have to be touched. The trade-off is that all inter-server traffic goes over the WAN connection. 

An RPC Client Access array is configured as outlook.example.local, which points at the local CAS server while allowing for easy changes in the event of a full failure. 
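A minimal sketch of that configuration, assuming the site name and database name shown here are placeholders:

```powershell
# Create the array object and point the mailbox database at it;
# clients then connect to the array name rather than a server name.
# (Site and database names are placeholders.)
New-ClientAccessArray -Name "outlook.example.local" -Fqdn "outlook.example.local" -Site "Default-First-Site-Name"
Set-MailboxDatabase -Identity "DB1" -RpcClientAccessServer "outlook.example.local"
```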

We also upgraded the Blackberry Enterprise Server from a very old version 4.0 to a 5.0.2 Express server. This is installed on the application server, with its database on the SQL server. 

Other Bits

WSUS - there are two WSUS servers in place, with the workstations pointing at a server in their office and the laptops pointing at a child WSUS on the Exchange server in the data centre. The child server is set up so that the laptops pull their updates straight from Microsoft, whereas the desktops pull theirs from the local WSUS server. This saves bandwidth on the office connection. 

As we had to use Windows Server Enterprise edition, which allows the use of four virtual machines, the server in the data centre had a spare slot. I therefore built a web server and installed SmarterStats on it; this can only be accessed from the internal network. As a result, the client was able to change their public web site hosting arrangement and save money there. 

SmarterStats also allows use of OWA to be tracked. 

For backups, we dumped the tapes and Backup Exec and switched to two Iomega network-attached drives, with the backup job controlled by BackupAssist. The drives are exchanged each day but are used for archive purposes only; for a full-scale recovery, the copy in the data centre would be used. Shadow Copies are also enabled to provide an additional layer of protection.

The VMware platform is managed by a vCenter server installed on the application server, with monitoring provided by Veeam's monitoring application. 

Remote access to the site is available via LogMeIn, Remote Desktop Gateway and VPN. There is also the option of accessing network resources with their Blackberries. This came in very handy when I couldn't remember a password in the data centre and needed to look it up in the password database (Secret Server from Thycotic), which has a mobile interface. 

Server Conclusion

In effect, the client now has their own mixed cloud and on-site implementation, except they aren't sharing anything with anyone else. Data is stored off site, in real time. Traffic from the internet comes in through a static location which is secure and fast. The client has an almost complete business continuity plan for a lot less than they would ever have dreamed. 

Part Three - Network is here: