Sembee Blog of Exchange MVP Simon Butler

Future Version of Exchange Error When Removing Public Folder Database

During a recent migration from Exchange 2007 to 2010 I found I was unable to remove the public folder store from the Exchange 2007 server. 

It was returning the following error when using Remove-PublicFolderDatabase or the EMC on Exchange 2007.

Remove-PublicFolderDatabase : Object is read only because it was created by a future version of Exchange: 0.10 (14.0.100.0). Current supported version is 0.1 (8.0.535.0).

Obviously the Exchange 2010 server had touched the database in some way, probably due to the Offline Address Book migration. 

The fix was quite simple - remove it using the Exchange 2010 Exchange Management Shell. The GUI can't be used, as the Exchange 2007 public folders do not appear in there.

Get-PublicFolderDatabase -Server EXCH2007 | Remove-PublicFolderDatabase

Where "Exch2007" is the name of the Exchange 2007 server. 

After removing the database I refreshed the GUI and was then able to drop the Storage Group and complete the removal of Exchange 2007. 

Odd SBS 2011 Receiving Email Issue

 

Recently I deployed an SBS 2011 server for a client down in the New Forest. Shortly after going live with this server, we hit one of the oddest issues I have ever experienced. The fix was very simple, but the symptoms left us scratching our heads. 

The server was receiving email intermittently. I could send it messages, but other accounts could not. Sometimes email from Google Mail would come through, other times it wouldn't. The same went for Hotmail and other services. 

As it was intermittent, I confidently ruled out the Exchange side, since I could send it email myself and it was responding to telnet commands quite happily. 

Therefore we started to consider other causes, such as the router (an odd model) and the ISP, which was one I hadn't used before and wasn't quite the same as others in the UK. Things were changed around, and still the problem continued. 

The major symptom was that senders received a "Service Unavailable" response, but with a 4.x.x error code, so email wasn't failing immediately. That error message usually means the anti-spam filtering is blocking the email. As the anti-spam agents are installed by default on SBS 2011, they were removed - no change. We had also installed AV on the server, so that was checked and removed to ensure it wasn't affecting anything. 

This went on for a few days.

Then, clutching at straws, I started to go through the entire setup, comparing it to my reference SBS 2011 server here in my home office. This reference server is basically an SBS 2011 installation that has had the wizards run and is kept patched, but isn't used or touched in any other way. It is an out-of-the-box install: no third party software installed, and it isn't exposed to the internet. I have one for each of the three versions of SBS (2003, 2008 and 2011) that I work with. 

When I got to the Receive Connectors, I immediately noticed something was wrong and that I had overlooked something. 

This is a screenshot of the Receive Connector as I saw it:

 
The key bit is at the bottom. 
It appears that the SBS setup wizards configure the receive connector not to receive email from the internal subnet. However, for some reason the third line, allowing IP addresses above 192.168.x.x, had not been written. 
 
This is a screenshot of the correctly configured connector:

 

What this meant was that any email server with an IP address below 192.168.x.x was able to send email to the server, but anything above that couldn't. It would appear that some of the major email providers, like Google Mail, route their email out through high-numbered IP addresses!

Furthermore, this wasn't being corrected by the Fix My Network wizard, which I had run a number of times to ensure that I hadn't missed something. 

As soon as I corrected the setting and restarted the Microsoft Exchange Transport Service for good measure, the email started to flood in. 
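The remote IP ranges can be checked and corrected from the Exchange Management Shell as well as from the GUI. Below is a minimal sketch only; the connector name and the range are illustrative - check the actual connector name on your server and copy the ranges from a correctly configured reference installation:

# List each receive connector and its remote IP ranges
Get-ReceiveConnector | fl Name, RemoteIPRanges

# Add the missing range to the internet-facing connector (name and range are examples)
$rc = Get-ReceiveConnector "SBSSERVER\Windows SBS Internet Receive SBSSERVER"
$rc.RemoteIPRanges += "192.168.16.2-255.255.255.255"
Set-ReceiveConnector -Identity $rc.Identity -RemoteIPRanges $rc.RemoteIPRanges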

 

Case Study 2 Part 3 - Network Rebuild - Networking

This is part three of a three-part posting on a recent case study.

Part 1 - Part 2

Networking

With all the changes, we had to look at the networking. 

Internet Access

With the server in the data centre, the issue of bandwidth over the WAN connection became critical. 

Therefore the client upgraded their line to a 2Mb SDSL line, although due to the distance from the exchange, we only get about 1.5Mb. 

A second internet connection was also brought in. This is a basic connection which will be used for backup purposes only. In the meantime we have put a wireless connection on to it for use as a guest wireless. No connection to the production network. In the event of a failure of the SDSL line, a cable will be moved to use the backup connection. Not completely automated, but for this client, good enough. 

The servers in the data centre are connected to the production network via a site-to-site IPSEC VPN. This VPN is managed by pfSense, which sits in a virtual machine. Using the VMware virtual switches, the internal servers are isolated from the internet. 

As I wrote in part 2 about the servers, all traffic between the two servers and traffic from the internet goes across the VPN. What this means is that if the primary SDSL link is dropped, then all I have to do is reconfigure the VPN to use the backup connection. No need to make any DNS changes, and data remains under our control. 

All three internet connections - the SDSL, the ADSL backup and the data centre - are covered by OpenDNS to provide a first line of protection against nasties, but also to stop staff from browsing to sites they shouldn't be. For the guest wireless, the settings are stricter, so that the link cannot be abused. 

Internal Network

A production wireless network was also introduced, using two access points that cover most of the building. This gives freedom in locating printers and other networking hardware. 

We also used the Windows 7 excuse to remove the last desktop printers, so the only printers left are networked. An HP Deskjet 4, which had recently been serviced, was reprieved though: a JetDirect card picked up off eBay for £20 put it back in action as a network printer. 

When I did the original network I implemented a dual-speed network: all workstations are connected to a 10/100 switch, with a gigabit uplink to a gigabit switch. This was retained. A further switch was put in between the router from the ISP and the software firewall, which allows a machine to be connected outside the firewall. 

An APC UPS with a built-in network card was also retained. It has more than enough capacity for the two servers, and with the APC network tool installed on all the virtual servers, it will shut them down gracefully. 

Network Documentation

The network is documented live through OneNote. An Office 2010 licence has been used on one of the domain controllers, which allows access to OneNote, and of course the notebook is replicated live. As changes are made, they can be quickly updated in OneNote. So while the network documentation isn't in any kind of formal, well-written format, it is kept in such a way that the network could be rebuilt from it. 

Did everything go to plan?

Given the size of the job, and the massive change that went through, things went quite smoothly. 

One of the servers was dead on arrival, BT took a while to install the SDSL line, and then more time to get the backup ADSL line to run at a decent speed. 

Printer publishing didn't work correctly, I had to completely redo group policy, the VPN didn't work initially for the clients, and I completely forgot about expiring passwords with the roaming users (it's been a while since I ran a large laptop fleet). Drive mappings initially worked when they felt like it. 

However overall the client is very pleased with what they have. 

Finally

At the end of 2010, the client's location had issues with access due to the weather. However, the replacement network configuration allows all staff with computers at home to work from home, connecting via the Remote Desktop Gateway. 

The future

Now this work has been done, we can look ahead. 

With complete control over the entire platform, server and workstation side, internal applications can be developed easily. An internal web application is already under development, and I have told the web developer to develop for Internet Explorer 9. It is my intention to implement the new IE 9 jump lists. A Blackberry interface is also under development, as this can be accessed via the BES Express that has been installed. The new Blackberry Playbook is being looked at with some interest. 

This new deployment provides a firm platform for some time to come, while significantly increasing the productivity of the end users. 

Project Conclusion

By making use of VPN technology and the server that has been located in the cloud, we have removed the dependency on any one ISP. This plays a key part in business continuity, and in the day-to-day use of remote access for the mobile workers. It also means that as new internet technologies, such as Fibre to the Cabinet, become available, they can be easily implemented with very little disruption to the business. 

Crucially though, by using technologies native to Windows and Exchange, the complexity of the network has not increased very much. There is very little proprietary technology in the network, so there is no vendor dependency other than Microsoft and VMware.

By using virtual machines, we have removed most of the hardware dependency, so replacement servers could be deployed from pretty much anyone in the event of a significant problem. 

Finally, it just works. Since it went live in late September 2010, it has not given us any major problems. The business just gets on with what it does. 

Case Study 2 Part 2 - Network Rebuild - Servers

This is part two of a three part case study of a recent network rebuild I carried out. For part one - click here: http://blog.sembee.co.uk/post/Case-Study-2-Part-1-Network-Rebuild-Intro-and-Workstations.aspx 

Servers

Now to the interesting bit. 

The server design was in my head for months, and then got completely redesigned after the client decided to go with my suggestion of replicating the data off site. 

What we had was two HP ML350s, an old IBM and an HP desktop as the BES server. 

What we ended up with is three DL380s, two on site, one in the datacentre. 

All three DL380s are running VMWARE vSphere 4.1. 

VM1 - two Windows VMs (a DC and a SQL database server) plus a Linux-based firewall. 

VM2 - Three VMs - a DC, Exchange 2010 and an application server. 

VM3 (in the data centre) - a DC, Exchange 2010 and a SQL database server, plus a Linux-based firewall.

As we were going to replicate Exchange data using a Database Availability Group, we needed to use Windows 2008 Enterprise edition. As Enterprise edition allows multiple installations of Windows on one physical machine, I decided to split the functions up into dedicated servers. 

Furthermore, with more and more software products using SQL, and the client using SQL for an internal task, a dedicated SQL server was used. 

All three servers lived on the same network for a week, before the third server went off to the data centre. 

Data Replication

For real-time data replication of the file structure, the network uses the latest version of DFS, built into Windows Server 2008 R2. This works very well. 

For replication of Exchange data, a DAG is used for the mailbox data, with native Public Folder replication for the public folders. 

For SQL, this is mainly in the form of a backup, which is replicated to the data centre server shortly afterwards. Nothing the client does requires live replication of the SQL data. 
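For reference, setting up the mailbox replication boils down to a handful of EMS commands. This is a minimal sketch only; the DAG name, server names, witness location and database name below are all illustrative rather than the client's actual configuration:

# Create the DAG with a file share witness
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer APPSERVER -WitnessDirectory C:\DAG1

# Add the on-site and data centre mailbox servers to the DAG
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX-OFFICE
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX-DATACENTRE

# Create a passive copy of the mailbox database on the data centre server
Add-MailboxDatabaseCopy -Identity "Mailbox Database" -MailboxServer EX-DATACENTRE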

Exchange

Being an Exchange MVP, I considered the design of the Exchange part of the platform quite important, and everything has worked as I expected. 

The server that lives in the data centre is the only one that is exposed to the internet. All email comes in and leaves through that server. This provides a number of key benefits. 

  • In the event of a loss of the main office, all email is coming in to a server that is under our control. We don't have to worry about email bouncing or being lost. 
  • The dependency on the ISP at the main office is also removed, which I discuss further in part 3 networking. 
  • Spam filtering is being done on the faster bandwidth available in the data centre.
  • I have also pointed OWA and Outlook Anywhere traffic at the data centre server, not only for speed reasons but also so that if we have to use a backup internet connection, the clients don't have to be touched. This means that all inter-server traffic goes over the WAN connection. 

An RPC Client Access array is configured as outlook.example.local, which points at the local CAS server but allows for easy changes in the event of a full failure. 

We also updated the Blackberry Enterprise Server from a very old version 4.0 to a 5.02 Express server. This is installed on the application server, with its database on the SQL server. 

Other Bits

WSUS - there are two WSUS servers in place, with the workstations pointing at a server in their office, and the laptops pointing to a child WSUS on the Exchange server in the data centre. This means that the laptops can pull their updates straight from Microsoft, whereas the desktops pull theirs from the local WSUS server. This saves bandwidth. 

As we had to use Windows Server Enterprise edition, which allows the use of four virtual machines, the server in the data centre had a spare virtual machine available. Therefore I built a web server and installed SmarterStats on it, which can only be accessed from the internal network. This means the client was able to change their public web site hosting arrangement and save money there. 

SmarterStats also allows use of OWA to be tracked. 

For backups, we dumped the tapes and Backup Exec and switched to two Iomega network-attached drives, with the backup job controlled by BackupAssist. The drives are exchanged each day, but are used for archive purposes only; for full-scale recovery, the copy in the data centre would be used. Shadow Copies are also enabled to provide an additional level of protection.

The VMware platform is managed by a vCenter server installed on the application server, with monitoring provided by Veeam's monitoring application. 

Remote access to the site is available via LogMeIn, Remote Desktop Gateway and VPN. There is also the option of accessing the network resources from the Blackberries. This came in very handy when I couldn't remember a password while in the data centre and needed to look it up in the password database (Secret Server from Thycotic), which has a mobile interface. 

Server Conclusion

In effect, the client now has their own mixed cloud and on-site implementation, except that they aren't sharing anything with anyone else. Data is stored off site, in real time. Traffic from the internet comes in through a static location which is secure and fast. The client has an almost complete business continuity plan for a lot less than they would ever have dreamt of. 

Part Three - Network is here: http://blog.sembee.co.uk/post/Case-Study-2-Part-3-Network-Rebuild-Networking.aspx

Case Study 2 Part 1 - Network Rebuild - Intro and Workstations

Very occasionally, you get to do a job which you really enjoy - being able to put lots of the things that you have learnt over time into a single client deployment makes for a very satisfying job. 

At the end of 2010 I completed just such a deployment.  

I could go on for hours about this deployment, as there are so many little things that were done which I hadn't had the chance to do before, or which just make it a much better network. As I have complete control over the network, and have done for some time, I can ensure it runs exactly as it should. 

It is only 40 users, but enough to make proper use of networking kit. 

First, some background. This particular client is my oldest client. I have had them since about week six of my company. 

Just over five years ago I rebuilt their network, replacing their servers with a new domain, and all workstations were rebuilt. This was the first time I could try the locked-down workstation method, as they had no proprietary or awkward third party application that "required" admin rights to run correctly. The machines were all desktops, and the one laptop didn't leave the building. 

Windows 2003, Exchange 2003 at the back end, on three servers, two HP and a very old clunky IBM which died last year. 

Clients were Windows XP, Office 2003. 

However it was starting to show its age. Three hours to set up a new workstation was becoming a joke, and the cost of server maintenance was getting higher all the time. 

Therefore it was decided that it was time to change the lot, all in one hit. 

Yes, you read that correctly. On the Monday they had the above, by the end of the week it was all changed. 

The first question then is how we could get away with doing a big bang change like this. 

It wasn't the original plan. I was looking at maybe changing the servers this year, then the workstations next. Office 2010 had just been released when planning started. However, there was a keenness to do more and introduce laptops for some mobile workers, so it was decided to make the change all at once. 

Furthermore, because the workstations were locked down, and were a basic build (Windows XP, Office 2003, AV, and a terminal application), with all relevant data redirected to a server, the amount of work that the move required would be minimal. The key company application is a database system that runs on Unix (which fortunately I have nothing to do with). The workstations are basically an office document and web browsing station. 

Then in a planning meeting I just happened to mention that we could replicate all of their data off site in real time for a lot less than they thought. So replacing two servers became three, with replication thrown in as well. 

So this and the next two blog postings are a quick overview of what was done. If you would like to see it in action, and want me to do the same for your company, please let me know (UK Only). 

I am going to divide the rest of this blog into three parts - workstations (below), with servers and networking covered in separate posts.

Workstations

This is quite easy. 

During the last 12 months of the previous XP/2003-based network, all replacement workstations were bought with the upgrade in mind: a minimum of 2GB of RAM and Windows 7 licences where possible. 

However a number had to be replaced, plus for the first time an active laptop fleet was introduced. 

This preparation work, though, made the deployment itself much easier. 

Desktops were Windows 7 Pro, Office 2010, Adobe Acrobat Reader, AV. The flash player was installed fresh, plus the terminal application. Installing off a memory stick, I was turning each machine around in about 45 minutes. 

Laptops were Dell Latitudes, with software as above. However we also added built-in 3G cards so the users could work anywhere. Part of the plan (which I am not involved in) is to provide web-based access to their core database and inventory system. 

I also suggested, and it was taken up, that every user, from the CEO down, be given a mandatory training session. So each staff member did a half day on Windows 7 and Office 2010. We found a local trainer, who created a bespoke course for the client after I explained what I wanted them to know. 

It should be pointed out at this point that a large number of staff at this client are rather mature - I think I am still one of the youngest in the building when I go to visit. A change from Windows XP to Windows 7 would be quite a jump for them. The training was not only to show them how to do things, but also to simply give them confidence that they wouldn't break it. 

Therefore they were trained in how to change the wallpaper, use jump lists and gadgets, and given a brief overview of internet security and the like. They were trained on their actual workstations, so after the training was complete there was a frantic period of machine swap-rounds. This meant that when they returned to their desks, the things that they had done during training were still there. I felt this was important for adoption of the new platform. 

The new laptop users were given a slightly different course, which gave them a grounding in looking after the laptop. For most of them, this was the first time with a laptop. 

The client operates a conveyor belt system with desktops. New desktops go to the power users, with the slower ones going down the food chain, before eventually being removed. Therefore we started training with the power users on new desktops, while their older machines were rebuilt for the next session, and so on. This meant that during the training sessions I was rebuilding machines the users had just left. It got rather frantic. 

I rebuilt 9 machines in one day at one point, and put in 11 hour days four days on the trot. 

The end result though is that the client now has a complete desktop and laptop fleet that is on the latest OS and Office version, locked down, with the benefits that brings from a management and security point of view. 

In Part Two, I shall go over the server configuration. http://blog.sembee.co.uk/post/Case-Study-2-Part-2-Network-Rebuild-Servers.aspx 

Exchange 2010 Database White Space

31. December 2010 17:50 by Simon Butler in Exchange 2010, MS Exchange Server

With Exchange 2007 and older versions, one of the key elements that an Exchange administrator needed to keep an eye on, and which caused confusion for newcomers to Exchange, was the amount of white space in the database.
This is reported as free space in the event log via event ID 1221 during the night, and is the result of content being removed from the database by the online defrag process.

I have written about this event ID and the white space elsewhere:
http://www.amset.info/exchange/event1221.asp

With Exchange 2010, the behaviour of the database has changed.
Instead of doing an online defrag during a fixed time window, it now does it constantly. This means that content that has passed the deleted item retention period is removed from the database shortly afterwards, rather than waiting for the next online defrag window.

However because the process is running constantly, event ID 1221 isn't written to the event log. Therefore an administrator may not have a clue as to how much of the database is white space, and how much is actual content.

This question can be easily answered using EMS, as the amount of free space in the database is available via Get-MailboxDatabase -Status:

Get-MailboxDatabase -Status | Select Servername, Name, AvailableNewMailboxSpace

This command will show you the name of the server the database is mounted on, the name of the database (which is unique across the Exchange org with Exchange 2010) and the amount of space available in the database for new content.
The result will be something along the lines of this:

ServerName   Name               AvailableNewMailboxSpace
----------   ----               ------------------------
SMB-A        Mailbox Database   27.75 MB (29,097,984 bytes)

The command used - Get-MailboxDatabase -Status - can provide quite a bit of information about the databases in your Exchange org; pipe the output to fl (Format-List) to see the full list.
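For example, the following is a quick sketch (the database name is only an example) to see every property for one database, or to put the white space alongside the overall database file size:

Get-MailboxDatabase "Mailbox Database" -Status | fl

Get-MailboxDatabase -Status | Select ServerName, Name, DatabaseSize, AvailableNewMailboxSpace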

RPC Client Access Array

29. November 2010 18:30 by Simon Butler in Exchange 2010, MS Exchange Server

One of the new features with Exchange 2010 is the client access array. When configured correctly, this is quite a useful feature. In my view it is something that should be configured on all Exchange 2010 servers, even on a single server deployment.

Background

The full explanation of the CAS Array feature is available on Technet, but in short, the reason it was introduced was due to the changes in the way connections to the database are now handled. With Exchange 2007 and older, Outlook connected directly to the mailbox server (unless using Outlook Anywhere). With Exchange 2010 all clients now connect to the CAS servers. The CAS servers then manage the connection to the database.
With the Database Availability Group (DAG) meaning that an active mailbox could be moved between servers easily, connecting directly to the mailbox server wasn't really practical.

The simple way to think of a CAS array is like a virtual Exchange server. Clients see this virtual name instead of the actual name of either the CAS server or the mailbox server.

Why you should configure a CAS Array

If you are deploying multiple CAS servers, or a DAG, then a CAS array is pretty much mandatory. However, if you are on a single server, or are separating the mailbox and CAS roles onto separate machines, then a CAS array is still of value.
If you have ever done a migration or disaster recovery, one of the key pain points has been getting Outlook to point to the new server in a timely manner. As long as the original server is alive, Outlook will be redirected to the correct server automatically. During a migration though, it may not be possible to get all clients to connect to the old server before the old server has to be removed.

However as the CAS array is simply a DNS entry and a small configuration in Exchange, it is completely under the control of the network administrator. A change to the DNS will make all Outlook clients point to another server.

If there is a possibility at any time in the future of additional Exchange servers being introduced, or the CAS role moved to its own server, the use of the CAS array from the start will become invaluable for easing that transition. All MAPI clients will use it, so as well as Outlook, this can also include things like Blackberry Enterprise Server.

CAS Array Configuration Notes

Ideally the CAS array should be configured before any mailboxes are moved to Exchange 2010. If you don't, then the clients that are moved will use the true name of the CAS server, and even after the CAS array has been configured they will not change unless the mailbox is moved between servers or the Outlook profile is changed.
If the CAS array is therefore introduced retrospectively, it can produce mixed results unless all clients have been updated with the new value somehow.

You can use the CAS array with Network Load Balancing (NLB), but if the server has all of the roles and is also a DAG member, then you must use an external load balancer. Using NLB on the same server as the DAG is not supported.

A CAS array cannot go across Active Directory sites. Therefore if you are doing a two host DAG, with the second (passive) host in a data centre or similar, and have separated the AD sites, you will need two CAS arrays. In the event of a full failover, you will need to change both the CAS Array value on the database and the DNS. While this is a manual intervention, it does mean the process remains under your control.

The CAS array host does not have to be in the SSL certificate, simply because Outlook doesn't make any http connections to that host name.
You should not use the same host name for other services, particularly anything that is being accessed externally (like OWA), but you can use the same IP address and therefore NLB virtual IP.
For example, you could use outlook.example.local as the CAS Array host, then mail.example.com for OWA, SMTP, Outlook Anywhere etc.
If your internal and external domains are the same, then ensure the internal name doesn't resolve externally - so no wildcard in the domain, etc. Failure to do so will result in a confused Outlook, and will probably mean Outlook Anywhere has performance issues, if it connects at all.

Finally, on the DNS entry for the CAS array, turn the TTL down. This will ensure that if you do have to change the IP address the host name points to, the change is picked up quickly.
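For reference, creating the array itself only takes a couple of EMS commands plus an internal DNS record. This is a minimal sketch; the array name, FQDN, AD site and database name follow the examples above and should be replaced with your own values:

# Create the CAS array object in the AD site
New-ClientAccessArray -Name "outlook" -Fqdn "outlook.example.local" -Site "Default-First-Site-Name"

# Point the mailbox database at the array so Outlook profiles use the virtual name
Set-MailboxDatabase "Mailbox Database" -RpcClientAccessServer "outlook.example.local"

Then create an internal DNS A record for outlook.example.local pointing at the CAS server (or the NLB/load balancer virtual IP), with the low TTL mentioned above.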

Background and Configuration of the CAS Array: http://technet.microsoft.com/en-us/library/ee332317.aspx

Sent Items Storage for Shared Mailboxes

The default behaviour of Outlook with regards to sent items continues to come up on forums as an issue.

By default, when you send an email using the From field via your Send As permissions, the item you have sent goes into your own Sent Items folder. This is because you sent it, not the person whose mailbox it was sent from. This can be useful from a tracking point of view (who sent the email).

However it may also be useful for the item to be stored in the Sent Items folder of the Shared Mailbox so that other users or even the mailbox owner can see what was sent.

How you achieve this depends on the version of Outlook that you are running. The version of Exchange doesn't matter.

For Outlook 2003 and 2007, a registry change is required, following the installation of an update. If you are keeping the machines up to date, then further updates should not be required.
These registry changes are outlined in the following articles:

Outlook 2007
http://support.microsoft.com/kb/972148
Requires Outlook 2007 Hotfix: 970944
http://support.microsoft.com/kb/970944/

Outlook 2003
http://support.microsoft.com/kb/953804/
Requires Outlook 2003 Hotfix: 953803
http://support.microsoft.com/kb/953803/
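As a rough sketch, the change the hotfixes enable is the DelegateSentItemsStyle registry value described in the KB articles above. For Outlook 2007 it looks something like the following (use the 11.0 key for Outlook 2003, and check the relevant KB article for the exact details for your version):

# Per-user setting; 12.0 is the Outlook 2007 key, 11.0 for Outlook 2003
$key = "HKCU:\Software\Microsoft\Office\12.0\Outlook\Preferences"
New-ItemProperty -Path $key -Name DelegateSentItemsStyle -PropertyType DWORD -Value 1 -Force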


For older versions of Outlook, you will need to look at third party tools. The only ones that I am aware of are the tools from Ivasoft: http://www.ivasoft.biz/

For OWA, you will need to use a server-side tool; again, the third party tools from the above site are the only ones that I am aware of - and support for the latest version of Exchange isn't available.

For Outlook 2010, no registry change is required; you just need to add the mailbox in a different way.
Instead of adding the mailbox as an additional mailbox through the Properties of the primary mailbox, add it as an additional Account. That means going through the new Account wizard again. This feature also allows you to have connections to a mailbox in another Exchange forest at the same time - I have used this to migrate public folders (see http://blog.sembee.co.uk/post/Cross-Forest-Public-Folder-Migration.aspx).

However if you are using Outlook 2010, you should also be aware of the issues  in this KB article: http://support.microsoft.com/kb/2297543 (Performance problems when you try to access folders in a secondary mailbox in Outlook 2010).

(Late posting because I forgot to press publish).

After Moving Mailbox, Type is Set to "Linked"

During a recent mass migration from Exchange 2003 to Exchange 2010, I had a large number of mailboxes that appeared on the new server as the type "Linked".
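If you want to see how many mailboxes are affected, a quick check from the Exchange 2010 EMS (a sketch only) is to filter on the recipient type:

Get-Mailbox -ResultSize Unlimited | Where-Object {$_.RecipientTypeDetails -eq "LinkedMailbox"} | Select Name, Database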

Now while you can change the mailbox type between normal, shared and resource, it isn't supported to change it from Linked. The only fix is to disconnect the mailbox from the user and reconnect it. This of course has other consequences if not done with care, including breaking internal email replies to old messages that the user sent.

I therefore went looking for the actual cause.

The most common cause is another account having the permission "Associated External Account". That is an Exchange 2003 permission, for which I cannot find an equivalent in Exchange 2007/2010. Therefore the only way I found to remove the permission was to move the mailbox back to the Exchange 2003 server. This allowed me to look at the mailbox permissions through ADUC and remove the permission; it should only be set on "SELF". In this client's case I found it was allocated to an orphaned SID, so a broken account.

If you attempt to modify the permission while the mailbox is on Exchange 2007/2010 you will be unable to and will simply get an error message instead. 

After removing the permission, you can move the mailbox back and it shouldn't be linked.

Very occasionally you will get a mailbox where this does not work, in which case the disconnect-and-reconnect method needs to be used; or you might have a large number of linked mailboxes where moving them back is impractical. For those occasions, a script may well be more appropriate.

Fellow Exchange MVP Tony Murray has a PowerShell script to automate this, and it is available from this location:

http://www.open-a-socket.com/index.php/2010/08/30/powershell-script-to-bulk-convert-linked-mailboxes/

Cross Forest Public Folder Migration The Easy Way - Use Outlook 2010

Anyone who has done a cross forest public folder migration will almost certainly be reliving their nightmares about it simply from reading the title.
I was just the same.

The usual method was to extract the content to a PST file, either manually (selecting about 1000 items at a go) or by using a rule, move the PST file to a machine in the new forest, then import it.

Slow, mind-numbingly dull, and therefore not the most fun part of a migration - always the bit that I don't look forward to. 

However a recent migration was done almost completely hands free. I moved almost 9GB of data in an afternoon, while I went to the cinema.

To do this, I took advantage of the new feature of Outlook 2010 that allows it to connect to two different Exchange organisations at the same time.

This allowed me to create a rule to move the content between the two public folders. Once the rule was set, I left it to get on with it. The speed wasn't great, but compared to moving the content manually, it was a considerable time saver. After returning from the cinema I was able to do more of the migration work while I waited for the rule to finish.
Furthermore, by using multiple machines, I could move lots of large public folders at once. Once the process was completed, the rule was discarded.
Even before moving the data, when creating the new folders it was easy to set up the permissions, as I could compare them side by side.

Where an item was corrupt and couldn't be moved, or the few items that didn't match the rule, I simply moved those items manually or deleted them. In most folders this was only a few hundred items at most.

You still can't copy and paste large numbers of items, as the problem with trying to copy/cut and paste more than about 1,500 items is still in Outlook, but a rule effectively moves each item individually, so that isn't a problem.
For folders with a small number of items, a straight copy and paste works well.

I used the same procedure to move a stubborn mailbox which wouldn't move on the regular cross forest move mailbox procedure. Much faster than exporting the mailboxes out to PST file and then importing them. It also allowed me to identify the corrupt items and deal with them.


Even if you aren't deploying Outlook 2010 for your migration, it is worth downloading the trial for this feature alone. Then once the migration is complete, discard the trial software.