Outlook connectivity issues after mailbox migration to Exchange 2013

After a number of relatively pain-free Exchange transitions, it was only a matter of time before I hit an issue during mailbox migration. This particular client had previously experienced a number of issues with their existing Exchange deployment, and the configuration was far from best practice, so I was apprehensive about the migration.

Up until the point of switchover it had been relatively painless, save for the previous consultants using the same FQDN for the CAS Array and virtual directories (see Ambiguous URLs). However, shortly after we moved live mailboxes (test mailboxes worked flawlessly), one issue came to light. Once the migration completed (note completed, not just synced), the user would receive the prompt that an administrator has made a change to the mailbox and Outlook needs to be restarted. Once restarted, the user could not connect to Exchange 2013. It appeared that the internal clients were still hanging on to the 2010 CAS server, and creating a new profile produced the same error.

Whilst researching the problem, I received notification that the mailbox was suddenly working! No changes had been made, which made me look at IIS caching. As it turns out, this is a known issue: when a mailbox is migrated to 2013, a cache entry still points the client back to the Exchange 2010 CAS server. This cache reportedly expires every two and a half hours (from CU5).

There is currently no KB article.

As to why this has happened to this one particular client, and not the many others I’ve done, I’ve no idea.

So, we have two options – resetting IIS, or recycling the app pools. I’m a fan of trying the least disruptive workaround first.

By recycling an app pool, a new w3wp process is created which serves subsequent requests, while the previous w3wp process has a configurable amount of time to complete all outstanding requests (by default 90 seconds). There is a performance impact, since the items in memory have to be reloaded, but there is no outage.

Exchange 2013 CAS Server:
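A minimal sketch of recycling the relevant front-end pools on the CAS server, assuming the WebAdministration module is available; the pool names shown are the usual Exchange 2013 front-end pools, but verify them against `Get-ChildItem IIS:\AppPools` on your own server:

```powershell
# Recycle the Outlook-facing front-end app pools on the 2013 CAS
Import-Module WebAdministration

"MSExchangeRpcProxyFrontEndAppPool", "MSExchangeAutodiscoverAppPool" |
    ForEach-Object { Restart-WebAppPool -Name $_ }
```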



Exchange 2013 Mailbox Server:
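And the equivalent sketch for the Mailbox server, again assuming the WebAdministration module and the default pool name (confirm with `Get-ChildItem IIS:\AppPools`):

```powershell
# Recycle the back-end RPC proxy pool on the 2013 Mailbox server
Import-Module WebAdministration

Restart-WebAppPool -Name "MSExchangeRpcProxyAppPool"
```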



Exchange Application Pools

We could potentially change the interval temporarily by clicking on each app pool, and selecting Advanced Settings. This could be useful during migrations, but given the slight overhead, I would recommend changing this back to 0 after migrating.
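The same interval change can be scripted rather than clicked through; a sketch using the WebAdministration provider (the pool name and the 30-minute value are illustrative):

```powershell
# Temporarily recycle the pool every 30 minutes during the migration window
Import-Module WebAdministration

Set-ItemProperty "IIS:\AppPools\MSExchangeRpcProxyAppPool" `
    -Name recycling.periodicRestart.time -Value "00:30:00"

# After migrating, revert to the default of disabled:
# Set-ItemProperty "IIS:\AppPools\MSExchangeRpcProxyAppPool" `
#     -Name recycling.periodicRestart.time -Value "00:00:00"
```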

IIS Application Pool Settings











Hopefully this assists some of you who run into similar issues.


Cheers, Steve!


Automating Mailbox Repairs – Exchange 2010

While running through a series of mailbox repairs, I was looking for a way of automating the task. Since Exchange 2010 logs the output of New-MailboxRepairRequest to the Event Viewer, I would have to pull the results from there as part of the automation.

A brief outline of the automation goes like this: run New-MailboxRepairRequest against a mailbox database with the -DetectOnly parameter; once completed, gather from the Event Viewer all mailboxes which contain corruption, and then run a mailbox repair over them.


Some of the errors which can be resolved include:

10033 – A folder is being scoped by a search which no longer exists. 

Part of the script can also be used if you just want to pull a list of corruption-affected mailboxes.


  • New-MailboxRepairRequest is limited to one database-level repair request, or up to 100 mailbox-level requests, running at a time.
  • The command may impact user connectivity to the mailbox while repairing corruption, but not when using the -DetectOnly parameter.
  • If running against a database rather than a single mailbox, Exchange will only disconnect each user during their own mailbox scan; in other words, the command completes in sequence rather than in parallel.
  • A repair request can only be cancelled by dismounting the entire database.

Begin by running New-MailboxRepairRequest over each database. (Due to potential performance impact, you are limited to running one repair per database at a time.)
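A sketch of the detect-only scan, using the documented Exchange 2010 corruption types; the database name is a placeholder:

```powershell
# Detect-only scan of every mailbox in a database (one database at a time)
New-MailboxRepairRequest -Database "DB01" -DetectOnly `
    -CorruptionType SearchFolder, AggregateCounts, ProvisionedFolder, FolderView
```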

You will see the request begin in the Application log of the Event Viewer with Event 10059;


You will then need to wait until the scan has finished at which point you will see the Event 10047;


Once the initial detection has been completed as above, we can continue to pull the data from the Event Viewer and run a per-mailbox repair on each of the mailboxes found to have corruption. I have broken down exactly what the script is doing below.

Custom event query for the codes relevant to mailbox repair. (This can be found by creating a custom view, selecting the XML tab and copying the XML code.)

Get events using the XML filter supplied in $EventXML

Declare the $Accounts variable as an array so that objects gathered within the ForEach can be used outside of it

For each object in the $Events variable, get the data in brackets (which will be the mailbox name) and add it to the $Accounts variable

Select the unique objects in the $Accounts variable and store them in the $Mailboxes variable

The final part of the script will take each object in the $Mailboxes variable and store it in the $Mailbox variable; if the object isn’t empty, it gets the mailbox using the object as the identity with Get-Mailbox and stores it in $UserMailbox. We then run a New-MailboxRepairRequest on each valid mailbox.
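The steps above can be sketched as follows. The event ID, the XML filter and the bracket-matching regex are assumptions based on the corruption events the detect-only scan writes to the Application log; adjust them to match your own events:

```powershell
# Pull corruption events and run a per-mailbox repair on each affected mailbox
$EventXML = @'
<QueryList>
  <Query Id="0" Path="Application">
    <Select Path="Application">*[System[(EventID=10062)]]</Select>
  </Query>
</QueryList>
'@

$Events = Get-WinEvent -FilterXml $EventXML

$Accounts = @()
foreach ($Event in $Events) {
    # The event message carries the mailbox name in brackets
    $Accounts += [regex]::Match($Event.Message, '\((.+?)\)').Groups[1].Value
}

$Mailboxes = $Accounts | Select-Object -Unique

foreach ($Mailbox in $Mailboxes) {
    if ($Mailbox) {
        $UserMailbox = Get-Mailbox -Identity $Mailbox -ErrorAction SilentlyContinue
        if ($UserMailbox) {
            New-MailboxRepairRequest -Mailbox $UserMailbox.Identity `
                -CorruptionType SearchFolder, AggregateCounts, ProvisionedFolder, FolderView
        }
    }
}
```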

Once the script has completed, clear and save off the Application log ready to run through the same for any additional databases which need to be checked!


Hopefully the above will save some time and effort. You can of course run a repair over the whole database without the -DetectOnly parameter, but if you, like me, would prefer to only run against the corrupt mailboxes, this should assist to some degree.

Two scripts are available: the first, EventScript.ps1, can be used just to view the corrupt mailboxes and write them to screen. The second, PerMailboxRepair.ps1, is the script that has been outlined in this article.







VMware ESXi Upgrades and Updates with ESXCLI

This post details the steps to upgrade ESXi where VUM is not available or direct access to the hosts is difficult, e.g. a remote office or datacentre.


  • Root or similar access to the host (can be via the vMA)
  • Offline bundle of the ESXi image (.zip format)
  • PuTTY or similar to access the shell
  • Host should be in maintenance mode
  • HA should be disabled on the cluster

Start the ESXi Shell and SSH services: select the host –> Configuration –> Security Profile –> Properties –> ESXi Shell –> Options –> Start

Repeat for the SSH Service


Open a shared or local datastore on the applicable host, select the host –> Configuration –> Storage, right-click on the applicable datastore and select Browse Datastore.


Select the Upload Files to this Datastore button, select Upload File and browse to and select the .ZIP of the ESXi offline bundle to upload.


Run an SSH client, e.g. PuTTY, to SSH to the shell of the host.

Find Image Profiles

Find the available image profiles within the offline bundle by running the esxcli command:

esxcli software sources profile list -d /vmfs/volumes/<volume>/<ZIPFILE>.zip

*esxcli is case sensitive and paths can be tab completed*

Validate Image Profile

To validate the current profile against the new image profile, use the command:

esxcli software profile validate -d /vmfs/volumes/<volume>/<ZIPFILE>.zip -p <profile>

You should see:

Profile Validation Result
Compliance: True

If the validation result returns false, you can still proceed with the update or upgrade if you confirm the invalid vib is not applicable.


Update Image Profile

An update will replace existing vibs with new vibs and include vibs that are not currently on the installed image profile.

esxcli software profile update -d /vmfs/volumes/<volume>/<ZIPFILE>.zip -p <profile>

*use -f to force the update; this will be required if the image profile validation returned false*


Upgrade Image Profile

An upgrade replaces the current image profile with the image profile specified in the command. All vibs, including manual additions to the currently installed image profile, may be removed. If there are custom vendor vibs, either use the update option or manually apply the custom vibs after the upgrade.

esxcli software profile install -d /vmfs/volumes/<volume>/<ZIPFILE>.zip -p <profile>

*if the upgrade fails due to custom vibs, you can add the --ok-to-remove parameter at the end of the command*


Once the update or upgrade is complete, type the command reboot

Importing contacts into Exchange user mailbox

This is the second part of the original post Exporting Outlook contacts with PowerShell. I will be going through the process of importing the exported contacts directly into Exchange user mailboxes; in this case we will be using Exchange 2013. If you are using an older or newer version of Exchange Server, you will need to use the relevant version of the EWS API, and you will also need to adjust the dll path in the PowerShell script supplied.


The brief steps to complete are as follows.

Install EWS API 2.1

Assign Role ApplicationImpersonation to Account used to complete this procedure

Modify and Save the ContactImport.ps1 script with your Exchange CAS server, Impersonation Account + Credentials and CSV share

Save the Import-MailboxContacts.ps1 script to the location specified in ContactImport.ps1

Open an Exchange Management Shell and run the ContactImport.ps1 script to import the contacts

Otherwise, create a session to the Exchange CAS with the below and run ContactImport.ps1, changing EXCHANGESERVER to your Exchange CAS
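A sketch of the remote session setup, using the standard Exchange remote PowerShell endpoint (EXCHANGESERVER is a placeholder as above):

```powershell
# Create and import a remote session to the Exchange CAS
$Session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri http://EXCHANGESERVER/PowerShell/ -Authentication Kerberos
Import-PSSession $Session
```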


Prerequisites to this procedure include:

EWS API 2.1 – Install –> https://www.microsoft.com/en-us/download/details.aspx?id=42022 (Enables enhanced Exchange management for third-party applications)

CSV Files – If the CSV files have been created per user with the post Exporting Outlook contacts with PowerShell, the below script already includes all possible mappings for contact properties; if a custom CSV file has been created, then these mappings will need to be modified.

Exchange Impersonation Rights (Allows impersonation of users, enabling the contacts to be imported directly into mailboxes without the users’ credentials or full access rights to the mailbox) See below –>

To configure impersonation rights, you will need to complete the steps through either the Exchange Control Panel or the Exchange Management Shell.

The steps to configure impersonation rights through ECP:

Access the ECP URL, where EXCHANGESERVER is the name of your CAS, and log in with an administrative account e.g. Exchange Domain Admins –> https://EXCHANGESERVER/ecp/

Select permissions –> admin roles –>


Enter a relevant name e.g. Impersonation –> Leave scope as Default –> Add the role ApplicationImpersonation –> Add the user you will use to complete the import under Members –> Click Save.


Steps to configure impersonation through PowerShell:

Open Exchange Management Shell with an administrative account
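The equivalent of the ECP steps is a single role assignment; the assignment name and account are placeholders:

```powershell
# Grant the ApplicationImpersonation role to the account doing the import
New-ManagementRoleAssignment -Name "Impersonation" `
    -Role ApplicationImpersonation -User "DOMAIN\ImportAccount"
```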

Now that impersonation is configured, we can start the import process. In this specific use case the names of the CSV files are the names of the user accounts in the new domain; if the new mailbox names differ from the CSV-generated names, you will either need to change the generated name of the CSV or create a mapping between the CSV name and the new user account name.



Breakdown of the ContactImport script

Get all CSV file names from the share and store them in the $list variable (change SERVER\SHARE as appropriate)

Loop through each CSV name in the share; if the name matches the UserPrincipalName property of a mailbox, then import to the user’s mailbox, else display “No Address Found”

The ForEach uses the Import-MailboxContacts script, which will be explained later in the post, with the relevant parameters for EWS. You will need to change the EXCHANGESERVER name to your Exchange CAS server and use the user name which was given impersonation rights earlier.
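The loop described above might look something like the sketch below. The share path, server name and in particular the Import-MailboxContacts parameter names are assumptions; match them to the parameters declared in your copy of the script:

```powershell
# Sketch of the ContactImport loop (placeholder paths and parameter names)
$list = Get-ChildItem "\\SERVER\SHARE" -Filter *.csv

foreach ($csv in $list) {
    $name = $csv.BaseName
    $mailbox = Get-Mailbox -Filter "UserPrincipalName -like '$name@*'" `
        -ErrorAction SilentlyContinue
    if ($mailbox) {
        .\Import-MailboxContacts.ps1 -CSVFileName $csv.FullName `
            -EmailAddress $mailbox.PrimarySmtpAddress -Server EXCHANGESERVER
    }
    else {
        Write-Host "No Address Found for $name"
    }
}
```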



Import-MailboxContacts script

At the bottom of this post is the Import-MailboxContacts script, courtesy of Steve Goodman, which has been configured to be used with the Exporting Outlook contacts with PowerShell post.

The script needs to be saved as Import-MailboxContacts.ps1 and is called by the ContactImport script. The ContactMappings array has been modified to work with the export from Outlook, and the script has been updated with the correct paths for use with EWS 2.1.

And that’s it. If all is configured correctly, your users should have newly imported contacts in their mailbox.

Hope this helps!

Full Scripts with comments below;


Import-MailboxContacts script




ContactImport Script


SMTP Unable to send External – 550 5.7.1 Unable to relay

I recently set up some SMTP receive connectors and realised quite quickly that internal anonymous users were unable to send externally. You could argue that they shouldn’t be allowed to do so and should have to authenticate. That wasn’t the case here though.

The quickest way to test the ability of a system to send externally is through a telnet client. Find a system with a telnet client (PuTTY will do) and add its IP address to the connector you are testing.

Open the telnet client and connect to the IP address of the Exchange server on the port the connector is listening on.

*Note that if you type the incorrect word and backspace, your mistakes are still included, so you will need to hit Enter, wait for the error, and retype the line with no mistakes.

Type HELO to initiate a session with the Exchange server. Take note of the IP address that the Exchange server comes back with – it thinks you are that IP, so if you are sat behind a firewall you will have to put the returned IP address in the receive connector!

Enter MAIL FROM:someone@domainname.com

Type RCPT TO:someone@ExternalDomain.com

If the error 550 5.7.1 Unable to relay is returned, then this confirms that the connector cannot send externally.
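For reference, a failing relay test looks something like this (the IP addresses and domains are placeholders; the response banners vary by environment):

```
telnet 192.0.2.10 25
220 EXCHANGESERVER Microsoft ESMTP MAIL Service ready
HELO test
250 EXCHANGESERVER Hello [192.0.2.50]
MAIL FROM:someone@domainname.com
250 2.1.0 Sender OK
RCPT TO:someone@ExternalDomain.com
550 5.7.1 Unable to relay
```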

What we need to do is to give the connector the correct permissions to send externally, this can be completed through PowerShell as below.

Log into an Exchange Management Shell and use the command below to get the receive connector and pipe it to Add-ADPermission to grant the anonymous relay permission.
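The command in question; replace the connector identity with your own:

```powershell
# Grant Anonymous Logon the right to accept any recipient (i.e. relay)
Get-ReceiveConnector "EXCHANGESERVER\Relay Connector" |
    Add-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" `
        -ExtendedRights "ms-Exch-SMTP-Accept-Any-Recipient"
```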

Test the relay again through the telnet session, and if all is well you will see 250 2.1.5 Recipient OK returned.

Hope this helps!



Exporting Outlook contacts with PowerShell

Who knew you could utilise PowerShell to drill into Outlook (while running) and pull out tons of stuff – awesome. I have documented a script which can be used to do just this, with a couple of caveats…

*Outlook must be running

*This doesn’t export the contacts picture

*PowerShell will need an execution policy set when running, e.g. Bypass (unless run in a PowerShell window in the user session)

See more about execution policies on the Microsoft technet site –> https://technet.microsoft.com/en-us/library/ee176961.aspx


A bit of background: I had a rare instance where the data held in a user’s mailbox was sensitive (the user was moving to a new company within the umbrella of a corporation), so a mailbox migration couldn’t be completed, yet they wanted to take their contacts across to their new mailbox. That spurred the creation of this script and in turn this post.

In this instance for ease of use I will be running the script initiated from a batch script with a bypass execution policy.
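The batch wrapper is a single line; the share path to the exported script is a placeholder:

```bat
powershell.exe -ExecutionPolicy Bypass -File "\\SERVER\SHARE\ContactExport.ps1"
```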

Copy the line of code above into Notepad and save it as ContactExport.bat; you will be running this batch script through whatever means you choose, e.g. GPO, management agent, SCCM etc.

I have broken the script down into sections to explain each part:

The $Outlook variable holds the New-Object command which allows control of the current session of Outlook. You need to have Outlook already running, else PowerShell will attempt to create a new session and error.

We can then drill down to individual folders to extract information, in this case (10) is contacts.
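The attach-and-drill-down step looks like this; 10 is the olFolderContacts constant from the Outlook object model:

```powershell
# Attach to the running Outlook session via COM and open the Contacts folder
$Outlook = New-Object -ComObject Outlook.Application
$Namespace = $Outlook.GetNamespace("MAPI")
$Contacts = $Namespace.GetDefaultFolder(10).Items   # 10 = olFolderContacts
```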

*** Additional script below to include folders within contacts (pointed out by Mike in comments section)



Exempt additional folders, as by default there are the Recipient Cache folder, global address lists and any other type of created address list. (The one variable exempted all folders other than user-created ones, which I found strange, but it works so hey!)

Declare the array so that objects gathered within the For loop can be used outside of itself

For loop to loop through each folder and pull contact items, exempting the additional folders. (The Folders.Items array only accepted integers, which may be a restriction of using Outlook this way.)

***Note you need the exemption, as if your users have GALs this is going to pull all the contacts in there! So be warned

Finally add the contacts within the contact folders to the original $Contacts variable.



I have listed all of the different folder numbers and what they relate to below:

Next we get the OS environment variable UserName (the currently logged-in user), ready for naming the exported .csv file.

We then need to select all of the attributes and details for each contact from the $Contacts variable; here I have selected everything, but have listed it all so you can pick and choose.

This is then exported to a .csv file named after the logged-in user via the $User variable. The encoding is set to ensure any contacts which contain funky characters are not made worse.

That’s it, this should export all contacts to a .csv file ready for importing elsewhere. I will be writing an article on importing this into users mailboxes through Exchange using PowerShell in the coming weeks.

Full Script with comments below, hope it helps.(Updated to include Contact Folders)
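The full script embed is not reproduced here, but a minimal sketch of the whole flow looks like the below. The share path, the folder-name exclusions and the short property list are placeholders; the full script selects many more contact attributes:

```powershell
# Attach to the running Outlook session and gather contacts,
# including user-created sub-folders of Contacts
$Outlook = New-Object -ComObject Outlook.Application
$Namespace = $Outlook.GetNamespace("MAPI")
$Folder = $Namespace.GetDefaultFolder(10)   # 10 = olFolderContacts

$Contacts = @($Folder.Items)
for ($i = 1; $i -le $Folder.Folders.Count; $i++) {
    # Skip address-list style folders, keep user-created contact folders
    $Sub = $Folder.Folders.Item($i)
    if ($Sub.Name -notmatch 'Recipient Cache|GAL') { $Contacts += @($Sub.Items) }
}

# Export, named after the logged-in user
$User = $env:UserName
$Contacts |
    Select-Object FirstName, LastName, CompanyName, Email1Address, MobileTelephoneNumber |
    Export-Csv "\\SERVER\SHARE\$User.csv" -NoTypeInformation -Encoding UTF8
```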

PowerShell Script to run commands per Active Directory OU

I regularly run into a case in which it is handy to have a script to run against a group of Windows desktops or servers in an Active Directory OU.

Requirements to run the script:

  1. WinRM needs to be running on the relevant desktops and servers; this can be completed by GPO or by running “winrm quickconfig” in a PowerShell session on the machine
  2. Remote Server Administration Tools need to be installed on the desktop or server from which you are running the script (not required on DCs)

The script is broken down below.

Import the AD module (RSAT requirement)

The $OU variable holds the full LDAP filter of the targeted OU

The $Script variable holds the command to which you would want to run against the computers. (installations, batch scripts or any other commands)

The window title of the PowerShell window will display “Processing Computers in OU OU=SETOFCOMPUTERS,OU=COMPUTEROU,DC=DOMAINNAME,DC=COM”, while the connectivity timeout variable is used later to check initial connectivity to the computer before running the script.

The $ComputerNames variable uses the AD command Get-ADComputer with the filter of the $OU variable to select all computers in the targeted OU.

The foreach loop runs a Test-Connection (ping) with a TTL of 20; if this fails, the “Computer Not Found COMPUTERNAME” message is returned. If successful, Invoke-Command runs a remote PowerShell session to execute the $Script variable on the targeted desktop.
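The breakdown above can be sketched as one short script; the OU path and the command in $Script are placeholders to replace with your own:

```powershell
# Run a command against every computer in an AD OU (requires RSAT)
Import-Module ActiveDirectory

$OU = "OU=SETOFCOMPUTERS,OU=COMPUTEROU,DC=DOMAINNAME,DC=COM"
$Script = { winrm quickconfig -q }   # replace with the command to run

$ComputerNames = Get-ADComputer -Filter * -SearchBase $OU

foreach ($Computer in $ComputerNames) {
    if (Test-Connection -ComputerName $Computer.Name -Count 1 -TimeToLive 20 -Quiet) {
        Invoke-Command -ComputerName $Computer.Name -ScriptBlock $Script
    }
    else {
        Write-Host "Computer Not Found $($Computer.Name)"
    }
}
```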


Full Code:


Orphaned VM Error

Orphaned VMs can be caused by a number of things, such as database, storage or network connectivity issues to the vCenter Server during a vMotion. I have listed a few methods to resolve the orphaned VM error below.

Method 1 – Restarting VPXA service

Initially this can sometimes be resolved by simply restarting the vpxa service on each ESXi host. This service is the agent which provides the management connectivity between the ESXi host and the vCenter Server. Restarting it will not cause an HA failover, just a brief disconnect of the host from vCenter while the service is restarted. This can be completed as below.

Select the relevant host in the vSphere Client

Select the Configuration tab

Select Security Profile under Software

On the right-hand side of the page select Properties

Select the vpxa service and click on Options

Click on Restart


If this hasn’t resolved the orphaned VM issue then move on to Method 2.

Method 2 – Kill the world

Open SQL Management Studio and create a connection to the vCenter database; run the below query to find the last host on which the VM was or is running.

Start the ESXi Shell and SSH services

Select the relevant host in the vSphere Client

Select the Configuration tab

Select Security Profile under Software

On the right-hand side of the page select Properties

Select the ESXi Shell and SSH services and click on Options

Click on Start

Using an SSH client, connect to the ESXi host, log in and type esxcli vm process list, then copy the World ID for the relevant VM.

Type esxcli vm process kill -t soft -w <WORLD ID> to kill the process of the VM. If you know which datastores the VM exists on, then right-click the VM and select Remove from Inventory – DO NOT SELECT DELETE FROM DISK!

Browse to the datastore in which the VM files exist and right-click on the relevant .vmx file and select Add to Inventory.

SCCM Site Code Change with PowerShell

I came across a case whereby a test SCCM installation had been completed and needed to be removed and replaced with a production instance. There are a few cleanup operations but in this case I needed to automate a way to change the clients to point to the new Site Code.

This can be completed with the below PowerShell command replacing SITECODE with the new Site Code (PowerShell run as administrator)
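The command uses the ConfigMgr client COM object; SITECODE is the placeholder to replace:

```powershell
# Point the local ConfigMgr client at the new site (run as administrator)
$sms = New-Object -ComObject Microsoft.SMS.Client
$sms.SetAssignedSite("SITECODE")
```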

Running across VMs in vCenter

I also have a script for completing this in PowerCLI using the Get-VM command, broken down below.

This presumes you have already created a connection to your vCenter through PowerCLI

*Use PowerCLI x86

Note that there is no error checking in the below; replace the GuestUser and GuestPassword parameters with your own credentials with administrative rights on the VM guest OS.
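A sketch of the PowerCLI version, assuming a vCenter connection already exists; the credentials and SITECODE are placeholders:

```powershell
# Run the site-code change inside each guest OS via VMware Tools
$Code = '$sms = New-Object -ComObject Microsoft.SMS.Client; ' +
        '$sms.SetAssignedSite("SITECODE")'

Get-VM | ForEach-Object {
    Invoke-VMScript -VM $_ -ScriptType PowerShell -ScriptText $Code `
        -GuestUser "DOMAIN\Administrator" -GuestPassword "Password"
}
```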

In a perfect world this environment would have had WinRM enabled across the server estate, but alas it didn’t. This saved a fair amount of manual work for me, and I hope it does for you too!

Complete script available here –> SCCMSiteCodeChange just paste into PowerShell ISE, save and run from PowerCLI.

SCCM Update Download Error 0x80070005

Today I came across an issue with updates which were unable to download from an SCCM Distribution Point. Further investigation showed that the Content Transfer Manager log, located under C:\Windows\CCM\Logs\, contained error code 0x80070005, relating to ACCESS DENIED.


Pasting one of the URLs of an update from the Content Transfer Manager log into a browser confirmed that access was indeed denied to that content.




Opening IIS on the SCCM Distribution Point, it was noted that no authentication was specified on the sub-sites, as below. On this particular instance Windows Authentication was missing, and the only options were Anonymous Authentication or ASP.NET Impersonation.


We could enable Anonymous Authentication to resolve this issue, but in terms of security that would not be best practice. If, like in this instance, Windows Authentication is missing (it appears the SCCM installation of a DP will continue even if Windows Authentication is missing), you will need to enable it in Server Manager under the Web Server/Security feature, as below.


Restart IIS and enable Windows Authentication on both the SMS_DP_SMSPKG$ and SMS_DP_SMSSIG$ sub-sites of the Default Web Site.

Updates downloading and installing successfully!