Tag Archives: DAG

Installing and Configuring MetaCache Database (MCDB)

Installing and configuring MCDB in Exchange 2019 has been on my bucket list for a long time, but, like at most organizations, my Exchange servers have been running on either Hyper-V or VMware. I have seen posts on the forums where people were able to pass SSD disks on VMware through to VMs, but MCDB is targeted at bare metal deployments. The good news is that I managed to get two big HP boxes to play around with 🙂

MCDB was introduced in Exchange 2019 to speed up access to frequently used mailbox information. According to the Exchange Preferred Architecture, Mailbox databases are stored on (relatively) slow SATA disks. With MCDB, frequently accessed mailbox information is also stored on SSD disks, where it is much faster to access. It is a cache mechanism: the information on the SSD is a copy of information on the spinning disks. If the SSD disk is lost, performance will drop, but no information will be lost.

MCDB is especially useful when running Outlook clients in online mode, for example in a Citrix environment, but OWA can also benefit from the improved performance.

MCDB is built on top of Database Availability Groups, so it is not available on single servers (I assume you don't have a DAG with only one server). It also depends on the AutoReseed feature, so you must deploy that before configuring MCDB.

My servers have eight disks installed:

  • 2 SSD disks for boot and system.
  • 4 SAS disks (10 krpm) for Mailbox databases (I would have preferred SATA disks, but the servers came with these).
  • 1 SSD disk for MCDB.
  • 1 (large) SATA disk for storing other data (IIS logs, queue database, ISO images, etc.).

Three of the SAS disks contain two Mailbox databases each; the remaining SAS disk is used as a hot spare for AutoReseed.

I blogged about AutoReseed in Exchange 2013 a long time ago (https://jaapwesselius.com/2014/08/09/implementing-and-configuring-autoreseed/, and it hasn't changed much since), but I will repeat the most important parts of it here.

AutoReseed

In short, AutoReseed uses the 'multiple mount points per volume' option in Windows. For example, the first disk is mounted in C:\ExchVols\Vol1, but this disk is also mounted in C:\ExchDbs\MDB11, as shown in the following image (where only 2 disks are shown):

To achieve this, add the additional mount points using the Disk Management MMC snap-in. In the screenshot below, Disk 1 is mounted in C:\ExchVols\Vol1, and C:\ExchDbs\MDB11 and C:\ExchDbs\MDB12 are additional mount points on the same disk:
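If you prefer to script this instead of clicking through Disk Management, the Storage module cmdlets can add the extra mount points. A minimal sketch, assuming the data volume is disk 1 with a single basic partition (folder names as in the example above):

# Create the mount point folders first
New-Item -ItemType Directory -Path 'C:\ExchVols\Vol1','C:\ExchDbs\MDB11' -Force | Out-Null
# Mount the same partition in both locations
$partition = Get-Partition -DiskNumber 1 | Where-Object { $_.Type -eq 'Basic' }
$partition | Add-PartitionAccessPath -AccessPath 'C:\ExchVols\Vol1'
$partition | Add-PartitionAccessPath -AccessPath 'C:\ExchDbs\MDB11'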

Mailbox database locations are strict when using AutoReseed. For example, the MDB11 mailbox database and its logfiles must be created in the following directories:

C:\ExchDbs\MDB11\MDB11.db

C:\ExchDbs\MDB11\MDB11.log
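For example, creating MDB11 in these locations could look like the following sketch (EXCH11 is my lab server; note that MDB11.db is a folder, with the actual .edb file inside it):

[PS] C:\> New-MailboxDatabase -Name MDB11 -Server EXCH11 -EdbFilePath C:\ExchDbs\MDB11\MDB11.db\MDB11.edb -LogFolderPath C:\ExchDbs\MDB11\MDB11.log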

The locations of the mount points, and the number of database copies per volume, are properties of the Database Availability Group:

  • AutoDagDatabasesRootFolderPath
  • AutoDagVolumesRootFolderPath
  • AutoDagDatabaseCopiesPerVolume

You can check the current values using the Get-DatabaseAvailabilityGroup command in Exchange PowerShell:

[PS] C:\>Get-DatabaseAvailabilityGroup -Identity DAG11 | select AutoDag*

AutoDagSchemaVersion             : 1.0
AutoDagDatabaseCopiesPerDatabase : 2
AutoDagDatabaseCopiesPerVolume   : 2
AutoDagTotalNumberOfDatabases    : 6
AutoDagTotalNumberOfServers      : 2
AutoDagDatabasesRootFolderPath   : C:\ExchDbs
AutoDagVolumesRootFolderPath     : C:\ExchVols
AutoDagAllServersInstalled       : False
AutoDagAutoReseedEnabled         : True
AutoDagDiskReclaimerEnabled      : True
AutoDagBitlockerEnabled          : False
AutoDagFIPSCompliant             : False
AutoDagAutoRedistributeEnabled   : True
AutoDagSIPEnabled                : False
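With these properties in place, each database is mounted and given a copy on the other DAG member. A sketch for one database, using my lab names (MDB11, EXCH12):

[PS] C:\> Mount-Database -Identity MDB11
[PS] C:\> Add-MailboxDatabaseCopy -Identity MDB11 -MailboxServer EXCH12 -ActivationPreference 2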

Create all Mailbox databases, mount them, and create the Mailbox database copies this way, in the correct locations and with the correct names. That's all it takes to configure AutoReseed. When one of the disks containing Mailbox databases fails, the repair workflow kicks in; in about an hour the spare disk is configured and a reseed starts. All steps of the repair workflow are logged in the event log. The last entry of a successful reseed is shown in the following screenshot:

Now that we have AutoReseed up-and-running, we can continue with configuring the MetaCache Database.

Configuring MetaCache Database

One of the prerequisites for the MetaCache Database is a fully functional AutoReseed configuration, as outlined in the previous steps. Of course, you also need one or more SSD disks, depending on your disk layout.

The official recommendation for the SSD disks is one SSD disk per three spinning disks. Also, the SSD disk should be a raw disk (not formatted), and it must be exposed with MediaType SSD in Windows. To check this, use the Get-PhysicalDisk | sort DeviceID command in PowerShell, as shown in the following screenshot:
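For reference, a slightly more explicit version of that check (standard Storage module cmdlets, nothing Exchange-specific):

[PS] C:\> Get-PhysicalDisk | Sort-Object DeviceId | Format-Table DeviceId,FriendlyName,MediaType,Size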

As for sizing, SSD capacity of 5% to 6% of the total HDD capacity is sufficient. So, if you have 8 TB of storage for databases, your SSD capacity can be approximately 400 GB.

Important to note: the disk layout must be symmetrical across all Exchange 2019 servers in the DAG.

The first step is to configure the DAG for use with MCDB. MCDB uses the following properties of the DAG for its configuration:

  • AutoDagTotalNumberOfDatabases. The number of Mailbox databases in the DAG.
  • AutoDagDatabaseCopiesPerDatabase. The total number of copies (active and passive) of each individual Mailbox database.
  • AutoDagTotalNumberOfServers. The number of Exchange 2019 Mailbox servers in your DAG.

In my lab, there are two Exchange 2019 Servers in a DAG, 3 spinning disks (plus 1 hot spare), 6 Mailbox databases and 2 copies (one active, one passive) per Mailbox database.

Use the following command to set these properties:

[PS] C:\> Set-DatabaseAvailabilityGroup DAG11 -AutoDagTotalNumberOfDatabases 6 -AutoDagDatabaseCopiesPerDatabase 2 -AutoDagTotalNumberOfServers 2

The second step is to configure the MCDB prerequisites using the Manage-MCDB command. This command takes the DagName, SSDSizeInBytes and SSDCountPerServer parameters.

Notes:

The Manage-MCDB command is not available in PowerShell by default. You must first import the Manage-MetaCacheDatabase.ps1 script (found in $ExScripts) using the following commands:

CD $ExScripts
Import-Module .\Manage-MetaCacheDatabase.ps1

This step is not in the Microsoft documentation; it took me quite some time to figure this out 😦

The value for SSDSizeInBytes can be found using the following command:

Get-PhysicalDisk -DeviceNumber x | Select Size
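For example, assuming the SSD is device number 7 (a placeholder; use the device number of your own SSD), you can capture its size in a variable for later use:

[PS] C:\> $ssdSize = (Get-PhysicalDisk -DeviceNumber 7).Size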

The command for the MCDB prerequisites will be something like this:

Manage-MCDB -DagName DAG11 -ConfigureMCDBPrerequisite -SSDSizeInBytes 119998218240 -SSDCountPerServer 1

The third step is to allow (or disallow) an Exchange 2019 server to use MCDB, via the ServerAllowMCDB parameter of the Manage-MCDB command. To do this, execute the following Exchange PowerShell command on each DAG member:

[PS] C:\> Manage-MCDB -DagName DAG11 -ServerAllowMCDB:$True -ServerName EXCH11

This is shown in the following screenshot:

The fourth step is to actually configure MCDB on each Exchange 2019 server. In this step, the raw (unformatted) SSD disk is formatted and the mount points for the MCDB instances are created. To do this, execute the following Exchange PowerShell command, again on each DAG member:

[PS] C:\> Manage-MCDB -DagName DAG11 -ConfigureMCDBOnServer -ServerName EXCH11 -SSDSizeInBytes 119998218240

As shown in the following screenshot:

This is all it takes to configure MCDB on a DAG; it is now ready to create the MCDB instances and populate them with cached data. The creation and population (and thus enabling acceleration) is initiated by a failover. You can use the following command to initiate this failover:

[PS] C:\> Manage-MCDB -DagName DAG11 -ServerAllowMCDB:$True -ServerName EXCH11 -ForceFailover $true

And to fail back over to the previous state:

[PS] C:\> Manage-MCDB -DagName DAG11 -ServerAllowMCDB:$True -ServerName EXCH12 -ForceFailover $true

When it comes to monitoring, there's not much to see. You can use the Get-MailboxDatabase command to retrieve the MCDB configuration properties, and the Get-MailboxDatabaseCopyStatus command to see 'some' health information regarding MCDB, as shown in the following two screenshots:

[PS] C:\> Get-MailboxDatabase -Identity MDB11 | fl *metacache*
[PS] C:\> Get-MailboxDatabaseCopyStatus | Select Identity,MetaCacheDatabaseStatus

Unfortunately, that's it; there are no more options for monitoring, not even counters in Performance Monitor.

So how do you know it works?

Besides the Get-MailboxDatabaseCopyStatus command, you can check the SSD disk, which is visible in Explorer. When configured, the SSD disk is mounted in C:\ExchangeMetaCacheDbs and C:\ExchangeMCDBVolumes. There you will find a special (small) MCDB version of each Mailbox database, as shown in the following screenshot:
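A quick way to list what has been created there is a simple directory listing (a sketch; these are the default paths mentioned above):

[PS] C:\> Get-ChildItem -Path C:\ExchangeMetaCacheDbs -Recurse -Directory | Select-Object FullName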

Since this is a regular physical disk you can find it in perfmon, but there are no MCDB-specific counters there.

The most interesting test is simply to log in to a mailbox in one of these Mailbox databases. The look and feel is noticeably better than without MCDB: opening a mailbox in Outlook online mode or in OWA is just much faster. I have also tried opening a mailbox remotely via a 20 Mbit line (fiber, so low latency), and that also works better than Exchange without MCDB.

Summary

Exchange 2019 comes with a new feature called the MetaCache Database, where mailbox data is cached on SSD disks. In the Preferred Architecture, mailbox databases are stored on large SATA disks; to improve performance, frequently accessed data is also stored on SSD.

The tricky part of configuring MCDB is the underlying AutoReseed configuration, which I find the more complex of the two. The lack of proper monitoring is disappointing, but once configured it works very well and you will experience improved performance. Like most of us, I have worked a lot with properly designed virtualized Exchange environments, but I have never seen an Exchange environment work as fast as a bare metal environment running Exchange 2019 with MCDB.

Exchange 2016 CU9 and Exchange 2013 CU20 released

On March 20, 2018, Microsoft released two new quarterly updates:

  • Exchange 2016 Cumulative Update 9 (CU9)
  • Exchange 2013 Cumulative Update 20 (CU20)

There aren't too many new features in these CUs. The most important 'feature' is that TLS 1.2 is now fully supported (most likely you already have TLS 1.2 only on your load balancer). This is extremely important, since Microsoft will support ONLY TLS 1.2 in Office 365 in the last quarter of this year (see the An Update on Office 365 Requiring TLS 1.2 blog from Microsoft as well).

Then there is support for .NET Framework 4.7.1, the next chapter in the ongoing story of the .NET Framework. .NET Framework 4.7.1 is fully supported by Exchange 2016 CU9 and Exchange 2013 CU20. Why is this important? For the upcoming CUs in three months (somewhere in June 2018), .NET Framework 4.7.1 is mandatory, so you need it installed in order to install those upcoming CUs.

Please note that .NET Framework 4.7 is NOT supported!
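To verify which .NET Framework version is installed on a server, you can query the registry; a Release value of 461308 or higher indicates .NET Framework 4.7.1 or later (a minimal sketch):

[PS] C:\> (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release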

If you are currently running an older CU of Exchange, for example Exchange 2013 CU12, you have to make an intermediate upgrade to Exchange 2013 CU15, then upgrade to .NET Framework 4.6.2, and then upgrade to Exchange 2013 CU20. If you are running Exchange 2016 CU3 or CU4, you can upgrade to .NET Framework 4.6.2 and then upgrade to Exchange 2016 CU9.

Schema changes

If you are coming from a recent Exchange 2013 CU there are no schema changes, since the schema version (rangeUpper = 15312) hasn't changed since Exchange 2013 CU7. However, since there can be changes in (for example) RBAC, it's always good practice to run the Setup.exe /PrepareAD command. For Exchange 2016, the schema version (rangeUpper = 15332) hasn't changed since Exchange 2016 CU7.
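If you want to check the schema version yourself, you can query the rangeUpper attribute of the ms-Exch-Schema-Version-Pt object in the schema partition; a sketch using the ActiveDirectory module:

[PS] C:\> $schemaNC = (Get-ADRootDSE).schemaNamingContext
[PS] C:\> (Get-ADObject "CN=ms-Exch-Schema-Version-Pt,$schemaNC" -Properties rangeUpper).rangeUpper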

As always, test the new CUs in your lab environment before installing them in your production environment. If you are running Exchange 2013 or Exchange 2016 in a DAG, use the PowerShell commands explained in my earlier Exchange 2013 CU17 and Exchange 2016 CU6 blog; a short sketch follows below.
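In short, those commands boil down to putting each DAG member into maintenance mode with the scripts in $ExScripts before installing the CU, and taking it out again afterwards. A sketch (EXCH01 is a placeholder server name):

CD $ExScripts
.\StartDagServerMaintenance.ps1 -ServerName EXCH01
# Install the CU, reboot, then resume the server:
.\StopDagServerMaintenance.ps1 -ServerName EXCH01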


Exchange 2016 Database Availability Group and Cloud Witness

When implementing a Database Availability Group (in Exchange 2010 and higher) you need a File Share Witness (FSW). The FSW is located on a Witness Server, which can be any domain-joined server in your internal network, as long as it is running a supported operating system. It can even be another Exchange server, as long as the Witness Server is not a member of the DAG you are deploying.

A long time ago (I don't recall exactly when, but it could well have been around Exchange 2013 SP1) Microsoft started to support using Azure for hosting the Witness Server. In this scenario you host a virtual machine in Azure. This VM is domain-joined, so you most likely also host a Domain Controller in Azure, and for connectivity you need a site-to-site VPN connection to Azure: not only from your primary datacenter, but also from your secondary datacenter, i.e. a multi-site VPN connection, as shown in the following picture:


While this is possible and fully supported, it is a costly adventure, and personally I haven't seen any of my customers deploy it yet (although some are still interested).

Windows 2016 Cloud Witness

In Windows 2016 the concept of the 'Cloud Witness' was introduced. The Cloud Witness concept is the same as the Witness Server, but instead of using a file share it uses Azure Blob Storage for read/write purposes, which serves as an arbitration point in case of a split-brain situation.

The advantages are obvious:

  • No need for a third datacenter hosting your Witness Server.
  • No need for an expensive VM in Azure hosting your Witness Server.
  • Uses standard Azure Blob Storage (thus cheap).
  • The same Azure storage account can be used for multiple clusters.
  • Built-in Cloud Witness resource type (in Windows 2016, of course).

Looking at all this, it seems like a good idea to use the Cloud Witness when deploying Windows 2016 failover clusters, or when deploying a Database Availability Group with Exchange 2016 running on Windows 2016.
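For a regular Windows 2016 failover cluster, configuring a Cloud Witness is a one-liner (a sketch; the storage account name and access key are placeholders for your own Azure storage account):

[PS] C:\> Set-ClusterQuorum -CloudWitness -AccountName 'mystorageaccount' -AccessKey '<storage-account-access-key>'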

Unfortunately, this is not a supported scenario for Exchange at this point. Any information you find on the Internet suggesting otherwise is most likely not officially published by the Microsoft Exchange team. If at some point the Cloud Witness becomes a supported solution for Exchange 2016, you will find it on the Exchange Team blog. When this happens, I'll update this page as well.

More information

Using a Microsoft Azure VM as a DAG witness server – https://technet.microsoft.com/en-us/library/dn903504(v=exchg.160).aspx

The Microsoft Exchange Replication service does not appear to be running.

Last week we had a major outage in our Exchange 2010 environment (28 multi-role servers in 2 DAGs). The provisioning system (based on Quest software) did some unexpected things after a restore of the provisioning database, resulting in (lots of) security groups in Active Directory being deleted. We were relatively lucky: the default groups (Domain Admins, Enterprise Admins, etc.) were not deleted, but all Exchange security groups (in OU=Microsoft Exchange Security Groups) were deleted.

These Exchange Security Groups can be recreated using the Setup.com /PrepareAD and Setup.com /PrepareDomain commands.

Everything seemed to be running fine, but executing PowerShell commands against a remote server (i.e. not the server being logged on to) would result in an error message. For example, it was not possible to move an active Mailbox database from server1 to server2 in a DAG using the Move-ActiveMailboxDatabase command. Executing this command would return the following error:

The Microsoft Exchange Replication service does not appear to be running on "computername". Make sure the server is operating, and that the services can be queried remotely.



Install Exchange 2013 Cumulative Update 8

On March 17, Microsoft released the 8th Cumulative Update for Exchange Server 2013, 98 days after the release of CU7, which is nicely in line with the quarterly release cadence of Cumulative Updates. This Cumulative Update is called CU8; there is not a word about Service Pack 2, so SP1 continues to be the officially supported Service Pack.

There are some new features in CU8 that are worth noting.

  • With CU8 there are improvements for mobile clients in a hybrid configuration. When a mailbox is moved to Exchange Online, the Outlook client automatically detects this and reconfigures itself, but this was not the case for mobile clients. This behavior has changed in CU8: when a mobile client connects to the local Exchange server and the mailbox has been moved to Exchange Online, an additional check for the TargetOWAUrl on the Organization Relationship object is performed. This returns an HTTP/451 redirect to the mobile client, which is then redirected to the new URL. The feature works with all EAS-compatible devices that can handle the HTTP/451 redirect. Unfortunately, it is only available for onboarding (i.e. moving to Office 365) and not for offboarding (moving from Office 365).
  • Public Folder migration has been improved and now supports batch migrations. This is faster (it supports multiple jobs), more reliable, and easier to manage.
  • CU8 supports viewing calendar and contact items of modern Public Folders in OWA.
