Channel: System Center Data Protection Manager

KB: "The system cannot find the file specified" error when you use Data Protection Manager to restore DPMDB.bak



When you run the DpmSync.exe -RestoreDb -DbLoc command to restore the DPMDB.bak file, the operation fails with the following error:

Unhandled Exception: Microsoft.SqlServer.Management.Smo.FailedOperationException: Restore failed for Server <DPMDBName>. ---> Microsoft.SqlServer.Management.Common.ExecutionFailureException: An exception occurred while executing a Transact-SQL statement or batch. ---> System.Data.SqlClient.SqlException: Directory lookup for the file "D:\Microsoft System Center 2012\DPMDB\MSDPM2012$DPMDB.mdf" failed with the operating system error 2 (The system cannot find the file specified.). File 'MSDPM2012$DPMDB_dat' cannot be restored to 'D:\Microsoft System Center 2012\DPMDB\MSDPM2012$DPMDB.mdf'. Use WITH MOVE to identify a valid location for the file. Directory lookup for the file "D:\Microsoft System Center 2012\DPMDB\MSDPM2012$DPMDB_log.ldf" failed with the operating system error 2 (The system cannot find the file specified.). File 'MSDPM2012$DPMDBLog_dat' cannot be restored to 'D:\Microsoft System Center 2012\DPMDB\MSDPM2012$DPMDB_log.ldf'. Use WITH MOVE to identify a valid location for the file. Problems were identified while planning for the RESTORE statement. Previous messages provide details. RESTORE DATABASE is terminating abnormally.
   at Microsoft.SqlServer.Management.Common.ConnectionManager.ExecuteTSql(ExecuteTSqlAction action, Object execObject, DataSet fillDataSet, Boolean catchException)
   at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
   --- End of inner exception stack trace ---
   at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
   at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(StringCollection sqlCommands, ExecutionTypes executionType)
   at Microsoft.SqlServer.Management.Smo.ExecutionManager.ExecuteNonQuery(StringCollection queries)
   at Microsoft.SqlServer.Management.Smo.BackupRestoreBase.ExecuteSql(Server server, StringCollection queries)
   at Microsoft.SqlServer.Management.Smo.Restore.SqlRestore(Server srv)
   --- End of inner exception stack trace ---
   at Microsoft.SqlServer.Management.Smo.Restore.SqlRestore(Server srv)
   at Microsoft.Internal.EnterpriseStorage.Dls.RestoreDbSync.RestoreDBHelper.RestoreFromBackupFile(String dbLocation)
   at Microsoft.Internal.EnterpriseStorage.Dls.RestoreDbSync.RestoreDBHelper.RestoreDb(String dbLocation)
   at Microsoft.Internal.EnterpriseStorage.Dls.RestoreDbSync.RestoreDbSync.Main(String[] args)

This problem may occur if the System Center 2012 Data Protection Manager (DPM 2012 or DPM 2012 R2) installation directory has changed. For example, assume that the DPM installation path was originally the following: 

D:\Microsoft System Center 2012

However, at some point a reinstallation was performed, and the installation path is now the following:

D:\Program Files\Microsoft System Center 2012
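As the error suggests, when the installation path changes, the restore has to relocate the database files with WITH MOVE. As an illustrative sketch only (the logical file names are taken from the error above, but the helper name, backup path, and target paths are hypothetical), a small Python helper could compose the corrected RESTORE statement:

```python
def build_restore_with_move(db_name, backup_path, file_map):
    """Compose a T-SQL RESTORE ... WITH MOVE statement that relocates
    each logical file to its new physical path."""
    moves = ",\n    ".join(
        f"MOVE N'{logical}' TO N'{physical}'"
        for logical, physical in file_map.items()
    )
    return (
        f"RESTORE DATABASE [{db_name}]\n"
        f"FROM DISK = N'{backup_path}'\n"
        f"WITH REPLACE,\n    {moves}"
    )

# Logical names come from the error text above; the new physical paths
# reflect the relocated installation directory (hypothetical example).
stmt = build_restore_with_move(
    "DPMDB",
    r"C:\DPMDB.bak",
    {
        "MSDPM2012$DPMDB_dat":
            r"D:\Program Files\Microsoft System Center 2012\DPMDB\MSDPM2012$DPMDB.mdf",
        "MSDPM2012$DPMDBLog_dat":
            r"D:\Program Files\Microsoft System Center 2012\DPMDB\MSDPM2012$DPMDB_log.ldf",
    },
)
print(stmt)
```

Note that after any manual restore you would still need to reconcile DPM with the restored database (typically via DpmSync.exe -Sync); the KB article referenced in this post describes the supported procedure.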

For all the details, as well as a resolution, please see the following:

KB3047774 - "The system cannot find the file specified" error when you use Data Protection Manager to restore DPMDB.bak (https://support.microsoft.com/en-us/kb/3047774)

J.C. Hornbeck | Solution Asset PM | Microsoft GBS Management and Security Division

Get the latest System Center news on Facebook and Twitter:


Main System Center blog: http://blogs.technet.com/b/systemcenter/

Configuration Manager Support Team blog: http://blogs.technet.com/configurationmgr/
Data Protection Manager Team blog: http://blogs.technet.com/dpm/
Orchestrator Team blog: http://blogs.technet.com/b/orchestrator/
Operations Manager Team blog: http://blogs.technet.com/momteam/
Service Manager Team blog: http://blogs.technet.com/b/servicemanager
Virtual Machine Manager Team blog: http://blogs.technet.com/scvmm

Microsoft Intune: http://blogs.technet.com/b/microsoftintune/
WSUS Support Team blog: http://blogs.technet.com/sus/
RMS blog: http://blogs.technet.com/b/rms/
App-V Team blog: http://blogs.technet.com/appv/
MED-V Team blog: http://blogs.technet.com/medv/
Server App-V Team blog: http://blogs.technet.com/b/serverappv

Forefront Endpoint Protection blog: http://blogs.technet.com/b/clientsecurity/
Forefront Identity Manager blog: http://blogs.msdn.com/b/ms-identity-support/
Forefront TMG blog: http://blogs.technet.com/b/isablog/
Forefront UAG blog: http://blogs.technet.com/b/edgeaccessblog/
Application Proxy blog: http://blogs.technet.com/b/applicationproxyblog/
The Surface Team blog: http://blogs.technet.com/b/surface/



Support Tip: Consistency Check fails with “DPM encountered a retryable VSS error”



Hi everyone, Dwayne Jackson here with another tip for you in case you ever run into an issue where a consistency check in System Center 2012 Data Protection Manager (DPM 2012 or DPM 2012 R2) fails for an Exchange database in a non-clustered configuration with the symptoms below.

SYMPTOMS

1. The consistency check job displays the following error:

Type: Consistency check
Status: Failed
Description: DPM encountered a retryable VSS error. (ID 30112 Details: VssError: The writer experienced a transient error.  If the backup process is retried, the error may not reoccur. (0x800423F3))

More information
End time: Date/Time
Start time: Date/Time
Time elapsed:
Data transferred: 0 MB
Cluster node -
Source details: Problem Mailbox Database Name
Protection group: Protection Group Name
Items scanned: 0
Items fixed: 0

2. The MSDPM.Error log shows the following at the time the job failed:

0BEC      2788       03/12     14:09:02.601       02           EventManager.cs(98)                     2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  NORMAL             Publishing event from ServerAlert.cs(827): AlertStateChange, [ObjectId=ca92a311-9f46-45bd-b9e9-3564d8c5c7f7]
0BEC      2788       03/12     14:09:02.616       02           EventManager.cs(98)                     2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  NORMAL             Publishing event from Replica.cs(2059): ReplicaStatusChange, [DataSourceID=ca05c3f6-973c-4c00-80e1-b7d5ab440edc]
0BEC      2788       03/12     14:09:02.632       27           OperationTypeLock.cs(628)                         2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  NORMAL             UnlockServer() of ReplicationOperationTypeLock, returning
0BEC      2788       03/12     14:09:02.632       27           OperationTypeLock.cs(518)                         2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  NORMAL             In Unlock() of ReplicationOperationTypeLock, returning
0BEC      2788       03/12     14:09:02.648       27           BackupMachine.cs(2159)                              2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING           BackupMachine : FAILURE - BACKUP, errorCode=PrmVssErrorRetryable
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING           Task Diagnostic Information - <?xml version="1.0" encoding="utf-16"?>
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING           <TaskExecutionContext>
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             <PrmWriterId>ddd34536-63d2-4f6c-98c1-2a4ad30d1ee4</PrmWriterId>
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             <PrmDatasourceId>ca92a311-9f46-45bd-b9e9-3564d8c5c7f7</PrmDatasourceId>
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             <PrmActiveNodeName> ProtectedServerName.Contoso.Lab </PrmActiveNodeName>
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             <PrmLogicalReplicaId>ca05c3f6-973c-4c00-80e1-b7d5ab440edc</PrmLogicalReplicaId>
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             <PrmDatasetId>3807eb97-065b-41ed-b721-0fb04ba76ded</PrmDatasetId>
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             <PrmPhysicalReplicaId>d3fac23a-e9b0-46bb-a5e8-01b8c9181aff</PrmPhysicalReplicaId>
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             <PrmReplicaValidity>Allocated</PrmReplicaValidity>
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             <PrmReplicaStatus>Idle</PrmReplicaStatus>
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             <PrmOwnerLockId>00000000-0000-0000-0000-000000000000</PrmOwnerLockId>
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             <TEVerb>InitialReplicate</TEVerb>
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             <TEErrorState>Backup.RAPreBackupPending</TEErrorState>
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             <TEErrorDetails>&lt;?xml version="1.0" encoding="utf-16"?&gt;
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING           &lt;q1:ErrorInfo ErrorCode="30112" DetailedCode="-2147212301" DetailedSource="2" ExceptionDetails="" xmlns:q1="http://schemas.microsoft.com/2003/dls/GenericAgentStatus.xsd"&gt;
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             &lt;q1:Parameter Name="protectedgroup" Value=" Protection Name" /&gt;
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             &lt;q1:Parameter Name="servername" Value=" ProtectedServerName.Contoso.Lab " /&gt;
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             &lt;q1:Parameter Name="datasourcename" Value=" Problem Mailbox Database Name " /&gt;
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             &lt;q1:Parameter Name="agenttargetserver" Value=" ProtectedServerName.Contoso.Lab " /&gt;
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING             &lt;q1:Parameter Name="datasourceid" Value="ca92a311-9f46-45bd-b9e9-3564d8c5c7f7" /&gt;
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING           &lt;/q1:ErrorInfo&gt;</TEErrorDetails>
0BEC      2788       03/12     14:09:02.648       01           TaskInstance.cs(798)                      2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  WARNING           </TaskExecutionContext>
0BEC      2788       03/12     14:09:02.648       02           EventManager.cs(98)                     2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  NORMAL             Publishing event from TaskInstance.cs(823): TaskStop, [TaskID=2e9fdddb-8cc2-41c4-bb54-0b33c6dfd398]
0BEC      2788       03/12     14:09:02.648       01           TaskExecutor.cs(843)                     2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  FATAL   Task stopped (state=Failed, error=PrmVssErrorRetryable; -2147212301; WindowsHResult), search "Task Diagnostic Information" for details.
0BEC      2788       03/12     14:09:02.663       16           ActiveJob.cs(745)                                             WARNING           Fail: Task '2e9fdddb-8cc2-41c4-bb54-0b33c6dfd398' failed with error during execution.
0BEC      2788       03/12     14:09:02.663       16           Task.cs(235)                                       NORMAL             Changing task state from 'GenerateWorkplan' -> 'Failed' (2e9fdddb-8cc2-41c4-bb54-0b33c6dfd398)

3. You find events similar to the following on the protected server where the problem database resides:

Log Name:      Application
Source:        MSExchangeRepl
Date:          Date/Time
Event ID:      2024
Task Category: Exchange VSS Writer
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      ProtectedServerName.Contoso.Lab
Description:
The Microsoft Exchange Replication service VSS Writer (Instance 52a69217-4a51-4a78-a400-d46ee7cc1c8f) failed with error 80131516 when preparing for a backup.

Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="MSExchangeRepl" />
    <EventID Qualifiers="49156">2024</EventID>
    <Level>2</Level>
    <Task>2</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2015-03-12T14:15:02.000000000Z" />
    <EventRecordID>8418</EventRecordID>
    <Channel>Application</Channel>
    <Computer> ProtectedServerName.Contoso.Lab </Computer>
    <Security />
  </System>
  <EventData>
    <Data>52a69217-4a51-4a78-a400-d46ee7cc1c8f</Data>
    <Data>80131516</Data>
  </EventData>
</Event>

*******************

Log Name:      Application
Source:        MSExchangeRepl
Date:         
Event ID:      2140
Task Category: Exchange VSS Writer
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      ProtectedServerName.Contoso.Lab
Description:
The Microsoft Exchange Replication service VSS Writer encountered an exception in function Microsoft::Exchange::Cluster::ReplicaVssWriter::CReplicaVssWriterInterop::PrepareBackup. HResult -2146233066. Exception System.OverflowException: Arithmetic operation resulted in an overflow.
   at Microsoft.Isam.Esent.Interop.JET_LOGTIME..ctor(DateTime time)
   at Microsoft.Isam.Esent.Interop.JET_SIGNATURE..ctor(Int32 random, Nullable`1 time, String computerName)
   at Microsoft.Exchange.Cluster.Replay.ReplicaInstanceManager.GetRunningInstanceSignatureAndCheckpoint(Guid instanceGuid, Nullable`1& logfileSignature, Int64& lowestGenerationRequired, Int64& highestGenerationRequired, Int64& lastGenerationBackedUp)
   at Microsoft.Exchange.Cluster.ReplicaVssWriter.CReplicaVssWriterInterop.PrepareBackupReplica(IVssComponent* pComponent, ReplayConfiguration replica, BackupInstance backupInstance)
   at Microsoft.Exchange.Cluster.ReplicaVssWriter.CReplicaVssWriterInterop.PrepareBackup(IVssWriterComponents* pComponents).

Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="MSExchangeRepl" />
    <EventID Qualifiers="49156">2140</EventID>
    <Level>2</Level>
    <Task>2</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2015-03-12T14:15:02.000000000Z" />
    <EventRecordID>8417</EventRecordID>
    <Channel>Application</Channel>
    <Computer> ProtectedServerName.Contoso.Lab</Computer>
    <Security />
</System>
  <EventData>
    <Data>Microsoft::Exchange::Cluster::ReplicaVssWriter::CReplicaVssWriterInterop::PrepareBackup</Data>
    <Data>-2146233066</Data>
    <Data>System.OverflowException: Arithmetic operation resulted in an overflow.
   at Microsoft.Isam.Esent.Interop.JET_LOGTIME..ctor(DateTime time)
   at Microsoft.Isam.Esent.Interop.JET_SIGNATURE..ctor(Int32 random, Nullable`1 time, String computerName)
   at Microsoft.Exchange.Cluster.Replay.ReplicaInstanceManager.GetRunningInstanceSignatureAndCheckpoint(Guid instanceGuid, Nullable`1&amp; logfileSignature, Int64&amp; lowestGenerationRequired, Int64&amp; highestGenerationRequired, Int64&amp; lastGenerationBackedUp)
   at Microsoft.Exchange.Cluster.ReplicaVssWriter.CReplicaVssWriterInterop.PrepareBackupReplica(IVssComponent* pComponent, ReplayConfiguration replica, BackupInstance backupInstance)
   at Microsoft.Exchange.Cluster.ReplicaVssWriter.CReplicaVssWriterInterop.PrepareBackup(IVssWriterComponents* pComponents)</Data>
  </EventData>
</Event>

*******************

Log Name:      Application
Source:        MSExchangeRepl
Date:         
Event ID:      2112
Task Category: Exchange VSS Writer
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      ProtectedServerName.Contoso.Lab
Description:
The Microsoft Exchange Replication service VSS Writer instance 52a69217-4a51-4a78-a400-d46ee7cc1c8f failed with error code 0 when preparing for a backup of database ' Problem Mailbox Database Name '.

Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="MSExchangeRepl" />
    <EventID Qualifiers="49156">2112</EventID>
    <Level>2</Level>
    <Task>2</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2015-03-12T14:15:02.000000000Z" />
    <EventRecordID>8416</EventRecordID>
    <Channel>Application</Channel>
    <Computer> ProtectedServerName.Contoso.Lab</Computer>
    <Security />
  </System>
  <EventData>
    <Data>52a69217-4a51-4a78-a400-d46ee7cc1c8f</Data>
    <Data>0</Data>
    <Data> Problem Mailbox Database Name </Data>
  </EventData>
</Event>

***********

4. On the protected server experiencing the problem you see the following:

**DPMRA.Error**

03/12     14:08:49.672       31           vssbaserequestor.cpp(194)         [00000000011E7010]       2E9FDDDB-8CC2-41C4-BB54-0B33C6DFD398  NORMAL             CVssBaseRequestor::StartGatherWriterMetadata [00000000011E7010]
2FAC      0994       03/12     14:08:50.672       31           vssbaserequestor.cpp(943)         [00000000011E7010]                       NORMAL             QueryStatus returned 0x4230a, Releasing VssAsync [0000000001480DE0]
2FAC      0994       03/12     14:08:50.844       31           createsnapshotsubtask.cpp(1798)           [0000000001237660]                       NORMAL             m_fIsSnapshotLessBackup 0
2FAC      0994       03/12     14:08:50.844       31           createsnapshotsubtask.cpp(1808)           [0000000001237660]                       NORMAL             Using AUTO-RELEASE Snapshot
2FAC      0994       03/12     14:08:50.844       31           createsnapshotsubtask.cpp(1892)           [0000000001237660]                       NORMAL             snapshotContext 2
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(624)                [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::StartPrepareForBackup [00000000011E7010] m_snapshotInfo.snapshotContext 2
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(265)                [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::AddComponentForSnapshot [00000000011E7010]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(345)                [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::AddComponentForSnapshot: Seeing if caption: [Mailbox Database NC1] matches
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(360)                [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::AddComponentForSnapshot: found matching caption for : [Mailbox Database NC1]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(2051)             [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::AddVolumeForFile [00000000011E7010]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(1578)             [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::GetVolumeMountPointPath [00000000011E7010]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(404)                [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::AddVolumeForSnapshot [00000000011E7010]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(436)                                                NORMAL                ssLocalVolumeGuid = [\\?\Volume{797158a7-c357-11e4-80c0-806e6f6e6963}\], ssClusterVolGuid=[\\?\Volume{797158a7-c357-11e4-80c0-806e6f6e6963}\]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(472)                [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::AddVolumeForSnapshot [00000000011E7010]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(495)                [00000000011E7010]                       NORMAL             CVssSnapshotRequestor: AddVolumeForSnapshot - Marked volume C:\ to be snapshot
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(2051)             [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::AddVolumeForFile [00000000011E7010]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(1578)             [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::GetVolumeMountPointPath [00000000011E7010]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(404)                [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::AddVolumeForSnapshot [00000000011E7010]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(436)                                                NORMAL                ssLocalVolumeGuid = [\\?\Volume{797158a7-c357-11e4-80c0-806e6f6e6963}\], ssClusterVolGuid=[\\?\Volume{797158a7-c357-11e4-80c0-806e6f6e6963}\]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(472)                [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::AddVolumeForSnapshot [00000000011E7010]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(2051)             [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::AddVolumeForFile [00000000011E7010]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(1578)             [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::GetVolumeMountPointPath [00000000011E7010]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(404)                [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::AddVolumeForSnapshot [00000000011E7010]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(436)                                                NORMAL                ssLocalVolumeGuid = [\\?\Volume{797158a7-c357-11e4-80c0-806e6f6e6963}\], ssClusterVolGuid=[\\?\Volume{797158a7-c357-11e4-80c0-806e6f6e6963}\]
2FAC      0994       03/12     14:08:50.844       31           vsssnapshotrequestor.cpp(472)                [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::AddVolumeForSnapshot [00000000011E7010]
2FAC      0994       03/12     14:08:50.876       31           vsssnapshotrequestor.cpp(656)                [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::StartPrepareForBackup [00000000011E7010]
2FAC      0994       03/12     14:08:50.891       31           vsssnapshotrequestor.cpp(1734)             [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::ReleaseVolumesForSnapshot [00000000011E7010]
2FAC      0994       03/12     14:08:50.891       31           vsssnapshotrequestor.cpp(1631)             [00000000011E7010]                       NORMAL             CVssSnapshotRequestor::PrepareVolumesForSnapshot [00000000011E7010]
2FAC      0994       03/12     14:08:50.891       31           vsssnapshotrequestor.cpp(720)                [00000000011E7010]                       NORMAL             CVssSnapshotRequestor: Using provider {B5946137-7B9F-4925-AF80-51ABD60B20D5} for volume \\?\Volume{797158a7-c357-11e4-80c0-806e6f6e6963}\
2FAC      0994       03/12     14:08:53.094       31           vssbaserequestor.cpp(943)         [00000000011E7010]                       NORMAL             QueryStatus returned 0x4230a, Releasing VssAsync [0000000001480DE0]
2FAC      0994       03/12     14:08:53.094       31           vssbaserequestor.cpp(1067)      [00000000011E7010]                       NORMAL             CVssBaseRequestor::StartGatherWriterStatus [00000000011E7010]
2FAC      0994       03/12     14:08:54.094       31           vssbaserequestor.cpp(943)         [00000000011E7010]                       NORMAL             QueryStatus returned 0x4230a, Releasing VssAsync [00000000014807A0]
2FAC      0994       03/12     14:08:54.094       31           vssbaserequestor.cpp(1100)      [00000000011E7010]                       NORMAL             CVssBaseRequestor::CheckWriterStatus [00000000011E7010]
2FAC      0994       03/12     14:08:54.094       31           vssbaserequestor.cpp(1131)      [00000000011E7010]                       NORMAL             Checking Writer status for writerid: {76FE1AC4-15F7-4BCD-987E-8E1ACB462FB7}                                       writerName: Microsoft Exchange Writer
2FAC      0994       03/12     14:08:54.094       31           vssbaserequestor.cpp(1139)      [00000000011E7010]                       WARNING           Failed: Hr: = [0x800423f3] CVssBaseRequestor: CheckWritersStatus -                                                         Writer instance - {2AC7AF99-6588-4B1C-A41E-C13DDAC88602} writer id - {76FE1AC4-15F7-4BCD-987E-8E1ACB462FB7} writer name - Microsoft Exchange Writer                                                         writer state - 1
2FAC      0994       03/12     14:09:00.485       05           fsmtransition.cpp(111)  [0000000001245A60]                       WARNING                Failed: Hr: = [0x800423f3] HasEventErrorCode: completion: 0xa10c, signature: 0xaabbcc00

5. If you examine the VSS writers on the protected server where the problem database resides by running the vssadmin list writers command from an administrative command prompt, you see the following:

Writer name: 'Microsoft Exchange Writer'
   Writer Id: {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
   Writer Instance Id: {2ac7af99-6588-4b1c-a41e-c13ddac88602}
   State: [7] Failed
   Last error: Retryable error
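As a quick illustration (not part of the original article), the writer state can also be checked programmatically by parsing the vssadmin list writers output; this sketch assumes the block layout shown above, and the sample text mirrors the failed-writer output:

```python
import re

def failed_writers(vssadmin_output):
    """Return (writer name, state, last error) tuples for writers that
    are not in the Stable state, parsed from `vssadmin list writers` output."""
    failures = []
    # Each writer block starts with a "Writer name:" line and contains
    # State and Last error lines.
    blocks = re.split(r"\n(?=Writer name:)", vssadmin_output.strip())
    for block in blocks:
        name = re.search(r"Writer name: '([^']+)'", block)
        state = re.search(r"State: \[(\d+)\]\s*(.+)", block)
        error = re.search(r"Last error: (.+)", block)
        if name and state and state.group(2).strip() != "Stable":
            failures.append((name.group(1), state.group(2).strip(),
                             error.group(1).strip() if error else ""))
    return failures

sample = """\
Writer name: 'Microsoft Exchange Writer'
   Writer Id: {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
   Writer Instance Id: {2ac7af99-6588-4b1c-a41e-c13ddac88602}
   State: [7] Failed
   Last error: Retryable error
"""
print(failed_writers(sample))
```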

CAUSE

If you encounter the symptoms above, the most likely cause is that the Microsoft Exchange Writer has crashed and needs to be restarted.

RESOLUTION

You can restart the writer by completing the steps below; however, these steps should be performed DURING A MAINTENANCE WINDOW ONLY to avoid impacting Exchange users.

Step 1: On the problem server where the database resides, open Services, select the Microsoft Exchange Replication service, and restart it.

[Screenshot: restarting the Microsoft Exchange Replication service in Services]

Step 2: From an administrative Command Prompt (Run as Administrator), run the vssadmin list writers command. Here's what the output looks like when the writer is in a failed state:

[Screenshot: vssadmin list writers output showing the failed Exchange writer]

Here’s what the output looks like when working properly:

[Screenshot: vssadmin list writers output showing a healthy writer]

Step 3: From the Data Protection Manager console, navigate to Protection and select the problem data source.

[Screenshot: selecting the problem data source under Protection]

Select Perform consistency check...

[Screenshot: the Perform consistency check option]

From the Monitoring view we can review the status of the consistency check job.

[Screenshot: consistency check job status in Monitoring]

When the job completes, the Protection view shows one recovery point on disk for the data source, and everything works as expected.

[Screenshot: the recovery point shown in Protection]

Dwayne Jackson | Senior Support Escalation Engineer | Microsoft GBS Management and Security Division


DPMDB Maintenance Part 1: Database consistency check and your DPMDB


~ Chris Butcher | Senior Support Escalation Engineer

Hi folks, Chris Butcher here again with another DPM blog entry for you. I've had a few questions recently about upkeep of the DPM database, and while the questions usually start with maintaining its size, once the conversation gets going it expands well beyond that. After talking to several people, I found that in the end this really breaks down into four distinct pieces, so I decided it was time we covered each of them in depth.

To that end, I am writing a series of three blog posts to cover the first three areas, and I will point to an existing post for the final piece. The approach starts with checking database consistency, followed by a look at fragmentation (and eliminating it) to optimize performance. Third, we make sure there is no extra growth and that the database is sized optimally, and lastly we talk about backing up the DPMDB so that a good copy is available should it ever be needed.

I decided to cover this in separate posts because A) Mike Jacquet has already written a great article on protecting and backing up your DPMDB, and B) no one wants to read a 100-page blog post. It makes more sense to break this into smaller, consumable chunks, and it helps that each action can be done on its own; none depends on the other steps.

So with all that said, we can tackle the first task in the chain: checking for corruption in the DPMDB. While corruption sounds dire, and it can cause real problems, it oddly isn't something we see much of in the DPM world. That is to say, it is rarely a big problem, which is not to be confused with "it is never a problem." I have seen instances where there was corruption in the database; it's just a very rare situation.

Regardless, if you want to run a database consistency check (DBCC) against your DPMDB, I will cover some options for doing that. This is written for someone who is familiar with SQL but far from being a DBA. Whether you simply want to check the database manually from time to time or set up a recurring job, the process is very similar; the difference is called out in the steps below.

These operations are done through SQL Server Management Studio on the server where your DPMDB resides. If you are not familiar with it, note that you may have to run SQL Server Management Studio as administrator. If you don’t, it may fail to connect to SQL because of permissions.


When SQL launches, it should automatically fill in the servername\instance name as shown below.


If this isn’t populated for you, you can find the information in DPM itself. Click the About DPM button and, toward the bottom of the dialog that opens, you will see the name of the SQL server, the SQL instance, and the DPMDB in the format: SQLServerName\SQLInstanceName\DPMDBName

To connect in SQL, you will simply need the SQLServerName\SQLInstanceName.


With a connection to SQL, we will now walk through the steps to run DBCC.

1. Expand your SQL instance, then expand Management until you see Maintenance Plans. Right-click Maintenance Plans and select Maintenance Plan Wizard. This will walk us through what we need to run DBCC.


2. Once the Maintenance Plan Wizard opens, first give the job a name that is easy to recognize. Then decide whether to have it run on a regular basis or manually on demand. To run on demand, just click Next. To schedule it, click Change and set the schedule to whatever works best for you; I have a note on frequency at the bottom of this post.


3. On the next page, simply select the box to Check Database Integrity and then click Next on the next two screens.


4. To define the task, pull down the menu to <Select one or more>. This will bring up a new screen where you will select your DPMDB by name.


5. Finally, select the location where you want a report written. You will need to remember this, as running the DBCC itself will output all information to this file.


Now that the job is set up, you can either let it run on schedule and check the log file after each run, or run it manually. To run it manually, expand the same Maintenance Plans node, right-click the job you created, and choose Execute.

Either way, once it is run, be sure to review the log created as specified in step 5 above. If it shows any errors, you will then have to investigate accordingly to determine the next steps, as there is no way to cover all of the possible outcomes and their corresponding remedies.

The standard report will only report success and little else if there are no issues found.
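If you prefer not to create a maintenance plan at all, the same consistency check can be run on demand from a query window. This is a minimal sketch, assuming your database is named DPMDB; substitute the name shown in the About DPM dialog.

```sql
-- One-off consistency check against the DPM database.
-- Replace DPMDB with your actual database name.
DBCC CHECKDB ('DPMDB') WITH NO_INFOMSGS;
-- NO_INFOMSGS suppresses informational messages so that only
-- allocation or consistency errors are reported.
```

As with the maintenance plan, any errors reported here warrant further investigation before you take action.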

Frequency

How often should you run this? That’s a great question with no definitive answer. You can research this and set it according to your comfort level, but I will say this about DPMDB: from the support view of things, we very rarely see a call that was due to actual database corruption. So for the suggestions that say to do this daily, I think that is overkill for sure. When you factor in how processor-intensive the check is, somewhere between quarterly and annually is probably where you should initially target things.

Finally, if you do schedule this to run automatically, avoid the window between midnight and 3 AM. This is when DPM runs its housekeeping jobs. While a collision would most likely only affect the performance of those jobs, it is best to stay out of this time frame to be sure we don’t run into any conflicts.

You can continue on to Part 2 in this series here.

Chris Butcher| Senior Support Escalation Engineer | Microsoft GBS Management and Security Division

Get the latest System Center news on Facebook and Twitter:


System Center All Up: http://blogs.technet.com/b/systemcenter/

Configuration Manager Support Team blog: http://blogs.technet.com/configurationmgr/ 
Data Protection Manager Team blog: http://blogs.technet.com/dpm/ 
Orchestrator Support Team blog: http://blogs.technet.com/b/orchestrator/ 
Operations Manager Team blog: http://blogs.technet.com/momteam/ 
Service Manager Team blog: http://blogs.technet.com/b/servicemanager 
Virtual Machine Manager Team blog: http://blogs.technet.com/scvmm

Microsoft Intune: http://blogs.technet.com/b/microsoftintune/
WSUS Support Team blog: http://blogs.technet.com/sus/
The RMS blog: http://blogs.technet.com/b/rms/
App-V Team blog: http://blogs.technet.com/appv/
MED-V Team blog: http://blogs.technet.com/medv/
Server App-V Team blog: http://blogs.technet.com/b/serverappv
The Surface Team blog: http://blogs.technet.com/b/surface/
The Application Proxy blog: http://blogs.technet.com/b/applicationproxyblog/

The Forefront Endpoint Protection blog : http://blogs.technet.com/b/clientsecurity/
The Forefront Identity Manager blog : http://blogs.msdn.com/b/ms-identity-support/
The Forefront TMG blog: http://blogs.technet.com/b/isablog/
The Forefront UAG blog: http://blogs.technet.com/b/edgeaccessblog/


DPMDB Maintenance Part 2: Identifying and dealing with fragmentation in your DPMDB


~ Chris Butcher| Senior Support Escalation Engineer 


Hi folks, it’s Chris Butcher again with part 2 in my series of posts on DPMDB maintenance. This installment covers identifying and dealing with fragmentation in your DPMDB, and if you happened to miss Part 1 you can find it here:

DPMDB Maintenance: Database consistency check and your DPMDB

I often hear questions about maintenance of the DPM database, and they fall into two areas. The first is how to manage the size of the database, although I won’t go into detail on that here as it is covered in the next post in this series. The other concerns DPMDB optimization. The reality is that many DPM admins are not also SQL admins, yet they are still left hoping their DPM database is optimized. Depending on how long your DPM server has been running, that may not be the case, and there are some things you can do to help with it.

This article breaks this down into four phases:

  • Determining current fragmentation levels
  • One time job to reorganize the indexes
  • One time job to rebuild the indexes
  • Scheduling ongoing reorganization job

SQL databases, much like the file system on a disk, fragment over time. As tables grow and shrink, this fragmentation can ultimately affect how well SQL runs. When fragmentation happens, SQL has two ways to address it: reorganizing, a lightweight operation that defragments the leaf level of clustered and nonclustered indexes, and rebuilding, which drops and re-creates the indexes.

Both methods have a small side effect of reclaiming some disk space by compacting. More information on this topic from a SQL standpoint can be found here:

SQL Server - Reorganize and Rebuild Indexes: https://technet.microsoft.com/en-us/library/ms189858.aspx

Database maintenance in relation to reorganizing or rebuilding indexes in the DPMDB will vary by workload. The following script can be run against your DPMDB to determine the amount of fragmentation. Specifically, this will show how many indexes have >30% fragmentation and how many tables there are at that level.

NOTE These scripts are written using DPMDB as the database name. Most servers will likely have a different DPMDB name (usually DPMDB_ComputerName). Be sure to change this within each script to reflect your DPMDB name.

Determining current fragmentation levels

USE [DPMDB]

SELECT OBJECT_NAME(object_id),index_id, avg_fragmentation_in_percent, fragment_count, page_count, avg_fragment_size_in_pages, index_type_desc

FROM sys.dm_db_index_physical_stats(DB_ID(), Null,Null,Null,Null)

WHERE avg_fragmentation_in_percent > 30

AND index_type_desc IN('CLUSTERED INDEX', 'NONCLUSTERED INDEX')

order by avg_fragmentation_in_percent desc

To run the query, open SQL Management Studio (you may have to “Run as Administrator”).

1. Expand Databases to show the name of the database you will need to run this against.


2. Click New Query to open the query window.


3. Copy the query above into the query window. On the top line, change the USE DPMDB command to reflect the correct name of the database in your environment.


4. Select Execute from the top menu bar.


5. Review the output in the bottom pane. Look for avg_fragmentation_in_percent to determine the amount of fragmentation in your given database.


Using these steps, you can identify how many tables have high fragmentation and determine if you should rebuild or simply reorganize. I have some general suggestions below but please note that these values are guidelines, not hard and fast rules.

- An index should be rebuilt when fragmentation is greater than 30 to 40 percent.
- An index should be reorganized when fragmentation is between 10 and 40 percent.
- The rebuild process uses more CPU, locks database resources, and requires the DPM service to be stopped while it runs.

One caveat: if you have fragmentation percentages higher than the suggested numbers but the fragment count is low (which is possible after rebuilding), then generally speaking you shouldn’t need to worry about it.

Based on your findings, you have the two options mentioned above. Option one is to reorganize the indexes when fragmentation is between 10 and 40 percent. This can be done while the DPM server is online and operating normally; it can have a slight impact on performance, but nothing you are likely to notice.

IMPORTANT As with anything else you do with the DPMDB, it is important to back up the database before running any scripts that modify it in any way. This allows you to go back to a point in time in case of a failure. You can reference this blog post and scroll down to the sections titled Using DPMBACKUP to back up the DPMDB or Use Native SQL Server backup and not use DPM for backup at all for the easiest ways to get a backup of your DPM database.
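As one illustration of the native SQL option, a copy-only backup can be taken from a query window before you make any changes. This is a sketch only; the database name and backup path below are assumptions you should replace with your own values.

```sql
-- Take a native SQL backup of the DPM database before index maintenance.
-- [DPMDB] and the target path below are placeholders.
BACKUP DATABASE [DPMDB]
TO DISK = 'D:\Backups\DPMDB_PreMaintenance.bak'
WITH COPY_ONLY, INIT;
-- COPY_ONLY keeps this backup from disturbing any existing backup chain;
-- INIT overwrites a previous backup file of the same name.
```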

One time job to reorganize the indexes

Using steps similar to the ones above, you can run a script to manually reorganize the indexes on all of the tables. Just substitute the lines below in step 3 above, and be sure to change the DPMDB name to match yours before executing.

USE [DPMDB]

EXEC sp_MSforeachtable @command1="print '?'", @command2="ALTER INDEX ALL ON ? REORGANIZE"

One time job to rebuild the indexes

If you find a very high amount of fragmentation, it may be more valuable to have SQL rebuild the indexes. For this to happen you will have to stop the DPM service, so it should be done during a slow time in the DPM cycle.

Before running this command, open Services Manager on the DPM server, stop the DPM service, and set its startup type to Disabled.


Once the service is stopped, go back to SQL Management Studio and execute the following command to have SQL rebuild the fragmented indexes. As usual, be sure to change the DB name on the first line to match with your system.

USE [DPMDB]

EXEC sp_MSforeachtable @command1="print '?'", @command2="ALTER INDEX ALL ON ? REBUILD WITH (ONLINE=OFF)"
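The script above rebuilds every index regardless of its condition. If you would rather target only the heavily fragmented ones, a sketch like the following generates the ALTER INDEX statements for review instead of rebuilding everything. The 30 percent threshold is just the guideline from earlier; adjust it to taste.

```sql
-- Generate rebuild statements for indexes over 30% fragmentation.
-- Run against your DPMDB, then review and execute the output.
SELECT 'ALTER INDEX [' + i.name + '] ON ['
       + OBJECT_NAME(ips.object_id) + '] REBUILD;' AS RebuildStatement
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) AS ips
JOIN sys.indexes AS i
  ON ips.object_id = i.object_id
 AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND i.name IS NOT NULL;  -- skip heaps, which have no index name
```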

Now that things are running a bit more optimally, the question arises: how do we keep them that way? Great question, and there are several possible answers.

The article below talks about the options. I find the wizard to be the easiest to follow and of course covers what we want from a DPM perspective, so that is the one I will highlight here:

SQL Server - Create a Maintenance Plan: https://technet.microsoft.com/en-us/library/ms189953.aspx

Using the Maintenance Plan Wizard, we will walk through creating a new plan that automatically reorganizes the DPMDB indexes on a scheduled interval. The schedule should be set to best match your usage and needs, but for a heavily used DPMDB a general guideline is to reorganize once a week and then rebuild as needed. Since rebuilding is a manual process, fragmentation can be checked as often as monthly (using the first script above) but at least once per quarter. Of course, if the script again shows high fragmentation, a rebuild should be run.

Scheduling ongoing reorganization job

1. In SQL Management Studio, expand Management, right-click Maintenance Plans and select Maintenance Plan Wizard.


2. In the Wizard, give your plan a name to help recognize it, then click on Change to create a schedule for this job.


3. Choose to make the job recurring and set the schedule to something that meets your needs. As stated above, for reorganizing the indexes, once a week is the most frequent schedule you will likely need. Since this will be done while DPM is still running, try to schedule it at an hour when the fewest backups will be running to minimize the impact.


4. When selecting the maintenance tasks, check Reorganize Index only.


5. Click Next on the Task Order screen.

6. For the Define Reorganize Index Task screen, change Databases to Specific databases and select your DPMDB by name.


7. Click Next on Report Options unless you want to specify something different to receive reports.

8. Select Finish to complete the process and allow SQL to configure the schedule to run automatically.

You can continue on to Part 3 here.

Chris Butcher| Senior Support Escalation Engineer | Microsoft GBS Management and Security Division



TCO reduction with the Azure Backup pricing in DPM deployments


Azure Backup has announced a new pricing model for data being backed up to Azure. This new model is based on the number of machines being backed up, and usage is measured and reported in the monthly Azure bill as Protected Instances. The new pricing affects the Azure Backup bill for data sources backed up to Azure via DPM servers, and this post provides an overview of the changes that DPM customers should expect.

To begin with, you can read more about the new Azure Backup pricing and have a look at the FAQs on the pricing page.

Protected Instances in a DPM deployment

In deployments where DPM is protecting your data sources, the primary site machines are the Protected Instances counted for billing. The notion is simple: Azure Backup is charged based on the machines being protected to Azure, and DPM is just the conduit for the data flow. Thus in the sample deployment below, the servers marked in blue count toward Protected Instance usage, while the DPM server (in grey) and the local data are not counted for billing purposes.


In this example, the monthly cost works out as shown in the table below:

S. No. | Machine | Type | Size of machine | Size bucket | Monthly cost
1 | File Server | Physical host | 600 GB | between 500 GB and 1000 GB | $20
2 | SQL Server | Physical host | 75 GB | between 50 GB and 500 GB | $10
3 | Virtual machine | VM | 30 GB | less than 50 GB | $5
4 | Virtual machine | VM | 150 GB | between 50 GB and 500 GB | $10
TOTAL: $45

Note that Hyper-V hosts are not counted for the Protected Instances calculation. Instead, the Hyper-V virtual machines are used for the calculation. You can find more information on the datasources supported with DPM in the Pricing FAQ, along with more examples.
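The billing logic in the example can be sketched as a simple tier lookup. The tier boundaries below are taken from the sample table above; check the pricing page for the current values before relying on them.

```sql
-- Reproduce the sample bill: one row per protected machine size (GB).
SELECT SUM(CASE
             WHEN SizeGB < 50   THEN 5   -- less than 50 GB
             WHEN SizeGB <= 500 THEN 10  -- between 50 GB and 500 GB
             ELSE 20                     -- between 500 GB and 1000 GB
           END) AS MonthlyTotalUSD
FROM (VALUES (600), (75), (30), (150)) AS machines(SizeGB);
-- Returns 45, matching the TOTAL in the table.
```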

Estimating the Protected Instances and Storage

A DPM deployment could be a few tens of machines or could scale to a few hundred. To help estimate the number of Protected Instances and the storage utilization, we have published a PowerShell script that you run on the DPM server. The script collects information from the DPM database about datasource sizes, machines, and recovery points, then processes and aggregates it into an HTML output file.

Sample output of PowerShell script run on DPM server to get usage information 

You can download the script from the TechNet Gallery: https://gallery.technet.microsoft.com/Estimating-Azure-Backup-e0d4abbc/.

Once you have the usage numbers, you can plug them into the Azure Backup TCO calculator, an Excel sheet that simplifies the work of estimating the Azure bill.

 


DPMDB Maintenance Part 3: Dealing with a large DPMDB


~ Chris Butcher| Senior Support Escalation Engineer 

Hi everyone, Chris Butcher here again with part 3 in my series of posts on DPMDB maintenance. This article addresses DPMDB size and the options for shrinking it. I heavily leveraged information already put together by one of my colleagues in the UK, Emily McLaren, so she deserves credit for much of this material. In many parts, “heavily leveraged” means I used her exact information, including screen shots, so thanks Emily!

In case you missed parts 1 and 2 you can find those here:

DPMDB Maintenance: Database consistency check and your DPMDB

DPMDB Maintenance: Identifying and dealing with fragmentation in your DPMDB

You can also consider the blog post by Mike Jacquet below to be part 4 which completes this series:

How to protect your Data Protection Manager SQL database

DPMDB Size

While most people may not realize that there is database corruption (our first post), or that their performance has slowly been degrading (our second post), we do get quite a few questions about the size of the DPMDB. Is it normal that it is so large? Why does it get so large? Is there anything we can do to shrink it?

These are all valid questions, so let’s start with the actual size. There is no guideline that will tell you how large you should expect yours to be; it is driven by a couple of major factors such as retention range and the type of data you are protecting. It is normal for these databases to grow quite large, and I regularly see them over 100 GB, so a large DPMDB is not necessarily something to worry about.

What we can do, however, is look at what may be causing the large size and address that. There are some settings that let you decide how much data DPM should actually retain, which can help keep the size as small as possible.

To get started, open SQL Management Studio on the SQL server that houses the DPMDB. Right-click the database in question and select Properties.


If the DB is large, you will then want to identify which table or tables are responsible. “Large” is fairly subjective, as certain configurations (e.g. protection of large SharePoint farms) naturally lead to DB growth, but if the size is causing disk space or performance issues then the following steps apply regardless.

To find out which tables are large, run the Disk Usage by Top Table report against the DPMDB:


Based on the output of this report, which should list the tables in order of size (descending), take a look at the top tables. Below I listed some of the tables that are most frequently large in size. 

Tbl_RM_SharePointRecoverableObject

This table can become large when large SharePoint farms are being protected (e.g. farms with millions of items), or multiple farms are protected.

The following formula gives an approximate size for the DPMDB when protecting a large SharePoint farm:

((Number of items in the Farm in millions) x  3) + ((number of content DBs x Number of SQL servers in farm x 30) / 1024) = size of DPMDB (GB)

This only takes into account the SharePoint-related DB growth, so the overall database may be larger than this value, and you may need to work out the size per farm and then total them. Unfortunately, there is not much we can do to reduce the disk space used by the DB in this scenario.
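For a hypothetical farm with 5 million items, 10 content databases, and 2 SQL servers, the formula works out as follows (the numbers are illustrative only):

```sql
-- ((items in millions) x 3) + ((content DBs x SQL servers x 30) / 1024)
SELECT (5 * 3) + (10 * 2 * 30) / 1024.0 AS EstimatedDpmdbGB;
-- Comes out to roughly 15.6 GB for this example farm.
```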

Tbl_ARM_DirAndFile and/or Tbl_ARM_Path

The tape catalog pruning settings can cause these tables to be large. To reduce the size of these tables, modify the tape catalog retention values to reduce the amount of data we are storing in the DPMDB:


The default is to allow it to remove the entries as the tapes expire. If tapes are kept for multiple years, this can lead to a large amount of data retained in the DB. Change the settings to “Prune catalog for tapes older than” and set it to a sensible value (e.g. one month).


Updating this will not delete data on tape, but will mean that tapes older than this value will need to be re-cataloged in order to restore data from them.

Tbl_TE_TaskTrail

If this table is growing large, it is generally a sign that the overnight jobs that clean up the DB are failing, since garbage collection should remove any data older than 33 days from the task trail table. To check whether garbage collection is doing its job, open SQL Management Studio on the SQL server where the DPMDB resides and run the following query:

use DPMDB --you will need to put in your DPMDB name here

select * from tbl_TE_TaskTrail tt
where datediff(day, tt.stoppeddatetime, getutcdate()) > 33

This shows the entries that are more than 33 days old. If it returns no rows, garbage collection is clearly working. If it does return entries, you will likely want to open a case with our support teams to help determine what is going on.
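If you only want a quick yes/no answer, a variant of the same query summarizes the stale rows instead of listing them all:

```sql
use DPMDB --you will need to put in your DPMDB name here

-- Count the stale rows and show how old the oldest one is.
SELECT COUNT(*) AS StaleTaskRows,
       MIN(tt.stoppeddatetime) AS OldestEntryUtc
FROM tbl_TE_TaskTrail tt
WHERE DATEDIFF(day, tt.stoppeddatetime, GETUTCDATE()) > 33
-- A StaleTaskRows value of 0 means garbage collection is keeping up.
```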

TempDB Bloating

This is not as common as bloating of the DPMDB itself, but it can be a cause or symptom of some performance issues. Equally, if you are trying to understand where the disk space on the DPM server is going, it could be being consumed by the TempDB.

What is the TempDB?

Briefly, TempDB is used for temporary objects (as one would expect) that are in use while queries run. These variables or tables may be created explicitly by a query (e.g. a stored procedure generating output) or implicitly (e.g. temp tables for sorting objects). This is useful to know when trying to understand why it may be getting large.

If you want to know more about the TempDB there is further detail on MSDN here:

SQL Server tempdb Database: http://msdn.microsoft.com/en-us/library/ms190768.aspx

Why might it grow?

It could be as simple as a query working with a large amount of data, although that is unlikely if the database is growing by several GB. More commonly with DPM it is caused by a long-running transaction. This prevents cleanup of the transaction log, which continues growing until the transaction completes, or indefinitely if it never does.

Also, if other databases are collocated on the same instance, DPM may not be at fault, as the TempDB is shared by all databases on an instance. For example, another application causing a problem with the TempDB could impact DPM’s console performance. By the same rationale, if you are sharing an instance with other DPM servers, they can all be using the TempDB at the same time, so growth should not be a surprise.

How do we tell if there is a problem?

First, in SQL Management Studio open the TempDB properties:


If it is gigabytes in size, there is likely a problem. Note that AutoShrink cannot be enabled on the TempDB, and if AutoGrow is enabled, once the DB grows large it will not drop back in size on its own.

Troubleshooting

In the properties, just below the size, you can see whether there is currently any free space in the TempDB. If there is, it is unlikely that whatever caused the growth is happening right now, and from SQL Management Studio you can try to shrink the TempDB:


If there is available free space in the DB, you should be able to shrink it. If you find that the size does not decrease as much as expected, check what the initial size of the DB is set to:


Sometimes this value is set quite large. If so, drop it back to 8 MB and try the shrink again (the initial size is the lower limit of the shrink).

Restarting the SQL instance will also reset the TempDB size back to its initial size.

Once you shrink the DB, monitor it to see if the growth reoccurs. If it does, try to identify when the growth happens. You can look for AutoGrow events on the TempDB in the default trace files that SQL creates in C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log (or the equivalent install location for your SQL instance). Identifying a particular time (or times) each day when the growth occurs is useful if you need to engage support further.
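The default trace can also be queried directly for autogrow events rather than opening the trace files by hand. This is a sketch; the trace path below matches the example install location mentioned above and will differ on your instance.

```sql
-- List tempdb autogrow events captured by the SQL default trace.
SELECT te.name AS EventName, t.DatabaseName, t.StartTime
FROM sys.fn_trace_gettable(
       'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\log.trc',
       DEFAULT) AS t
JOIN sys.trace_events AS te
  ON t.EventClass = te.trace_event_id
WHERE te.name IN ('Data File Auto Grow', 'Log File Auto Grow')
  AND t.DatabaseName = 'tempdb'
ORDER BY t.StartTime DESC;
```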

Chris Butcher| Senior Support Escalation Engineer | Microsoft GBS Management and Security Division


New update for Azure Backup to enable Protected Instances pricing


When you use Microsoft Azure Backup to back up data, the size of the primary data source is currently not sent to the service. The new Protected Instances pricing model needs this information for accurate billing; without the primary data source size, billing is based on the total backup data stored in the service.

There is a new update available that allows the size information to be sent to Azure Backup as a part of each backup.

For more information on this update as well as a download link, please see the following:

3050804 - Update for Azure Backup to enable Protected Instances pricing (https://support.microsoft.com/en-us/kb/3050804/)

J.C. Hornbeck| Solution Asset PM | Microsoft GBS Management and Security Division


Key takeaways: DPM Protection of Microsoft Workloads to Azure

$
0
0

All Microsoft workloads that can be protected by Data Protection Manager (DPM) to disk or tape can also be protected to Azure Backup. At a high level, the table below describes the granularity at which DPM can protect various Microsoft workloads to Azure, as well as the granularity at which DPM can perform recovery from Azure.

Microsoft Workload | DPM Protection Granularity | DPM Recovery Granularity | Original Location Recovery (OLR) | Alternate Location Recovery (ALR) | Network Location Recovery
File Folder on Windows Server | Folder | Folder, Item | Yes | Yes | Yes
Hyper-V VMs | VM | VM | Yes | Yes | Yes
SharePoint Farm | Farm | Farm, SharePoint DB | Yes | Yes for DB | Yes
SQL | SQL Instance, SQL DB | SQL DB | Yes | Yes | Yes
Exchange | Exchange | Exchange DB | Yes | Yes (Exchange 2010+ only) | Yes
Windows Client | Volume, Folder | Folder, Item | Yes | Yes | Yes

Why choose DPM for Long Term Retention of Microsoft Workloads to Azure?
Data resiliency is an important enterprise strategy, as compliance requires point-in-time data to be reproduced. Long-term retention of data is an obvious outcome of meeting compliance requirements, and data growth coupled with pressure on IT spending demands a competitive total cost of ownership (TCO) for backup data.

Azure offers competitive cost savings in comparison with tape. Gartner released a report (G00261961 – How to Determine If Cloud Backup Is Right for Your Servers, published 13 February 2014) which found the TCO of cloud backup to be significantly lower than that of tape-based backup: “Although tape media is very inexpensive, a majority of the cost of tape-based backup is the ‘soft’ costs, which occur around backup software, maintenance and staff time”.

TCO Comparison Between Tape and Cloud Backup for 1 TB of Initial Full Backup

As rightly called out in Gartner’s report “Is Cloud Backup Right for Your Servers?”, the backup window, restore time, bandwidth, and latency all play an important role in evaluating a cloud backup strategy.

For the initial replica, DPM provides the capability to send the data to Azure offline, which not only improves backup seeding time but also saves network bandwidth. For more details on how to send the initial replica offline to Azure, refer to this TechNet article.

Because DPM sends incrementals to Azure, the backup window can be contained to non-production hours without any significant addition to bandwidth costs. An example for a 1 TB Microsoft workload is shared below:

| Parameter | Value |
|---|---|
| Microsoft Protected Workload Size | 1 TB |
| Daily Churn | 5% (~50 GB) |
| Backup Window | 8 hrs (non-production hours) |
| Compression | 30% |
| Effective Bandwidth Required for Daily Backup (after latency and packet loss adjustments) | ~10 Mbps |
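The ~10 Mbps figure can be sanity-checked with a little arithmetic. This is a sketch only; the exact value depends on the latency and packet-loss adjustments assumed above:

```python
# Estimate the bandwidth needed to push one day's churn within the backup window.
churn_gb = 50                                   # 5% daily churn of a 1 TB workload
after_compression_gb = churn_gb * (1 - 0.30)    # 30% compression -> 35 GB to send
window_hours = 8                                # non-production hours

bits_to_send = after_compression_gb * 1e9 * 8   # GB -> bits (decimal GB)
seconds = window_hours * 3600
mbps = bits_to_send / seconds / 1e6

print(f"required bandwidth ~ {mbps:.1f} Mbps")  # roughly 10 Mbps before protocol overhead
```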

As DPM sends changed file contents efficiently to Azure, it significantly reduces storage and network costs.

Recommendation
Given the TCO benefits that DPM with Azure Backup brings, and the fact that DPM sends forever-incrementals to Azure as compared with full backups on tape, we recommend the following backup and retention schedule:

| Backup Target | Schedule and Retention |
|---|---|
| Disk | Daily incremental backup, retention for 7 days |
| Azure | Daily incremental backup, retention as per your industry compliance and company policies |

How to use DPM to protect Microsoft workloads to Azure?
Once a DPM user configures Azure Backup with DPM, the Create New Protection Group wizard shows an option to back up to Azure (as shown below).

DPM - Create New Protection Group Screen

Once the user selects online protection, retention ranges can be selected as described in the following blog.

For all server and client workloads, DPM continues to provide the same integrated user experience for protection to Azure. This reduces admin overhead compared to using native tools or different backup applications for different workloads.
For more information on how to create and manage protection groups using DPM, refer to the TechNet article.

Quick Reference
Download DPM UR5 and follow the steps outlined in the article for installation.


Considerations when applying Update Rollup 5 for System Center 2012 R2 Data Protection Manager


Hello, Dwayne Jackson and Andy Nadarewistsch here. We wanted to take a minute today to detail a quick point to consider after applying Update Rollup 5 (UR5) for Data Protection Manager (DPM) 2012 R2.

Issue

When creating a protection group after applying DPM 2012 R2 Update Rollup 5, not all data sources may display as expected.

This can occur as a result of the improved inquiry performance provided in the update. Specifically, Data Protection Manager brings speed improvements to the inquiry step in the Create/Modify Protection Group wizard. This is accomplished by the persistent caching of data sources, the pruning of unused inquiry data during nightly database cleanup, as well as a more optimized inquiry on servers that have clustered virtual machines on CSV volumes.

Because of this the behavior of the wizard has changed. The Clear Cache button is replaced with the Refresh button, and expanding a production server in the selection page now instantly obtains the data sources from the last inquiry that are cached in the database without actually triggering an inquiry on the production server. This can cause some newly added data sources to not appear until the data is refreshed. 

To trigger a fresh inquiry to obtain the latest data sources (and to update the cache), you must click Refresh after you select the production server.

Reference: https://support.microsoft.com/en-us/kb/3021791/en-us?wa=wsignin1.0

Example scenario

Note that the example scenario below is not specific to an Exchange work load - this can apply to various workloads that DPM supports.

In this scenario, we have added a second mailbox database after initial protection. As seen below, the DPM server has cached only one, named Mailbox Database NC1, even though we actually have two mailbox databases mounted.

[screenshot]

We can also confirm from the Exchange admin center that we actually have two mailbox databases mounted.

In order to get our newly added data source to show up we need to open the DPM console and go to the Select Group Members page, then click on the DAGName\Node name and select the Refresh button. This will trigger a fresh inquiry to obtain the latest data sources and update the cache.

[screenshot]

As seen below, DPM now displays all of the data sources.

[screenshot]

Dwayne Jackson | Senior Support Escalation Engineer | Microsoft GBS Management and Security Division
Andy Nadarewistsch | Senior Support Escalation Engineer | Microsoft GBS Management and Security Division


System Center All Up: http://blogs.technet.com/b/systemcenter/

Configuration Manager Support Team blog: http://blogs.technet.com/configurationmgr/ 
Data Protection Manager Team blog: http://blogs.technet.com/dpm/ 
Orchestrator Support Team blog: http://blogs.technet.com/b/orchestrator/ 
Operations Manager Team blog: http://blogs.technet.com/momteam/ 
Service Manager Team blog: http://blogs.technet.com/b/servicemanager 
Virtual Machine Manager Team blog: http://blogs.technet.com/scvmm

Microsoft Intune: http://blogs.technet.com/b/microsoftintune/
WSUS Support Team blog: http://blogs.technet.com/sus/
The RMS blog: http://blogs.technet.com/b/rms/
App-V Team blog: http://blogs.technet.com/appv/
MED-V Team blog: http://blogs.technet.com/medv/
Server App-V Team blog: http://blogs.technet.com/b/serverappv
The Surface Team blog: http://blogs.technet.com/b/surface/
The Application Proxy blog: http://blogs.technet.com/b/applicationproxyblog/

The Forefront Endpoint Protection blog : http://blogs.technet.com/b/clientsecurity/
The Forefront Identity Manager blog : http://blogs.msdn.com/b/ms-identity-support/
The Forefront TMG blog: http://blogs.technet.com/b/isablog/
The Forefront UAG blog: http://blogs.technet.com/b/edgeaccessblog/

Keywords: DPM 2012 R2 Create New Protection Group data source enumeration fails, Data Sources not listed, Data Protection Manager SQL enumeration fails, Data Protection Manager Exchange enumeration fails, Data Protection Manager volume enumeration fails, Data Protection Manager Share Point enumeration fails, Data Protection Manager client enumeration fails, Data Protection Manager Hyper-V enumeration fails Data Protection Manager Virtual Machine(VM) enumeration fails,

Announcing Centralized and Customizable backup reports using Data Protection Manager


This blog post explains how DPM reports have been enhanced in Update Rollup 5. Until Update Rollup 4, we shipped six standard reports in-box with DPM, which were useful for auditing and monitoring basic details of the production environment independently for each DPM server.

In Update Rollup 5, the reporting infrastructure is greatly enhanced to integrate with System Center Operations Manager (SCOM), so you can now generate customized reports that aggregate data from multiple DPM servers. We are shipping a new DPM Reporting Management Pack (MP) along with three other MPs that need to be imported on the SCOM server to use the new reporting framework.

 Aggregated reports from multiple DPM Servers

The new reporting framework seamlessly integrates with SCOM using DPM Central Console, so you can now generate aggregated reports with data from various DPM Servers that are being managed by DPM Central Console. For example, if your DPM Central Console is managing 100 DPM servers, each containing 20 data sources configured into a separate Protection Group (PG), the new framework will enable SCOM to monitor and report data for all 100 x 20 = 2000 data sources.

Generating custom backup reports

Each organization has unique reporting requirements for backup data, so the basic reports shipped in-box with DPM may not be sufficient for your backup auditing needs. As a simple example, you may want a report of tape backup or cloud backup storage utilization generated every night that the backup administrator can analyze the next morning. The new reporting framework gives you the power to generate such customized reports.

When you import the Reporting Management Pack in SCOM, it creates SQL views in the SCOM Data Warehouse (DW). A comprehensive list of all the views is exposed and documented in the Generate DPM custom reports article. You can create custom reports by querying these views from the SCOM DW using any framework, scripting, or programming language of your choice.
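Any SQL-capable tool can consume those views. The sketch below uses Python's built-in sqlite3 purely as a stand-in for the SCOM Data Warehouse, and the view name `vDPMBackupJob` and its columns are hypothetical; in production you would connect to SQL Server (for example with pyodbc) and substitute the real view names from the documentation:

```python
import sqlite3

# In-memory stand-in for the SCOM Data Warehouse.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE vDPMBackupJob (          -- hypothetical view name/columns
                 DPMServerName TEXT, DataSourceName TEXT,
                 JobStatus TEXT, DataTransferredMB REAL)""")
db.executemany("INSERT INTO vDPMBackupJob VALUES (?,?,?,?)", [
    ("DPM01", "SharePoint Farm", "Succeeded", 512.0),
    ("DPM01", "SQL DB1",         "Failed",      0.0),
    ("DPM02", "Hyper-V VM",      "Succeeded", 2048.0),
])

# A nightly report aggregated across DPM servers: success counts and data moved.
report = db.execute("""SELECT DPMServerName,
                              SUM(JobStatus = 'Succeeded') AS succeeded,
                              SUM(DataTransferredMB)       AS mb_moved
                       FROM vDPMBackupJob
                       GROUP BY DPMServerName
                       ORDER BY DPMServerName""").fetchall()
for row in report:
    print(row)
```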

We have also shipped the optional DedupReporter Management Pack with this update. You can import this management pack to get access to a sample dedup compression report. The sample report uses the same set of tables and views for organizing and displaying reporting data, so you can use it as a reference to generate more customized reports.

Steps to generate enhanced reports

  • Download the new DPM Management Packs, which contain the following MPs and an MP guide with details on how to import them:
    1. Library
    2. Discovery & Monitoring
    3. Reporting
    4. DedupReporter
  • Once you import the MPs in SCOM and add the SCOM agent on each managed DPM server as mentioned in the MP guide, reporting data will start flowing to the SCOM server. You can view the reports by clicking the "Reporting" tab in the bottom-left pane of DPM Central Console and then navigating to "System Center 2012 R2 Data Protection Manager Reporting", as shown in the screenshot below.



Figure 1: UI to generate reports from the SCOM server

  • A new UI will open where you can select the date range and the managed DPM servers for which you want reporting data, then click Run as shown in the screenshot below.


Figure 2: Selecting DPM servers and report time range

  • The screenshot below shows a sample report shipped with the DPM Reporting Management Pack.

Figure 3: Sample DPM report

Frequently Asked Questions (FAQ)

Q1) Can I use the new reporting infrastructure without DPM Central Console?

No, the new infrastructure only works when you import “Reporting” Management Pack and install DPM Central Console on the SCOM Server.

Q2) Can I continue to use the old reports shipped in-box with DPM?

Yes, you can go to individual DPM Servers and click on Reporting tab to view the in-box reports as before.

Q3) Can I use the new Reporting Management Packs without upgrading to UR5?

No. All DPM servers and the DPM Central Console should be running UR5 or later to use the new reporting functionality.

Q4) Can I base my customized reports on the exposed SQL views?

Yes. The views are standard and well documented here; you can create custom reports using these views.


OpsMgr Management Pack for Data Protection Manager 2012 R2 Reporting, DedupReporter, Discovery and Monitoring has been released



Just a quick note to let you know that the Data Protection Manager 2012 R2 Management Pack for System Center Operations Manager has been released.

You can find more details about the reporting features introduced in this management pack here.

Please install the DPM 2012 R2 management pack that shipped with DPM RTM bits if you intend to install the DPM central console. If you install this management pack directly, the DPM central console won't install because the central console installer has a hard dependency on the version of the DPM 2012 Management Pack that shipped with the DPM RTM bits. You can upgrade to this version of the DPM 2012 R2 Management Pack after you have installed the DPM Central Console.

Suraj Suresh Guptha | Program Manager | Microsoft


How to configure SharePoint protection in Data Protection Manager and troubleshoot related issues


~ Chris Butcher | Senior Support Escalation Engineer

Hi folks, Chris Butcher here again with another DPM blog series for you. We get quite a few calls regarding SharePoint protection in System Center 2012 Data Protection Manager (DPM 2012), so I decided it was time to take a more in-depth look at this and go over some of the more common problems you may run into.

I decided to break this down into three parts, both to make it easier to digest and to keep the article from getting too long and hard to manage. The first part, which you’ll find below, will focus on issues you might encounter when configuring SharePoint Protection in Data Protection Manager. In this scenario we have no protection set up and we are going from ground zero. Part 2 will cover problems you may see when backing up SharePoint. This will be broken down into three areas: Backing up SharePoint Configuration and content databases from SQL, creating SharePoint metadata, and creating the SharePoint catalog. Lastly, part 3 will cover issues with restoring SharePoint with Data Protection Manager.

So with that out of the way, let’s go ahead and get started by taking a look at creating SharePoint protection in DPM 2012 and some points in the process where you might run into issues. This scenario assumes that DPM 2012 is installed, storage is provisioned and agents have been deployed, however we have no protection set up yet for SharePoint. 

Creating SharePoint Protection

1. To configure SharePoint protection, open the DPM management console, click Protection, then New to start the Create New Protection Group wizard. Click Next on the Welcome to the New Protection Group Wizard.

[screenshot]

2. Select Servers and then click Next on the Select Protection Group Type page as shown here:

[screenshot]

3. Expand the server that holds the SharePoint Web Front End (WFE) role. If you have more than one SharePoint WFE server, the one to select is the server where you previously ran ConfigureSharePoint.exe –enablesharepointprotection.

[screenshot]

NOTE: If you don't see SharePoint as a protectable item under your WFE server, this usually indicates that you have not run ConfigureSharePoint –EnableSharePointProtection on the WFE server. You must run this from an administrative command prompt on the Web Front End server where you want to enable protection. This enables the SharePoint writer so that DPM can protect it.

When we expand the SharePoint server, this is what happens.

a. DPM queries VSS to see what data DPM can protect from that server.

b. If the SharePoint database is on a remote server, the DPM server will connect to the SQL server that is hosting the SharePoint configuration database. For that to work, the DPM agent needs to be installed on that server or servers. If that SQL Server is a member of a cluster, all nodes of that cluster must have a DPM agent installed as well.

c. If all is fine up to this point, we should be able to expand the SharePoint data source and see the farm information details.

[screenshot]

There are a few things that could go wrong and cause you to not see the SharePoint icon shown above, or if you do see it, you may get an error when trying to expand it. If you see either of these problems, here are some things to check:

1. If the SharePoint writer is disabled, there is no way for DPM to know that SharePoint is installed on that box. By default, the SharePoint VSS Writer is disabled, and you must enable it by running the ConfigureSharePoint –EnableSharePointProtection command.

NOTE: The SharePoint writer is named SharePoint 2010 VSS Writer in Microsoft SharePoint 2010.

2. Make sure that SharePoint VSS Writer is started and running.  

3. Make sure that the SQL Server VSS Writer on the SQL Server is started and running.

4. Make sure that the user account used for SharePoint VSS Writer has sufficient permissions on the SQL server.

5. Make sure the SQL server has the DPM agent installed. If it does not, you will probably get the error below; however, the wizard will still allow you to continue.

[screenshot]

Error text: DPM cannot protect your Windows SharePoint Services farm until you install agents on the following servers…

6. Make sure that the SharePoint databases are not being protected as SQL data sources.

7. Make sure that the SQL Server VSS Writer is running on the SQL server that holds the SharePoint databases.
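Several of the checks above (writer present, writer running, writer state) can be scripted by parsing the output of `vssadmin list writers` on the WFE and SQL servers. A minimal sketch, assuming the standard vssadmin output layout; on a live server you would capture the text with `subprocess.run(["vssadmin", "list", "writers"], capture_output=True, text=True)`:

```python
import re

def writer_states(vssadmin_output):
    """Map writer name -> state word from `vssadmin list writers` output."""
    return {
        name: state
        for name, state in re.findall(
            r"Writer name: '([^']+)'.*?State: \[\d+\] (\w+)",
            vssadmin_output, re.S)
    }

# Sample captured output (abbreviated) for illustration.
sample = """
Writer name: 'SharePoint Services Writer'
   Writer Id: {da452614-4858-5e53-a512-38aab25c61ad}
   State: [1] Stable
Writer name: 'SqlServerWriter'
   Writer Id: {a65faa63-5ea8-4ebc-9dbd-a0c4db26912a}
   State: [1] Stable
"""
print(writer_states(sample))
# {'SharePoint Services Writer': 'Stable', 'SqlServerWriter': 'Stable'}
```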

It’s possible that you may also receive the error below after expanding the SharePoint icon and selecting the SharePoint farm for protection:

[screenshot]

Error text: DPM cannot protect this SharePoint farm as it cannot detect the configuration of the dependent SQL databases. (ID: 32008)

If this happens, it could be that SharePoint is using a SQL alias and there is no alias configured in cliconfg or SQL Server Configuration Manager.

So now that we’ve broached the SQL alias, cliconfg and SQL Server Configuration Manager, let’s take a quick look at these.

CLICONFG.EXE (%windir%\system32) allows you to create a SQL alias for 64-bit clients. If your application uses 32-bit calls to access SQL, an alias defined with this tool won’t do any good.

CLICONFG.EXE (%windir%\SysWOW64) allows you to create a SQL alias for 32-bit clients. If your application uses 64-bit calls to access SQL, an alias defined with this tool won’t do any good.

NOTE: If you are running a 32-bit operating system, %windir%\system32 will contain the 32-bit version of CLICONFG. The 64-bit version of CLICONFG will not be available.

NOTE: There is no 32-bit version of SharePoint 2010. All versions of Windows come with CLICONFG.EXE.

Here is what CLICONFG.EXE looks like:

[screenshot]

CLICONFG and SQL Server Configuration Manager share the same information, so if you have an alias configured in one location it will also show up in the other.
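Under the hood, both tools store aliases in the registry (HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo for 64-bit clients, and the equivalent key under Wow6432Node for 32-bit clients) as values such as `DBMSSOCN,server,port`. A minimal sketch of decoding that value format; the registry path and value layout are standard for SQL aliases, but verify against your environment:

```python
def parse_sql_alias(value):
    """Decode a ConnectTo registry value, e.g. 'DBMSSOCN,SPSQL01,1433'.

    DBMSSOCN = TCP/IP, DBNMPNTW = named pipes.
    """
    parts = [p.strip() for p in value.split(",")]
    protocol = {"DBMSSOCN": "tcp", "DBNMPNTW": "named pipes"}.get(parts[0], parts[0])
    server = parts[1] if len(parts) > 1 else None
    port = int(parts[2]) if protocol == "tcp" and len(parts) > 2 else None
    return protocol, server, port

print(parse_sql_alias("DBMSSOCN,SPSQL01,1433"))   # ('tcp', 'SPSQL01', 1433)
```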

If you open SQL Server Configuration Manager you will see that there is a SQL Native Client ** Configuration (32bit) and a SQL Native Client ** Configuration. By expanding these, you can see that each has aliases and you can view and modify the aliases there.

In the view shown below you can see that the 32-bit aliases can be found by expanding the red box and that 64-bit aliases are found by expanding the blue box.

[screenshot]

The biggest issue of note when dealing with aliases is to make sure that the version of the SQL Client Connectivity components is the version that DPM requires. This is based on the version of SharePoint being protected, not the version of DPM. These tools must be installed on the WFE server where you are establishing protection; they are installed by launching the SQL installation media and selecting the client connectivity components option, which will not affect any other settings.

| SharePoint Version | SQL Client Connectivity Tools Required |
|---|---|
| 2007 | SQL 2005 |
| 2010 | SQL 2008 (or SQL 2008 R2) |
| 2013 | SQL 2008 (or SQL 2008 R2) |

Assuming there was no error selecting the SharePoint farm for protection, you can continue by setting up group information such as group name, retention range, disk/tape backup, etc., and complete the wizard.

That should cover the most common problems you’re likely to encounter when configuring SharePoint protection in Data Protection Manager. Next time we’ll take a look at some of the issues you might see when actually backing up SharePoint.

Chris Butcher | Senior Support Escalation Engineer | Microsoft GBS Management and Security Division
Wilson Souza | Senior Support Escalation Engineer | Microsoft GBS Management and Security Division


DPM 2012 R2 SharePoint 2010 SharePoint 2012

Backing up SharePoint with Data Protection Manager and troubleshooting related issues


~ Chris Butcher | Senior Support Escalation Engineer

Hi folks, Chris Butcher here again with part 2 in our series on protecting SharePoint with System Center 2012 Data Protection Manager. In part 1 we went through the process of enabling protection and examined issues you might encounter when configuring SharePoint protection in Data Protection Manager. If you missed part 1 you can find it here:

Troubleshooting issues encountered when configuring SharePoint protection in Data Protection Manager

In this installment we’ll cover the backup process and some of the more common issues you may see when backing up SharePoint. This is broken down into three areas:

  • Backing up SharePoint Configuration and content databases from SQL
  • Creating SharePoint metadata
  • Creating the SharePoint catalog

Similar to part 1, this scenario assumes that DPM 2012 is installed, storage is provisioned, agents have been deployed, and protection has been enabled for SharePoint.

SharePoint backup

SharePoint backups will touch the SharePoint Web Front End (WFE) server as well as the SQL server that hosts the SharePoint Configuration/Content databases. This means that both servers need to be accessible for SharePoint backup to complete successfully.

The screenshot below is from Monitoring/Jobs after a successful recovery point. Note that the Configuration and Content database jobs were executed on the SQL server, while SharePoint Farm\SharePoint_Config ran on the SharePoint server.

[screenshot]

SharePoint backup is a three-step process, and each step is examined below.

Step One: Backup SharePoint Configuration/Content Databases from SQL Server

This step doesn’t require access to SharePoint WFE. It is simply SQL Database Protection and will just use the SQL writer to back up the content databases (as well as SharePoint_Config and SharePoint_AdminContent_*).

If you encounter errors in this area, it should be looked at on the SQL server where the content databases are located. The basic approach to troubleshooting is as follows:

1. Reproduce the failure.
2. Gather the System and Application event logs from the SQL server.
3. Gather the most recent *.errlog files from the SQL server.
4. Gather the most recent *.errlog files from the DPM server.
5. Troubleshoot the problem as a standard SQL backup/protection issue.

Step Two: Create SharePoint Metadata

In the screenshot above, the metadata generation occurs in the SharePoint Farm\SPSQL\SharePoint_Config job and is the last job to be completed in a recovery point. During metadata enumeration, the SharePoint WFE server connects to the SQL server. If the SharePoint WFE server cannot communicate with the SQL server, the end result will be "Backup metadata enumeration failed" as shown below.

[screenshot]

You will also see an error similar to the following:

[screenshot]

Error text: DPM could not resolve the SQL alias <serverName> on the SharePoint front-end web server. (ID 31250 Details: Unknown error (0x80131534))

OK, so we're talking about SharePoint metadata, but what is that exactly? Good question. There is one command that we are all very familiar with: VSSADMIN LIST WRITERS. We run this command to check whether a writer is available on a system, as well as to get the state of that writer (stable, failed, etc.). However, VSSADMIN LIST WRITERS doesn't tell us everything we need to know about writers. If we need more details, we need to use other tools. On Windows Server 2008 or newer, the tool we use is DISKSHADOW.EXE, which is built into Windows. For Windows Server 2003 we use VSHADOW.EXE.

When you launch DISKSHADOW.EXE and type LIST WRITERS, you are requesting every single writer to show its metadata information. The metadata returned by each writer describes what it will back up if a backup is requested for that writer. Since we are talking about SharePoint, the metadata from the SharePoint VSS Writer contains the following:

 

  • The Configuration Database
  • Content databases
  • Application databases

With SharePoint protection, DPM only cares about the Configuration Database and the Content databases.

Below is a snippet of SharePoint metadata. You can use the following to pipe the output into a text file that can be reviewed and searched:

C:\>DISKSHADOW /l writer.txt

If you then run LIST WRITERS at the DISKSHADOW> prompt, you should see output similar to the snippet below.

* WRITER "SharePoint Services Writer"

           - Writer ID   = {da452614-4858-5e53-a512-38aab25c61ad}

           - Writer instance ID = {d36db7cf-c2b4-4977-a18e-a642987f9c85}

           - Supports restore events = TRUE

           - Writer restore conditions = VSS_WRE_ALWAYS

           - Restore method = VSS_RME_RESTORE_AT_REBOOT_IF_CANNOT_REPLACE

           - Requires reboot after restore = FALSE

           - Excluded files:

           + Component "SharePoint Services Writer:\SPSQL\SharePoint_Config"

                - Name: SharePoint_Config

                - Logical path: SPSQL

                - Full path: \SPSQL\SharePoint_Config

                - Caption: Configuration Database SharePoint_Config

                - Type: VSS_CT_DATABASE [1]

                - Is selectable: TRUE

                - Is top level: TRUE

                - Notify on backup complete: FALSE

                - Paths affected by this component:

                - Volumes affected by this component:

                - Component Dependencies:

                     - Dependency to "{a65faa63-5ea8-4ebc-9dbd-a0c4db26912a}:\\WILSON-SQL\WILSON-SQL\SharePoint_Config"

           + Component "SharePoint Services Writer:\SPSQL\WSS_Content"

                - Name: WSS_Content

                - Logical path: SPSQL

                - Full path: \SPSQL\WSS_Content

                - Caption: Content Database WSS_Content

                - Type: VSS_CT_DATABASE [1]

                - Is selectable: TRUE

                - Is top level: TRUE

                - Notify on backup complete: FALSE

                - Paths affected by this component:

                - Volumes affected by this component:

                - Component Dependencies:

                     - Dependency to "{a65faa63-5ea8-4ebc-9dbd-a0c4db26912a}:\\WILSON-SQL\WILSON-SQL\WSS_Content"

           + Component "SharePoint Services Writer:\SPSQL\SharePoint_AdminContent_abda792e-1e17-4119-99b1-34f6fc61a6c1"

                - Name: SharePoint_AdminContent_abda792e-1e17-4119-99b1-34f6fc61a6c1

                - Logical path: SPSQL

                - Full path: \SPSQL\SharePoint_AdminContent_abda792e-1e17-4119-99b1-34f6fc61a6c1

                - Caption: Content Database SharePoint_AdminContent_abda792e-1e17-4119-99b1-34f6fc61a6c1

                - Type: VSS_CT_DATABASE [1]

                - Is selectable: TRUE

                - Is top level: TRUE

                - Notify on backup complete: FALSE

                - Paths affected by this component:

                - Volumes affected by this component:

                - Component Dependencies:

The highlighted portions (the Caption lines identifying each entry as a Configuration Database or Content Database) are what DPM uses to figure out what each database returned by the writer is used for. And by the way, DPM won't populate DPMDB with the raw output above. Instead, it generates an XML file based on that output and inserts it into DPMDB via the stored procedure dbo.prc_RM_RecoverySource_UpdateWithPreBackupDetails.
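The component strings in the metadata above follow the shape `Writer:\LogicalPath\ComponentName`, so they can be picked apart mechanically when scripting against DISKSHADOW output. A minimal sketch using a component string from the snippet:

```python
def split_component(component):
    """Split 'Writer:\\LogicalPath\\Name' into (writer, logical_path, name)."""
    writer, full_path = component.split(":", 1)
    parts = full_path.strip("\\").split("\\")
    return writer, "\\".join(parts[:-1]), parts[-1]

print(split_component(r"SharePoint Services Writer:\SPSQL\SharePoint_Config"))
# ('SharePoint Services Writer', 'SPSQL', 'SharePoint_Config')
```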

You can retrieve the metadata information from the database (if one was created) by running the SQL query below:

select DatasetId,
       BackupMetadataXml,
       BackupTime
from tbl_RM_RecoverySource
where WriterId = 'DA452614-4858-5E53-A512-38AAB25C61AD'
order by BackupTime

where WriterId 'DA452614-4858-5E53-A512-38AAB25C61AD' is the SharePoint VSS Writer.

So with all of that in mind, we can conclude that since the SharePoint metadata is what the SharePoint VSS Writer returns, a metadata failure means that the SharePoint VSS Writer wasn't able to return anything. Below are some common reasons why that may occur.

 

  • The SharePoint VSS Writer is in a failed state.
  • The SharePoint VSS Writer isn't running.
  • The SharePoint farm administrator doesn't have SQL permissions.
  • The account used by ConfigureSharePoint –EnableSharePointProtection doesn't have SQL permissions. The account used needs to be a SharePoint farm administrator and hold the sysadmin role on the SQL server that is hosting the SharePoint farm databases.
  • A SQL alias is used but the alias can't be 'translated' to <SQLServerName>\<InstanceName>.
  • A SQL alias can be translated to <SQLServerName>\<InstanceName>, but the SQL Browser on the SQL Server side can't redirect that string to the port used by that SQL named instance.
  • SharePoint 2010 has Search configured and the registry key below exists on the SharePoint WFE server:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\2.0\SharePointSearchEnumerationEnabled

The reason for the error is that SharePoint Search protection for SharePoint 2010 has been deprecated in DPM 2010 and later.

So now that we know what SharePoint Metadata is, the next question is where DPM will use it.

When we start a recovery point or consistency check, several individual jobs kick in: one for each content database of that farm, as well as one for the metadata itself.

DPM relies on the metadata to know which content database it needs to run a backup job for.

NOTE: If metadata generation fails and a new content database is added to the farm, the new content database won't be added to SharePoint protection when the auto-protection job kicks in at midnight.

Metadata information can only be added; it cannot be removed from an existing job. This means that if a content database is removed from the farm, you can only remove that reference from DPMDB by stopping and restarting protection for that farm.

Step Three: Create the SharePoint Catalog

This is what allows DPM to drill down through the SharePoint content database, site collection, sites, etc., for Item Level Recovery (ILR). Things to note:

  • The SharePoint catalog job is executed on the SharePoint WFE.
  • The SharePoint VSS Writer isn’t needed for Catalog generation.
  • A connection to the SQL server is required for the catalog to complete.
  • The SharePoint catalog needs access to the MTATempStore$ share (C:\Program Files\Microsoft DPM\DPM\Temp\MTA) from the SharePoint WFE.

NOTE: If DPM cannot access MTATempStore$ you will see the following:

 

Type:                 SharePoint Catalog Task

Status:               Completed

Description:          The job completed successfully with the following warning:

                      File copy failed due to an I/O error.

Source location:      \\wilson-sp.wsouza.local\MTATempStore$\{242f99e3-a60e-40f1-ab16-9ab7616ed87d}\2.rocatalog

Destination location: c:\Program Files\Microsoft DPM\DPM\Temp\60423b5c-780f-4162-bde1-7ee980fbe093\2.rocatalog.

                      (The network name cannot be found.)

                      Exception trace :  

                         at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)

                         at System.IO.File.InternalCopy(String sourceFileName, String destFileName, Boolean overwrite)

                         at Microsoft.Internal.EnterpriseStorage.Dls.ArmCommon.FileCopyBlock.DoFileCopy(Object msgObject) (ID 30123)

End time:             5/15/2012 8:47:28 PM

Start time:           5/15/2012 8:47:24 PM

Time elapsed:         00:00:04

Data transferred:     0 MB (0 bytes)

Source details:       wilson-sp.wsouza.local

Target details:       wilson-sp.wsouza.local

Cluster node          -

NOTE: The catalog task runs 3 hours after a recovery point; however, you can manually trigger a catalog task by running the PowerShell command Start-CreateCatalog.

Start-CreateCatalog or a scheduled catalog job performs the following:

1. On the SharePoint Web Front End (WFE) server you will find subfolders under the MTATempStore$ share (C:\Program Files\Microsoft Data Protection Manager\DPM\Temp\MTA).

clip_image006

The GUID above is the DatasourceID for the SharePoint farm. This folder is created the first time the SharePoint catalog task is executed.

clip_image008

Underneath the GUID folder shown above we have one or more of the following files:

clip_image010

Every file represents a SharePoint configuration/content database. As we can see, we have three X.rocatalog files. So the question is, “where did the numbers (2, 3 and 4) come from?” Every data source protected by DPM gets a datasource sequence number, which can be found in the table tbl_RM_RecoverableObject. Whatever sequence number the SharePoint database gets in that table is reflected in the rocatalog file name.

We can see this by running the following:

select DatasourceSequenceNumber, LogicalPath, ComponentName from tbl_RM_RecoverableObject order by RecoverableObjectId

That gives us the following:

 

DatasourceSequenceNumber   LogicalPath              ComponentName

-----------------------------------------------------------------

1                          Sharepoint Farm          SPSQL\SharePoint_Config

2                          WILSON-SQL               SharePoint_AdminContent_abda792e-1e17-4119-99b1-34f6fc61a6c1

3                          WILSON-SQL               WSS_Content

4                          WILSON-SQL               SharePoint_Config

If there was a change within a content database (like a new site collection, new site, new item, removed site, removed item, etc.), the rocatalog file will contain that change. Here is a snippet from 3.rocatalog:

 

FullCatalog

SiteCollection*http://wilson-sp/sites/Blog/*1c29c4ec-a676-41c4-9737-9988378fd87a

Site*http://wilson-sp/sites/Blog/documents3/*43f6e062-b5c5-4f9e-a4ac-4a5ebdaa92f3

List*http://wilson-sp/sites/Blog/documents3/_catalogs/*6a368c11-20da-4630-8cff-5a7c03c92a15

List*http://wilson-sp/sites/Blog/documents3/_catalogs/masterpage/*3687946b-17ba-4918-966f-ffe447fcaa09

List*http://wilson-sp/sites/Blog/documents3/_catalogs/masterpage/Forms/*63077659-a5e3-4e45-90bf-0ee8f4db686e

List*http://wilson-sp/sites/Blog/documents3/_catalogs/masterpage/Forms/MasterPage/*b697225e-2d8b-4472-9512-d10982af5a3e

If there is no change, this is what will be in the file (snippet from SharePoint_Config):

 

NoCatalog

If the catalog run was incremental, the file should look like this:

 

IncrementalCatalog

ADD*ListItem*http://wilson-sp/sites/Blog/documents3/Shared Documents/sammui.log*57c11095-c0da-4011-a3a2-3ee786396f63
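The rocatalog format shown above is simple enough to parse mechanically: a header line (FullCatalog, IncrementalCatalog, or NoCatalog) followed by `*`-delimited entries. Here is a rough sketch based only on the samples in this post; the field meanings are inferred from them, not from any specification:

```python
def parse_rocatalog(lines):
    """Split rocatalog content into its header and entries.

    Full-catalog entries look like   Type*Url*Guid
    Incremental entries look like    Op*Type*Url*Guid (e.g. ADD)
    """
    header, *rest = [line.strip() for line in lines if line.strip()]
    entries = []
    for line in rest:
        parts = line.split("*")
        if header == "IncrementalCatalog":
            op, rotype, url, guid = parts
        else:
            op = None
            rotype, url, guid = parts
        entries.append({"op": op, "type": rotype, "url": url, "guid": guid})
    return header, entries

header, entries = parse_rocatalog([
    "IncrementalCatalog",
    "ADD*ListItem*http://wilson-sp/sites/Blog/documents3/Shared Documents/sammui.log*57c11095-c0da-4011-a3a2-3ee786396f63",
])
```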

After the catalog job is complete, all rocatalog files are removed. This is why it can be tricky to troubleshoot SharePoint catalog issues: we don’t necessarily have a file to look into. The only way to see catalog content is via DPM verbose tracing.

2. Folder SPCatalogDump is created while the SharePoint catalog is being generated. This folder is created under C:\Program Files\Microsoft Data Protection Manager\DPM\Temp on the SharePoint WFE server.

Folder SPCatalogDump is only used for incremental catalogs, and for every full/incremental backup a folder called SPXXXXXXXXXXXXXXXXXX will be created and left behind (its contents will be deleted, though).

clip_image011

When an incremental backup takes place and there is a change in it, two files will be created in that location.

clip_image012

The rocatalog file is similar to the one mentioned earlier; however, it will only include the changed items:

 

IncrementalCatalog

ADD*ListItem*http://wilson-sp/sites/Blog/documents3/Shared Documents/DownloadCenter.xml*c2a7341b-3d35-4d48-a32e-125cd4e485c9

The spdump file contains a map for the newly added SharePoint recoverable item (I added the document DownloadCenter.xml to my SharePoint site):

 

I:1c29c4ec-a676-41c4-9737-9988378fd87a:43f6e062-b5c5-4f9e-a4ac-4a5ebdaa92f3:c2a7341b-3d35-4d48-a32e-125cd4e485c9:Add:1;0;9d9844a7-7a9b-419c-ab5d-16fae7b346d6;634728378915800000;916

A:c2a7341b-3d35-4d48-a32e-125cd4e485c9

Now even though this information looks like gibberish, it does make sense. Here is the breakdown:

 

select RecoverableObjectId,
       ComponentName,
       ComponentType,
       Caption
from tbl_RM_SharePointRecoverableObject
where Caption in
('1c29c4ec-a676-41c4-9737-9988378fd87a',
 '43f6e062-b5c5-4f9e-a4ac-4a5ebdaa92f3',
 'c2a7341b-3d35-4d48-a32e-125cd4e485c9')

Output:

NOTE: In this output, the ComponentType column gives us information about the SharePoint content. These are the component types in SharePoint:

SiteCollection: The ‘root’ site
Site: A site created underneath a site collection
List: A path within a Site Collection/Site
ListItem: A file within a Site Collection/Site

RecoverableObjectId   ComponentName                                                                ComponentType    Caption
--------------------------------------------------------------------------------------------------------------------------------------------------------
1                     http://wilson-sp/sites/Blog/                                                 SiteCollection   1c29c4ec-a676-41c4-9737-9988378fd87a
2                     http://wilson-sp/sites/Blog/documents3/                                      Site             43f6e062-b5c5-4f9e-a4ac-4a5ebdaa92f3
458                   http://wilson-sp/sites/Blog/documents3/Shared Documents/DownloadCenter.xml   ListItem         c2a7341b-3d35-4d48-a32e-125cd4e485c9

Other information we get from spdump is the Token. SharePoint creates a Token for every change that occurs in the farm.
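Putting the pieces together, an spdump mapping line can be split mechanically: the GUID fields match the Caption values queried above, and the trailing field is the semicolon-delimited token. A sketch, with field names inferred from this example rather than any specification:

```python
def parse_spdump_line(line):
    """Interpret one spdump mapping line.

    'I:' lines carry sitecollection:site:item GUIDs, an operation, and a token.
    'A:' lines carry just an item GUID.
    """
    kind, rest = line.split(":", 1)
    if kind == "I":
        sitecol, site, item, op, token = rest.split(":", 4)
        return {"kind": "map", "sitecollection": sitecol, "site": site,
                "item": item, "op": op, "token": token}
    if kind == "A":
        return {"kind": "item", "item": rest}
    raise ValueError("unrecognized spdump line: " + line)

entry = parse_spdump_line(
    "I:1c29c4ec-a676-41c4-9737-9988378fd87a"
    ":43f6e062-b5c5-4f9e-a4ac-4a5ebdaa92f3"
    ":c2a7341b-3d35-4d48-a32e-125cd4e485c9"
    ":Add:1;0;9d9844a7-7a9b-419c-ab5d-16fae7b346d6;634728378915800000;916"
)
```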

To identify these items in DPM, go to the Recovery tab and highlight a SharePoint farm. On the right-hand side you will see the content databases:

clip_image013

When you double-click a content database, if there was no problem creating the catalog and there is at least one site collection defined, you will see something like this. In this case, content database WSS_Content has one site collection:

clip_image014

If you double-click a site collection you can see Lists, ListItems and Sites, if they exist. The yellow icon on the left means the object is a List or ListItem; if the icon is four little colorful people, it is a Site.

clip_image015

And if there are ListItems within Lists or Sites, you can double-click them to go down into the tree.

You can get that same information by running this T-SQL query:

 

select * from tbl_RM_SharePointRecoverableObject

Full SharePoint Catalog runs only once: The first time a Catalog is generated against a content database or after a content database is restored.

If you ever need to run a full SharePoint Catalog again, you can run the SQL query below.

 

UPDATE tbl_PRM_DatasourceConfigInfo
SET ROCatalogCheckPoint = NULL
WHERE DatasourceId IN
(
SELECT DatasourceId FROM tbl_im_Datasource
WHERE AppId = 'C2F52614-5E53-4858-A589-38EEB25C6184'
OR AppId = 'DA452614-4858-5E53-A512-38AAB25C61AD'
)

NOTE: The T-SQL query above resets the SharePoint catalog for ALL SharePoint farms protected by this DPM server. If you want to run it against a specific farm, you will need to find the DatasourceId for the desired farm.

There are a few different issues that you might run into after a scheduled backup/Catalog is completed.

 

  • A content database that was being protected by DPM was removed from the farm. This is an easy one to fix: stop SharePoint protection (retaining data) and protect it again.
  • Something is blocking metadata enumeration from occurring. If so refer to metadata troubleshooting mentioned previously.
  • The SharePoint catalog job doesn’t complete, or it completes with warnings. When a catalog doesn’t complete successfully, you might not be able to drill down through a content database to a site collection/site/item, etc. This doesn’t mean that the data is missing from the backup; it just means that you can’t see it from the DPM UI.

In the screenshot below we can’t go further than the content database; double-clicking any database on the right-hand side won’t do a thing. If you have successful backups but failing catalogs and you need to restore an item, the solution is to restore the whole content database, attach it to SQL, and then use SharePoint tools to export/import the desired data.

clip_image017

That should cover the most common problems you’re likely to encounter when backing up SharePoint with Data Protection Manager. Next time we’ll take a look at some of the issues you might see when restoring SharePoint.

Chris Butcher | Senior Support Escalation Engineer | Microsoft GBS Management and Security Division
Wilson Souza | Senior Support Escalation Engineer | Microsoft GBS Management and Security Division

Get the latest System Center news on Facebook and Twitter:

clip_image001clip_image002

System Center All Up: http://blogs.technet.com/b/systemcenter/

Configuration Manager Support Team blog: http://blogs.technet.com/configurationmgr/
Data Protection Manager Team blog: http://blogs.technet.com/dpm/
Orchestrator Support Team blog: http://blogs.technet.com/b/orchestrator/
Operations Manager Team blog: http://blogs.technet.com/momteam/
Service Manager Team blog: http://blogs.technet.com/b/servicemanager
Virtual Machine Manager Team blog: http://blogs.technet.com/scvmm

Microsoft Intune: http://blogs.technet.com/b/microsoftintune/
WSUS Support Team blog: http://blogs.technet.com/sus/
The RMS blog: http://blogs.technet.com/b/rms/
App-V Team blog: http://blogs.technet.com/appv/
MED-V Team blog: http://blogs.technet.com/medv/
Server App-V Team blog: http://blogs.technet.com/b/serverappv
The Surface Team blog: http://blogs.technet.com/b/surface/
The Application Proxy blog: http://blogs.technet.com/b/applicationproxyblog/

The Forefront Endpoint Protection blog : http://blogs.technet.com/b/clientsecurity/
The Forefront Identity Manager blog : http://blogs.msdn.com/b/ms-identity-support/
The Forefront TMG blog: http://blogs.technet.com/b/isablog/
The Forefront UAG blog: http://blogs.technet.com/b/edgeaccessblog/


Restoring SharePoint with Data Protection Manager


~ Chris Butcher | Senior Support Escalation Engineer

Hi folks, Chris Butcher here again with the final part in my series on protecting SharePoint with System Center 2012 Data Protection Manager. In part 1 I went through the process of enabling protection and examined issues you might encounter when configuring SharePoint protection in Data Protection Manager. If you missed part 1 you can find it here:

Configuring SharePoint protection in Data Protection Manager and troubleshooting related issues

In part 2 we took a look at the backup process and some of the more common issues you may see when backing up SharePoint. That installment can be found here:

Backing up SharePoint with Data Protection Manager and troubleshooting related issues

Today with part 3, we’re going to talk about restoring SharePoint and a few of the more common issues you may run into during that process.

Restoring a Content Database

This will restore the selected content database as well as SharePoint Farm configuration. There are three things you should be aware of when restoring a content database:

- When the SharePoint administrator deletes a content database, all references to the database are removed from the farm, but the database itself may remain attached to SQL, assuming the administrator didn’t also select the option to delete the content database.

- Every time a content database is restored, the SharePoint Data source will become inconsistent.

- Restoring a content database won’t bring all of the settings back to the SharePoint farm. The SharePoint Administrator will still need to use the SharePoint tools to attach that database back to the farm.

Scenario

Let’s say that we have a SharePoint admin who goes to SharePoint 2010 Central Administration -> Manage Content Databases and removes the WSS_Content database by mistake. The next thing he does is engage the DPM administrator to restore the affected content database. Once the restore is complete, the SharePoint administrator reports that the restored content database isn’t showing up in the content database list in SharePoint 2010 Central Administration.

In this scenario we have two options to get the restored content database to show back up:

Option 1: Restore the SharePoint configuration database as well. Note that if any changes were done to the farm after the backup, those changes will be lost.

Option 2: Run the following SharePoint PowerShell command:

Mount-SPContentDatabase "WSS_Content" -DatabaseServer SPSQL -WebApplication <parameter>

In my case the command line would be as follows:

Mount-SPContentDatabase "WSS_Content" -DatabaseServer SPSQL -WebApplication http://wilson-sp

In your environment you can get the WebApplication parameter by going to Web Applications under SharePoint 2010 Central Administration, provided that the Web Application wasn’t also deleted.

 

Restoring a Site Collection/Site/List/ListItem to the original location

This is always a two-step process. DPM restores the content database containing the selected site collection to a temporary SQL Server instance, and the restored database gets a different name than the original. This allows you to use the same SQL server that holds the SharePoint content databases. Once the database is restored, the second part of the process extracts the Site Collection/Site/List/ListItem you selected and imports it back into the production database.

Please note that any of these steps will fail if you select a share as the staging area. Microsoft does not officially support this, but the UI will allow you to select one. The restore will ultimately fail, and the error message isn’t clear about why. In DPM 2012, the share shows up as well, but it will fail in the same way if it’s selected. Below are the two screens in the Recovery Wizard that give you the option to select a share. DON’T USE THEM!

clip_image001

clip_image002

NOTE: In DPM 2012, if a share is selected for the file location (the screenshot just above), the DPMRA on the SQL server will crash. The crash prevents the DPMRA from detaching the restored database. In DPM 2010, DPMRA will also crash but the database will be removed.

Scenario 1

The first scenario is when the user doesn’t know that a share should not be used and thus performs several restores, selecting a share every time. This can cause the SQL server to run out of disk space due to those restored databases.

clip_image003

clip_image005

clip_image006

Below is a snippet from DPMRACurr.errlog on the SharePoint Web Front End (WFE) server:

0F38   0EE8   05/18  05:40:27.018  18     fsutils.cpp(3723)                 WARNING       Failed: Hr: = [0x80070002] : G: lVal : HRESULT_FROM_WIN32(dwError)
0F38   0EE8   05/18  05:40:27.018  18     fsutils.cpp(2173)                 WARNING       Failed: Hr: = [0x80990a52] : Invalid filespec:Datasources\PSExchangeDatasourceConfig.xml
0F38   0EE8   05/18  05:40:27.018  61     inquirysubtask.cpp(1031)   [0000000000436030]              NORMAL <--CInquirySubTask::ExecuteInquiry
0F38   134C   05/18  05:40:27.018  61     inquirysubtask.cpp(458)    [0000000000436030]       E56D1666-1101-465E-ACC5-B25DE3696D9F     NORMAL Sending final response with 10 records
0F38   134C   05/18  05:40:27.112  61     inquirysubtask.cpp(990)    [0000000000436030]       E56D1666-1101-465E-ACC5-B25DE3696D9F     NORMAL CInquirySubTask::Inquiry finished with status [0000000000]
0F38   134C   05/18  05:40:27.175  03     workitem.cpp(272)    [00000000004364B0]       E56D1666-1101-465E-ACC5-B25DE3696D9F     ACTIVITY      WorkItem stopping
0F38   134C   05/18  05:40:27.175  31     vsssnapshotrequestor.cpp(94)       [0000000000436358]   E56D1666-1101-465E-ACC5-B25DE3696D9F     NORMAL       CVssSnapshotRequestor::~CVssSnapshotRequestor [0000000000436358]
0F38   134C   05/18  05:40:27.175  31     vsssnapshotrequestor.cpp(1763)       [0000000000436358]   E56D1666-1101-465E-ACC5-B25DE3696D9F     NORMAL       CVssSnapshotRequestor::CleanUp [0000000000436358]
0F38   134C   05/18  05:40:27.175  31     vssbaserequestor.cpp(80)   [0000000000436358]       E56D1666-1101-465E-ACC5-B25DE3696D9F     NORMAL CVssBaseRequestor: destructor [0000000000436358]
0F38   134C   05/18  05:45:12.427  03     workitem.cpp(86)     [0000000000449A90]       037F22CA-C1CF-476B-9E4E-6D58C1D2A248     ACTIVITY      Idle Timer created with timeout = 390000
0F38   134C   05/18  05:45:13.333  31     vadatasourcestate.cpp(729)        037F22CA-C1CF-476B-9E4E-6D58C1D2A248 WARNING       Failed: Hr: = [0x8007007e] GetModuleHandle failed for Library [WSSWriterHelperPlugin], will try LoadLibrary
0000   134C   05/18  05:45:13.349  00     agentutils.hpp(68)         037F22CA-C1CF-476B-9E4E-6D58C1D2A248     WARNING       Failed: Hr: = [0x80070002] : F: lVal : r.GetValue(pszKey, pT)
0000   134C   05/18  05:45:13.349  00     fsutils.cpp(4426)          037F22CA-C1CF-476B-9E4E-6D58C1D2A248     NORMAL CClientReadThrottler::InitializeWaitForClientRead Failed to read sleep time from registry [hr = 0x80070002]. Setting default [50 ms].
0F38   134C   05/18  05:45:13.349  31     dllmain.cpp(38)            037F22CA-C1CF-476B-9E4E-6D58C1D2A248     NORMAL WSSWriterHelperPlugin: DLL_PROCESS_ATTACH
0F38   134C   05/18  05:45:13.349  31     wss4writerhelperplugin.cpp(577)       [00000000003CF130]   037F22CA-C1CF-476B-9E4E-6D58C1D2A248     NORMAL Export And Import Operation to be performed in PostRestore WSS Specific Operations
0F38   134C   05/18  05:45:13.364  18     fsutils.cpp(3723)          037F22CA-C1CF-476B-9E4E-6D58C1D2A248     WARNING       Failed: Hr: = [0x80070003] : G: lVal : HRESULT_FROM_WIN32(dwError)
0F38   134C   05/18  05:45:13.380  31     wss4writerhelperplugin.cpp(246)       [00000000003CF130]   037F22CA-C1CF-476B-9E4E-6D58C1D2A248     NORMAL Successfully created Export Files Staging Directory \\wilson-sp.wsouza.local\MTATempStore$\DPM_40cb1167_d579_471c_91e4_6cbf3cd44193\cmp
0F38   134C   05/18  05:46:50.415  18     fsutils.cpp(663)           037F22CA-C1CF-476B-9E4E-6D58C1D2A248     WARNING       Failed: Hr: = [0x80070057] : Invalid path:\\wilson-sp.wsouza.local\MTATempStore$\DPM_40cb1167_d579_471c_91e4_6cbf3cd44193\cmp\00000000.dat
0F38   134C   05/18  05:46:50.415  18     fsutils.cpp(698)           037F22CA-C1CF-476B-9E4E-6D58C1D2A248     WARNING       Failed: Hr: = [0x80070057] : GetVolumePrefixLength failed for \\wilson-sp.wsouza.local\MTATempStore$\DPM_40cb1167_d579_471c_91e4_6cbf3cd44193\cmp\00000000.dat
0F38   134C   05/18  05:46:50.493  22     watsonintegration.cpp(73)         037F22CA-C1CF-476B-9E4E-6D58C1D2A248 NORMAL Inside Watson Handler
0F38   134C   05/18  05:46:50.509  22     watsonintegration.cpp(116)        037F22CA-C1CF-476B-9E4E-6D58C1D2A248 CRITICAL      Raising Watson for process

Here’s what you see on the DPM 2010 Administrator Console/Monitoring tab:

clip_image008

Error Description text: The recovery jobs for SharePoint Farm SharePoint Farm \SPSQL\SharePoint_Config that started at Friday, May 18, 2012 12:45:05 AM, with the destination of Wilson-sp.wsouza.local, have completed. Most or all jobs failed to recover the request data. (ID 3111)

The protection agent on Wilson-sp.wsouza.local was temporarily unable to respond because it was in an unexpected state. (ID 60 Details: Internal error code: 0x809909B0)

Scenario 2

The second scenario is where a site collection or site is deleted from the farm, and right away the SharePoint administrator requests that you do a Site Collection/Site restore, but the restore fails.

The reason is that when SharePoint exports and imports the data, it needs to import into an existing Site Collection/Site. If the Site Collection/Site wasn’t recreated before you started the restore from DPM, the failure is expected.

So let’s say that the SharePoint administrator removed the Site Collection as shown below.

clip_image010

Once it was removed he noticed that he actually meant to remove a different site collection than the one he selected. The DPM administrator then gets a call from the SharePoint administrator saying that he needs the site collection named http://wilson-sp/sites/blog restored to its original location. The DPM admin then attempts to recover it.

clip_image011

Unfortunately, after a few minutes the restore fails on the second step as shown below.

clip_image013

Note that the first step was to restore the SQL database. In Monitoring/Jobs, the SQL DB restore job is called Disk recovery and as you can see here, that step was successful:

clip_image015

It’s the second step (SharePoint export and import task) that we’re interested in and the error message here is pretty clear:

clip_image017

Error Description text: DPM was unable to import the item http://wilson-sp/sites/Blog/ to the protected farm. Exception Mesaage = The site http://wilson-sp/sites/Blog/ could not be found in the Web application SPWebApplication Name=SharePoint – 80.. (ID 32005 Details: The system cannot find the file specified (0x80070002)

Here is a snippet from WssCmdletsWrapperCurr.errlog:

0770 0A9C 05/18 06:14:19.774 31 wsscmdletswrapperfactory.cpp(235) ACTIVITY Principal name HOST/WILSON-SP.WSOUZA.LOCAL@WSOUZA.LOCAL
0770 12E8 05/18 06:14:41.478 31 WSSCmdlets.cs(1362) NORMAL Successfully added UnAttachedContentDatabase [WILSON-SQL\DPM_7a6721a8_22e0_48c0_ae67_f0154c517bab].
0770 12E8 05/18 06:14:41.541 31 WSSCmdlets.cs(418) NORMAL Triggering Export of Source Url =
http://wilson-sp/sites/Blog/ to File = C:\temp\DPM_7a6721a8_22e0_48c0_ae67_f0154c517bab\cmp\
0770 12E8 05/18 06:14:41.775 31 WssExportHelper.cs(125) NORMAL Export Parameters:- SourceUrl = [
http://wilson-sp/sites/Blog], ExportFilePath = [C:\temp\DPM_7a6721a8_22e0_48c0_ae67_f0154c517bab\cmp], ExportFileName = [], RoType = [SiteCollection]
0770 12E8 05/18 06:14:41.775 31 WssExportHelper.cs(131) NORMAL Export Parameters:- Unattached Database :: [WILSON-SQL\DPM_7a6721a8_22e0_48c0_ae67_f0154c517bab]
0770 12E8 05/18 06:14:41.916 31 WssExportHelper.cs(278) NORMAL Source url : [
http://wilson-sp/sites/Blog] , HostHeaderIsSiteName = False
0770 12E8 05/18 06:14:41.931 31 WSSObjectModelHelper.cs(114) NORMAL Modified Source Url =
http://wilson-sp:44573/sites/Blog
0770 12E8 05/18 06:14:42.291 31 WssExportHelper.cs(303) NORMAL Triggering Export of SiteCollection =
http://wilson-sp:44573/sites/Blog
0770 12E8 05/18 06:15:21.464 31 WSSCmdlets.cs(444) NORMAL Successfully exported Source Url =
http://wilson-sp/sites/Blog to File = C:\temp\DPM_7a6721a8_22e0_48c0_ae67_f0154c517bab\cmp\
0770 12E8 05/18 06:15:21.464 31 WSSCmdlets.cs(502) NORMAL Triggering Import of Target Url =
http://wilson-sp/sites/Blog/ from File = C:\temp\DPM_7a6721a8_22e0_48c0_ae67_f0154c517bab\cmp\
0770 12E8 05/18 06:15:21.464 31 WssImportHelper.cs(157) NORMAL Import Parameters:- TargetUrl = [
http://wilson-sp/sites/Blog/], ImportFilePath = [C:\temp\DPM_7a6721a8_22e0_48c0_ae67_f0154c517bab\cmp], ImportFileName = [], ImportSecurity = [False], IsAlternateUrl = [False], roType = [SiteCollection]
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(518) WARNING Caught IOException while trying to import Url [
http://wilson-sp/sites/Blog/] from File [C:\temp\DPM_7a6721a8_22e0_48c0_ae67_f0154c517bab\cmp\], will retry
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1281) WARNING --------------------------------------------------
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1282) WARNING Exception Message =
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1282) WARNING The site
http://wilson-sp/sites/Blog/ could not be found in the Web application SPWebApplication Name=SharePoint - 80.
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1283) WARNING Exception Stack =
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1283) WARNING at Microsoft.SharePoint.SPSite..ctor(SPFarm farm, Uri requestUri, Boolean contextSite, SPUserToken userToken)
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1283) WARNING at Microsoft.SharePoint.SPSite..ctor(String requestUrl)
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1283) WARNING at Microsoft.SharePoint.Deployment.SPImport.InitializeImport()
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1283) WARNING at Microsoft.SharePoint.Deployment.SPImport.Run()
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1283) WARNING at WSSCmdlets.CWssImportHelper.ImportUrlDelegate()
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1283) WARNING at Microsoft.SharePoint.SPSecurity.<>c__DisplayClass4.<RunWithElevatedPrivileges>b__2()
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1283) WARNING at Microsoft.SharePoint.Utilities.SecurityContext.RunAsProcess(CodeToRunElevated secureCode)
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1283) WARNING at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(WaitCallback secureCode, Object param)
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1283) WARNING at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(CodeToRunElevated secureCode)
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1283) WARNING at WSSCmdlets.CWSSCmdlets.ImportUrl(String targetUrl, String importPath, String importFileName, Boolean importSecurity, Boolean isAlternateURLRecovery, String roType, Int32& hr, String& exceptionMessage)
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1284) WARNING Inner Exception =
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1285) WARNING Exception String =
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1285) WARNING System.IO.FileNotFoundException: The site
http://wilson-sp/sites/Blog/ could not be found in the Web application SPWebApplication Name=SharePoint - 80.
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1285) WARNING at Microsoft.SharePoint.SPSite..ctor(SPFarm farm, Uri requestUri, Boolean contextSite, SPUserToken userToken)
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1285) WARNING at Microsoft.SharePoint.SPSite..ctor(String requestUrl)
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1285) WARNING at Microsoft.SharePoint.Deployment.SPImport.InitializeImport()
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1285) WARNING at Microsoft.SharePoint.Deployment.SPImport.Run()
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1285) WARNING at WSSCmdlets.CWssImportHelper.ImportUrlDelegate()
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1285) WARNING at Microsoft.SharePoint.SPSecurity.<>c__DisplayClass4.<RunWithElevatedPrivileges>b__2()
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1285) WARNING at Microsoft.SharePoint.Utilities.SecurityContext.RunAsProcess(CodeToRunElevated secureCode)
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1285) WARNING at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(WaitCallback secureCode, Object param)
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1285) WARNING at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(CodeToRunElevated secureCode)
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1285) WARNING at WSSCmdlets.CWSSCmdlets.ImportUrl(String targetUrl, String importPath, String importFileName, Boolean importSecurity, Boolean isAlternateURLRecovery, String roType, Int32& hr, String& exceptionMessage)
0770 12E8 05/18 06:15:21.745 31 WSSCmdlets.cs(1286) WARNING

So at this point you call the SharePoint administrator and ask him to recreate the site collection (blank is fine). The SharePoint admin creates the site collection as shown below and then tells you that you can proceed with the restore.

clip_image019

However, even after doing this the restore fails again and the error message doesn’t help at all.

clip_image021

Monitoring/Jobs gives you a little better error message:

clip_image023

Error Description text: DPM was unable to import the item http://wilson-sp/sites/Blog/ to the protected farm. Exception Message = Cannot import site. The exported site is based on the template BLOG#0 but the destination site is based on the template STS#1. You can import sites only into sites that are based on the same template as the exported site.. (ID 32005 Details: Unknown error (0x80131600) (0x80131600)

Below is a snippet from WssCmdletsWrapperCurr.errlog:

1310 11CC 05/18 06:31:42.803 31 wsscmdletswrapperfactory.cpp(235) ACTIVITY Principal name HOST/WILSON-SP.WSOUZA.LOCAL@WSOUZA.LOCAL
1310 0B4C 05/18 06:31:56.178 31 WSSCmdlets.cs(1362) NORMAL Successfully added UnAttachedContentDatabase [WILSON-SQL\DPM_0dfb31ba_a9c0_454a_84da_0cd87cd5fd2a].
1310 0B4C 05/18 06:31:56.194 31 WSSCmdlets.cs(418) NORMAL Triggering Export of Source Url =
http://wilson-sp/sites/Blog/ to File = C:\temp\DPM_0dfb31ba_a9c0_454a_84da_0cd87cd5fd2a\cmp\
1310 0B4C 05/18 06:31:56.335 31 WssExportHelper.cs(125) NORMAL Export Parameters:- SourceUrl = [
http://wilson-sp/sites/Blog], ExportFilePath = [C:\temp\DPM_0dfb31ba_a9c0_454a_84da_0cd87cd5fd2a\cmp], ExportFileName = [], RoType = [SiteCollection]
1310 0B4C 05/18 06:31:56.335 31 WssExportHelper.cs(131) NORMAL Export Parameters:- Unattached Database :: [WILSON-SQL\DPM_0dfb31ba_a9c0_454a_84da_0cd87cd5fd2a]
1310 0B4C 05/18 06:31:56.413 31 WssExportHelper.cs(278) NORMAL Source url : [
http://wilson-sp/sites/Blog] , HostHeaderIsSiteName = False
1310 0B4C 05/18 06:31:56.428 31 WSSObjectModelHelper.cs(114) NORMAL Modified Source Url =
http://wilson-sp:44573/sites/Blog
1310 0B4C 05/18 06:31:56.710 31 WssExportHelper.cs(303) NORMAL Triggering Export of SiteCollection =
http://wilson-sp:44573/sites/Blog
1310 0B4C 05/18 06:32:23.811 31 WSSCmdlets.cs(444) NORMAL Successfully exported Source Url =
http://wilson-sp/sites/Blog to File = C:\temp\DPM_0dfb31ba_a9c0_454a_84da_0cd87cd5fd2a\cmp\
1310 0B4C 05/18 06:32:23.811 31 WSSCmdlets.cs(502) NORMAL Triggering Import of Target Url =
http://wilson-sp/sites/Blog/ from File = C:\temp\DPM_0dfb31ba_a9c0_454a_84da_0cd87cd5fd2a\cmp\
1310 0B4C 05/18 06:32:23.811 31 WssImportHelper.cs(157) NORMAL Import Parameters:- TargetUrl = [
http://wilson-sp/sites/Blog/], ImportFilePath = [C:\temp\DPM_0dfb31ba_a9c0_454a_84da_0cd87cd5fd2a\cmp], ImportFileName = [], ImportSecurity = [False], IsAlternateUrl = [False], roType = [SiteCollection]
1310 0B4C 05/18 06:32:24.608 31 WSSCmdlets.cs(540) WARNING Caught Exception while trying to import Url
http://wilson-sp/sites/Blog/ from File C:\temp\DPM_0dfb31ba_a9c0_454a_84da_0cd87cd5fd2a\cmp\
1310 0B4C 05/18 06:32:24.608 31 WSSCmdlets.cs(1281) WARNING --------------------------------------------------
1310 0B4C 05/18 06:32:24.608 31 WSSCmdlets.cs(1282) WARNING Exception Message =
1310 0B4C 05/18 06:32:24.608 31 WSSCmdlets.cs(1282) WARNING Cannot import site. The exported site is based on the template BLOG#0 but the destination site is based on the template STS#1. You can import sites only into sites that are based on same template as the exported site.

1310 0B4C 05/18 06:32:24.608 31 WSSCmdlets.cs(1283) WARNING Exception Stack =
1310 0B4C 05/18 06:32:24.608 31 WSSCmdlets.cs(1283) WARNING at Microsoft.SharePoint.Deployment.WebSerializer.IsWebTemplateCompatible(String sourceWebTemplateName, String destinationWebTemplateName)
1310 0B4C 05/18 06:32:24.608 31 WSSCmdlets.cs(1283) WARNING at Microsoft.SharePoint.Deployment.WebSerializer.SetObjectData(Object obj, SerializationInfo info, StreamingContext context, ISurrogateSelector selector)
1310 0B4C 05/18 06:32:24.608 31 WSSCmdlets.cs(1283) WARNING at Microsoft.SharePoint.Deployment.XmlFormatter.CallSetObjectData(Object obj, SerializationInfo objectData, ISerializationSurrogate surrogate, ISurrogateSelector selector)
1310 0B4C 05/18 06:32:24.608 31 WSSCmdlets.cs(1283) WARNING at Microsoft.SharePoint.Deployment.XmlFormatter.ParseObject(Type objectType, Boolean isChildObject)
1310 0B4C 05/18 06:32:24.608 31 WSSCmdlets.cs(1283) WARNING at Microsoft.SharePoint.Deployment.XmlFormatter.DeserializeObject(Type objectType, Boolean isChildObject, DeploymentObject envelope)

So what’s the problem? As we can see, the template used by the SharePoint administrator for the blank site collection doesn’t match the template used for the deleted Site Collection. In this case, the template used for the Site Collection we are trying to restore is BLOG#0, which clearly indicates that the template was BLOG. If your template name isn’t as obvious as this one, you can get a list of templates by running this command:

Get-SPWebTemplate | Sort-Object Name | ft -Auto
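If you prefer to recreate the blank site collection from the SharePoint Management Shell instead of Central Administration, a sketch might look like the following. The URL is the one from this example; the owner alias is a placeholder you would replace with your own farm account.

```powershell
# Recreate the blank site collection with the SAME template as the exported site (BLOG#0 here).
# -OwnerAlias is a placeholder account for this example.
New-SPSite -Url "http://wilson-sp/sites/Blog" -Template "BLOG#0" -OwnerAlias "CONTOSO\spadmin"
```

Once the empty site collection exists with the matching template, the DPM restore can be retried.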

So with that in mind, this is how the Site Collection should be recreated in our example:

clip_image025

 

Other Issues

There are also a few other known issues when restoring SharePoint Site Collection/Sites that you’ll want to be aware of:

1. If there is a space in a site name, protection will work but the restore of the site will not.

2. If a site is renamed and you later need to restore it, the restore will fail. This happens because once the site is renamed, DPM treats the site as an Item. During the restore, DPM will complain that an Item can’t be restored to a site since they both have the same path.

 

Restoring Items Outside of DPM

Situations may arise where an item needs to be restored but using the DPM process will not work. For example, maybe there is a failure in the DPM restore process. Or maybe backups were done but catalogs failed and thus items are not enumerated in the DPM UI.

In these types of situations, you can use DPM to restore the content database to an alternate SQL server, and then use SharePoint tools to restore a given item to a network location. The steps for this are below:

1. From DPM, select the content database that holds the item you are trying to restore, and restore it to a SQL server. For example, here we’ll restore the file DownloadCenter.xml:

clip_image026

2. The item above is on content database WSS_Content, so we’ll right-click that content database and select Recover.

clip_image027

3. Click Next on the first screen of the recovery wizard (Review Recovery Selection).

clip_image029

4. On Select Recovery Type, select Recover to any SQL instance and click Next.

clip_image031

5. Select a SQL server where you want to restore the content database.

NOTE If you select the SQL Server\Instance where the content database you are restoring exists, give an alternate name for the database as well as a different path for the MDF and LDF files.

In our example, I am selecting the same SQL server\Instance name where content database WSS_Content exists, so I will use an alternate name and database path for the MDF and LDF files.

clip_image032

6. Complete the Recovery Wizard accepting all defaults.

7. Once the restore is complete, go to the SQL server\instance where the database was restored and verify that the database is there.

clip_image033

Now navigate to the SharePoint front end server for the remaining steps.

8. From Central Administration, click on Backup and Restore.

clip_image034

9. Click on Recover data from an unattached content database.

clip_image035

10. Enter the database server, database name, select Browse Content and then click Next.

clip_image037

11. Select the Site Collection, Site and List for the item you want to restore, then select Export site or list and then click Next.

clip_image039

12. Select the UNC path to store the exported item and click Start Export.

NOTE The path needs to be an existing share where the SharePoint admin has permission to write the file.

clip_image041

13. Once export is complete, check on the share/folder for the exported file.

This concludes our series on protecting SharePoint with System Center 2012 Data Protection Manager. Chances are you’ll never encounter any problems protecting your SharePoint farms but if you do, hopefully the information here will help you get past them.

Chris Butcher | Senior Support Escalation Engineer | Microsoft GBS Management and Security Division
Wilson Souza | Senior Support Escalation Engineer | Microsoft GBS Management and Security Division

Get the latest System Center news on Facebook and Twitter:

clip_image001clip_image002

System Center All Up: http://blogs.technet.com/b/systemcenter/

Configuration Manager Support Team blog: http://blogs.technet.com/configurationmgr/
Data Protection Manager Team blog: http://blogs.technet.com/dpm/
Orchestrator Support Team blog: http://blogs.technet.com/b/orchestrator/
Operations Manager Team blog: http://blogs.technet.com/momteam/
Service Manager Team blog: http://blogs.technet.com/b/servicemanager
Virtual Machine Manager Team blog: http://blogs.technet.com/scvmm

Microsoft Intune: http://blogs.technet.com/b/microsoftintune/
WSUS Support Team blog: http://blogs.technet.com/sus/
The RMS blog: http://blogs.technet.com/b/rms/
App-V Team blog: http://blogs.technet.com/appv/
MED-V Team blog: http://blogs.technet.com/medv/
Server App-V Team blog: http://blogs.technet.com/b/serverappv
The Surface Team blog: http://blogs.technet.com/b/surface/
The Application Proxy blog: http://blogs.technet.com/b/applicationproxyblog/

The Forefront Endpoint Protection blog : http://blogs.technet.com/b/clientsecurity/
The Forefront Identity Manager blog : http://blogs.msdn.com/b/ms-identity-support/
The Forefront TMG blog: http://blogs.technet.com/b/isablog/
The Forefront UAG blog: http://blogs.technet.com/b/edgeaccessblog/

DPM 2012 R2 SharePoint 2010 SharePoint 2012

Support Tip: DPM System State Protection Fails with error ID: 30229


~ Dwayne Jackson | Senior Support Escalation Engineer

Hi Everyone, Dwayne Jackson here with a quick tip in case you ever run into an issue where System State Protection fails for System Center 2012 Data Protection Manager. Please ensure your symptoms align with the items noted below.

Symptoms

The first symptom you’ll see is that system state protection fails on the DPM server with the following error:

clip_image002

<DPM cannot create a backup because Windows Server Backup (WSB) on the protected computer encountered an error (WSB Event ID: 517, WSB Error Code:  0x12F6790). (ID 30229 Details: Internal error code: 0x80990ED0)>

Also, if you examine the protected server side, the Event ID 517 below is logged in the Application log.

clip_image004

<The backup operation that started at '‎2015‎-‎04‎-‎22T11:46:14.594589400Z' has failed with following error code '0x80780102' (The system writer is not found in the backup.). Please review the event details for a solution, and then rerun the backup operation once the issue is resolved.>

Additionally, on the protected server, Windows Server Backup displays an error similar to this:

clip_image006

clip_image008

You will also notice that if you go to the protected server and run the command vssadmin list writers from an administrative command prompt, System Writer is not displayed in the output.

Cause

If you encounter these symptoms, a possible cause is that Cryptographic Services on the problem server is not running.

Resolution

To remedy this, carry out the steps outlined below.

1) From the problem server, identify whether the Cryptographic Services service is running.

2) Assuming it is not running, attempt to start the service. If the service starts successfully then move on to step 3. If the service fails to start, review the Application Event Log for help identifying why the service failed to start.

clip_image010

3) From the problem server, open an administrative command prompt (Run as Administrator) and run the following command:

vssadmin list writers

The writer named System Writer should now be displayed.

image

4) From the DPM server, attempt to run the System State backup that previously failed. It should now succeed.

For additional information explaining DPM System State Protection, please see the following:

DPM and System State Backup Explained

Dwayne Jackson | Senior Support Escalation Engineer | Microsoft GBS Management and Security Division


DPM 2012 R2


Support Tip: Establishing SQL Server Instance Auto Protection on a Secondary DPM Server


~ Dwayne Jackson | Senior Support Escalation Engineer

Hi Everyone, Dwayne Jackson here again with a quick tip for you in case you ever run into an issue where SQL Server instance auto-protection for Data Protection Manager is not enabled during DPM Secondary Protection. Please ensure your symptoms align with the items noted below.

Scenario

1. Existing DPM Secondary Protection is configured for the SQL Server instance.

2. SQL Server instance auto-protection is working as expected from the Primary DPM server. SQL Server instance auto-protection enables DPM to automatically identify and protect SQL Server databases that are added to instances of SQL Server.

Symptom

From the Secondary DPM Server, newly added databases to the SQL Server Instance are not automatically added to be protected.  

Resolution

If you experience this issue, complete the following steps to enable SQL Server instance auto-protection on the Secondary DPM Server.

1. From the Secondary DPM Server, modify the protection group in question.

2. From the Select Group Members step, expand the appropriate SQL Server.

3. Select the SQL instance (i.e. check the box next to the SQL instance), then right-click it.

4. Click Turn on auto protection in the context menu as shown below.

clip_image002

Here is what it should look like once auto protection has been turned on:

clip_image004

For additional information regarding SQL Server instance auto-protection, please see the following:

Add databases to a SQL Server

Dwayne Jackson | Senior Support Escalation Engineer | Microsoft GBS Management and Security Division


System Center 2012 Data Protection Manager

System Center 2012 R2 Data Protection Manager

DPM 2012 R2

KB: You can't delete a recovery checkpoint for a virtual machine in Data Protection Manager


KB7334333232

After a Data Protection Manager (DPM) backup fails, you are unable to delete the broken recovery checkpoints that Hyper-V created for a virtual machine. When you try to do this, you discover that no delete option is listed for the virtual machine in the Hyper-V Manager console, as shown below.

3059454

For complete details regarding this problem as well as a resolution, please see the following:

KB3059372 - You can't delete a recovery checkpoint for a virtual machine in Data Protection Manager (https://support.microsoft.com/en-us/kb/3059372/)

J.C. Hornbeck | Solution Asset PM | Microsoft GBS Management and Security Division


DPM 2012 R2

An in-depth look at the Registry settings that control Microsoft DPM 2012


~ Mike Jacquet | Senior Support Escalation Engineer

Hello everyone, Mike Jacquet here from the DPM support team at Microsoft. I would like to share some registry settings that you may not be aware of that can alter the behavior of DPM, enable features, eliminate uncommon errors, and help with troubleshooting. Many of these registry settings were introduced and documented in DPM update rollups, on TechNet, or on the blogs; however, unless you are already familiar with these settings they may be hard to discover on your own.

Please note that this is not a comprehensive list of all DPM registry settings. There are many settings that are part of a default DPM installation that are not covered here.

CAUTION Serious problems can occur if you modify the registry incorrectly. These problems could require you to reinstall the operating system. Microsoft cannot guarantee that these problems can be solved. Modify the registry at your own risk. Always make sure that you back up the registry before you modify it, and that you know how to restore the registry if a problem occurs.

Diagnostics

Logging was introduced in DPM 2007 and enhanced in DPM 2010. The ability to adjust the verbosity of the logging is helpful when troubleshooting an issue where normal logging may not have enough empirical information leading up to the error.

You can enable verbose logging using the following entries:

NOTE Binary = MSDPM (for the engine), DPMRA (for the agent), DPMLA (for the library agent), DPMAccessManager (for Access Manager), DpmBackup (for DPM backup), DpmWriter (for the DPM writer), DPMUI (for the DPM MMC console), DPMCLI (for the DPM PowerShell console).

Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager

Value Name: <binary>TraceLogMaxSize
Type: REG_DWORD
Value: Maximum log file size in MB (default is 15 decimal)

Value Name: <binary>TraceLogMaxNum
Type: REG_DWORD
Value: Maximum number of log files retained

Value Name: <binary>TraceLogLevel [use just TraceLogLevel to enable for ALL binaries]
Type: REG_DWORD
Value: 0x43E

Value Name: TraceLogPath
Type: REG_SZ
Value: Full path to log files - see notes below

NOTE The TraceLogPath setting is global across all binaries.

NOTE A service restart is required before TraceLogPath and TraceLogLevel will be used. Delete or rename the TraceLogLevel value to disable verbose logging.

NOTE DPM Log files are located in one of the following locations:

- DPM SERVER 2007/2010 (or if upgraded to DPM 2012) logs are in the C:\Program files\Microsoft DPM\DPM\temp folder.
- DPM SERVER 2012 and SP1 logs are in the C:\Program Files\System Center 2012\DPM\DPM\temp folder.
- DPM SERVER 2012 R2 logs are in the C:\Program Files\System Center 2012 R2\DPM\DPM\temp folder.
- PROTECTED SERVER logs are always in C:\Program files\Microsoft Data Protection Manager\DPM\temp

The DPMUI and DPMCLI error logs will be located in the user’s profile under one of the following locations, depending on the DPM version:

C:\Users\<USERNAME>\AppData\Roaming\Microsoft\<DPM PRODUCT VERSION DIRECTORY> 

- Microsoft System Center 2012 Data Protection Manager
- Microsoft System Center 2012 R2 Data Protection Manager
- Microsoft System Center 2012 Service Pack 1 Data Protection Manager
- Microsoft System Center Data Protection Manager
- Microsoft System Center Data Protection Manager 2010

NOTE DPM will create log files with an extension of <binary>*.errlog.crash should the <binary> service crash. These .crash files are not limited by the <binary>TraceLogMaxNum setting, so it is advisable to monitor the log location and delete the .crash files manually.

More information can be found in this TechNet article.
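Putting the entries above together, a .reg file that enables verbose logging for the MSDPM engine might look like the following sketch. The log level 0x43E comes from the table above; the maximum size (32 MB), maximum count (10 files), and log path are example values I chose for illustration, not defaults.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager]
"MSDPMTraceLogLevel"=dword:0000043e
"MSDPMTraceLogMaxSize"=dword:00000020
"MSDPMTraceLogMaxNum"=dword:0000000a
"TraceLogPath"="C:\\DPMLogs"
```

Remember that a service restart is required before the new TraceLogPath and TraceLogLevel values take effect.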

DPM 2012 Service Pack 1 introduced online Azure backup capability that uses a different agent. You can also enable verbose logging for online backups:

Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup

Value Name: TraceLogLevel [Delete or rename to disable verbose logging]
Type: REG_DWORD
Value: 0x12 (18 decimal)

Value Name: CBEngineTraceLogMaxNumber
Type: REG_DWORD
Value: Maximum number of log files retained

NOTE An obengine service restart is required before TraceLogLevel will be used:

C:\>net stop obengine
C:\>net start obengine

Logs are located under C:\Program Files\Microsoft Azure Recovery Services Agent\Temp
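As a .reg sketch using the values above (the maximum file count of 10 is an example value, not a default):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup]
"TraceLogLevel"=dword:00000012
"CBEngineTraceLogMaxNumber"=dword:0000000a
```

After importing it, restart the obengine service as shown above.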

Agent Communications

These were introduced in DPM 2010 to support client protection in a mixed TCP IPv4 and IPv6 environment. DPM will work in a pure IPv6 environment; however, if DPM has both IPv4 and IPv6 addresses, we expect the protected server (PS) to have IPv4 enabled, as our preferred channel is IPv4. If some of the agents have only IPv6 enabled, then we must have only IPv6 on DPM and on all PSs.

With that said, there is a workaround, but it was not thoroughly tested and is therefore not supported by Microsoft. If you set the registry key below, DPM should work properly in a mixed environment.

Please note that this is not formally supported, but good for testing to see if it helps.

On BOTH the DPM Server and the Protected Server, set the following registry key:

Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\2.0

Value Name: PingBeforeConnect
Type: REG_DWORD
Value: 0x1
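From an elevated command prompt, the value can be set on both machines with a one-liner like this sketch:

```cmd
reg add "HKLM\SOFTWARE\Microsoft\Microsoft Data Protection Manager\2.0" /v PingBeforeConnect /t REG_DWORD /d 1 /f
```

Again, this must be applied on the DPM server and on each protected server.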

Agent communication timeouts

Description: The DPM service was unable to communicate with the protection agent on PS_Server.domain.com. (ID 2019 Details: An existing connection was forcibly closed by the remote host (0x80072746))

NOTE ID Numbers may vary but the error code 0x80072746 is consistent.

This can be caused by very slow network connectivity which causes the backup sender (DPMRA on the Protected Server) to timeout. To address this, add the following on both the DPM Server and the Protected Server(s), then restart the DPMRA service for the change to take effect. Be sure there are no active jobs before restarting the DPMRA services.

Location: HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Data Protection Manager\Agent

Value Name: ConnectionNoActivityTimeoutForNonCCJobs
Type: REG_DWORD
Value: 0x1c20 (7200 decimal)

Value Name: ConnectionNoActivityTimeout
Type: REG_DWORD
Value: 0x1c20 (7200 decimal)
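Both timeout values can be combined into a single .reg sketch, applied on both the DPM server and the protected servers (followed by a DPMRA service restart as described above):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Data Protection Manager\Agent]
"ConnectionNoActivityTimeoutForNonCCJobs"=dword:00001c20
"ConnectionNoActivityTimeout"=dword:00001c20
```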

Description: The DPM service was unable to communicate with the protection agent on Clust-01.Domain.com. (ID 52 Details: The semaphore timeout period has expired (0x80070079))

170359 - How to modify the TCP/IP maximum retransmission time-out (http://support.microsoft.com/kb/170359/EN-US)

Apply on both DPM and Protected servers:

Location: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters

Value Name: TcpMaxDataRetransmissions
Type: REG_DWORD
Value: 10 or more decimal

Application (SQL/ Exchange) Protection

Introduced in DPM 2007 hotfix KB 970867.

DPM performs a Volume Shadow Copy Service (VSS) full backup. Because the application transaction logs are deleted when the DPM backup job is completed, the DPM backup may interfere with other backup methods that are backing up transactional applications such as Microsoft SQL Server or Microsoft Exchange. If you add the value below on the protected server, DPM will perform copy-only backups, which do not truncate log files.

Add the following value on the protected server:

Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\2.0

Value Name: CopyBackupEnabled
Type: REG_DWORD
Value: 0x1
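On the protected server, this can be set from an elevated command prompt with a sketch like:

```cmd
reg add "HKLM\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\2.0" /v CopyBackupEnabled /t REG_DWORD /d 1 /f
```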

NOTE The same can be accomplished for SQL Server by configuring DPM to synchronize right before recovery point as seen in the figure below.

clip_image002

If the selection ‘Just before a recovery point’ is used then incremental backups won’t get scheduled. This option is a way of telling DPM that the user is interested only in express full backups and not incremental backups which truncate the logs.

Introduced in DPM 2010 to support a copy only backup for SharePoint farms that are using log shipping. 

As an example - On a SharePoint farm, you configure SQL to ship its logs to an alternate SQL server and replicate the farm for disaster recovery.  This process truncates the SQL log files and therefore DPM will not need to.

On the DPM server, create the following key:

Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\2.0\CopyBackups

Value Name: SQL_SERVER\SQL_Instance\ConfigDB_Name
Type: REG_DWORD
Value: 0x1

As an example, if your SharePoint farm is using a SQL server named SPSql_01 and the instance is named SP2010, then you would just look for the config database and the REG_DWORD would be similar to:

SPSql_01\SP2010\SharePoint_Config_907565b-d867-43b9-9371-2d9d69c0ecf1
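Using the server, instance, and config database names from this example (substitute your own), the value could be created from an elevated command prompt like this sketch:

```cmd
reg add "HKLM\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\2.0\CopyBackups" /v "SPSql_01\SP2010\SharePoint_Config_907565b-d867-43b9-9371-2d9d69c0ecf1" /t REG_DWORD /d 1 /f
```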

SQL 4200 Express Full Limit Alert

When trying to perform more than 4200 SQL Server Express full backups, DPM may generate the following Alert:

A DPM server can have a maximum of 4200 Express Full backups of SQL Server per week. If you exceed this limit, the DPM server may miss the backup SLA and eventually it may become unresponsive. You currently have #### Express Full backups of SQL Server on Server1.contoso.com (ID: 32630)

This is a proactive Warning Alert raised by DPM to let users know that they have either 1) protected too many SQL data sources belonging to the same protected server, 2) set the Express Full frequency too high, or 3) a combination of both. If the user ignores this alert, DPM should continue to work; however, there can be two possible outcomes. First, if the individual databases are very small and have little churn, the backups will work just fine. If the databases are larger and have a lot of churn, some Express Full backups will fail with errors such as “another backup is going on at the same time”. This will result in missed SLAs for affected databases.

To eliminate the alert at the expense of possible missed backups, add the following on the DPM server, then run through the modify protection group wizard for the protection group containing the SQL Server protection without making changes.

SQLExpressFullPerPSLimit defines how many Express Full backups you can create per protected server.

SQLExpressFullLimit defines how many you can create in total from that DPM server.

Make each of them larger than the default of 0x1068 (4200 decimal).

Location: HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Data Protection Manager\ScaleConfig

Value Name: SQLExpressFullPerPSLimit
Type: REG_DWORD
Value: 0x1068 (4200 decimal is the default)

Value Name: SQLExpressFullLimit
Type: REG_DWORD
Value: 0x1068 (4200 decimal is the default)
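As a .reg sketch, raising both limits to 6000 (0x1770) — 6000 is an arbitrary example value; pick one that fits your environment:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Data Protection Manager\ScaleConfig]
"SQLExpressFullPerPSLimit"=dword:00001770
"SQLExpressFullLimit"=dword:00001770
```

Remember to then run through the modify protection group wizard, as described above, for the change to take effect.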

Client Protection

DPM 2010 introduced Windows Client protection. The settings below were introduced to help with overall performance as per the TechNet articles.

Optimizing Client Computer Performance

Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\ClientProtection

Value Name: WaitInMSPerRequestForClientRead
Type: REG_DWORD
Value: 0x32 or 50 decimal (time in milliseconds to wait between IO)

NOTE The default value for WaitInMSPerRequestForClientRead DWORD is 50 (32H). This means the DPM agent will wait 50ms per read cycle to locate changed data. You can increase the value to 75 or 100 decimal to reduce IO on the disk to improve machine responsiveness at the cost of longer backup times. If you want to increase backup speed at the expense of responsiveness, reduce the value to 40 or 30 decimal.
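For example, to favor machine responsiveness over backup speed as the note describes, you could raise the wait to 75 ms on the client (75 is one of the example values from the note):

```cmd
reg add "HKLM\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\ClientProtection" /v WaitInMSPerRequestForClientRead /t REG_DWORD /d 75 /f
```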

Scaling up Client Protection

For Task Throttling:

Location: HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Data Protection Manager\Configuration\DPMTaskController\MaxRunningTasksThreshold

Value Name: 9037ebb9-5c1b-4ab8-a446-052b13485f57
Type: REG_DWORD
Value: 0x32

Value Name: 3d859d8c-d0bb-4142-8696-c0d215203e0d
Type: REG_DWORD
Value: 0x64

Value Name: c4cae2f7-f068-4a37-914e-9f02991868da
Type: REG_DWORD
Value: 0x32

The GUIDs control certain types of DPM tasks and you may need to tweak only certain ones to fit your needs. Below are the meanings and why you might want to reduce them.

9037ebb9-5c1b-4ab8-a446-052b13485f57 = Initial Replication - Reduce this if you plan on adding lots of new clients to protection at one time - this will limit the simultaneous transfer of data from X number of clients.

3d859d8c-d0bb-4142-8696-c0d215203e0d = Delta Replication (synchronizations) - reduce this to help with all clients trying to synchronize at the same time after an extended outage.

c4cae2f7-f068-4a37-914e-9f02991868da = Validate and Fix up (consistency check) - reduce this to throttle repairing replica volumes that need consistency checks.
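All three thresholds can be set with a single .reg sketch using the values from the tables above (the HKEY_LOCAL_MACHINE hive is assumed here, consistent with the other DPM settings in this article):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Data Protection Manager\Configuration\DPMTaskController\MaxRunningTasksThreshold]
"9037ebb9-5c1b-4ab8-a446-052b13485f57"=dword:00000032
"3d859d8c-d0bb-4142-8696-c0d215203e0d"=dword:00000064
"c4cae2f7-f068-4a37-914e-9f02991868da"=dword:00000032
```

Lower only the GUIDs that match the task types you need to throttle.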

To adjust the collocation factor: The default was 10 in DPM 2010 and was increased to 30 in DPM 2012.

Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Collocation\Client

Value Name: DSCollocationFactor
Type: REG_DWORD
Value: 0x1E (30 decimal)

Restoring Client backup data

This is new in KB 2465832.

The administrator of a client computer must set the names of the non-admin users who should have permission to perform end-user recovery of protected data on a client computer. To do this, the administrator must add the following registry value containing those users. This is a single value that holds a comma-separated list of client users without any leading or trailing spaces; you do not have to add the value separately for each non-admin user.

Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\ClientProtection

Value Name: ClientOwners
Type: REG_SZ
Value: Comma-separated list of client users, e.g. Domain\User1,Domain\User2

MORE INFORMATION: This is a hands-off solution that allows all users of a machine to restore their own files.

1) Using Notepad, create these two .cmd files and save them in c:\temp (be sure the .txt extension is removed).

<addperms.cmd>

Cmd.exe /v /c c:\temp\addreg.cmd

<addreg.cmd>

set users=
echo Windows Registry Editor Version 5.00>c:\temp\perms.reg
echo [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\ClientProtection]>>c:\temp\perms.reg
FOR /F "Tokens=*" %%n IN ('dir c:\users\*. /b') do set users=!users!%Userdomain%\\%%n,
echo "ClientOwners"=^"%users%%Userdomain%\\bogususer^">>c:\temp\perms.reg
REG IMPORT c:\temp\perms.reg
Del c:\temp\perms.reg

2) Using Windows Scheduler, schedule addperms.cmd to run daily. Any new users that log onto the machine will automatically be added to the registry and have the ability to restore their own files.

Library / Tape Management

Introduced in DPM 2007 feature pack KB 949779

This feature was added to make better use of tape capacity by co-locating data from multiple protection groups that have a similar retention range.

Location: HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Data Protection Manager\1.0\Colocation

Value Name: TapeExpiryTolerance
Type: REG_DWORD
Value: A fraction between 0 and 1. The default value is 0.15, which is 15%.

For DPM 2007 / DPM 2010 - see the following information: Things you can do to help Data Protection Manager utilize your tapes full capacity

IMPORTANT NOTE The TapeExpiryTolerance value is deprecated in DPM 2012 and later. You can now create protection group co-location sets (Pgset) to have better control over tape co-location.

For DPM 2012 and later see the following information: Colocate data from different protection groups on tape

First introduced in DPM 2007 KB 970868

Detailed inventories can raise alerts for a failure on each slot. If the library has many slots, too many alerts may be raised. Additionally, alerts are raised for each slot when you cancel a detailed library inventory. To prevent these alerts from being raised, add the following value on the DPM server.

Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\1.0\Alert

Value Name: DetailedInventoryFailed
Type: REG_DWORD
Value: 0x0

NOTE After you apply the update and the registry setting, you can still determine whether the detailed inventory (DI) jobs failed or succeeded in the jobs view.

Support for IBM System Storage TS2900 Tape Autoloader KB 2465832

Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent

Value Name: RSMCompatMode
Type: REG_DWORD
Value: 0x1D or 29 decimal

NOTE The RSMCompatMode registry value is used to specify multiple flags for DPM. The following are the flags that are set by this registry value:

• 1 = RSM_COMPAT_INIT_ELEMENT_STATUS
• 4 = RSM_COMPAT_IGNORE_TAPE_INVENTORY_RESULT
• 8 = RSM_COMPAT_CLEANER_EXCEPTION
• 16 = TS2900 compatibility

Dell TL2000 / TL4000 and IBM 35XX libraries require the RSMCompatMode registry value to be 0xD (13 decimal); however, 0x1D will also work fine with those libraries.
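The flag arithmetic above can be sketched as follows (a minimal illustration; the constant names simply mirror the flag list above):

```python
# RSMCompatMode is a bit mask that combines the individual flags listed above.
RSM_COMPAT_INIT_ELEMENT_STATUS = 0x01           # 1
RSM_COMPAT_IGNORE_TAPE_INVENTORY_RESULT = 0x04  # 4
RSM_COMPAT_CLEANER_EXCEPTION = 0x08             # 8
TS2900_COMPATIBILITY = 0x10                     # 16

# Value for the IBM TS2900 autoloader (all four flags set):
ts2900_value = (RSM_COMPAT_INIT_ELEMENT_STATUS
                | RSM_COMPAT_IGNORE_TAPE_INVENTORY_RESULT
                | RSM_COMPAT_CLEANER_EXCEPTION
                | TS2900_COMPATIBILITY)
print(hex(ts2900_value))  # 0x1d (29 decimal)

# Value for Dell TL2000/TL4000 and IBM 35XX libraries (first three flags):
tl_value = (RSM_COMPAT_INIT_ELEMENT_STATUS
            | RSM_COMPAT_IGNORE_TAPE_INVENTORY_RESULT
            | RSM_COMPAT_CLEANER_EXCEPTION)
print(hex(tl_value))  # 0xd (13 decimal)
```

Because 0x1D is a superset of 0xD, setting 0x1D also covers the three flags those libraries require.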

Short Erase

When a user erases a tape, DPM by default performs a long erase on that tape, which takes considerably longer to complete. DPM 2010 introduced the ability to do a short erase by adding the following value.

Location

HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Data Protection

Manager\Agent

Value Name

UseShortErase

Type

REG_DWORD

Value

0x0

NOTE Setting this value to 0 (zero) causes DPM to use a short erase. To go back to using the long erase, simply delete the UseShortErase value.

Media Usage

The following two settings were introduced in these updates:

DPM 2010 Update Rollup 6 KB 2718797
DPM 2012 Update Rollup 2 KB 2706783

Expiry dates for valid datasets that are already written to tape are changed when the retention range is changed during a protection group modification.

A protection group is configured for long-term tape recovery points together with custom long-term recovery goals. Recovery Goal 1 has a smaller retention range than the other recovery goals. In this configuration, if the protection group is changed to remove Recovery Goal 1 and to keep other recovery goals, datasets that were created by Recovery Goal 1 have their retention range changed to the retention range of the other recovery goals.

Location

HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Data Protection Manager\Configuration\MediaManager

Value Name

IsDatasetExpiryDateChangeInModifyPgAllowed

Type

REG_DWORD

Value

0x0

Tapes are not reusable until the day after they expire. This occurs because DPM waits until midnight to run the reclamation job that marks tapes as reusable.

Location

HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Data Protection Manager\Configuration\MediaManager

Value Name

ExpireDatasetOnDayStart

Type

REG_DWORD

Value

0x1

MMC crash when opening library tab

When running DPM 2010 or DPM 2012 with a tape library that has many slots, the DPM MMC may crash on opening. Often, after several attempts, the console will finally open. This is usually seen when more than 1500 slots are presented to DPM for use.

Location

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows

Value Name

USERPostMessageLimit

Type

REG_DWORD

Value

0x4e20 (20000 decimal)

Tape I/O errors 0x8007045D

Some tape drives do not handle multi-buffer I/O very well, which can lead to tape drive I/O device errors. This I/O error may cause DPM tape backup jobs to fail, or cause the tape to be closed out and marked offsite-ready before it is full. If you look in DPMRACURR.ERRLOG after such a failure, you will find error code 0x8007045D, which means "The request could not be performed because of an I/O device error". Reducing the buffer size helps in most cases.

Location

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent

Value Name

BufferQueueSize

Type

REG_DWORD

Value

0x2 (Default is 10 decimal - maximum is 30 decimal)

   

NOTE Setting the value to 0x1 may help with backups; however, restores require a minimum of 2 buffers, so it is advised to set it to 0x2 or more.

Prior to the BufferQueueSize setting, you could use a value called TapeSize. If the tape driver returns an IO_DEVICE_ERROR and the amount of data written by DPM is more than the TapeSize value (in MB), DPM will automatically convert IO_DEVICE_ERROR to END_OF_TAPE_REACHED and span to the next media without any issues.

The default behavior is for DPM to treat any I/O error that occurs after more than 30GB has been written to tape as an "end of media" command. TapeSize is now deprecated, so please use BufferQueueSize to fix I/O errors.

Another solution that also seems to help resolve the above IO error 0x8007045D is to add the following Storport key and BusyRetryCount value to each of the tape devices.

Location

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Enum\SCSI\<DEVICEID>\<INSTANCE>\Device Parameters\StorPort

Value Name

BusyRetryCount

Type

REG_DWORD

Value

0xFA (250 decimal)

To get a list of all the tape devices in your DPM server that need the registry key added, run the following command from an administrative command prompt. It will return a list of tape drive Scsi\DeviceID\Instance values that you can use to make the above change.

C:\Windows\system32>wmic tapedrive list brief

[Sample output of the wmic tapedrive command]

Below would be the registry keys to add to the DPM server based on the above output from the WMIC command.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\SCSI\Sequential&Ven_IBM&Prod_ULTRIUM-TD3\5&31cf2afa&0&000001\Device Parameters\StorPort]
"BusyRetryCount"=dword:000000fa

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\SCSI\Sequential&Ven_IBM&Prod_ULTRIUM-TD3\5&31cf2afa&0&000002\Device Parameters\StorPort]
"BusyRetryCount"=dword:000000fa

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\SCSI\Sequential&Ven_IBM&Prod_ULTRIUM-TD3\5&31cf2afa&0&000003\Device Parameters\StorPort]
"BusyRetryCount"=dword:000000fa

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\SCSI\Sequential&Ven_IBM&Prod_ULTRIUM-TD3\5&31cf2afa&0&000004\Device Parameters\StorPort]
"BusyRetryCount"=dword:000000fa

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\SCSI\Sequential&Ven_IBM&Prod_ULTRIUM-TD3\5&31cf2afa&0&000005\Device Parameters\StorPort]
"BusyRetryCount"=dword:000000fa

Prompting Timeout

During a tape backup, if a tape becomes full and there are no other tapes marked Free, Free (contains data), or Expired in the library, or if you are using a standalone tape drive that requires you to manually change the tape, DPM will raise an alert to prompt for another free tape to continue the backup. The same is true during a restore: if a needed tape is not in the library, an alert will be raised. By default, DPM will wait for 1 hour before failing the job.

This prompting timeout can be configured by adding this registry entry on the DPM Server. Restart the DPMRA service for it to take effect.

Location

HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Data Protection Manager\1.0\Prompting

Value Name

PromptingTimeOut

Type

REG_DWORD

Value

3600000 (Timeout in milliseconds, decimal) The formula is (#hrs * 60 * 60 * 1000)
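The timeout formula above is plain arithmetic; a minimal sketch of the conversion:

```python
def prompting_timeout_ms(hours: float) -> int:
    """Convert a desired prompting timeout in hours to the decimal
    millisecond value stored in the PromptingTimeOut DWORD."""
    return int(hours * 60 * 60 * 1000)

print(prompting_timeout_ms(1))  # 3600000 -> the 1-hour default
print(prompting_timeout_ms(2))  # 7200000 -> wait 2 hours for a free tape
```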

DPM Update Rollup

DPM Update Rollup setup will make a backup of the DPMDB prior to applying the update. You can alter the location of where that backup is stored using the following registry values. The backup file will be called QFEDPMDB.bak.

Location

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\<MSSQL.n>\MSSQLServer

Value Name

BackupDirectory

Type

REG_SZ

Value

EXAMPLE ONLY: C:\Program Files\Microsoft DPM\SQL\MSSQL10_50.MSDPM2012\MSSQL\Backup

DPM Update Rollups may fail when DPM services do not stop or start in the allotted time. If you look in the update log, you will find entries with the error 8007041d, which means "The service did not respond to the start or control request in a timely fashion". Below is a sample of a timeout trying to start the DPMWriter service, but the same can be logged for other DPM services.

1: PatchCA: 2: start dpmwriter returned hr=8007041d
1: PatchCA: 2: Error in EnableAndStartService. hr=8007041d
1: PatchCA: 2: EnableDpmServices returned hr 0x8007041d

When a service starts, it communicates to the Service Control Manager how much time it needs to start (the time-out period for the service). If the Service Control Manager does not receive a "service started" notice from the service within this time-out period, it terminates the process that hosts the service. This time-out period is typically less than 30 seconds.

To eliminate that timeout error, add the following value and restart the DPM server. Note that 300000 is in milliseconds, which is 5 minutes.

Location

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control

Value Name

ServicesPipeTimeout

Type

REG_DWORD

Value

0x493E0 (300000 decimal)

IMPORTANT The above issue was fixed in DPM 2012 R2 UR6.
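A quick sanity check that the suggested value corresponds to a 5-minute service start window:

```python
# The ServicesPipeTimeout DWORD is in milliseconds.
timeout_ms = 0x493E0
print(timeout_ms)              # 300000
print(timeout_ms / 1000 / 60)  # 5.0 (minutes)
```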

Bypass DPM filter block level tracking

There may be times when normal backups cannot occur due to errors in the DPM filter bitmap, a possible resource issue on the machine, or some other unforeseen problem. Under such conditions, you may want DPM to make backups using the consistency check workflow until a permanent solution can be found. To bypass the DPM filter block-level tracking mechanism, add this registry value on the protected server, then restart the DPMRA service.

Location

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\2.0

Value Name

ForceFixup

Type

REG_DWORD

Value

0x1

NOTE This will have the same performance impact as a consistency check for every recovery point taken while the ForceFixup is in use.

Please be aware of the following KB Article:

2848751 VM backups in Data Protection Manager fail with "change tracking information is corrupt" error (https://support.microsoft.com/en-us/kb/2848751/en-us)

Auto Heal features

DPM 2010 introduced Auto Heal features such as Auto-grow, Auto-rerun, Auto-CC, and Continue on Failure to help resolve backup failures. These features are carried forward and are present in all newer versions of DPM. The values below can be adjusted to control if and when the Auto Heal features are used.

I don't want to re-invent the wheel here since many of these are already documented in our DPM 2010 blogs, but for completeness of this article I think it's necessary to include them here as well.

Location

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Configuration

Value Name

DisableAutoHeal

Type

REG_DWORD

Value

0x1 [0x1 = do not auto-rerun failed jobs.]

Note

The above value only affects auto-rerun; if it is set to 0x1, the other auto-rerun values do not apply.

  

Value Name

AutoRerunDelay

Type

REG_DWORD

Value

0x3C [60 decimal is the default and is in minutes]

  

Value Name

AutoRerunNumberOfAttempts

Type

REG_DWORD

Value

0x1 [Number of job re-run attempts before publishing alert]

  

Value Name

AutoCCNumberOfAttempts

Type

REG_DWORD

Value

0x1 [Number of Consistency Check re-run attempts]

  

Value Name

AutoCCDelay

Type

REG_DWORD

Value

0x3C [60 decimal is the default and is in minutes]

  

Value Name

MaxFailedFiles  [This is added to the protected servers registry, then restart dpmra service]

Type

REG_DWORD

Value

0x64 [Number of files to skip before failing backup job. 100 decimal is the default]

More information about some of the above values can be found in the following blog posts:

DPM 2010: Helping you meet SLAs with less effort
How to use and troubleshoot the Auto-heal features in DPM 2010

Co-locating Client, SQL and Hyper-V data sources

Disk co-location was introduced in DPM 2010 to allow a single DPM server to protect more than 300 data sources. The 300 data source limit stems from the design of the Windows Logical Disk Manager (LDM) database, which tracks the dynamic volumes created by DPM. Protecting 300 unique data sources requires DPM to create 600 volumes: a replica volume and a recovery point volume for each. The LDM database has a limit of 2960 records, and a minimum of three records is required per dynamic volume created. Because disk migration may need to occur at a later time, DPM leaves some LDM records unused by staying within the 600-volume limit. By enabling disk co-location for some data sources, DPM does not need to create as many volumes to protect more data sources.
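The LDM arithmetic above can be checked with a short back-of-the-envelope sketch (numbers taken from the text; treat this as illustrative, not authoritative):

```python
# LDM limits as described in the paragraph above.
LDM_RECORD_LIMIT = 2960
RECORDS_PER_DYNAMIC_VOLUME = 3   # minimum records per dynamic volume
VOLUMES_PER_DATA_SOURCE = 2      # one replica + one recovery point volume

data_sources = 300
volumes_needed = data_sources * VOLUMES_PER_DATA_SOURCE         # 600
records_needed = volumes_needed * RECORDS_PER_DYNAMIC_VOLUME    # 1800

# 600 volumes consume at least 1800 of the 2960 records; the remainder
# is headroom DPM reserves for operations such as disk migration.
print(volumes_needed, records_needed)
assert records_needed <= LDM_RECORD_LIMIT
```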

Below are the default co-location entries:

CLIENT PROTECTION

DSCollocationFactor is the number of data sources that can be collocated on a single replica.

NOTE The default DSCollocationFactor of 30 allows 3000 clients per DPM 2012 server and later. This means 30 client machines will share the same DPM replica volume and recovery point volume. The replica volume size will be the DSCollocationFactor setting multiplied by the GB-per-client value specified when adding the clients to the protection group.

Location

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Collocation\Client

Value Name

DSCollocationFactor

Type

REG_DWORD

Value

0x1E (30 Decimal)
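As an illustrative sketch of the sizing note above (the per-client GB figure is a hypothetical value you would choose when adding clients to the protection group, not a DPM default):

```python
ds_collocation_factor = 30   # default DSCollocationFactor (0x1E)
gb_per_client = 5            # hypothetical per-client space chosen in the GUI

# Size of the shared replica volume used by one group of 30 clients:
replica_volume_gb = ds_collocation_factor * gb_per_client
print(replica_volume_gb)  # 150
```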

To see which client machines are co-located on the same replica volume in a protection group, you can run the below DPM PowerShell commands. Look for co-located client machines that are on the same replicapath (Volume GUID).

Be sure to replace the 'Protectiongroup_Friendly_Name' before running.

$pg = get-protectiongroup (&hostname) | ? { $_.friendlyname -eq 'Protectiongroup_Friendly_Name'}
Get-datasource $pg | sort-object -property replicapath | ft replicapath, ProductionServerName, diskallocation -AutoSize

SQL PROTECTION

DSCollocationFactor: This is the number of SQL data sources that can be collocated on a single replica. DPM will fit as many SQL data sources as possible, up to the specified limit, based on the data source sizes at the time of enumeration and the replica size.

CollocatedReplicaSize: The default size of the replica volume created for collocated SQL data sources is 10GB. This can be overridden in the GUI at time of protection. If making changes in the registry, make sure the value entered is a multiple of 1GB (1073741824 bytes). The recovery point volume size depends on this value in addition to the retention range specified in the protection group. The exact formula is: recovery point volume size = (replica volume size * 1.5) * retention days * 0.1 + 1.6GB.
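The sizing formula quoted above can be sketched as a small helper (the 14-day retention in the example is a hypothetical choice):

```python
GB = 1073741824  # bytes in 1 GB

def recovery_point_volume_gb(replica_gb: float, retention_days: int) -> float:
    """Recovery point volume size per the formula quoted above:
    (replica size * 1.5) * retention days * 0.1 + 1.6 GB."""
    return replica_gb * 1.5 * retention_days * 0.1 + 1.6

# Default co-located SQL replica of 10 GB kept for a hypothetical 14 days:
print(round(recovery_point_volume_gb(10, 14), 2))  # about 22.6 GB

# CollocatedReplicaSize must be a whole multiple of 1 GB:
assert (10 * GB) % GB == 0
```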

Location

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Collocation\SQL

Value Name

DSCollocationFactor

Type

REG_DWORD

Value

0x1E (30 Decimal)

  

Value Name

CollocatedReplicaSize

Type

REG_SZ

Value

10737418240 (Must be a multiple of 1GB (1073741824 bytes))

To see which SQL databases are co-located on the same replica volume in a protection group, you can run the below DPM Powershell commands. Look for co-located SQL Databases that are on the same replicapath (Volume GUID).

Be sure to replace the 'Protectiongroup_Friendly_Name' before running.

$pg = get-protectiongroup (&hostname) | ? { $_.friendlyname -eq 'Protectiongroup_Friendly_Name'}
Get-datasource $pg | sort-object -property replicapath | ft replicapath, name, diskallocation -AutoSize

HYPER-V PROTECTION

DSCollocationFactor: This is the number of Hyper-V guest data sources that can be collocated on a single replica. DPM will fit as many virtual machine data sources as possible, up to the specified DSCollocationFactor limit, based on the data source sizes at the time of enumeration and the replica size.

CollocatedReplicaSize: The default size of the replica volume created for collocated virtual machines is 250GB. This can be overridden in the GUI at time of protection. If making changes in the registry, make sure the value entered is a multiple of 1GB (1073741824 bytes). The recovery point volume size depends on this value in addition to the retention range specified in the protection group. The exact formula is: recovery point volume size = (replica volume size * 1.5) * retention days * 0.1 + 1.6GB.

This means DPM will co-locate as many Hyper-V guests as will fit on that 250GB volume, up to 8, before creating another 250GB volume. You can override the replica volume size in the GUI during protection.

- With DPM 2010 RTM you can protect 400 VMs of an average 50GB each with a 10% churn rate using a single DPM server (any mix that delivers ~20TB of total VM space).

- With DPM 2012 RTM you can protect 400 VMs of an average 100GB each with a 10% churn rate using a single DPM server (any mix that delivers ~40TB of total VM space).

- With DPM 2012 SP1 or DPM 2012 R2, you can protect 800 VMs of an average 100GB each with 10% churn using a single DPM server (any mix that delivers ~80TB of total VM space).

- With DPM 2012 SP1 and later, DPM allows multiple DPM servers to communicate with nodes in a cluster, so you can now scale your Hyper-V cluster to 64 nodes and have multiple DPM servers protect the entire cluster.

See the following blog post on scale out Hyper-V protection:

SC 2012 SP1 – DPM: Leveraging DPM ScaleOut feature to protect VMs deployed on a big cluster

A Windows Server 2012 R2 64-node Hyper-V cluster can support 8000 VMs, so 10 DPM 2012 R2 servers, each protecting 800 VMs, cover all 8000 VMs.

Location

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Collocation\Hyperv

Value Name

DSCollocationFactor

Type

REG_DWORD

Value

0x8 (8 Decimal)

  

Value Name

CollocatedReplicaSize

Type

REG_SZ

Value

268435456000 (Must be a multiple of 1GB (1073741824 bytes))
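A quick check that the default Hyper-V value above equals 250GB and satisfies the 1GB-multiple rule:

```python
GB = 1073741824  # bytes in 1 GB

hyperv_default = 268435456000          # default CollocatedReplicaSize above
print(hyperv_default // GB)            # 250 -> the 250 GB default replica
print(hyperv_default % GB == 0)        # True -> a whole multiple of 1 GB
```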

To see which virtual machines are co-located on the same replica volume in a protection group, you can run the below DPM PowerShell commands. Look for co-located virtual machines that are on the same replicapath (Volume GUID).

Be sure to replace the 'Protectiongroup_Friendly_Name' before running.

$pg = get-protectiongroup (&hostname) | ? { $_.friendlyname -eq 'Protectiongroup_Friendly_Name'}
Get-datasource $pg  | sort-object -property replicapath | ft replicapath, name, diskallocation -AutoSize

Please be aware of these special considerations when dealing with co-located data sources:

Moving Between Co-Located and Non-Co-Located Protection Groups

Stopping Protection for Co-Located Data

Protect, Unprotect, Protect, Unprotect – Understanding how DPM 2010 retention works

In summary, I hope you find the above information useful and a convenient one-stop shop for DPM-related registry values. I will update this blog as new registry settings are introduced in future releases.

Mike Jacquet| Senior Support Escalation Engineer | Microsoft GBS Management and Security Division

Get the latest System Center news on Facebook and Twitter:


System Center All Up: http://blogs.technet.com/b/systemcenter/

Configuration Manager Support Team blog: http://blogs.technet.com/configurationmgr/ 
Data Protection Manager Team blog: http://blogs.technet.com/dpm/ 
Orchestrator Support Team blog: http://blogs.technet.com/b/orchestrator/ 
Operations Manager Team blog: http://blogs.technet.com/momteam/ 
Service Manager Team blog: http://blogs.technet.com/b/servicemanager 
Virtual Machine Manager Team blog: http://blogs.technet.com/scvmm

Microsoft Intune: http://blogs.technet.com/b/microsoftintune/
WSUS Support Team blog: http://blogs.technet.com/sus/
The RMS blog: http://blogs.technet.com/b/rms/
App-V Team blog: http://blogs.technet.com/appv/
MED-V Team blog: http://blogs.technet.com/medv/
Server App-V Team blog: http://blogs.technet.com/b/serverappv
The Surface Team blog: http://blogs.technet.com/b/surface/
The Application Proxy blog: http://blogs.technet.com/b/applicationproxyblog/

The Forefront Endpoint Protection blog : http://blogs.technet.com/b/clientsecurity/
The Forefront Identity Manager blog : http://blogs.msdn.com/b/ms-identity-support/
The Forefront TMG blog: http://blogs.technet.com/b/isablog/
The Forefront UAG blog: http://blogs.technet.com/b/edgeaccessblog/


Free ebook: Microsoft System Center Data Protection for the Hybrid Cloud


Microsoft is happy to announce the release of its newest free ebook, Microsoft System Center Data Protection for the Hybrid Cloud (ISBN 9780735695832), by Shreesh Dubey, Vijay Tandra Sistla, Shivam Garg, and Aashish Ramdas; Mitch Tulloch, Series Editor.

If you are responsible for architecting and designing the backup strategy for your organization, especially if you're looking for ways to incorporate cloud backup into your business continuity scenarios, this book is for you. With the increasing trends in virtualization as well as the move to the public cloud, IT organizations are headed toward a world where data and applications run in on-premises private clouds as well as in the public cloud. This has key implications for data protection strategy, and it is important to choose a solution that provides the same level of data protection you have had so far while allowing you to harness the power of the public cloud.

We will cover how the Azure Backup service has evolved into a first-class platform-as-a-service (PaaS) service in Microsoft Azure that integrates with the on-premises enterprise class backup product, System Center Data Protection Manager (DPM), to provide a seamless hybrid cloud backup solution. Current backup products treat the cloud as a storage endpoint, which we see as a limited-use case for the public cloud. The approach we describe in this book allows you to exploit the full power of the public cloud and gives you the flexibility to manage your backups in a hybrid world.

We have made a steady set of investments in DPM over the last 18 months, and, as of this writing, we have released six update rollups, including customer hotfixes as well as new features in the areas of private cloud protection, storage optimization, and workload support. The last chapter focuses on the most recently released protection for infrastructure-as-a-service (IaaS) virtual machines, which went to preview release in March 2015 and is expected to be generally available by Q3 of calendar year 2015.

This book covers improvements added in DPM 2012 R2 as well as the integration with Microsoft Azure Backup service and assumes you have working knowledge of the DPM 2012 version.

You can download your free copy here.

J.C. Hornbeck| Solution Asset PM | Microsoft GBS Management and Security Division


Update Rollup 7 for System Center 2012 R2 Data Protection Manager is now available


 We are happy to announce that Update Rollup 7 (UR7) for Microsoft System Center 2012 R2 Data Protection Manager is now available for download. Please see the following Knowledge Base article for complete details about fixes and installation instructions for DPM 2012 R2:

3065246 Update Rollup 7 for System Center 2012 R2 Data Protection Manager

Please note that Microsoft recommends that all System Center 2012 R2 subcomponents be upgraded to the same Update Rollup version. You can upgrade the different System Center subcomponents in any sequence, but be aware that running subcomponents at different Update Rollup versions can lead to compatibility issues and is not a Microsoft-supported scenario. For all the latest information regarding Update Rollup 7 for System Center 2012 R2, please see the following:

3069110 Description of Update Rollup 7 for System Center 2012 R2

J.C. Hornbeck| Solution Asset PM | Microsoft GBS Management and Security Division

