Test-SPContentDatabase Reports Valid Web Parts as ‘Missing’

While working on a migration for a customer, we were combing through the results of Test-SPContentDatabase in preparation for moving their SharePoint 2010 content into a new SharePoint 2013 environment. We were systematically reviewing and repairing all the common errors that we found, like MissingFeatures, MissingSetupFiles, and MissingWebParts. It was the last of these that gave us a bit of a problem. After running the common scripts for removing these missing web parts, which normally show up as an error, the entries stubbornly remained.

Category        : MissingWebPart
Error           : True
UpgradeBlocking : False
Message         : WebPart class [8dd36a66-e8d0-c735-2173-b3cf93383598] (class
                  [_2010_VisualWebPartProject.VisualWebPart1.VisualWebPart1]
                  from assembly [2010_VisualWebPartProject, Version=1.0.0.0,
                  Culture=neutral, PublicKeyToken=7552893a02bae51b]) is
                  referenced [2] times in the database [WSS_Content], but is
                  not installed on the current farm. Please install any
                  feature/solution which contains this web part.
Remedy          : One or more web parts are referenced in the database
                  [WSS_Content], but are not installed on the current farm.
                  Please install any feature or solution which contains
                  these web parts.

The end result is complicated by a couple of factors:

  1. The version of SP2010 that I was working with was 14.0.4762.1000… yes, the RTM build
  2. The web parts that were being stubborn were sourced from Sandbox Solutions.

The combination of these two items actually causes Test-SPContentDatabase to report false positives for these web parts… even after you remove the sandbox solution. Yes, folks, you heard that correctly: if you have the RTM build of SP2010 and remove your sandbox solutions that contain web parts, Test-SPContentDatabase will still flag those web parts as missing.

I did not test every build to find out exactly where the behavior changed, but at some point before the release of Service Pack 1, Test-SPContentDatabase was altered to ignore sandbox solutions, so you no longer get an error on these web parts. There may be a second issue in that, after removing the solution from the solution gallery and the web parts from the web part gallery, the reference may still exist in the AllWebParts table, but I can neither confirm nor deny that theory at this time.
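
If you want to look for yourself, here is a minimal read-only sketch. Hedged heavily: querying content databases directly is unsupported in production, so point this at a restored copy; the instance name is a placeholder, and matching tp_WebPartTypeId to the class id in the message is my assumption based on common MissingWebPart clean-up scripts.

# Count lingering references to the web part class reported by
# Test-SPContentDatabase; requires the SQL Server PowerShell module.
Invoke-Sqlcmd -ServerInstance "SQL01" -Database "WSS_Content" -Query @"
SELECT COUNT(*) AS Refs
FROM dbo.AllWebParts
WHERE tp_WebPartTypeId = '8dd36a66-e8d0-c735-2173-b3cf93383598';
"@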

My final thoughts on this?

If you are upgrading from SP2010 RTM to SP2013, you can ignore missing web part warnings/errors from Test-SPContentDatabase as long as you confirm they refer to sandbox solutions, which you can check as shown below.
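
To confirm which site collections still contain sandbox solutions, a minimal sketch run from the SharePoint 2010 Management Shell:

# List every sandbox (user) solution per site collection so you can match
# them against the MissingWebPart entries.
Get-SPSite -Limit All | ForEach-Object {
    foreach ($solution in $_.Solutions) {
        "{0} : {1}" -f $_.Url, $solution.Name
    }
    $_.Dispose()   # release the SPSite object explicitly
}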

SharePoint 2013 Shredded Storage vs. RBS

**THIS IS A REPOST**

It’s important to note that sometimes information needs to be re-posted. This post was originally created by a peer of mine at Microsoft – Chris Mullendore – on his Microsoft blog. When he moved on, the post disappeared and the rest of us mere mortals FREAKED… so we tracked down the content and I’ve reposted it here so that we can all find it again. Thank you, Chris!

SharePoint 2013 has introduced significant enhancements in how file content is stored and retrieved in the database. Specifically, files are now split into many parts (“shredded”) and stored in individual rows in the content database. This is related to the Cell Storage API introduced in 2010, which supported things like co-authoring in SharePoint 2010 and the Office Web Applications, but it is gaining prominence in SharePoint 2013 because the file-splitting capability has been pushed beyond an over-the-wire transfer mechanism into the SharePoint databases themselves, where it now exists as a feature called Shredded Storage.

There are already more than enough blog entries (http://blogs.technet.com/b/wbaer/archive/2012/11/12/introduction-to-shredded-storage-in-sharepoint-2013.aspx) describing shredded storage itself in detail. Suffice it to say, for the purposes of this blog entry, that all files will be shredded into either 64K or 1MB (1024K) chunks, depending on the type of file we’re talking about. File types that SharePoint understands and interacts with directly (Office files) receive the 64K treatment; other file types are sliced into 1MB chunks. While I don’t know the exact reasons for these sizes, I do believe they make sense given the purposes of the Cell Storage API, what is likely to be versioned and what is not, and the difference in use cases between Office documents and other files like binary (ZIP) or large media files.

Where things get really interesting (and where we reach the purpose of this blog) is how the Shredded Storage functionality interacts with Remote Blob Storage (RBS), which was introduced in SharePoint 2010 and continues to exist in 2013. If you recall, RBS is best used to push relatively large files out of the SharePoint content database(s) and into an actual file system. Although any size file can be configured to be pushed (and in fact the default setting is “0”, or “push all files to RBS”), smarter people than me have done testing and indicate that the benefits of RBS are most valuable with somewhat larger files, while using RBS for smaller files can actually reduce performance.

So… let’s think about these two features for a moment…

  • RBS works best with larger blobs.
  • Shredded Storage slices larger blobs into a lot of very small blobs.

Time for a few examples…

  • Scenario 1: Word document, 10K, all text. RBS Threshold = 0K
    This file would be placed in a single shred and pushed to the RBS store.
  • Scenario 2: Word document, 10K, all text. RBS Threshold = 1MB
    The file would be placed in a single shred and would be stored in the database.
  • Scenario 3: Word document, 5MB, all text. RBS Threshold = 0K
    The file would be placed into numerous ~64K shreds, pushed to RBS, resulting in ~80 RBS chunks.
  • Scenario 4: Word document, 5MB, all text. RBS Threshold = 1MB
    The file would be placed into numerous ~64K shreds and would be stored in the database.

Your 5MB file would never make it to RBS because RBS cares about the size of the shred, not the total size of the file!

Uh oh… we have an apparent conflict. Our Word document will never make it to RBS. However, RBS isn’t completely negated… it still plays a role for files that SharePoint doesn’t interoperate with directly. For example, with a Word document SharePoint plays a role in versioning, co-authoring, metadata integration, etc. For other, non-integrated file types, however, SharePoint will still use shreds and RBS… just not in the way you might think.

Let’s do the same scenarios using a ZIP file instead, this time raising the non-zero RBS threshold to 2MB:

  • Scenario 1: ZIP File, 10K. RBS Threshold = 0K
    The file would be placed in a single shred and pushed to the RBS store.
  • Scenario 2:  ZIP File, 10K. RBS Threshold = 2MB
    The file would be placed in a single shred and would be stored in the database.
  • Scenario 3:  ZIP File, 10MB. RBS Threshold = 0K
    The file would be shredded to 1MB chunks and pushed to RBS.
  • Scenario 4:  ZIP File, 10MB. RBS Threshold = 2MB
    The file would be shredded to 1MB chunks and pushed to RBS.

Now things should be very confusing. Neither SharePoint nor RBS cooperated with our RBS settings! The curiosities continue in the following example…

Same scenario, but using a Word document with 1MB of text and a single 5MB embedded image for a total of 6MB…

  • Scenario 1: RBS Threshold = 0K
    The file will be split into ~16 ~64K shreds and 5 1MB shreds, all pushed to RBS.
  • Scenario 2:  RBS Threshold = 2MB
    The file will be split into ~16 ~64K shreds which will be stored in the database and 5 1MB shreds pushed to RBS.
  • Scenario 3:  RBS Threshold = 10MB
    The file will be split into ~16 ~64K shreds which will be stored in the database and 5 1MB shreds pushed to RBS.

You should now be either thoroughly confused or noticing a pattern. Here is the basic summary of the behavior:

  • Content that SharePoint understands will be shredded normally, and those shreds will be pushed to RBS (or not) depending on your RBS threshold.
  • Content that SharePoint does not directly understand or interoperate with will always be shredded into 1MB chunks and will always be pushed to RBS if RBS is enabled. Your RBS threshold will be ignored.
  • Content that SharePoint understands but that embeds content it does not (an image inside a Word document, for example) will be recognized during the shredding process; the pieces will fall into one of the two categories above and be handled according to the rules for that content type.

Believe it or not, this does actually make sense… but to make sense of it we have to understand the critical differences between the two file types.

  • Office documents are…
    • …likely to be heavily versioned.
    • …likely to have only a small portion of the content change at each version.
    • …reliant on SharePoint’s and Office’s ability to do things like co-authoring, which are much faster if only the changes/deltas are saved or transferred over the wire.
  • Non-integrated files (images, media, ZIP, etc.) are…
    • … likely to be replaced entirely if versioned. Incremental or cell-based updates are impossible.
    • …not deeply integrated into SharePoint itself, and SharePoint does not alter the contents or attributes of those files.
    • …typically managed through applications that do not understand or directly interact with SharePoint.
      (For the purposes of this conversation, any file type that SharePoint does not provide direct interoperability with should be considered “non-integrated”. For example, a large TXT file uploaded into SharePoint will be managed according to the non-integrated rules despite the fact that it is still a text based format.)

Essentially (and I am taking an educated guess here), the SharePoint product group looked at the significant difference between these two types of files and made their own determination about what was best for each of them. Office document shredding would focus on many small, incremental changes between versions that could easily be transferred independently to/from the SharePoint server. Binary file shredding would focus on sizes that strike a reasonable balance between the benefits and the cost of breaking apart and re-assembling the binary BLOBs through RBS. Further, the PG went so far as to look inside Office documents, separating the editable (and therefore highly likely to change) text from any embedded binary (and therefore likely either to not change or to change completely) elements, and manages both types of content appropriately even when they appear in the same file.

What should you do about this?

First, we need to be clear about the goals and situations that each solution does well in and is intended to address:

  • Shredded storage defaults to 64K for integrated files and 1MB for non-integrated files and reduces SQL database growth caused by file versioning.
  • RBS works better with fewer, larger files, and moves files out of the database.

Clearly keeping the SQL database file size down is a good thing (and is probably why you deployed RBS to begin with). Otherwise, our primary goal is to find a balance between how SharePoint operates and how RBS is configured.

The answer is simple: Set RBS to a 1MB threshold.

This setting is consistent with how SharePoint operates when RBS is enabled and with previous guidance on how to configure RBS such that only large files are pushed to the RBS store. Configuring the RBS threshold to 1MB aligns the environment with SharePoint’s default behavior, but more importantly it ensures that the small 64K shreds are not pushed to the RBS store. Given SharePoint’s behavior, this is the best balance one can strike in the RBS vs. Shredded Storage design. This lets the small, integrated file shreds continue to remain in the database where differential versioning can be effective while maintaining consistency with the large files that are likely to be pushed to RBS by shredded storage anyway.
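
For what it’s worth, a minimal sketch of applying that 1MB threshold, assuming RBS is already enabled on the content database (the database name is a placeholder):

# Set the RBS minimum blob size to 1MB so the small 64K shreds of integrated
# files stay in the database while larger shreds go to the RBS store.
$cdb = Get-SPContentDatabase "WSS_Content"
$cdb.RemoteBlobStorageSettings.MinimumBlobStorageSize = 1MB
$cdb.Update()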

The side effect to this configuration is that your databases will grow faster than if all files were pushed to RBS, and this may cause some concern for some people. However, this configuration can dramatically increase the actual storage density inside of the database, particularly for heavily versioned files, while avoiding the performance overhead of retrieving thousands of 64K files through RBS for a single document. It really is the perfect balance… the best of both worlds… a good thing.

Oh… about that FileChunkWriteSize thing…

Yes, you can force shredded storage to behave as if it were disabled (even though it’s not) by modifying the FileChunkWriteSize property to some incredibly large value such as 2GB. This would have the intended effect of forcing SharePoint into storing the entire file in a single shred which would then exceed your RBS threshold and SharePoint would dutifully push the file to RBS as desired. However, modifying this setting comes with some significant side effects. For example, by setting the FileChunkWriteSize to a large value you are defeating the opportunity for SharePoint to increase your storage density for versioned files. Think of it this way: The larger the chunk size, the larger the block we store is going to be, and the fewer blocks there will be. The fewer blocks there are, the fewer identical blocks there can be and the lower your storage density.
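
If you want to see where that knob lives, here is a minimal sketch. The property hangs off SPWebService (PowerShell exposes it as FileWriteChunkSize), and the write is commented out deliberately, given the side effects just described:

# Inspect the shredded storage write chunk size on the content web service.
$service = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$service.FileWriteChunkSize          # default is 64K (65536 bytes)
# $service.FileWriteChunkSize = 1GB  # forcing whole-file shreds defeats versioning density
# $service.Update()                  # required to persist any change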

Let’s go through one of those scenarios again for demonstration. Imagine we have a 100MB file that has a single value updated daily (Excel Services can easily create this kind of scenario) in a library with versioning enabled and no maximum version count (a frighteningly common configuration). That means 365 updates over the course of a year.

  • With FileChunkWriteSize = 2GB total storage would be 100MB x 365 days = 36,500MB (or 35.6GB!)
  • With FileChunkWriteSize = 20MB total storage would be 100MB + (20MB x 364) = 7,380MB (or 7.2GB)
  • With FileChunkWriteSize = 64K (the default value) total storage for the file would be 100MB + (64KB x 364) = 122.75MB

The net effect being that leaving that file in the database and allowing shredded storage to do its job will actually reduce storage growth over a single year by over 99% without incurring any of the overhead of pushing that file to RBS! Remember that the purpose of RBS is not to reduce database size… it is to move the non-transactional data to less expensive storage as a means of reducing storage cost. Shredded storage can be better for highly versioned content than RBS because it reduces actual storage used. Depending on your scenario and content, not consuming storage at all can be much more cost effective than consuming a lot of somewhat less expensive storage.

Ultimately, increasing FileChunkWriteSize is the complete opposite of value. Just say no.

So in the end, my recommendation is “Let SharePoint Be SharePoint” and let shredded storage do its job in collaboration with RBS. They both have a place in the world, and they both work together well. SharePoint will own the highly integrated, highly versioned content, and SharePoint will let RBS own the less integrated, monolithic content. End users stay happy, file save requests stay fast, and databases grow much slower than in any previous version of SharePoint.

Edited September 18, 2013 to reflect updates from Bill Baer’s recently published whitepaper on shredded storage, available here.

Provided User Name was not Encoded as a Claim

So today’s lesson is quite bizarre… and I’m not sure where *exactly* the problem lies just yet. I was going through the process of configuring my SharePoint 2013 (SP2013) farm to test something with an external list that is bound to an external content type (ECT) backed by a SQL Server database. Simple enough on the surface:

  1. In your SP2013 farm, create and configure Business Data Connectivity (BDC) and Secure Store (SS) Service Applications
  2. Create the SQL database
  3. Add a SQL login to use with Secure Store (for my situation I chose not to use Kerberos Delegation)
  4. In SharePoint Designer (SPD), create an ECT and connect it to the database created in step 2
  5. Create an external list from this ECT

Now obviously there are a lot of details that I’m leaving out above, but the purpose of this post is not to show you how to create an ECT. Step 1, specifically, is where I encountered my issues today. I had already previously created the required service applications, but I had not configured them for use. Creating the SS application went without a hitch for the most part, but with the BDC service application it was not that easy.

You need to grant rights to the Metadata Store within the BDC service application in order for users to create an ECT, so my first stop was Central Admin to manage the BDC service application.

Once in the service application management page, you will need to add a user to the Metadata Store Permissions.

You may notice a bit of a problem in the screenshot above in that my user has not been resolved properly. (You may not have noticed it, depending on how much sleep you’ve had, but that’s another discussion.) As it so happens, I’m rather stubborn, so I proceeded to click the Add button anyway. Much to my surprise, AdamB was added to the user list and I was allowed to select the permissions I wanted this user to have. However, when I clicked OK is when the real fun began.

As you can see, it didn’t work very well and resulted in the following error:

A provided user name was not encoded as a claim. Parameter name: aces

Now, considering the fact that I occasionally enjoy a game of cards here and there, “aces” got me rather excited until I realized it was trying to tell me that something failed and I now had to fix it. So where is the first place you go to unravel errors in SharePoint? The ULS logs, of course! Here are a couple of the entries (scrubbed and filtered) around this error:

ULS Log Entries

01/07/2015 17:37:20.65    w3wp.exe (0x30F8)    0x3B7C    Business Connectivity Services    Business Data    9f4h    Unexpected    ‘BDC’ BdcServiceApplication logging server side ArgumentException before marshalling and rethrowing on client side: System.ArgumentException: A provided user name was not encoded as a claim. Parameter name: aces at Microsoft.SharePoint.BusinessData.SharedService.IndividuallySecurableMetadataObjectAccessor.SetAccessControlEntries(MetadataObjectStruct metadataObjectStruct, AccessControlEntryStruct[] aces, String settingId, DbSessionWrapper dbSessionWrapper) at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.<>c__DisplayClass2c.<Microsoft.SharePoint.BusinessData.SharedService.IBdcServiceApplication.SetAccessControlEntries>b__2b() at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.Execute[T](String operationName, UInt32 maxRunningTime, ExecuteDelegate`1 operation)    858bdd9c-f650-b0dd-7410-7cdb3f5cb1fd

01/07/2015 17:37:20.65    w3wp.exe (0x3618)    0x340C    SharePoint Foundation    General    8nca    Medium    Application error when access /_admin/BDC/ManageBDCPermissions.aspx, Error=A provided user name was not encoded as a claim. Parameter name: aces at Microsoft.SharePoint.BusinessData.SharedService.IndividuallySecurableMetadataObjectAccessor.SetAccessControlEntries(MetadataObjectStruct metadataObjectStruct, AccessControlEntryStruct[] aces, String settingId, DbSessionWrapper dbSessionWrapper) at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.<>c__DisplayClass2c.<Microsoft.SharePoint.BusinessData.SharedService.IBdcServiceApplication.SetAccessControlEntries>b__2b() at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.Execute[T](String operationName, UInt32 maxRunningTime, ExecuteDelegate`1 operation)    858bdd9c-f650-b0dd-7410-7cdb3f5cb1fd

01/07/2015 17:37:20.65    w3wp.exe (0x3618)    0x340C    SharePoint Foundation    Runtime    tkau    Unexpected    System.ArgumentException: A provided user name was not encoded as a claim. Parameter name: aces at Microsoft.SharePoint.BusinessData.SharedService.IndividuallySecurableMetadataObjectAccessor.SetAccessControlEntries(MetadataObjectStruct metadataObjectStruct, AccessControlEntryStruct[] aces, String settingId, DbSessionWrapper dbSessionWrapper) at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.<>c__DisplayClass2c.<Microsoft.SharePoint.BusinessData.SharedService.IBdcServiceApplication.SetAccessControlEntries>b__2b() at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.Execute[T](String operationName, UInt32 maxRunningTime, ExecuteDelegate`1 operation)    858bdd9c-f650-b0dd-7410-7cdb3f5cb1fd

01/07/2015 17:37:20.65    w3wp.exe (0x3618)    0x340C    SharePoint Foundation    General    ajlz0    High    Getting Error Message for Exception System.Web.HttpUnhandledException (0x80004005): Exception of type ‘System.Web.HttpUnhandledException’ was thrown. —> System.ArgumentException: A provided user name was not encoded as a claim. Parameter name: aces —> System.ArgumentException: A provided user name was not encoded as a claim. Parameter name: aces at Microsoft.SharePoint.BusinessData.SharedService.IndividuallySecurableMetadataObjectAccessor.SetAccessControlEntries(MetadataObjectStruct metadataObjectStruct, AccessControlEntryStruct[] aces, String settingId, DbSessionWrapper dbSessionWrapper) at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.<>c__DisplayClass2c.<Microsoft.SharePoint.BusinessData.SharedService.IBdcServiceApplication.SetAccessControlEntries>b__2b() at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.Execute[T](String operationName, UInt32 maxRunningTime, ExecuteDelegate`1 operation) — End of inner exception stack trace — at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplicationProxy.Execute[T](String operationName, UInt32 maxRunningTime, ExecuteDelegate`1 operation, Boolean performCanaryCheck, Boolean isChannelThatDelegatesIdentity) at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplicationProxy.SetAccessControlEntries(MetadataObjectStruct metadataObjectStruct, AccessControlEntryStruct[] aces, String settingId) at Microsoft.SharePoint.BusinessData.Infrastructure.BdcAccessControlList.SaveAs(MetadataObjectStruct metadataObjectStruct, String settingId, BdcServiceApplicationProxy serviceProxy) at Microsoft.SharePoint.BusinessData.Administration.IndividuallySecurableMetadataObject.SetAccessControlList(IAccessControlList acl, String settingId) at Microsoft.SharePoint.ApplicationPages.ManageBDCPermissions.OkButton_Click(Object sender, EventArgs e) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.HandleError(Exception e) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)    858bdd9c-f650-b0dd-7410-7cdb3f5cb1fd

So while the ULS information did put me at ease that I was not going to be shot by Wild Bill, and it helped me understand that the problem was with adding entries to the Access Control Entries (ACEs), I still wasn’t any closer to a resolution. Thinking about the error a bit, though, did help me arrive at a path of investigation. The error specifically mentions a user name that wasn’t “encoded as a claim”. Internally, SharePoint uses Windows claims for user identities between the service applications. Within my farm specifically, my Central Administration web application is configured for classic-mode NTLM authentication and my content web applications are all configured for Windows claims.

On a whim I tried to enter my user name in the People Picker in its claims format, i:0#w|contoso\adamb, but that provided no benefit and resulted in the same series of errors. Thinking about this a bit more, I started wondering why we were looking for a claims-encoded user name and remembered that, although my content web applications were not using SAML claims, I did, in fact, have a SAML claims provider configured in the farm.

So we go on another whim, remove the SPTrustedIdentityTokenIssuer, and try again. This time it worked without errors… neither the People Picker name resolution error nor the “user name provided was not encoded as a claim” error appeared.
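
For reference, a minimal sketch of that removal (the provider name here is a placeholder for whatever your SPTrustedIdentityTokenIssuer is called; note that a provider still bound to a web application must be unbound before it can be removed):

# List the SAML claims providers configured in the farm, then remove one.
Get-SPTrustedIdentityTokenIssuer
Remove-SPTrustedIdentityTokenIssuer -Identity "ADFS Provider"   # placeholder name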

SYNOPSIS

When trying to add a user to the Metadata Store permissions of your BDC service application, if you receive name resolution errors, and more specifically an error stating that the user name provided was not encoded as a claim, you may need to remove any configured SAML claims providers in order to add the users to the permissions list.

401 – Unauthorized Response to Register-SPWorkflowService

My previous post about configuring the Workflow Manager using least privilege is a great start; however, I realized the other day that I was missing something. Thinking about a more realistic scenario, if I decided that I needed high availability, I’d need more than one Workflow Manager (WF) server.

I started doing research and found a great three- or four-part blog series from the well-known Spencer Harbar. In this series, Spencer gives a great explanation of the certificates involved and walks through creating a highly available WF farm using Windows Network Load Balancing (NLB). His explanation describes how the Subject Alternative Name (SAN) field allows us to use virtual names easily.

I’m not going to re-hash the setup of WF in this post, but I did alter the setup in order to use SSL. My configuration for this post includes:

  • a fully configured SharePoint Server 2013 Farm with a host named site collection at https://intranet.contoso.lab
  • a WF Farm consisting of a single server and configured for HTTPS, but I want to register it within SharePoint as a virtual name in order to ease expansion in the future:

    Server: WF01.contoso.lab

    Virtual name: WF.Contoso.lab

So we’ve created a DNS A record for the virtual name wf.contoso.lab, resolving to the IP address of the server wf01.contoso.lab. Next we run the PowerShell cmdlet to register the WF farm with the SharePoint farm, and we expect all of this to work without a problem… because nothing ever fails the first time around, right? Riiiiiiiggghhhtt…

So we log into the SharePoint server as an appropriate account (refer to my previous post), and open a SharePoint 2013 Management Shell and execute the following:

Register-SPWorkflowService -SPSite https://intranet.contoso.lab -WorkflowHostUri https://wf.contoso.lab:12290

The result?

Register-SPWorkflowService : A response was returned that did not come from
the Workflow Manager. Status code = 401:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
<title>401 - Unauthorized: Access is denied due to invalid credentials.</title>
<style type="text/css">
<!--
body{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;}
fieldset{padding:0 15px 10px 15px;}
h1{font-size:2.4em;margin:0;color:#FFF;}
h2{font-size:1.7em;margin:0;color:#CC0000;}
h3{font-size:1.2em;margin:10px 0 0 0;color:#000000;}
#header{width:96%;margin:0 0 0 0;padding:6px 2% 6px 2%;font-family:"trebuchet MS", Verdana, sans-serif;color:#FFF;background-color:#555555;}
#content{margin:0 0 0 2%;position:relative;}
.content-container{background:#FFF;width:96%;margin-top:8px;padding:10px;position:relative;}
-->
</style>
</head>
<body>
<div id="header"><h1>Server Error</h1></div>
<div id="content">
<div class="content-container"><fieldset>
<h2>401 - Unauthorized: Access is denied due to invalid credentials.</h2>
<h3>You do not have permission to view this directory or page using the credentials that you supplied.</h3>
</fieldset></div>
</div>
</body>
</html>
HTTP headers received from the server - WWW-Authenticate: Negotiate,NTLM.
Client ActivityId : e27eaa36-7e61-4511-a767-7cc5bad42e2e.
At line:1 char:1
+ Register-SPWorkflowService -SPSite https://intranet.contoso.lab -WorkflowHostUri …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidData: (Microsoft.Share…WorkflowService:RegisterSPWorkflowService) [Register-SPWorkflowService], AuthenticationException
    + FullyQualifiedErrorId : Microsoft.SharePoint.WorkflowServices.PowerShell.RegisterSPWorkflowService

So basically, even though I was using all the right accounts in my least-privileged configuration — you read my previous post, right? — I was still getting access denied. To save you the time and effort of all the troubleshooting, I will cut to the chase. The following error showed up in the System event log on my WF server:

Log Name:      System
Source:        Microsoft-Windows-Security-Kerberos
Event ID:      4
Level:         Error
User:          N/A
Computer:      wf01.contoso.lab
Description:

The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server wf01$. The target name used was HTTP/wf.contoso.lab. This indicates that the target server failed to decrypt the ticket provided by the client. This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using. Ensure that the target SPN is only registered on the account used by the server. This error can also happen if the target service account password is different than what is configured on the Kerberos Key Distribution Center for that target service. Ensure that the service on the server and the KDC are both configured to use the same password. If the server name is not fully qualified, and the target domain (CONTOSO.LAB) is different from the client domain (CONTOSO.LAB), check if there are identically named server accounts in these two domains, or use the fully-qualified name to identify the server.
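
As an aside, a quick way to see which account actually holds that SPN is setspn (the -Q switch searches the forest for the named SPN):

setspn -Q HTTP/wf.contoso.lab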

After consulting with a few peers, they immediately suspected kernel-mode authentication in IIS. As a test, we disabled that option on the WF website and reset IIS (to refresh Kerberos tickets), and registering the SPWorkflowService in SharePoint then worked flawlessly. However, there are advantages to using kernel-mode authentication in IIS 7.0 and later, so we don’t necessarily want to disable it completely.

The explanation of what’s happening…

With kernel-mode authentication enabled, IIS uses http.sys to decrypt Kerberos tickets, and it does so under the context of the machine account (notice the reference to ‘wf01$’ in the event log error above). When this happens with an application pool that is running under specific account credentials, the Kerberos tickets cannot be decrypted and used.

Instead of disabling kernel mode authentication, we have another option to allow the application pool credentials to be used to decrypt Kerberos tickets – useAppPoolCredentials. The command for this is as follows:

appcmd.exe set config "Workflow Management Site" -section:system.webServer/security/authentication/windowsAuthentication /useAppPoolCredentials:"True" /commit:apphost
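
If you prefer PowerShell, the WebAdministration module offers what should be an equivalent (a sketch, assuming the IIS site is named “Workflow Management Site”):

# Same setting via the IIS WebAdministration module.
Import-Module WebAdministration
Set-WebConfigurationProperty -PSPath "IIS:\Sites\Workflow Management Site" `
    -Filter "system.webServer/security/authentication/windowsAuthentication" `
    -Name useAppPoolCredentials -Value $true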

After running this, with kernel-mode authentication still enabled on the Workflow Management Site, the registration cmdlet should succeed. Note that if you ran the registration command previously and it failed, you may need to add the -Force parameter in order to overwrite the previously failed registration.
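
For example, re-running the registration from earlier:

Register-SPWorkflowService -SPSite https://intranet.contoso.lab -WorkflowHostUri https://wf.contoso.lab:12290 -Force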

As with the majority of the content I produce, I could not have done this alone (or at least not as quickly). Kudos out to a couple of peers:

  • Dean Cron (fellow PFE)
  • Xuehong Gan
  • Vishal Bajal

SELECT Permission Denied When Consuming User Profile Service Application

**Updated to include officially supported method**

While setting up a repro scenario a month or so ago, I encountered an interesting issue with consuming the User Profile Service Application (UPSA). And by interesting, of course, I mean maddening…

It appears that the documentation around how to consume a shared User Profile Service Application is not quite complete. The scenario is that I have a SharePoint Server 2010 farm consuming the UPSA from a SharePoint Server 2013 services farm. As you are aware, there are six services that we can share (publish) so that other farms can consume them and make use of their information.

With the UPSA, there is one step that is slightly different from the other shared service applications and that is the following note:

[Image: UPSA_GrantWebAppInstructions]

Now once you add the identity of the content web application from the 2010 farm where you want to use the UPSA, you might expect all of this to “just work”… however it doesn’t. In fact, when you browse to your 2010 content web application you may be greeted with the all too familiar:

[Image: UnexpectedError]

At this point we take a look in the ULS logs and find the following:

04/01/2014 17:01:40.31    w3wp.exe (0x29B4)    0x47E8    SharePoint Server    Database    880i    High    System.Data.SqlClient.SqlException: The SELECT permission was denied on the object ‘Versions’, database ‘aProfile’, schema ‘dbo’.     at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)     at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)     at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)     at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()     at System.Data.SqlClient.SqlDataReader.get_MetaData()     at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)     at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)     at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)     at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)     at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)     at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior)     at Microsoft.Office.Server.Data.SqlSession.ExecuteReader(SqlCommand command, CommandBehavior behavior, SqlQueryData monitoringData, Boolean retryForDeadLock)    705298ec-2b50-4c69-b8bd-0037eaa0a688

04/01/2014 17:01:40.31    w3wp.exe (0x29B4)    0x47E8    SharePoint Server    Database    880j    High    SqlError: ‘The SELECT permission was denied on the object ‘Versions’, database ‘aProfile’, schema ‘dbo’.’    Source: ‘.Net SqlClient Data Provider’ Number: 229 State: 5 Class: 14 Procedure: ” LineNumber: 1 Server: ‘sql1\sp2013,57590’    705298ec-2b50-4c69-b8bd-0037eaa0a688

So there seem to be many ways to work around this issue, but I cannot find anything documented as the “official” way. I am continuing to investigate whether we can get official guidance on this, but for now you should be able to get past it with whichever method works for you:

**Officially supported resolution**

**Unsupported, but alternate resolutions**

Note that I’m showing you how to do this through SQL Management Studio. You can accomplish the same through T-SQL; a sketch of that follows the first walkthrough below.

HowTo directly grant the SELECT permission to the SPDataAccess role

  1. Open SQL Management Studio and connect to the SQL instance that is hosting your SP2013 UPSA Profile database.
  2. Expand the User Profile database, expand Security, expand Roles, expand Database Roles, right-click SPDataAccess, and click Properties.
  3. Click Securables in the “Select a page” pane in the upper left of the dialog window.
  4. Click Search on the left side and click OK (leaving the default radio button set to “Specific objects…”).
  5. Click Object Types, click to select Tables, click OK, type Versions, and click OK.
  6. Click to select the entry for “[dbo].[Versions]” and click OK.
  7. You should now see the Versions table listed in the Securables list. Click to select it and click to select the Grant column of the Select row in the “Permissions for dbo.Versions” list at the bottom.
  8. Click OK to save the changes.

[Image: UPSA_GrantSelectOnVersionsToSPDataAccess]

You may need to execute an IISReset on your 2010 web front end, but at this point your web application should load and your User Profile information (including MySite configuration) should be getting serviced from the 2013 UPSA.
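
For those who would rather script the first approach, a minimal T-SQL sketch via Invoke-Sqlcmd (the instance and database names are lifted from the log entries above; adjust for your environment):

# Grant SELECT on the Versions table to the SPDataAccess role.
Invoke-Sqlcmd -ServerInstance "sql1\sp2013,57590" -Database "aProfile" `
    -Query "GRANT SELECT ON [dbo].[Versions] TO [SPDataAccess];"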

HowTo add your web application ID to db_datareader

  1. Open SQL Management Studio and connect to the SQL instance that is hosting your SP2013 UPSA Profile database.
  2. Expand the User Profile database, expand Security, expand Users, right-click your 2010 web application ID, and click Properties.
  3. Click Membership in the “Select a page” pane in the upper left of the dialog window.
  4. Click to select db_datareader in the “Database role membership” pane and click OK.

[Image: UPSA_GrantDataReader]

As with the first method, you may need to execute an IISReset on your 2010 web front end, but at this point your web application should load with its User Profile information (including MySite configuration) serviced from the 2013 UPSA.

HowTo directly grant the SELECT permission to your web application ID

  1. Open SQL Management Studio and connect to the SQL instance that is hosting your SP2013 UPSA Profile database.
  2. Expand the User Profile database, expand Tables, right-click dbo.Versions, and click Properties.
  3. Click Permissions in the “Select a page” pane in the upper left of the dialog window.
  4. Click Search on the left side and either browse to locate or type in the desired web application ID and click OK.
  5. Now that your web application ID is listed, be sure it is highlighted (selected) and in the lower pane select the “Grant” column of the “Select” row and click OK.

[Image: UPSA_GrantSelectOnVersions]

Once again, an IISReset on your 2010 web front end may be needed, after which the web application should load with its User Profile information serviced from the 2013 UPSA.

Export-SPWeb Syntax Changes Between Root Site and Sub Sites

I was working on a somewhat related issue today and needed to test something with Export-SPWeb. We were trying to import a list from an STP (list template) and, in order to get there, I decided to export one of my own lists and play around a bit. Much to my surprise, I was unable to export a custom list from my lab environment. I kept getting the following error:

export-spweb : The URL provided is invalid. Only valid URLs that are site collections or sites are allowed to be exported using stsadm.exe.
At line:1 char:1
+ export-spweb $web -Path exported_list.cmp -ItemUrl "/lists/testlist" -Force
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidData: (Microsoft.Share…CmdletExportWeb:SPCmdletExportWeb) [Export-SPWeb], SPException
    + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletExportWeb

It was quite maddening as the parameters are not that difficult to figure out. (get-help export-spweb)

So I tried many things and discovered that in a second lab environment it worked flawlessly, and I was determined to discover why. The first thing a peer of mine found for me was that in the failing environment, the web from which I was trying to export the custom list was in a site collection under the ‘sites’ managed path (in other words, not the root site collection). The working environment was using the root site collection.

To cut to the chase, you may not be able to use the same syntax depending on where the list resides. The parameter that kept failing for me was ItemUrl, which represents the URL of the list you would like to export. Basically, if the site collection is the root site collection, then the ItemUrl should contain a leading “/”, like this:

Export-SPWeb http://intranet.contoso.lab -Path exported_list.cmp -ItemUrl "/Lists/TestList"

However, if the site collection is not the root, then you should include either the full server-relative URL to the list or the web-relative URL to the list *without* the leading slash:

Export-SPWeb http://intranet.contoso.lab/sites/TestImport -Path exported_list.cmp -ItemUrl "/sites/TestImport/Lists/TestList"

or

Export-SPWeb http://intranet.contoso.lab/sites/TestImport -Path exported_list.cmp -ItemUrl "Lists/TestList"

If you care to know, under the covers the Export-SPWeb cmdlet calls SPWeb.GetList(), but prior to calling that method it prepends SPWeb.ServerRelativeUrl and an extra “/” if the string in ItemUrl does not have a leading “/”. In short, to be safe, if you always set ItemUrl to the full server-relative URL of the list, it should work regardless of where the web lives.
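
To illustrate, here is a hypothetical PowerShell sketch of that normalization logic (my reconstruction of the behavior described above, not the actual product code):

# Mimics how Export-SPWeb appears to build the URL it passes to SPWeb.GetList().
function Resolve-ItemUrl($web, $itemUrl) {
    if ($itemUrl.StartsWith("/")) {
        return $itemUrl                  # used as-is, so it must be server-relative
    }
    # Web-relative form: prepend the web's server-relative URL and a slash.
    return ($web.ServerRelativeUrl.TrimEnd("/") + "/" + $itemUrl)
}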