Test-SPContentDatabase Reports Valid Web Parts as ‘Missing’

While working on a migration for a customer, we were combing through the results of Test-SPContentDatabase in preparation for moving their SharePoint 2010 content into a new SharePoint 2013 environment. We were systematically reviewing and repairing all the common errors we found, like MissingFeatures, MissingSetupFiles, and MissingWebParts. It was the last of these that gave us a bit of a problem: even after running the common scripts for removing missing web parts, which normally show up as an error, these entries stubbornly remained in the report.

Category        : MissingWebPart
Error           : True
UpgradeBlocking : False
Message         : WebPart class [8dd36a66-e8d0-c735-2173-b3cf93383598] (class
                  [_2010_VisualWebPartProject.VisualWebPart1.VisualWebPart1]
                  from assembly [2010_VisualWebPartProject, Version=1.0.0.0,
                  Culture=neutral, PublicKeyToken=7552893a02bae51b]) is
                  referenced [2] times in the database [WSS_Content], but is
                  not installed on the current farm. Please install any
                  feature/solution which contains this web part.
Remedy          : One or more web parts are referenced in the database
                  [WSS_Content], but are not installed on the current farm.
                  Please install any feature or solution which contains these
                  web parts.

The end result is complicated by a couple of factors:

  1. The version of SP2010 that I was working with was 14.0.4762.1000… yes, the RTM build
  2. The web parts that were being stubborn were sourced from Sandbox Solutions.

The combination of the above two items actually causes Test-SPContentDatabase to report false positives for these web parts... even after you remove the sandbox solution. Yes, folks, you heard that correctly: if you have the RTM build of SP2010 and remove your sandbox solutions that contain web parts, Test-SPContentDatabase will still report these web parts as missing.

I did not test every build to find out exactly where the behavior changed, but at some point before the release of Service Pack 1, Test-SPContentDatabase was altered to ignore sandbox solutions, so you no longer get an error for these web parts. There may also be a lingering problem where, after removing the solution from the solution gallery and the web parts from the web part gallery, the reference still exists in the AllWebParts table, but I can neither confirm nor deny that theory at this time.. Smile
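
If you want to poke at that theory yourself, a read-only sketch is below. Querying a content database directly is unsupported, and the tp_WebPartTypeId column name is my recollection of the 2010-era AllWebParts schema, so treat both as assumptions; server and database names are placeholders:

# Unsupported, read-only diagnostic: count web part references by type ID so
# they can be matched against the GUID in the Test-SPContentDatabase message.
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "SQL01" -Database "WSS_Content" -Query @"
SELECT tp_WebPartTypeId, COUNT(*) AS ReferenceCount
FROM dbo.AllWebParts
GROUP BY tp_WebPartTypeId
"@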

My final thoughts on this?

If you are upgrading from SP2010 RTM to 2013, then you can ignore MissingWebPart warnings/errors from Test-SPContentDatabase as long as you confirm they refer to sandbox solutions.
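
To isolate just these entries for review, a quick sketch (the web application URL and database name are placeholders):

# List only the MissingWebPart results so each one can be traced back to a
# (removed) sandbox solution before being dismissed.
$results = Test-SPContentDatabase -Name "WSS_Content" -WebApplication "http://sharepoint"
$results | Where-Object { $_.Category -eq "MissingWebPart" } | Select-Object Error, UpgradeBlocking, Message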


New phase of life/career!

I'm excited to start a new phase of my career at Microsoft and hope to continue posting information as I find it. Throughout my time here I've been in our Support/Services organizations assisting our customers with planning, implementing, and maintaining a wide variety of Microsoft products, primarily Internet platforms like BizTalk Server and SharePoint Server.

While the majority of my work has been architectural or administrative in nature, there are always components of development, whether it is writing sample code for a customer or writing utility tools to assist with diagnosing and resolving issues. So I joined our Premier Developer team in November of 2015 as a Senior Dev Consultant in order to push myself further into a developer role.

As I'm sure most of you know, the trends are moving away from on-premises products and toward cloud and mobile technologies. Development is one of those areas where you can always have an impact, so I'm looking forward to the new role and to the future posts I'll be able to make ... hoping they help you as much as they help me!  Smile

Workflow Manager 1.0 Refresh Disaster Recovery (further) Explained

With the release of SharePoint 2013, Microsoft released a new platform for workflows called Workflow Manager (WFM). As of this writing the current version is 1.0 Cumulative Update 3. Unfortunately, disaster recovery (DR) for this product is not as straightforward as just setting up database replication.

Following is a list of resources I've used to implement disaster recovery:

I found that each of the above references holds vital clues to making DR for WFM work, but none of them covered the details on which I was stumbling. There are two basic areas where I needed to do additional research:

    • Certificates (which ones to use where and how to restore effectively)
    • Changing service accounts and admin groups upon a failover

As pointed out, there are plenty of TechNet articles and blogs that talk about how to do WFM disaster recovery, so I am not going into detail on the individual steps, but I decided to document my discoveries in the hope that others can benefit from my experiences.

So, at a high level, the basic operation is as follows. I’ll have sections below describing each of the areas where I had concerns:

    • Install production WFM and configure
    • Configure your backup/replication strategy for the WF/SB databases
    • Install WFM in DR
    • Execute the failover process
    • Re-connect SharePoint 2013
    • (Optional) Changing RunAsAccount and AdminGroup

Install Production WFM and Configure

Certificates – Auto-Generated or Custom Certs?

Installing WFM 1.0 CU3 is fairly well documented in several places, but the one piece that I feel needs to be called out is certificate configuration. There are options to auto-generate your certificates (self-signed), to use your own domain certificates, or to use certs acquired from a 3rd party certificate authority. Some businesses have no restrictions against self-signed certs, but this choice will affect your restoration of service in the DR environment. As noted in Spencer's blog, there are a total of six or seven possible certificates, and auto-generating your WFM certificates will dictate your restoration process in a failover scenario. One reason for this is that the WorkflowOutbound certificate is created with a private key, but that key is marked non-exportable.
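
Before committing to a certificate strategy, it can help to see what is already on a WFM host. The sketch below is only a first check, and the subject filters are assumptions about how your certs happen to be named; HasPrivateKey alone does not prove the key is exportable:

# List certificates that look like WFM/SB certs, with whether a private key is
# present (actual exportability only surfaces when you attempt the export).
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*Workflow*" -or $_.Subject -like "*Service Bus*" } |
    Select-Object Subject, Thumbprint, HasPrivateKey, NotAfter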

Configure Your Backup/Replication Strategy for the WF/SB Databases

The key to disaster recovery with WFM (as with many products) is the data store. In this case we are referring to the SQL Server databases. Again, this information is in the related links and there are two things to keep in mind:

  1. You can use pretty much any replication method (backup/restore, mirroring, log shipping) except for SQL Server 2012 AlwaysOn, which is unsupported at this time. It is also crucially important to keep the WF/SB database backups as close in time as possible to the content database backups in order to preserve WF instance integrity.

UPDATE: With the release of Workflow Manager CU 4, SQL AlwaysOn is now supported and should be considered as the High Availability/Disaster Recovery solution. You can find information on CU4 here. And you can find installation information here.

  2. You do not need to back up the management databases, WFManagementDb and SBManagementDb, as they will be re-created during the recovery process.

Install WFM in DR

Depending on whether you want a cold or warm standby WFM farm, you will either have already installed the servers or will perform this as part of your recovery process. NOTE: WFM does *not* support a hot standby configuration. There are a couple of keys to your DR installation:

  • You will install the bits on the DR app servers, but you will *not* configure the product at this time.
  • If you are choosing to do a warm standby, then you may also import the necessary certificates ahead of time.
    • If you are using:
      • Auto-generated certificates, then you need to export/import the Service Bus certificates from Prod to DR, while the Workflow Manager certificates can simply be auto-generated again in DR (remember, you cannot export/import the WF certificates because their private keys are marked non-exportable; see the import sketch after this list)
      • Custom domain certificates, then you will export/import all of them from Prod to DR
  • The Service Bus root certificate should be imported into the LocalMachine\TrustedRootAuthorities store.
  • The other Service Bus certs should be imported into the LocalMachine\Personal store.
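
For a warm standby, the pre-import might look like the sketch below; the file names and password handling are placeholders, and the store mapping follows the bullets above:

# Import the exported Service Bus certificates on each DR node.
$pfxPassword = Read-Host -AsSecureString "PFX password"
# The root certificate goes to Trusted Root Certification Authorities (Cert:\LocalMachine\Root).
Import-PfxCertificate -FilePath "C:\DR\SBRootCert.pfx" -CertStoreLocation Cert:\LocalMachine\Root -Password $pfxPassword
# The other Service Bus certs go to the Personal store (Cert:\LocalMachine\My).
Import-PfxCertificate -FilePath "C:\DR\SBFarmCert.pfx" -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword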

Executing the Failover Process

In the event of a disaster (or just a need to failover), the following process is required.

  1. Restore the 4+ SQL databases (WFResourceManagementDb, WFInstanceManagementDb, SBGatewayDatabase, SBMessageContainer01 - n) from prod_SQL to dr_SQL.
  2. Assuming the steps above were followed to install WFM in DR, use PowerShell to restore the SB farm (see the sketch after this list). If you were doing a true 'cold standby', then you first need to install (but not configure) the SB/WF bits from Web Platform Installer.
  3. Restore the SBFarm, SBGateway, and MessageContainer databases and settings (do this on only one WFM node).
      • The SBManagementDB will be created in DR during this 'restore' process
      • The RunAsAccount *must* be the same as the credentials used in production
  4. Again using PowerShell, run Add-SBHost on each node of the farm.
  5. If you used auto-generated certificates for the WFFarm in prod, then when you restore the WFFarm you will auto-generate new ones. However, this also means that you may need to restore the PrimarySymmetricKey to the new SBNamespace.
  6. At this point, restore the WFFarm using PowerShell (do this on only one WFM node).
  7. Run Add-WFHost on each node of the farm.
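
For orientation, here is a rough sketch of steps 2 through 4 in PowerShell. The cmdlet names come from the Service Bus module, but I am only showing the connection-string and account parameters; the full parameter sets (certificate thumbprints, keys, ports) vary by CU, so check Get-Help for each cmdlet in your build. Server, database, and account names are placeholders:

# Placeholder connection strings for the restored DR databases.
$sbFarmDb  = "Data Source=DRSQL;Initial Catalog=SBManagementDB;Integrated Security=True"
$gatewayDb = "Data Source=DRSQL;Initial Catalog=SBGatewayDatabase;Integrated Security=True"
$msgDb     = "Data Source=DRSQL;Initial Catalog=SBMessageContainer01;Integrated Security=True"

# Steps 2-3: on ONE node only (this recreates SBManagementDB in DR).
Restore-SBFarm -RunAsAccount "CONTOSO\sbservice" -SBFarmDBConnectionString $sbFarmDb -GatewayDBConnectionString $gatewayDb
Restore-SBGateway -GatewayDBConnectionString $gatewayDb -SBFarmDBConnectionString $sbFarmDb
Restore-SBMessageContainer -Id 1 -ContainerDBConnectionString $msgDb -SBFarmDBConnectionString $sbFarmDb

# Step 4: on EVERY node in the farm.
Add-SBHost -SBFarmDBConnectionString $sbFarmDb -RunAsPassword (Read-Host -AsSecureString "RunAs password")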

At this point, the new WF Farm should be in a working state. You can test this by navigating to the endpoint in a browser and you should receive output similar to the image below:

[Screenshot: Workflow Manager endpoint response in the browser]

Re-connect SharePoint 2013

If WF certificates were re-generated in DR, then you will need to recreate the SharePoint Trusted Root Authority. Export the WF SSL certificate and add it to the SharePoint farm using New-SPTrustedRootAuthority.

Create a new registration to the Workflow farm using Register-SPWorkflowService.
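
A compact sketch of those two steps (URLs, file path, and trust name are placeholders; -Force on Register-SPWorkflowService replaces the existing registration that pointed at the production farm):

# Re-trust the DR Workflow farm's SSL certificate in SharePoint.
$wfCert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\DR\WFM-SSL.cer")
New-SPTrustedRootAuthority -Name "WFM DR SSL" -Certificate $wfCert
# Point SharePoint at the DR Workflow farm endpoint (12290 is the default HTTPS port).
Register-SPWorkflowService -SPSite "https://sites.contoso.com" -WorkflowHostUri "https://drwfm.contoso.com:12290" -Force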

There is a cache of security trusts, so in order to see the change immediately you will likely need to execute the timer job "Refresh Trusted Security Token Services Metadata feed" with the following PowerShell:

Start-SPTimerJob -Identity 'RefreshMetadataFeed'

(Optional) Changing RunAsAccount and AdminGroup

Summary

The process above should work in most (if not all) scenarios, but I welcome any comments if you encounter problems or challenges. I've spent many hours on this, off and on, over the past 6 months and it's very possible that I've missed something. Smile

I'll add the last section, about changing service accounts, once I have the complete set of steps for the WF accounts. Service Bus added PowerShell cmdlets for this, which makes it easier, but Workflow Manager has not as of yet.

UPDATE: With the release of Workflow Manager CU 4, one can now change the credentials for the Workflow Manager service with the Set-WFCredentials PowerShell cmdlet. You can find information on CU4 here. And you can find installation information here.

Configuring UserPhotoExpiration for User Profile Photo Sync between Exchange 2013 and SharePoint 2013

Following on my previous post about different user profile photo options for SharePoint 2013, I wanted to expand on some research I had done for one of my customers regarding the expiration values. There are a couple of scarcely documented* properties that also affect when a user's photo is re-synchronized from Exchange 2013 instead of just served from the cached photo in SharePoint 2013.

To cover the basics, I used this blog to configure the integration between SharePoint 2013 and Exchange 2013 for user photos. There are a few SPWebApplication properties that are set here:

  • UserPhotoImportEnabled – this property defines if SharePoint should import photos from Exchange
  • UserPhotoExpiration – this property defines (in hours) how long the photo in the user photo library of the MySite host should be considered valid before attempting to synchronize a potentially updated photo
  • UserPhotoErrorExpiration – this property tells SharePoint that if it encountered an error attempting to retrieve a new photo less than 'this many' hours ago, then it should not attempt again

These are fairly well-known properties, but there are a couple of others that affect how often or *if* your user photo sync will happen. These additional properties are contained in the web application property bag:

  • DisableEnhancedBrowserCachingForUserPhotos – (default: not present) If this property is set to ‘false’ or is not present, then SharePoint will bypass the blob cache. If the property is set to ‘true’, then SharePoint will check the timestamp and will bypass the blob cache if the timestamp passed in is within 60 seconds of the current time
  • AllowAllPhotoThumbnailsTriggerExchangeSync – (default: not present) If this property is set to ‘false’ or is not present, then SharePoint will only trigger a sync with Exchange if the thumbnail being requested is the Large thumbnail.

So what I want to do below is walk through a series of steps that I took in my lab to illustrate how these properties work.

Anne accesses her profile page to change her photo and it properly redirects her to Outlook Web App (OWA) where she can upload her latest professionally taken headshot (photo1.jpg). Upon completion, she returns to her profile page and sees the new photo. She also navigates to OWA and sees the photo there as well.

Adam navigates to Anne’s profile page and sees the recently uploaded photo.

Anne decides that she wants to upload a different photo (photo2.jpg) and does so through OWA instead of through her profile page. In this case photo2.jpg does not show up immediately in her profile page and she and other users are still seeing photo1.jpg; however OWA is showing photo2.jpg.

Why is SharePoint not updating the photo?

Basically, it depends on the settings above combined with the method used to change the photo. In the above scenario, Anne changed her photo the second time via OWA. SharePoint has no way to know that the photo changed until its cached photo expiration (UserPhotoExpiration) value has passed, measured from the first time the photo was synchronized. Even after the photo expiration has passed, some action still has to trigger SharePoint to check. In this case, Anne navigating to her profile page (since it requests the large thumbnail) should trigger SharePoint to evaluate the expiration values and, if needed, re-sync Anne's photo.

Why wouldn’t I reduce the UserPhotoExpiration value to 0 hours?

I’m sure that in some installations this would not be a problem, but the point of a cache (in this case SharePoint’s user photo library) is to reduce round-trips to the authoritative source of data. You likely do *not* want SharePoint to be contacting Exchange every time someone accesses their profile photo.

How do I set these properties?

In PowerShell, of course! For the first three properties above there are examples within this blog, but I'll duplicate them here along with the other two. To reiterate, these values are for the sake of example and you should do your own testing to find out what works for your environment:

$wa = Get-SPWebApplication https://my.contoso.lab
$wa.UserPhotoImportEnabled = $true
$wa.UserPhotoExpiration = 6
$wa.UserPhotoErrorExpiration = 1
$wa.Properties.Add("DisableEnhancedBrowserCachingForUserPhotos", "true")
$wa.Properties.Add("AllowAllPhotoThumbnailsTriggerExchangeSync", "true")
$wa.Update()
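
One caution: Properties.Add() will throw if the key already exists in the property bag. If you are updating values that are already present, indexer-style assignment (which adds or updates, assuming the standard Hashtable behavior of the SPWebApplication Properties bag) is a safer sketch:

$wa.Properties["DisableEnhancedBrowserCachingForUserPhotos"] = "true"
$wa.Properties["AllowAllPhotoThumbnailsTriggerExchangeSync"] = "true"
$wa.Update()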


* – I say scarcely documented because I could find very few references to DisableEnhancedBrowserCachingForUserPhotos or AllowAllPhotoThumbnailsTriggerExchangeSync that were related to the topic. I did, however, find one that was helpful in directing my research: http://sharepoint.sigicom.ch/Blog/Beitrag/2/Profile-Picture-Cache. And since I am not fluent in German, I'm thankful for the translation tools we have available on the Internet.

SharePoint 2013 Shredded Storage vs. RBS

**THIS IS A REPOST**

It's important to note that sometimes information needs to be re-posted. This post was originally created by a peer of mine at Microsoft, Chris Mullendore, on his Microsoft blog. When he moved on, the post disappeared and the rest of us mere mortals FREAKED... so we found the content and I've reposted it here so that we can all find it again. Thank you, Chris!

SharePoint 2013 has introduced significant enhancements in how file content is stored and retrieved in the database. Specifically, files are now split into many parts ("shredded") and stored in individual rows in the content database. This is related to the Cell Storage API introduced in 2010 that supported things like co-authoring in SharePoint 2010 and the Office Web Applications, but it is gaining prominence in SharePoint 2013 because the file-splitting capability has been pushed beyond an over-the-wire transfer mechanism and into the SharePoint databases themselves, as a feature called Shredded Storage.

There are already more than enough blog entries (http://blogs.technet.com/b/wbaer/archive/2012/11/12/introduction-to-shredded-storage-in-sharepoint-2013.aspx) describing shredded storage itself in detail. Suffice it to say, for the purposes of this blog entry, all files will be shredded into either 64K or 1MB (1024K) chunks, depending on the type of file. File types that SharePoint understands and interacts with directly (Office files) receive the 64K treatment; other file types are sliced into 1MB chunks. While I don't know the exact reasons for these sizes, I do believe they make sense given the purposes of the Cell Storage API, what is likely to be versioned and what is not, and the difference in use cases for Office documents vs. other files like binary (ZIP) or large media files.

Where things get really interesting (and where we reach the purpose of this blog) is how the Shredded Storage functionality interacts with Remote Blob Storage (RBS), which was introduced in SharePoint 2010 and continues to exist in 2013. If you recall, RBS is best used to push relatively large files out of the SharePoint content database(s) and into an actual file system. Although any size file can be configured to be pushed (and in fact the default setting is "0", or "push all files to RBS"), smarter people than me have done testing and indicate that the benefits of RBS are most valuable with somewhat larger files, while using RBS for smaller files can actually reduce performance.

So… let's think about these two features for a moment…

  • RBS works best with larger blobs.
  • Shredded Storage slices larger blobs into a lot of very small blobs.

Time for a few examples…

  • Scenario 1: Word document, 10K, all text. RBS Threshold = 0K
    This file would be placed in a single shred and pushed to the RBS store.
  • Scenario 2: Word document, 10K, all text. RBS Threshold = 1MB
    The file would be placed in a single shred and would be stored in the database.
  • Scenario 3: Word document, 5MB, all text. RBS Threshold = 0K
    The file would be placed into numerous ~64K shreds, pushed to RBS, resulting in ~80 RBS chunks.
  • Scenario 4: Word document, 5MB, all text. RBS Threshold = 1MB
    The file would be placed into numerous ~64K shreds and would be stored in the database.

Your 5MB file would never make it to RBS because RBS cares about the size of the shred, not the total size of the file!

Uh oh… we have an apparent conflict. Our Word document will never make it to RBS. However, RBS isn’t completely negated… it still plays a role in files that SharePoint doesn’t provide direct interoperability with. For example, in a Word document SharePoint plays a role in versioning, co-authoring, metadata integration, etc. For other, non-integrated file types however, SharePoint will still use shreds and RBS… just not in the way you might think.

Let's do the same scenarios using a ZIP file instead, this time changing the higher RBS threshold to 2MB:

  • Scenario 1: ZIP File, 10K. RBS Threshold = 0K
    The file would be placed in a single shred and pushed to the RBS store.
  • Scenario 2:  ZIP File, 10K. RBS Threshold = 2MB
    The file would be placed in a single shred and would be stored in the database.
  • Scenario 3:  ZIP File, 10MB. RBS Threshold = 0K
    The file would be shredded to 1MB chunks and pushed to RBS.
  • Scenario 4:  ZIP File, 10MB. RBS Threshold = 2MB
    The file would be shredded to 1MB chunks and pushed to RBS.

Now things should be very confusing. Neither SharePoint nor RBS cooperated with our RBS settings! The curiosities continue in the following example…

Same scenario, but using a Word document with 1MB of text and a single 5MB embedded image for a total of 6MB…

  • Scenario 1: RBS Threshold = 0K
    The file will be split into ~16 ~64K shreds and 5 1MB shreds, all pushed to RBS.
  • Scenario 2:  RBS Threshold = 2MB
    The file will be split into ~16 ~64K shreds which will be stored in the database and 5 1MB shreds pushed to RBS.
  • Scenario 3:  RBS Threshold = 10MB
    The file will be split into ~16 ~64K shreds which will be stored in the database and 5 1MB shreds pushed to RBS.

You should now be either thoroughly confused or noticing a pattern. Here is the basic summary for the behavior:

  • Content that SharePoint understands will be shredded normally, and those shreds will be pushed to RBS or not depending on your RBS threshold.
  • Content that SharePoint does not directly understand or interoperate with will always be shredded into 1MB chunks and will always be pushed to RBS if RBS is enabled. Your RBS threshold will be ignored.
  • Mixed content that SharePoint understands will be broken down during the shredding process; each piece will fall into one of the two categories above and will be handled according to the rules for that type.

Believe it or not, this does actually make sense… but to make sense of it we have to understand the critical differences between the two file types.

  • Office documents are…
    • …likely to be heavily versioned.
    • …likely to have only a small portion of the content change at each version.
    • …reliant on SharePoint/Office ability to do things like co-authoring that are much faster if only the changes/deltas are saved or transferred over the wire.
  • Non-integrated files (images, media, ZIP, etc.) are…
    • … likely to be replaced entirely if versioned. Incremental or cell-based updates are impossible.
    • …not deeply integrated into SharePoint itself, and SharePoint does not alter the contents or attributes of those files.
    • …typically managed through applications that do not understand or directly interact with SharePoint.
      (For the purposes of this conversation, any file type that SharePoint does not provide direct interoperability with should be considered “non-integrated”. For example, a large TXT file uploaded into SharePoint will be managed according to the non-integrated rules despite the fact that it is still a text based format.)

Essentially (and I am taking an educated guess here), the SharePoint product group looked at the significant difference between these two types of files and made their own determination about what was best for each. Office document shredding focuses on many small, incremental changes between versions that can easily be transferred independently to/from the SharePoint server. Binary file shredding focuses on chunk sizes that strike a reasonable balance between the benefits and the cost of breaking apart and re-assembling binary BLOBs through RBS. Further, the PG went so far as to look inside Office documents, separating the editable (and therefore highly likely to change) text from any embedded binary (and therefore either unlikely to change, or likely to change completely) elements, and managing both types of content appropriately even when they appear in the same file.

What should you do about this?

First, we need to be clear about the goals and situations that each solution does well in and is intended to address:

  • Shredded storage defaults to 64K for integrated files and 1MB for non-integrated files and reduces SQL database growth caused by file versioning.
  • RBS works better with fewer, larger files, and moves files out of the database.

Clearly keeping the SQL database file size down is a good thing (and is probably why you deployed RBS to begin with). Otherwise, our primary goal is to find a balance between how SharePoint operates and how RBS is configured.

The answer is simple: Set RBS to a 1MB threshold.

This setting is consistent with how SharePoint operates when RBS is enabled and with previous guidance on how to configure RBS such that only large files are pushed to the RBS store. Configuring the RBS threshold to 1MB aligns the environment with SharePoint’s default behavior, but more importantly it ensures that the small 64K shreds are not pushed to the RBS store. Given SharePoint’s behavior, this is the best balance one can strike in the RBS vs. Shredded Storage design. This lets the small, integrated file shreds continue to remain in the database where differential versioning can be effective while maintaining consistency with the large files that are likely to be pushed to RBS by shredded storage anyway.
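
For reference, here is a sketch of applying that threshold through the object model. The property names come from SPContentDatabase.RemoteBlobStorageSettings as I recall them, the database name is a placeholder, and the size is in bytes, so verify against your environment first:

$cdb = Get-SPContentDatabase "WSS_Content"
$rbs = $cdb.RemoteBlobStorageSettings
$rbs.MinimumBlobStorageSize = 1MB   # PowerShell expands 1MB to 1048576 bytes
$cdb.Update()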

The side effect to this configuration is that your databases will grow faster than if all files were pushed to RBS, and this may cause some concern for some people. However, this configuration can dramatically increase the actual storage density inside of the database, particularly for heavily versioned files, while avoiding the performance overhead of retrieving thousands of 64K files through RBS for a single document. It really is the perfect balance… the best of both worlds… a good thing.

Oh… about that FileChunkWriteSize thing…

Yes, you can force shredded storage to behave as if it were disabled (even though it's not) by setting the FileChunkWriteSize property to some incredibly large value such as 2GB. This would have the intended effect of forcing SharePoint to store the entire file in a single shred, which would then exceed your RBS threshold, and SharePoint would dutifully push the file to RBS as desired. However, modifying this setting comes with significant side effects. For example, by setting FileChunkWriteSize to a large value you defeat the opportunity for SharePoint to increase your storage density for versioned files. Think of it this way: the larger the chunk size, the larger each stored block will be, and the fewer blocks there will be. The fewer blocks there are, the fewer identical blocks there can be, and the lower your storage density.

Let's go through one of those scenarios again for demonstration. Imagine we have a 100MB file that has a single value updated daily (Excel Services can easily create this kind of scenario) in a library with versioning enabled and no maximum version count (a frighteningly common configuration). That means 365 updates over the course of a year.

  • With FileChunkWriteSize = 2GB total storage would be 100MB x 365 days = 36,500MB (or 35.6GB!)
  • With FileChunkWriteSize = 20MB total storage would be 100MB + (20MB x 364) = 7,380MB (or 7.2GB)
  • With FileChunkWriteSize = 64K (the default value) total storage for the file would be 100MB + (64KB x 364) = 122.75MB

The net effect being that leaving that file in the database and allowing shredded storage to do its job will actually reduce storage growth over a single year by over 99% without incurring any of the overhead of pushing that file to RBS! Remember that the purpose of RBS is not to reduce database size… it is to move the non-transactional data to less expensive storage as a means of reducing storage cost. Shredded storage can be better for highly versioned content than RBS because it reduces actual storage used. Depending on your scenario and content, not consuming storage at all can be much more cost effective than consuming a lot of somewhat less expensive storage.

Ultimately, increasing FileChunkWriteSize is the complete opposite of value. Just say no.

So in the end, my recommendation is “Let SharePoint Be SharePoint” and let shredded storage do its job in collaboration with RBS. They both have a place in the world, and they both work together well. SharePoint will own the highly integrated, highly versioned content, and SharePoint will let RBS own the less integrated, monolithic content. End users stay happy, file save requests stay fast, and databases grow much slower than in any previous version of SharePoint.

Edited September 18, 2013 to reflect updates from Bill Baer’s recently published whitepaper on shredded storage, available here.

Simplifying?? Provider Hosted Apps with SharePoint 2013 – Part 1 of X?

From my own experience and most everyone I’ve talked with, if you’ve done anything with SP2013 Provider Hosted Apps, there seem to be three outcomes:

  1. You just get it
  2. You just don’t get it
  3. You beat your head against a wall and randomly get it to work without having a clue why


Up until recently I have been part of group 3 above… I started playing with SP2013 before it released, but since my primary skills and experience are not developer-related, I haven't needed to *really* understand all of the nuances of the new app model.

One of my biggest frustrations with the entire thing is that despite how much I search, I haven't found any single good resource describing the different properties and configuration needed to get this working. Part of the reason is that most authors make assumptions about the reader's experience, and not everyone was born to be a developer. Smile Speaking of assumptions… this post does assume that you have already configured your farm for developing these provider hosted applications as specified in this MSDN article. (Yes, I know I said I hate having to jump all over many articles; call me a hypocrite.)

This is going to be one really long post or a series… I'm not sure how this will end, but the specific scenario I'm focusing on is a high-trust provider hosted app for SharePoint 2013. Basically this will be an ASP.NET web application that is secured with an SSL certificate and that internally generates the tokens it passes to SharePoint. There are certain differences depending on whether you are a developer creating and debugging these apps or an ITPro responsible for deploying them into a production farm. I'll try to point out these differences as I go.

CONFIGURATION:

While I certainly have all the other servers that are necessary for this task, the important points here are that I have a multi-server SharePoint 2013 farm (a web front end that also hosts the Central Admin and an application server which hosts Search) and a separate IIS server to host the ‘provider hosted web application’. In my case I’m using the SP2013 web front end as my Visual Studio box as well.

DATA NEEDED:

Certificate – this can be a self-signed certificate, a domain cert, or a real cert such as Verisign or GoDaddy. In my case I’m using a domain cert issued by my domain certificate authority.

IssuerID – this is a GUID that will be used in several places. You can generate it from Visual Studio or using PowerShell, and it will be used as the provider hosted application IssuerID and as part of the SPTrustedSecurityTokenIssuer registered name.

ClientID – this is another GUID that will be generated in different ways depending on the scenario you are using. It will be used as the ClientID in the web.config of the provider hosted application and as part of the ID for the SPAppPrincipal that you register either using AppRegNew.aspx or Register-SPAppPrincipal.

Create Trusted Root Authority (MSDN Link)

You will eventually need both the CER (public key) and PFX (private key) from your certificate, but for this exercise only the CER is needed. On the SharePoint server, open the SharePoint Management Shell and execute the following PowerShell:

$publicCertPath = "<full file path to .cer>"
$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($publicCertPath)
New-SPTrustedRootAuthority -Name "HighTrustSampleCert" -Certificate $certificate

Note that if your certificate is a subordinate certificate, then you also need to repeat the above for the parent certificate and any other certificates in the chain. (Check the 'Certification Path' tab of the certificate properties to see if there are multiple certs in the chain.)

Create SP Trusted Security Token Issuer

For this section you will need to generate a new GUID to use as the IssuerID and you will create a token issuer for SharePoint that will be responsible for issuing access tokens to your provider hosted application.

<still in the same SharePoint Management Shell as above>

$IssuerID = [System.Guid]::NewGuid().ToString()
$spSite = "<url of site>"
$realm = Get-SPAuthenticationRealm -ServiceContext $spSite
$fullAppIssuerID = $IssuerID + '@' + $realm
New-SPTrustedSecurityTokenIssuer -Name $IssuerID -Certificate $certificate -RegisteredIssuerName $fullAppIssuerID -IsTrustBroker
IISReset

Make a note of the $IssuerID as you will need it later. The URL of the site above is really only important if you have site subscriptions enabled (which most of you do not). We could technically get the $realm in many ways, but the above seems to be the common method. Concatenating the $IssuerID and the $realm gives us the ID of the security token issuer for issuing tokens to apps.

Create High-Trust Provider Hosted Application for SharePoint in Visual Studio

I don’t want to go into the click-by-click details of how to create a new project, but if someone needs help with this just holler in the comments section.

From Visual Studio (VS) 2013, create a new project from <language>\Office/SharePoint\Apps\App for SharePoint 2013. When you click OK after selecting the project type you will be asked for a couple of pieces of information:

What SharePoint site do you want to use for debugging?

For this, it can be any SharePoint site. If you are the developer and using this for creating/debugging the app, then you can select any developer site.

How do you want to host your app for SharePoint?

Provider-hosted

Next.. Which type of web application project do you want to create?

ASP.NET Web Forms Application

Next.. How do you want your app to authenticate?

Use a Certificate (for SharePoint on-premises apps using high-trust)
Certificate Location: path to your PFX file from above
Password: password to your certificate’s private key
Issuer ID: The issuerID from above. The same as what you used for the SPTrustedSecurityTokenIssuer. (all lowercase)

Finish..

Notes about the above section: There are 3 types of apps listed: Provider-hosted, Autohosted, and SharePoint-hosted. Autohosted apps have been discontinued, so you can ignore that option; I imagine it will be removed in the near future. SharePoint-hosted apps run entirely within your SharePoint environment.

The option about ASP.Net Web Forms vs. MVC is unimportant for the context of this blog post, but I will verify later that this configuration works for both options.

The IssuerID is critical to success. It *must* be the same IssuerID that you used above to create the SPTrustedSecurityTokenIssuer.

If you are creating/debugging this app, then at this point you should be able to hit F5 and it will launch the app and connect to the SharePoint site you specified in debug mode. This does a lot of work behind the scenes…

such as entering the correct information into the web.config for the ClientID:

[Screenshot: Visual Studio inserting the ClientID into the app's web.config]

and registers an app principal for your app and creates the app permissions:

[Screenshot: the app principal registered for the app]

[Screenshot: the app permissions added]

Notice in the screenshot above that the first part of the App Identifier matches the ClientID from the previous screenshot where VS inserted it into the web.config.

This will also upload the app package to the desired site (this is why it needs to either be a developer site or a site with developer features enabled). One of the differences that needs to be pointed out here is that on a developer site when you deploy an app via VS, there is no “app catalog” and apps are simply uploaded and installed directly to the site.

Create High-Trust Provider Hosted Application for SharePoint in Visual Studio for Package Deployment

So this section will show the slight differences between using VS for just creation and debugging versus using it for creation of app packages that will then be passed off to a deployment team.

The steps above for creating the SPTrustedRootAuthority and the SPTrustedSecurityTokenIssuer are the same for this process. All of that still needs to be done. The main difference is around how the app is packaged and then deployed.

Starting with a new project in VS, you will need some information such as the following:

ClientID – in the previous scenario (debugging) this was provided for you by Visual Studio. For this scenario you will need to provide it yourself. You can create it the same way you created the IssuerID earlier with PowerShell; make a note of it as you will need it again shortly.

Certificate Location and password – path to your CER/PFX files

IssuerID – this is the same ID that you used to create the SPTrustedSecurityTokenIssuer

You start by packaging the SharePoint app project. Right-click on the SharePoint App project in your solution and select Publish:

[Screenshot: the Publish option on the SharePoint App project]

This launches a wizard where you need to create a new publishing profile. Click the drop down next to ‘Current Profile’ and select ‘<New>’, then select “Create new profile” and provide a descriptive name, then click Next.

Here is where you will need to provide the information from above. ClientID, Certificate location (to the PFX), Certificate password, and IssuerID. Click Finish. At this point you have created a publishing profile. Now you will need to actually create the package file by clicking the “Package the app” button:

[Screenshot: the "Package the app" button]

This launches a new wizard that requires the URL for your provider hosted web application – this is where the non-SharePoint portion of your app is running – and the ClientID. Notice that the ClientID is pre-populated for you since you supplied it in the publishing profile.

[Screenshot: the package wizard with the provider hosted app URL and pre-populated ClientID]

Click finish and a Windows Explorer window opens showing you the .APP file that is created. This file is what you’ll deliver to the SharePoint deployment team to upload to the SharePoint App Catalog.

But we're not done yet. Now go back to VS: before you publish the web application project, verify that the web.config contains the proper location and password for the certificate as well as the IssuerID. These values should have been supplied at project creation time. The ClientID, however, needs to be added to the web.config at this point.

Now right-click the Web Application project and select Publish. Select the Publishing Profile that you created above and click Next..

[Screenshot: selecting the publishing profile]

[Screenshot: web publish options]

There are a variety of publishing options… for this conversation I'll use the publishing method of Web Deploy Package. This will create a set of files that can be used to deploy the web application to the destination web server. Discussion of all of those details is beyond the scope of this post, unfortunately.

[Screenshot: web deploy package options]

The other information needed for the above dialog is the location to save the web deploy package to and the web site name where you want the files deployed. Click Next twice and then click Publish.

Now what? You have an APP file with your SharePoint App component and a set of files from the web deploy package for the actual provider hosted app. Copy the web deploy package files over to your target IIS server and do the following:

  1. Be sure that your PFX certificate is installed so that it can be accessed by IIS
  2. Create a new IIS site and edit the bindings so that it is secured by the certificate.
  3. Disable anonymous authentication and enable Windows authentication. This is a *MUST*: the provider hosted application must be able to authenticate users when they hit the site in order to create the access tokens that are sent back to SharePoint. (A PowerShell sketch of steps 2 and 3 follows this list.)
  4. Now execute the .CMD file in the web deploy package folder in order to install the web application files.
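
For steps 2 and 3, here is a sketch using the WebAdministration module; the site name, physical path, and certificate subject are all placeholders for illustration:

Import-Module WebAdministration
# Step 2: create the site with an HTTPS binding and attach the certificate.
New-WebSite -Name "ProviderHostedApp" -Port 443 -Ssl -PhysicalPath "C:\inetpub\ProviderHostedApp"
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*apps.contoso.lab*" } | Select-Object -First 1
New-Item "IIS:\SslBindings\0.0.0.0!443" -Value $cert
# Step 3: disable anonymous authentication and enable Windows authentication.
Set-WebConfigurationProperty -Filter /system.webServer/security/authentication/anonymousAuthentication -Name enabled -Value $false -PSPath IIS:\ -Location "ProviderHostedApp"
Set-WebConfigurationProperty -Filter /system.webServer/security/authentication/windowsAuthentication -Name enabled -Value $true -PSPath IIS:\ -Location "ProviderHostedApp"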


Obviously there are details here I'm leaving out because this blog post is getting too long as it is… I may come back later and break it up into pieces if there is a lot of confusion around the parts I'm leaving out. Remember, my whole reason for doing this is that too many other blogs make too many assumptions and leave out content… now I may be understanding why they do that… it's a LOT of content. Smile

For the .APP file, you pass this to the SharePoint Deployment team so they can upload the file to the App Catalog and make it available to the web application… but WAIT!!!

It still won't work… recall that when you were creating/debugging the app in VS against a developer site, the sheer act of publishing the app components caused some things to happen. One of those things is the creation of the AppPrincipal. The AppPrincipal is how SharePoint assigns permissions to your apps so they can access objects within SharePoint. Here is the information you'll need to create the AppPrincipal:

ClientID – the same ID that you created and specified in both the publishing profile and the web.config
Realm – remember this can be obtained from Get-SPAuthenticationRealm
URL – this should be the URL of the SharePoint site where you are installing the app

$ClientID = "<guid>"
$realm = Get-SPAuthenticationRealm
$appID = $ClientID + '@' + $realm
Register-SPAppPrincipal -Site <url> -NameIdentifier $appID -DisplayName "<display name for the app>"

At this point, once a user adds your new app to their site, it *should* all just work. Now, we all know that isn't what normally happens, and there are a myriad of places for this to fail.

My objective here was really just to help bring together a couple pieces of data and clarify where each one goes when you create a new SharePoint app and deploy it.

Provided User Name was not Encoded as a Claim

So today's lesson is quite bizarre… and I'm not sure where *exactly* the problem lies just yet. I was going through the process of configuring my SharePoint 2013 (SP2013) farm to test something with an external list bound to an external content type (ECT) backed by a SQL Server database. Simple enough on the surface:

  1. In your SP2013 farm, create and configure Business Data Connectivity (BDC) and Secure Store (SS) Service Applications
  2. Create the SQL database
  3. Add a SQL login to use with Secure Store (for my situation I chose not to use Kerberos Delegation)
  4. In SharePoint Designer (SPD), create an ECT and connect it to the database from step 2
  5. Create a custom list from this ECT

Now obviously there are a lot of details that I'm leaving out above, but the purpose of this post is not to show you how to create an ECT. Step 1, specifically, is where I encountered my issues today. I had already created the required service applications, but I had not configured them for use. Creating the SS application went without a hitch for the most part, but the BDC service application was not that easy.

You need to grant rights to the Metadata Store within the BDC service application in order for users to create an ECT, so my first stop was Central Admin to manage the BDC service application.

Once in the service application management page, you will need to add a user to the Metadata Store Permissions.

You may notice a bit of a problem in the screenshot above in that my user has not been resolved properly. (You may not have noticed it depending on how much sleep you've had, but that's another discussion.) As it so happens, I'm rather stubborn, so I proceeded to click the Add button anyway. Much to my surprise, AdamB was added to the user list and I was allowed to select the permissions I wanted this user to have. However, when I clicked OK is when the real fun began.

As you can see, it didn’t work very well and resulted in the following error:

A provided user name was not encoded as a claim. Parameter name: aces

Now considering the fact that I occasionally enjoy a game of cards here and there, "aces" got me rather excited until I realized it was trying to tell me that something failed and I had to fix it. So where is the first place you go to unravel errors in SharePoint? The ULS logs, of course!! And here is a list of a couple of the entries (scrubbed and filtered) around this error:

ULS Log Entries

01/07/2015 17:37:20.65    w3wp.exe (0x30F8)    0x3B7C    Business Connectivity Services    Business Data    9f4h    Unexpected    ‘BDC’ BdcServiceApplication logging server side ArgumentException before marshalling and rethrowing on client side: System.ArgumentException: A provided user name was not encoded as a claim. Parameter name: aces at Microsoft.SharePoint.BusinessData.SharedService.IndividuallySecurableMetadataObjectAccessor.SetAccessControlEntries(MetadataObjectStruct metadataObjectStruct, AccessControlEntryStruct[] aces, String settingId, DbSessionWrapper dbSessionWrapper) at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.<>c__DisplayClass2c.<Microsoft.SharePoint.BusinessData.SharedService.IBdcServiceApplication.SetAccessControlEntries>b__2b() at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.Execute[T](String operationName, UInt32 maxRunningTime, ExecuteDelegate`1 operation)    858bdd9c-f650-b0dd-7410-7cdb3f5cb1fd

01/07/2015 17:37:20.65    w3wp.exe (0x3618)    0x340C    SharePoint Foundation    General    8nca    Medium    Application error when access /_admin/BDC/ManageBDCPermissions.aspx, Error=A provided user name was not encoded as a claim. Parameter name: aces at Microsoft.SharePoint.BusinessData.SharedService.IndividuallySecurableMetadataObjectAccessor.SetAccessControlEntries(MetadataObjectStruct metadataObjectStruct, AccessControlEntryStruct[] aces, String settingId, DbSessionWrapper dbSessionWrapper) at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.<>c__DisplayClass2c.<Microsoft.SharePoint.BusinessData.SharedService.IBdcServiceApplication.SetAccessControlEntries>b__2b() at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.Execute[T](String operationName, UInt32 maxRunningTime, ExecuteDelegate`1 operation)    858bdd9c-f650-b0dd-7410-7cdb3f5cb1fd

01/07/2015 17:37:20.65    w3wp.exe (0x3618)    0x340C    SharePoint Foundation    Runtime    tkau    Unexpected    System.ArgumentException: A provided user name was not encoded as a claim. Parameter name: aces at Microsoft.SharePoint.BusinessData.SharedService.IndividuallySecurableMetadataObjectAccessor.SetAccessControlEntries(MetadataObjectStruct metadataObjectStruct, AccessControlEntryStruct[] aces, String settingId, DbSessionWrapper dbSessionWrapper) at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.<>c__DisplayClass2c.<Microsoft.SharePoint.BusinessData.SharedService.IBdcServiceApplication.SetAccessControlEntries>b__2b() at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.Execute[T](String operationName, UInt32 maxRunningTime, ExecuteDelegate`1 operation)    858bdd9c-f650-b0dd-7410-7cdb3f5cb1fd

01/07/2015 17:37:20.65    w3wp.exe (0x3618)    0x340C    SharePoint Foundation    General    ajlz0    High    Getting Error Message for Exception System.Web.HttpUnhandledException (0x80004005): Exception of type ‘System.Web.HttpUnhandledException’ was thrown. —> System.ArgumentException: A provided user name was not encoded as a claim. Parameter name: aces —> System.ArgumentException: A provided user name was not encoded as a claim. Parameter name: aces at Microsoft.SharePoint.BusinessData.SharedService.IndividuallySecurableMetadataObjectAccessor.SetAccessControlEntries(MetadataObjectStruct metadataObjectStruct, AccessControlEntryStruct[] aces, String settingId, DbSessionWrapper dbSessionWrapper) at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.<>c__DisplayClass2c.<Microsoft.SharePoint.BusinessData.SharedService.IBdcServiceApplication.SetAccessControlEntries>b__2b() at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.Execute[T](String operationName, UInt32 maxRunningTime, ExecuteDelegate`1 operation) — End of inner exception stack trace — at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplicationProxy.Execute[T](String operationName, UInt32 maxRunningTime, ExecuteDelegate`1 operation, Boolean performCanaryCheck, Boolean isChannelThatDelegatesIdentity) at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplicationProxy.SetAccessControlEntries(MetadataObjectStruct metadataObjectStruct, AccessControlEntryStruct[] aces, String settingId) at Microsoft.SharePoint.BusinessData.Infrastructure.BdcAccessControlList.SaveAs(MetadataObjectStruct metadataObjectStruct, String settingId, BdcServiceApplicationProxy serviceProxy) at Microsoft.SharePoint.BusinessData.Administration.IndividuallySecurableMetadataObject.SetAccessControlList(IAccessControlList acl, String settingId) at Microsoft.SharePoint.ApplicationPages.ManageBDCPermissions.OkButton_Click(Object sender, EventArgs e) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.HandleError(Exception e) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)    858bdd9c-f650-b0dd-7410-7cdb3f5cb1fd

So while the ULS information did put me at ease that I was not going to be shot by Wild Bill, and it helped me understand that the problem was with adding entries to Access Control Entries (ACEs), I still wasn't any closer to a resolution. Thinking about the error a bit, though, did help me arrive at a path of investigation. The error specifically mentions a user name that wasn't "encoded as a claim". Internally, between the service applications, SharePoint uses Windows claims for user identities. Within my farm specifically, my Central Administration web application is configured for NTLM Classic mode authentication and my content web applications are all configured for Windows claims.

On a whim I tried entering my user name in the People Picker in its claims format, i:0#w|contoso\adamb, but that provided no benefit and resulted in the same error series. Thinking about this a bit more, I started wondering why we were looking for a claims-encoded user name and remembered that although my content web applications were not using SAML claims, I did, in fact, have a SAML claims provider configured in the farm.

So we went on another whim, removed the SPTrustedIdentityTokenIssuer, and tried again. This time it worked without errors… neither the People Picker name resolution error nor the "user name provided was not encoded as a claim" error appeared.
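
For reference, that whim looked roughly like the following (the issuer name is a placeholder; only remove a trusted identity token issuer if nothing else in the farm depends on it):

# List the configured SAML claims providers, then remove the offending one.
Get-SPTrustedIdentityTokenIssuer
Remove-SPTrustedIdentityTokenIssuer -Identity "ContosoSAMLProvider"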

SYNOPSIS

When trying to add a user to the Metadata Store permissions of your BDC service application, if you receive name resolution errors, and more specifically an error stating that the user name provided was not encoded as a claim, you may need to remove any configured SAML claims providers in order to add the users to the permissions list.