PowerApps Puzzle: Sorting SelectedItems

Let’s take the scenario where you have a ListBox in a PowerApp that contains sorted information (alphabetical or numerical, it doesn’t matter), similar to the following:
The user selects 1, then 2, then 4, and all is good. However, you may notice that sometimes the values come back unsorted, such as 4,2,1 or 2,4,1. The reason is that SelectedItems returns the values in the order in which they were selected, not necessarily the order in which they appear in the list. For example, if the user selects 1, then 2, then 4, and then deselects 2 and selects 3, the resulting value list will be 1,4,3.

You can easily demonstrate this using a Label and the Concat() formula to see the results of selecting different items:

[Screenshot: ListBox selections and the Concat() output in a Label]

 

In order to preserve the sequential or sorted order of the values selected, one can use the SortByColumns() formula as the first parameter to the Concat() formula as shown below:
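For illustration, the two formulas look roughly like the following. The control name (ListBox1) and the column name (Value) are assumptions based on a default single-column list box, so adjust them to match your app.

Order of selection (can come back unsorted):

Concat(ListBox1.SelectedItems, Value & ", ")

Sorted regardless of selection order:

Concat(SortByColumns(ListBox1.SelectedItems, "Value"), Value & ", ")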

 

Hope this helps!

Friday Thought of the Day

This is over a year old, but it was the first time I’d seen it… it’s a whimsical interview with Satya Nadella (CEO of Microsoft Corporation) by the Wall Street Journal. It’s only ~2 min long, so if you haven’t seen it, then it’s worth the time for a look. The very last question (1:50 mark) is what struck me the most.

“Believe in yourself more than you do”

It sounds a bit hokey, but I’d seriously like to get that tattooed on the inside of my forearm (and yes, I have a few others, so it wouldn’t be a first). Being critical of oneself is human nature, I think, and it helps to fine-tune our ideas and actions to make them even better. Some of us take this a bit further (to an art form, if you will) and create more ways to doubt our own abilities. Remember a few key things:

  • You wouldn’t be here if you were not excellent at what you do
  • Trust the people who believe in you; they believe you are more than capable of the task at hand
  • Believe in yourself more than you do

You CAN do this

Help! I need QuickEdit!!

Hi all! My latest scripting effort comes as a result of a 2010 -> 2013 migration for one of my customers. We should all know the migration process (database upgrade, then site upgrade), and it was after the site upgrade that users reported their QuickEdit option was disabled in *some of* their custom lists. This phenomenon has been discussed in several other blogs:

http://www.sharepointdiary.com/2014/12/fix-quick-edit-disabled-in-sharepoint-2013-issue.html

https://veroniquepalmer.com/2013/09/29/quick-edit-gotcha-in-sharepoint-2013/

The request was two-fold:

– Is there a way we can detect which lists will experience this problem?

I have not yet determined what causes this in some lists and not others, but for the purposes of this work we checked every custom list that had been upgraded to the 2013 UI.

– Is there an automated way we can correct it rather than manually navigating to each list?

Yes

So just to refresh the issue… In your 2010 environment, users could use Datasheet View to modify custom lists and it looked like this:

datasheet view 2010

 

As long as the site remains in 2010 mode after the content database is upgraded to 2013, that functionality is still available. However, after upgrading the site to 2013, Datasheet View is replaced by QuickEdit, and in some cases (as with my customer) it is disabled:

QuickEdit Disabled in 2013

You will also notice that the nice HTML5 UI is no longer here either.

To be 100% transparent, I didn’t do all the research on this issue; I simply provided the resulting script to fix it. What my customer found was that the issue affected all 2010 custom lists: after executing the site upgrade, the JSLink property on their views had been set to ‘null’ or ‘empty’. To correct the issue, they had to reset the JSLink on the view to ‘clienttemplates.js’.

I’ve attached the script to this blog post, but the core of the logic is enumerating each list, and each view of each list, to check the value of JSLink (a minimal sketch of that loop follows the parameter list below). The script also has a report-only mode, which is the default. If run in report-only mode, no changes are made and the script simply outputs a list of the views and their corresponding JSLink values.

The parameters for the script are:

Url – this represents the url of the web, site, or web application that you want to scan; Required

Scope – this represents the scope of the scan and can be one of the following: web, site, webapp; Required

Repair – this represents whether you want to repair the views that are found; Optional; default value is $false
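For reference, here is a minimal sketch of that report/repair loop for a single web. It is not the full attached script (no scope handling, error handling, or parameter plumbing), and the URL and the $Repair variable are illustrative stand-ins for the script’s parameters:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$Repair = $false                                        # report-only by default
$web = Get-SPWeb "http://sharepoint/sites/teamsite"     # example URL
foreach ($list in $web.Lists | Where-Object { $_.BaseTemplate -eq "GenericList" }) {
    foreach ($view in $list.Views) {
        if ([string]::IsNullOrEmpty($view.JSLink)) {
            Write-Output "$($list.Title) / $($view.Title): JSLink is empty"
            if ($Repair) {
                $view.JSLink = "clienttemplates.js"     # the fix described above
                $view.Update()
            }
        }
        else {
            Write-Output "$($list.Title) / $($view.Title): JSLink = $($view.JSLink)"
        }
    }
}
$web.Dispose()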

New phase of life/career!

I’m excited to start a new phase of my career at Microsoft and hope to continue posting information as I find it. Throughout my time here I’ve been in our Support/Services organizations, assisting our customers with planning, implementing, and maintaining a wide variety of Microsoft products, primarily Internet platforms like BizTalk Server and SharePoint Server.

While the majority of my work has been architectural or administrative in nature, there are always components of development, whether it’s writing sample code for a customer or building utility tools to help diagnose and resolve issues. So I joined our Premier Developer team in November of 2015 as a Senior Dev Consultant in order to push myself more into a developer role.

As I’m sure most of you know, the trends are moving away from on-premises products and toward cloud and mobile technologies. Development is one of those areas where you can always have an impact. So I’m looking forward to the new role and to the future posts that I will be able to make, hoping they help you as much as they help me!

Workflow Manager 1.0 Refresh Disaster Recovery (further) Explained

With the release of SharePoint 2013, Microsoft released a new platform for workflows called Workflow Manager (WFM). As of this writing the current version is 1.0 Cumulative Update 3. Unfortunately, disaster recovery (DR) for this product is not as straightforward as just setting up database replication.

Following is a list of resources I’ve used to implement disaster recovery:

I found that each of the above references holds vital clues to making DR for WFM work, but none of them covered the details on which I was stumbling. There are two areas where I needed to do additional research:

    • Certificates (which ones to use where and how to restore effectively)
    • Changing service accounts and admin groups upon a failover

As pointed out there are plenty of TechNet articles and blogs that talk about how to do WFM Disaster Recovery (DR), so I am not going into detail on the individual steps, but I decided to document my discoveries in hopes that others can benefit from my experiences.

So, at a high level, the basic operation is as follows. I’ll have sections below describing each of the areas where I had concerns:

    • Install production WFM and configure
    • Configure your backup/replication strategy for the WF/SB databases
    • Install WFM in DR
    • Execute the failover process
    • Re-connect SharePoint 2013
    • (Optional) Changing RunAsAccount and AdminGroup

Install Production WFM and Configure

Certificates – AutoGenerate or custom Certs?

Installing WFM 1.0 CU3 is fairly well documented in several places, but the one piece that I feel needs to be called out is certificate configuration. There are options to auto-generate your certificates (self-signed), to use your own domain certificates, or to use certs acquired from a third-party certificate authority. Some businesses have no restrictions against self-signed certs, but this choice will affect your restoration of service in the DR environment. As noted in Spencer’s blog, there are a total of six or seven possible certificates. Auto-generating your WFM certificates will dictate your restoration process in a failover scenario; one reason for this is that the WorkflowOutbound certificate is created with a private key, but that key is marked as non-exportable.

Configure Your Backup/Replication Strategy for the WF/SB Databases

The key to disaster recovery with WFM (as with many products) is the data store. In this case we are referring to the SQL Server databases. Again, this information is in the related links and there are two things to keep in mind:

  1. You can use pretty much any replication method (backup/restore, mirroring, log shipping) except for SQL Server 2012 AlwaysOn, which is unsupported at this time. It is also crucially important to keep the WF/SB database backups as close in time as possible to the content database backups in order to preserve workflow instance integrity (a minimal backup example follows this list).

UPDATE: With the release of Workflow Manager CU 4, SQL AlwaysOn is now supported and should be considered as the High Availability/Disaster Recovery solution. You can find information on CU4 here. And you can find installation information here.

  2. You do not need to back up the management databases, WFManagementDb and SBManagementDb, as they will be re-created during the recovery process.
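To illustrate the point in item 1, here is a minimal backup example using the SQLPS module; the instance name, database names, and backup share are examples only, and your replication method may be entirely different:

Import-Module SQLPS -DisableNameChecking

# Example database names - match them to what your WFM/SB configuration actually created
$databases = 'WFResourceManagementDb','WFInstanceManagementDb','SBGatewayDatabase','SBMessageContainer01'
foreach ($db in $databases) {
    Backup-SqlDatabase -ServerInstance 'PRODSQL' -Database $db -BackupFile "\\backupshare\WFM\$db.bak"
}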

Install WFM in DR

Depending on whether you want a cold or warm standby WFM farm, you will either have already installed the servers or will perform this as part of your recovery process. NOTE: WFM does *not* support a hot standby configuration. There are a couple of keys to your DR installation:

  • You will install the bits on the DR app servers, but you will *not* configure the product at this time.
  • If you are choosing to do a warm standby, then you may also import the necessary certificates ahead of time.
    • If you are using:
      • Auto-generated certificates, then it’s important to know that you need to export/import the Service Bus certificates from Prod to DR and for the Workflow Manager certificates you can auto-generate them in DR (remember you cannot import/export the WF certificates because the private keys are marked as non-exportable)
      • Custom domain certificates, then you will export/import all of them from Prod to DR
  • The Service Bus root certificate should be imported into the LocalMachine\TrustedRootAuthorities store.
  • The other Service Bus certs should be imported into the LocalMachine\Personal store (one way to script both imports is sketched below).
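As a rough sketch of those imports (using the PKI module on Windows Server 2012+; the file paths are examples, and which certificates you import depends on the scenario above):

# The SB root certificate (public key) goes to the Trusted Root Certification Authorities store
Import-Certificate -FilePath '\\dr-share\certs\SBRoot.cer' -CertStoreLocation 'Cert:\LocalMachine\Root'

# The other SB certs (with their private keys) go to the Personal (My) store
Import-PfxCertificate -FilePath '\\dr-share\certs\SBSsl.pfx' -CertStoreLocation 'Cert:\LocalMachine\My' -Password (Read-Host 'PFX password' -AsSecureString)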

Executing the Failover Process

In the event of a disaster (or just a need to failover), the following process is required.

  1. Restore the 4+ SQL databases (WFResourceManagementDb, WFInstanceManagementDb, SBGatewayDatabase, SBMessageContainer01 – n) from prod_SQL to dr_SQL.
  2. Assuming the steps above have been followed to install WFM in DR, you will use PowerShell to restore the SB farm. If you were doing a true 'cold standby', then you first need to install (but not configure) the SB/WF bits from the Web Platform Installer.
  3. Restore the SBFarm, SBGateway, and MessageContainer databases and settings (do this on only one WFM node)
      • The SBManagementDB will be created in DR during this 'restore' process
      • The RunAsAccount *must* be the same as the credentials used in production
  4. Again, using PowerShell, run Add-SBHost on each node of the farm.
  5. If you used auto-generated certificates for the WFFarm in prod, then when you restore the WFFarm you will auto-generate new ones. However, this also means that you may need to restore the PrimarySymmetricKey to the new SBNamespace.
  6. At this point, restore the WFFarm using PowerShell (do this on only one WFM node)
  7. Run Add-WFHost on each node of the farm.

At this point, the new WF Farm should be in a working state. You can test this by navigating to the endpoint in a browser and you should receive output similar to the image below:

[Screenshot: Workflow Manager endpoint response in the browser]
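If you would rather verify from PowerShell (run from the Service Bus/Workflow Manager PowerShell console on a WFM node), here are a couple of quick, illustrative checks; the URL is an example built on the default WFM HTTPS port of 12290:

Get-SBFarmStatus    # every Service Bus service should report 'Running'
Get-WFFarmStatus    # every Workflow Manager service should report 'Running'
Invoke-WebRequest -Uri 'https://wfdr.contoso.com:12290' -UseDefaultCredentials -UseBasicParsing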

Re-connect SharePoint 2013

If WF certificates were re-generated in DR, then you will need to recreate the SharePoint Trusted Root Authority. Export the WF SSL certificate and add it to the SharePoint farm using New-SPTrustedRootAuthority.

Create a new registration to the Workflow farm using Register-SPWorkflowService.
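Roughly, those two steps look like the following from the SharePoint Management Shell; the certificate path, trust name, site URL, and workflow host URI are all examples:

$wfCert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\DR\WFFarmSsl.cer")
New-SPTrustedRootAuthority -Name "WorkflowManagerDR" -Certificate $wfCert

Register-SPWorkflowService -SPSite "https://sharepoint.contoso.com" -WorkflowHostUri "https://wfdr.contoso.com:12290" -Force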

There is a cache of security trusts, so in order to see the change more quickly you will likely need to execute the timer job "Refresh Trusted Security Token Services Metadata feed" with the following PowerShell:

Start-SPTimerJob -Identity 'RefreshMetadataFeed'

(Optional) Changing RunAsAccount and AdminGroup

Summary

The process above should work in most (if not all) scenarios, but I welcome any comments if you encounter problems or challenges. I’ve spent many hours on this over the past six months, off and on, and it’s very possible that I’ve missed something.

I’ll add the last section about changing service accounts once I have the complete set of steps for the WF accounts. Service Bus ships PowerShell cmdlets for this, which makes it easier, but Workflow Manager does not as of yet.

UPDATE: With the release of Workflow Manager CU 4, one can now change the credentials for the Workflow Manager Service with the Set-WFCredentials powershell commandlet. You can find information on CU4 here. And you can find installation information here.

Configuring UserPhotoExpiration for User Profile Photo Sync between Exchange 2013 and SharePoint 2013

Following on my previous post about different user profile photo options for SharePoint 2013, I wanted to expand on some research that I had done for one of my customers in this area regarding the expiration values. There are a couple of scarcely documented* properties that will also affect when a user’s photo is re-synchronized from Exchange 2013 instead of just using the cached photo in SharePoint 2013.

To cover the basics, I used this blog to configure the integration between SharePoint 2013 and Exchange 2013 for user photos. There are a couple of SPWebApplication properties that are set here:

  • UserPhotoImportEnabled – this property defines if SharePoint should import photos from Exchange
  • UserPhotoExpiration – this property defines (in hours) how long the photo in the user photo library of the MySite host should be considered valid before attempting to synchronize a potentially updated photo
  • UserPhotoErrorExpiration – this property tells SharePoint that if it encountered an error attempting to retrieve a new photo less than ‘this many’ hours ago, it should not attempt again

These are fairly well-known properties, but there are a couple of others that affect how often or *if* your user photo sync will happen. These additional properties are contained in the web application property bag:

  • DisableEnhancedBrowserCachingForUserPhotos – (default: not present) If this property is set to ‘false’ or is not present, then SharePoint will bypass the blob cache. If the property is set to ‘true’, then SharePoint will check the timestamp and will bypass the blob cache if the timestamp passed in is within 60 seconds of the current time
  • AllowAllPhotoThumbnailsTriggerExchangeSync – (default: not present) If this property is set to ‘false’ or is not present, then SharePoint will only trigger a sync with Exchange if the thumbnail being requested is the Large thumbnail.

Below is a series of steps that I took in my lab to illustrate how these properties work.

Anne accesses her profile page to change her photo and it properly redirects her to Outlook Web App (OWA) where she can upload her latest professionally taken headshot (photo1.jpg). Upon completion, she returns to her profile page and sees the new photo. She also navigates to OWA and sees the photo there as well.

Adam navigates to Anne’s profile page and sees the recently uploaded photo.

Anne decides that she wants to upload a different photo (photo2.jpg) and does so through OWA instead of through her profile page. In this case photo2.jpg does not show up immediately in her profile page and she and other users are still seeing photo1.jpg; however OWA is showing photo2.jpg.

Why is SharePoint not updating the photo?

Basically, it depends on the settings above combined with the method used to change the photo. In the above scenario, Anne changed her photo the second time via OWA. SharePoint has no way to know that the photo was changed until the cached photo expiration window (UserPhotoExpiration), measured from the first change, has passed. Even after the photo expiration has passed, some action still has to trigger SharePoint to check. In this case, Anne navigating to her profile page (since it requests the large thumbnail) should trigger SharePoint to evaluate the expiration values and, if needed, re-sync Anne’s photo.

Why wouldn’t I reduce the UserPhotoExpiration value to 0 hours?

I’m sure that in some installations this would not be a problem, but the point of a cache (in this case SharePoint’s user photo library) is to reduce round-trips to the authoritative source of data. You likely do *not* want SharePoint to be contacting Exchange every time someone accesses their profile photo.

How do I set these properties?

In PowerShell, of course! For the first three properties there are examples in the blog linked above, but I’ll duplicate them here along with the other two. To reiterate, these values are only examples and you should do your own testing to find what works for your environment:

$wa = Get-SPWebApplication https://my.contoso.lab
$wa.UserPhotoImportEnabled = $true
$wa.UserPhotoExpiration = 6
$wa.UserPhotoErrorExpiration = 1
$wa.Properties.Add("DisableEnhancedBrowserCachingForUserPhotos", "true")
$wa.Properties.Add("AllowAllPhotoThumbnailsTriggerExchangeSync", "true")
$wa.Update()

  

* – I say scarcely documented as I could find very few references to DisableEnhancedBrowserCachingForUserPhotos or AllowAllPhotoThumbnailsTriggerExchangeSync that were related to the topic. I did, however, find one that was helpful in directing my research: http://sharepoint.sigicom.ch/Blog/Beitrag/2/Profile-Picture-Cache. And since I am not fluent in German, I’m thankful for the translation tools we have available on the Internet.

Simplifying?? Provider Hosted Apps with SharePoint 2013 – Part 1 of X?

From my own experience, and that of most everyone I’ve talked with, if you’ve done anything with SP2013 Provider Hosted Apps there seem to be three outcomes:

  1. You just get it
  2. You just don’t get it
  3. You beat your head against a wall and randomly get it to work without having a clue why

 

Up until recently I have been part of group 3 above… I started playing with SP2013 before it released, but since my primary skills and experience are not developer-related I haven’t needed to *really* understand all of the nuances of the new app model.

One of my biggest frustrations with the entire thing is that, no matter how much I search, I haven’t found any single good resource that describes the different properties and configuration needed to get this working. Part of the reason for this is that most of the authors make assumptions about the reader’s experience, and not everyone was born to be a developer. Speaking of assumptions… this post does assume that you have already configured your farm for developing these provider-hosted applications as specified in this MSDN article. (Yes, I know I said I hate having to jump all over many articles; call me a hypocrite.)

This is going to be one really long post or a series… I’m not sure how this will end, but the specific scenario I’m focusing on is a high-trust provider-hosted app for SharePoint 2013. Basically, this will be an ASP.NET web application that is secured with an SSL certificate and that internally generates the tokens it passes to SharePoint. There are certain differences depending on whether you are a developer creating and debugging these apps or an ITPro responsible for deploying them into a production farm. I’ll try to point out these differences as I go.

CONFIGURATION:

While I certainly have all the other servers that are necessary for this task, the important points here are that I have a multi-server SharePoint 2013 farm (a web front end that also hosts the Central Admin and an application server which hosts Search) and a separate IIS server to host the ‘provider hosted web application’. In my case I’m using the SP2013 web front end as my Visual Studio box as well.

DATA NEEDED:

Certificate – this can be a self-signed certificate, a domain cert, or a real cert such as Verisign or GoDaddy. In my case I’m using a domain cert issued by my domain certificate authority.

IssuerID – this is a GUID that will be used in several places. You can generate it from Visual Studio or using Powershell, but it will be used as the provider hosted application IssuerID and for part of the SPTrustedSecurityTokenIssuer registered name.

ClientID – this is another GUID that will be generated in different ways depending on the scenario you are using. It will be used as the ClientID in the web.config of the provider hosted application and as part of the ID for the SPAppPrincipal that you register either using AppRegNew.aspx or Register-SPAppPrincipal.

Create Trusted Root Authority (MSDN Link)

You will eventually need both the CER (public key) and PFX (private key) files from your certificate, but for this exercise only the CER is needed. On the SharePoint server, open the SharePoint Management Shell and execute the following PowerShell:

$publicCertPath = "<full file path to .cer>"
$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($publicCertPath)
New-SPTrustedRootAuthority -Name "HighTrustSampleCert" -Certificate $certificate

Note that if your certificate is a subordinate certificate, then you also need to repeat the above for the parent certificate and any other certificates in the chain. (Check the ‘Certification Path’ tab of the certificate properties to see whether there are multiple certs in the chain.)

Create SP Trusted Security Token Issuer

For this section you will need to generate a new GUID to use as the IssuerID and you will create a token issuer for SharePoint that will be responsible for issuing access tokens to your provider hosted application.

<still in the same SharePoint Management Shell as above>

$IssuerID = [System.Guid]::NewGuid().ToString()
$spSite = Get-SPSite "<url of site>"
$realm = Get-SPAuthenticationRealm -ServiceContext $spSite
$fullAppIssuerID = $IssuerID + '@' + $realm
New-SPTrustedSecurityTokenIssuer -Name $IssuerID -Certificate $certificate -RegisteredIssuerName $fullAppIssuerID -IsTrustBroker
IISReset

Make a note of the $IssuerID as you will need that later. The URL of the site above is really only important if you have site subscriptions enabled (which most of you do not). We could technically get the $realm in many ways, but the above seems to be the common method. Concatenating the $IssuerID and the $realm gives us the ID for the security token issuer for issuing tokens to apps.

Create High-Trust Provider Hosted Application for SharePoint in Visual Studio

I don’t want to go into the click-by-click details of how to create a new project, but if someone needs help with this just holler in the comments section.

From Visual Studio (VS) 2013, create a new project from <language>\Office/SharePoint\Apps\App for SharePoint 2013. When you click OK after selecting the project type you will be asked for a couple of pieces of information:

What SharePoint site do you want to use for debugging?

For this, it can be any SharePoint site. If you are the developer and using this for creating/debugging the app, then you can select any developer site.

How do you want to host your app for SharePoint?

Provider-hosted

Next.. Which type of web application project do you want to create?

ASP.NET Web Forms Application

Next.. How do you want your app to authenticate?

Use a Certificate (for SharePoint on-premises apps using high-trust)
Certificate Location: path to your PFX file from above
Password: password to your certificate’s private key
Issuer ID: The issuerID from above. The same as what you used for the SPTrustedSecurityTokenIssuer. (all lowercase)

Finish..

Notes about the above section: there are three types of apps listed: Provider-hosted, Autohosted, and SharePoint-hosted. Autohosted apps have been discontinued, so you can ignore that option; I imagine it will be removed in the near future. SharePoint-hosted apps run entirely within your SharePoint environment.

The option about ASP.Net Web Forms vs. MVC is unimportant for the context of this blog post, but I will verify later that this configuration works for both options.

The IssuerID is critical to success. It *must* be the same IssuerID that you used above to create the SPTrustedSecurityTokenIssuer.

If you are creating/debugging this app, then at this point you should be able to hit F5 and it will launch the app and connect to the SharePoint site you specified in debug mode. This does a lot of work behind the scenes…

such as entering the correct information into the web.config for the ClientID:

[Screenshot: Visual Studio inserting the ClientId into web.config]

and registers an app principal for your app and creates the app permissions:

[Screenshot: the app principal created for the app]

[Screenshot: the app permissions added for the app]

Notice in the screenshot above that the first part of the App Identifier matches the ClientID from the previous screenshot where VS inserted it into the web.config.

This will also upload the app package to the desired site (this is why it needs to either be a developer site or a site with developer features enabled). One of the differences that needs to be pointed out here is that on a developer site when you deploy an app via VS, there is no “app catalog” and apps are simply uploaded and installed directly to the site.

Create High-Trust Provider Hosted Application for SharePoint in Visual Studio for Package Deployment

So this section will show the slight differences between using VS for just creation and debugging versus using it for creation of app packages that will then be passed off to a deployment team.

The steps above for creating the SPTrustedRootAuthority and the SPTrustedSecurityTokenIssuer are the same for this process. All of that still needs to be done. The main difference is around how the app is packaged and then deployed.

Starting with a new project in VS, you will need some information such as the following:

ClientID – in the previous scenario (debugging) this was provided for you by Visual Studio. For this scenario you will need to provide it yourself. You can create it the same way you created the IssuerID earlier with PowerShell (an example follows this list of items), but make a note of it as you will need it again shortly.

Certificate Location and password – path to your CER/PFX files

IssuerID – this is the same ID that you used to create the SPTrustedSecurityTokenIssuer
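For example, one way to generate the ClientID (keeping the GUID lowercase, as with the IssuerID, avoids the case-sensitivity gotcha noted earlier):

$ClientID = [System.Guid]::NewGuid().ToString().ToLower()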

You start by packaging the SharePoint app project. Right-click on the SharePoint App project in your solution and select Publish:

[Screenshot: the Publish option for the SharePoint App project in Visual Studio]

      

This launches a wizard where you need to create a new publishing profile. Click the drop down next to ‘Current Profile’ and select ‘<New>’, then select “Create new profile” and provide a descriptive name, then click Next.

Here is where you will need to provide the information from above. ClientID, Certificate location (to the PFX), Certificate password, and IssuerID. Click Finish. At this point you have created a publishing profile. Now you will need to actually create the package file by clicking the “Package the app” button:

[Screenshot: the ‘Package the app’ button]

This launches a new wizard that requires the URL for your provider hosted web application – this is where the non-SharePoint portion of your app is running – and the ClientID. Notice that the ClientID is pre-populated for you since you supplied it in the publishing profile.


Click finish and a Windows Explorer window opens showing you the .APP file that is created. This file is what you’ll deliver to the SharePoint deployment team to upload to the SharePoint App Catalog.

But we’re not done yet… Before you publish the web application project, go back to VS and verify that the web.config contains the proper location and password for the certificate as well as the IssuerID; these values should have been supplied at project creation time. The ClientID, however, needs to be added to the web.config at this point.

Now right-click the Web Application project and select Publish. Select the Publishing Profile that you created above and click Next..


[Screenshot: web publish options (1 of 2)]

There are a variety of publishing options…for this conversation I’ll use the publishing method of Web Deploy Package. This will create a set of files that can be manipulated to deploy the web application to the destination web server. Discussion of all of these details is beyond the scope of this post, unfortunately.

[Screenshot: web publish options (2 of 2)]

The other information needed for the above dialog is a location where to save the web deploy package and the web site name where you want the files deployed. Click Next twice and then click Publish.

Now what? You have an APP file with your SharePoint App component and a set of files from the web deploy package for the actual provider hosted app. Copy the web deploy package files over to your target IIS server and do the following:

  1. Be sure that your PFX certificate is installed so that it can be accessed by IIS
  2. Create a new IIS site and edit the bindings so that it is secured by the certificate.
  3. Disable anonymous authentication and enable Windows authentication. This is a *MUST*. The provider hosted application must be able to authenticate users when they hit the site in order to create the access tokens that are sent back to SharePoint. (A PowerShell sketch of this step follows the list.)
  4. Now execute the .CMD file in the web deploy package folder in order to install the web application files.
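As a rough sketch of step 3 using the WebAdministration module (the site name is an example, and the same settings can of course be made in IIS Manager):

Import-Module WebAdministration

$siteName = "ProviderHostedApp"   # example IIS site name
Set-WebConfigurationProperty -Filter /system.webServer/security/authentication/anonymousAuthentication -Name enabled -Value $false -PSPath IIS:\ -Location $siteName
Set-WebConfigurationProperty -Filter /system.webServer/security/authentication/windowsAuthentication -Name enabled -Value $true -PSPath IIS:\ -Location $siteName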

 

Obviously there are details here I’m leaving out because I feel this blog post is getting too long as it is… I may come back later and break it up into pieces if there is a lot of confusion around the parts I’m leaving out. Remember, my whole reason for doing this is that too many other blogs make too many assumptions and leave out content… now I may be understanding why they do that… it’s a LOT of content.

For the .APP file, you pass this to the SharePoint Deployment team so they can upload the file to the App Catalog and make it available to the web application… but WAIT!!!

It still won’t work… Recall that when you were creating/debugging the app in VS against a developer site, the act of publishing the app components caused some things to happen. One of those things is the creation of the AppPrincipal. The AppPrincipal is how SharePoint assigns permissions to your app so it can access objects within SharePoint. The information you’ll need to create the AppPrincipal:

ClientID – the same ID that you created and specified in both the publishing profile and the web.config     
Realm – remember this can be obtained from Get-SPAuthenticationRealm
URL – this should be the url of the SharePoint site where you are installing the app     

$ClientID = "<guid>"
$realm = Get-SPAuthenticationRealm
$appID = $ClientID + '@' + $realm
Register-SPAppPrincipal -Site "<url>" -NameIdentifier $appID -DisplayName "<display name for your app>"

At this point, once a user adds your new app to their site, then it *should* all just work. Now we all know that isn’t what normally happens and there are a myriad of places for this to fail.

My objective here was really just to help bring together a couple pieces of data and hope to clarify where they go each time you create a new SharePoint app and deploy it.