Sunday, June 21, 2015

SP2013 MMS Example Scenario Not Supported

If you implement a multi-department solution like the documented example scenario at Overview of managed metadata service applications in SharePoint Server 2013, it will mostly work, but it will break the incremental crawl of friendly URLs (FURL).


Microsoft support will tell you that using more than one proxy (connection) for a specific MMS is not supported, even though this scenario is documented, and even though you can create more than one proxy and use each MMS proxy in two different proxy groups to give two web-applications different default storage location settings.


Microsoft support will tell you that the product group says the documented scenario on Technet is not supported. Two connections (MMS proxies) would be required because the "column-specific term set location" setting must differ between the two web-apps, so they cannot share/reuse a single MMS proxy.
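
To illustrate, this is roughly what the documented-but-unsupported setup looks like in PowerShell. Consider it a sketch only: the proxy names, proxy group and web-app URL are hypothetical, and the "IsDefaultSiteCollectionTaxonomy" property key is my assumption for the "column-specific term set location" checkbox in Central Admin.

  # Sketch of the unsupported two-proxy scenario (names are hypothetical)
  $mms = Get-SPServiceApplication | Where-Object { $_.TypeName -like "Managed Metadata*" }

  # Two proxies (connections) for the same MMS, one per department
  $proxyA = New-SPMetadataServiceApplicationProxy -Name "MMS Proxy Dept A" -ServiceApplication $mms
  $proxyB = New-SPMetadataServiceApplicationProxy -Name "MMS Proxy Dept B" -ServiceApplication $mms

  # Assumed property key for "default storage location for column specific term sets"
  $proxyA.Properties["IsDefaultSiteCollectionTaxonomy"] = $true
  $proxyA.Update()

  # One proxy group per department web-application
  $pgA = New-SPServiceApplicationProxyGroup -Name "Proxy Group Dept A"
  Add-SPServiceApplicationProxyGroupMember -Identity $pgA -Member $proxyA
  Set-SPWebApplication -Identity "http://depta.contoso.com" -ServiceApplicationProxyGroup $pgA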

According to MSFT support, the documentation on Technet is "not very accurate".

Thursday, May 08, 2014

Getting SSL Termination to work for HNSC in SP2013

We have been struggling a bit with getting off-box SSL termination to work properly for SharePoint 2013 host-named site collections (HNSC). We had issues with the ribbon, with admin pages like "manage content and structure", and with the term picker - sure signs that some JavaScript files did not load. Users could not edit the terms in managed metadata fields; that is, terms could be selected, but clicking "ok" to save would just hang forever. A lot of scripts and links would not load, showing mixed content warnings in IE9 - and nothing at all in Chrome and Firefox, which both simply block HTTP content on secure HTTPS pages.

To cut to the chase, this setup for SSL offloading is what worked for us:
  • create the web-app on port 80 (not 443), do not use the -SecureSocketsLayer switch
  • do not use the server name as the web-app name, you have a farm - don't you?
  • always extend the web-app to other zones before starting to create HNSC sites; leave one zone unextended, e.g. the "custom" zone
  • create a classic root site-collection with the same HTTP name as the web-app, do not use a HNSC for this
  • a site template is not required for the root site-collection
  • alternate access mapping (AAM) is used for load balancing even for HNSC, but HNSCs can't use AAM for host name aliases
  • create the HNSC using an internal HTTP URL in New-SPSite for the default zone, remember that crawling must always use the default zone
  • create a public URL alias for the default zone by mapping an unextended zone using an HTTPS URL in Set-SPSiteUrl, such as the "custom" zone
  • create public HNSC mappings using an HTTPS URL in Set-SPSiteUrl for the other zones
  • ensure that your gateway adds the custom header "front-end-https: on" for all your public URLs secured using SSL
  • note that using just "front-end-https: on" and HTTP in the public URL will not correctly rewrite all links in the returned pages

In short, the salient point is to use HTTPS in the public URLs even if the web-app zone uses neither the SecureSocketsLayer switch nor any SSL certificates. The default zone of the web-application must be configured for crawling: either no SSL, or full SSL with certificates assigned in IIS. With no SSL you have to simulate AAM by mapping two URLs to the HNSC default zone. Using Set-SPSiteUrl on an unextended zone is like creating an alias for the default zone.
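
In PowerShell, the HNSC steps above look roughly like this sketch; all URLs, the template and the owner alias are hypothetical placeholders for your own values.

  # Sketch: HNSC with HTTP default zone and HTTPS public URLs (hypothetical names)
  $wa = Get-SPWebApplication "http://spfarm.contoso.com"      # port 80 web-app, no SSL

  # Internal HTTP URL on the default zone - this is what the crawler will use
  $site = New-SPSite -Url "http://internal.contoso.com" -HostHeaderWebApplication $wa `
      -Name "Public site" -Template "BLANKINTERNET#0" -OwnerAlias "CONTOSO\spadmin"

  # Public HTTPS alias mapped to the unextended "custom" zone (alias for the default zone)
  Set-SPSiteUrl -Identity $site -Url "https://www.contoso.com" -Zone Custom

  # Public HTTPS mapping for an extended zone
  Set-SPSiteUrl -Identity $site -Url "https://extranet.contoso.com" -Zone Extranet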

We had to use HTTP on the default zone to crawl the content of the published pages. It seems that if the web-application does not use SSL and your site's default zone uses an HTTPS host header, then only the friendly URLs (FURL) will be crawled, while the content will generate a lot of "This item comprises multiple parts and/or may have attachments. Not all of these parts were indexed." warnings. The result of the warning is that no metadata gets indexed, thus no search results - not good for a search-driven solution.

Note that SSL is recommended for all web-applications in SP2013, even inside the firewall, especially if you use apps, as the OAuth tokens will otherwise be exposed in the HTTP traffic - just as classic IIS basic authentication is not recommended without SSL. We wanted to do SSL bridging with BigIP because of this, but could not get SSL server name indication (SNI) configured successfully in BigIP v11 to let us have SSL certificates bound to two different IIS web-sites, even though IIS8 supports SNI.

SNI is required when the shared wildcard certificate or SAN certificate approach cannot be used for your SP2013 web-application setup, i.e. when binding to host names in multiple IIS web-sites at the web-application level. It is also required when you need more than one web-application or more than one zone (extended web-app), even if you could bind your one-SAN-to-rule-them-all certificate to multiple IIS web-sites. IIS cannot route a request based on the host header until the request has been decrypted - SNI allows the request to be routed to the correct IIS web-site.
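
For reference, enabling SNI on IIS 8 bindings can be scripted with the WebAdministration module. A sketch with hypothetical site and host names; you still have to assign a certificate to each binding afterwards.

  # Sketch: two IIS web-sites sharing port 443, each requiring SNI (SslFlags 1)
  Import-Module WebAdministration

  New-WebBinding -Name "SP Web App 1" -Protocol https -Port 443 `
      -HostHeader "teams.contoso.com" -SslFlags 1
  New-WebBinding -Name "SP Web App 2" -Protocol https -Port 443 `
      -HostHeader "portal.contoso.com" -SslFlags 1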

Remember that this is the path the HTTP(S) request travels from the browser:

 browser >
  host header >
   DNS A-record >
    virtual IP-address (VIP) in gateway > SSL off-box termination here
     load balancing >
      IIS server configured with IP-address >
       IIS web-site bound to IP-address (or host header) > normal SSL termination here
        SP web-application >
         site-collection bound to host header (HNSC)

Keeping tabs on this will help you understand the Technet guide to HNSC, which has some room for improvement. See this article by jasonth for a step-by-step guide to HNSC and SSL. Note that binding to host names in IIS rather than to IP-addresses for HNSCs at the SP2013 web-application level is supported, just as it was for SP2010.

Thursday, May 01, 2014

Managed Metadata Navigation, Anonymous Users in SP2013

The new term-driven navigation in SP2013 has some gotchas for anonymous users, resulting in them not seeing a full navigation menu. These are some things to check:
Finally, remember that you have to publish a major version of each page that you link to from a navigation node, otherwise anonymous users won't see the page, nor the term. This includes all items on the page that also require approval, such as images. An easy thing to forget, if you've been so stupid as not to use the simple publishing configuration for your site. If you as an admin or logged-in user can see terms and view a page while visitors cannot, you forgot to publish. An empty page or a missing term is a sure sign.
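
A sketch for bulk-publishing, assuming a standard publishing site: the URL is hypothetical, the Publish call assumes versioning is enabled on the library, and the moderation check only matters when content approval is on.

  # Sketch: publish and approve all pages and images so anonymous visitors see them
  $web = Get-SPWeb "https://www.contoso.com"
  foreach ($listName in @("Pages", "Images")) {
      foreach ($item in @($web.Lists[$listName].Items)) {
          $file = $item.File
          if ($file -ne $null -and $file.Level -ne [Microsoft.SharePoint.SPFileLevel]::Published) {
              $file.Publish("Bulk publish")       # creates a major version
              if ($item.ModerationInformation -ne $null) {
                  $file.Approve("Bulk approve")   # content approval, if enabled
              }
          }
      }
  }
  $web.Dispose()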

Related to the managed navigation is the friendly URL (FURL) mechanism, which uses the term set structure to build the FURL from the linked-to term. To prevent broken links when moving a term, SP2013 stores links using the FIXUPREDIRECT.ASPX page, with params such as the termID, which is resolved server-side into a friendly URL when rendered (see the navigation term method GetResolvedDisplayUrl). Do not render a RichHtmlField using the simple "SPWC:FieldValue" web-control, as this will not resolve the fixup links. In addition, having the same control both in an edit mode panel and in a display mode panel might cause problems.
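
A quick way to see how the fixup links resolve is to walk the navigation term set server-side. A sketch, assuming the global navigation provider and a hypothetical site URL:

  # Sketch: resolve each navigation term into its current friendly URL
  $web = Get-SPWeb "https://www.contoso.com"
  $provider = [Microsoft.SharePoint.Publishing.Navigation.StandardNavigationProviderNames]::GlobalNavigationTaxonomyProvider
  $navTermSet = [Microsoft.SharePoint.Publishing.Navigation.TaxonomyNavigation]::GetTermSetForWeb($web, $provider, $true)
  foreach ($navTerm in $navTermSet.Terms) {
      # GetResolvedDisplayUrl turns the stored FIXUPREDIRECT.ASPX link into the FURL
      Write-Host $navTerm.Title.Value "->" $navTerm.GetResolvedDisplayUrl($null)
  }
  $web.Dispose()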

This all applies to author-in-place (AIP) usage of term-driven navigation and friendly URLs; cross-site publishing (XSP) has a different set of issues.

Note that the managed navigation term set is stored in the default MMS of the hosting web-application. It uses the local term store for the site-collection it belongs to (IsSiteCollectionGroup). This will affect your backup/restore procedure: a restore will need not only the content database or the site-collection backup, but also the MMS database or tenant backup. As all host-named site-collections (HNSC) share a web-application, restoring the MMS with its term stores will affect the navigation term set of every site-collection. Take care.
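
A sketch of the backup pairing; the paths are hypothetical, and the exact farm backup item name must be checked against your own farm (list it with Backup-SPFarm -ShowTree):

  # Sketch: an HNSC restore needs both of these backups
  Backup-SPSite -Identity "https://www.contoso.com" -Path "E:\backup\www.bak"

  # MMS service application backup (verify the exact -Item path with -ShowTree)
  Backup-SPFarm -Directory "E:\backup\mms" -BackupMethod Full `
      -Item "Farm\Shared Services\Shared Services Applications\Managed Metadata Service"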

Wednesday, October 23, 2013

Roadmap for Responsive Design and Dynamic Content in SharePoint 2013

Responsive design combined with dynamic user-driven content and mobile first seems to be the main focus everywhere these days. The approach outlined in Optimizing SharePoint 2013 websites for mobile devices by Waldek Mastykarz and his How we did it series show how it can be achieved using SharePoint 2013.

But what if you have thousands of pages with good content that you want to make responsive? What approach will you use to adapt all those articles after migrating the content over from SharePoint 2010? This post suggests a roadmap for gradually transforming your static content into dynamic content.

First of all, the metadata of your content types must be classified so that you know the ranking of the metadata within a content type, which can then be used in combination with device channel panels and responsive web design (RWD) techniques to prioritize what to show on which devices. This will most likely introduce more specialized content types with more granular metadata fields. All your content will need to be edited to fit the new content classification, or at least the most important content you have. By edited, I mean that the content type of articles must be changed and the content adapted to the new metadata; in addition, selecting a new content type will also cause a new RWD page layout to be used for the content. These new RWD page layouts and master pages are also something you need to design and implement as part of your new user experience (UX) concept.
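
Switching the content type of existing pages can be scripted. This sketch assumes hypothetical content type, page layout and site names, and that the new content type has already been added to the Pages library.

  # Sketch: move migrated articles onto a new, more granular content type
  $web = Get-SPWeb "https://www.contoso.com"
  $pages = $web.Lists["Pages"]
  $newCt = $pages.ContentTypes["Responsive Article"]

  foreach ($item in @($pages.Items)) {
      if ($item.ContentType.Name -eq "Article Page") {
          $item["ContentTypeId"] = $newCt.Id
          # The page layout is stored separately in the PublishingPageLayout field
          $item["PublishingPageLayout"] = "$($web.Url)/_catalogs/masterpage/ResponsiveArticle.aspx, Responsive Article"
          $item.SystemUpdate()
      }
  }
  $web.Dispose()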

While editing all the (most important) existing content, it is also a good time to ensure that the content gets high quality tagging according to your information architecture (IA), as tagging is the cornerstone of a good, dynamic user experience provided by term-driven navigation and search-driven content. Editing the content is the most important job here, tagging the content is not required to enable RWD for your pages.

Doing all of this at once as part of migrating to SP2013 is, in my experience, too much to be realistic, so this is my recommended roadmap:

Phase 1
- Focus on RWD based on the new content types and their new prioritized metadata and new responsive master pages and page layouts
- Quick win: revise the search center to exploit the new search features, even if tagging is postponed to a later phase (IA: findability and search experience)
- Keep the existing information architecture structure, and thus the navigation as-is
- Keep the page content as-is, do not add search-driven content to the pages yet, focus on making the articles responsive
- Most time-consuming effort: adapting the content type of all articles, cutting and pasting content within each article to fit the new prioritized metadata structure

Phase 2
- Focus on your new concept for structure and navigation in SP2013 (IA: content classification and structure, browse and navigate UX)
- Tagging of the articles according to the new IA-concept for dynamic structuring of the content (IA: term sets for taxonomy)
- Keep the page content as-is, no new search-driven UX in this phase, just term-driven navigation
- Most time-consuming effort: tagging all of your articles; try scripting some auto-tagging based on the existing structure of the content, as sketched below
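
A starting point for such auto-tagging: derive the tag from the page URL structure and write it into a taxonomy field. Everything here is hypothetical - the site URL, the "Topic" field, the "Topics" term set, and the assumption that the first folder segment maps to a term label.

  # Sketch: tag pages based on their folder name (all names are hypothetical)
  $site = Get-SPSite "https://www.contoso.com"
  $web = $site.RootWeb
  $session = New-Object Microsoft.SharePoint.Taxonomy.TaxonomySession($site)
  $termSet = $session.DefaultSiteCollectionTermStore.GetTermSets("Topics", 1033)[0]

  $pages = $web.Lists["Pages"]
  $tagField = [Microsoft.SharePoint.Taxonomy.TaxonomyField]$pages.Fields["Topic"]

  foreach ($item in @($pages.Items)) {
      $segment = $item.Url.Split('/')[1]            # e.g. "Pages/products/x.aspx" -> "products"
      $term = $termSet.GetTerms($segment, $true) | Select-Object -First 1
      if ($term -ne $null) {
          $tagField.SetFieldValue($item, $term)     # writes the taxonomy value
          $item.SystemUpdate()
      }
  }
  $web.Dispose()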

Phase 3
- Focus on search-driven content in the pages according to the new concept (IA: discover and explore UX)
- New routines and processes for authors, approvers and publishers based on new SP2013 capabilities (IA: content contributor experience)
- Most time-consuming effort: tune and tag the content of all your articles to drive the ranking of the search-driven content according to the new concept

Phase 4
- Content targeting in the pages based on visitor profile segmentation, this kind of user-driven content is also search-driven content realized using query rules (and some code)

The IA aspects in the roadmap are taken from my SharePoint Information Architecture from the Field article.