Friday, December 11, 2009

SharePoint Computed Site Columns, Site & List Content Types

I was creating a customized CQWP rollup for linking to applications and tools from a MOSS site, and used the OpenInNewWindow attribute in my custom ItemStyle.xsl file to get target="_blank" in the links. In addition, I wanted the "open in new window" functionality in the lists storing the actual links. Unfortunately, this is not available ootb in SharePoint lists using the URL site column.

So I took the list field CAML definitions from Andrew Connell's Adding "OpenInNewWindow" option to SharePoint Links lists post and modified them into new site columns:

<Field Type="Boolean" Name="OpenInNewWindow" DisplayName="Open in New Window" Required="TRUE" Group="KjellSJ site columns" ID="{2F88664E-D2FB-40da-B962-C454C895074F}" SourceID="" />

<Field ID="{7CC38078-C7F4-4c1a-B6F6-D681D4079E3B}"
Type="Computed"
Group="KjellSJ site columns"
Name="URLNewWindowLink" DisplayName="URL (new window link)"
ShowInNewForm="FALSE" ShowInFileDlg="FALSE" ShowInEditForm="FALSE"
Filterable="FALSE" Sortable="FALSE">
<FieldRefs>
<FieldRef ID="{c29e077d-f466-4d8e-8bbe-72b66c5f205c}" Name="URL"/>
<FieldRef ID="{2F88664E-D2FB-40da-B962-C454C895074F}" Name="OpenInNewWindow"/>
</FieldRefs>
<DisplayPattern>
<HTML><![CDATA[<a href="]]></HTML>
<Column Name="URL" HTMLEncode="TRUE"/>
<HTML><![CDATA["]]></HTML>
<Switch>
<Expr><Column Name="OpenInNewWindow"/></Expr>
<Case Value="1"> <HTML><![CDATA[ target="_blank"]]></HTML> </Case>
</Switch>
<HTML><![CDATA[>]]></HTML>
<Switch>
<Expr><Column2 Name="URL"/></Expr>
<Case Value=""> <Column Name="URL" HTMLEncode="TRUE"/> </Case>
<Default> <Column2 Name="URL" HTMLEncode="TRUE"/> </Default>
</Switch>
<HTML><![CDATA[</a>]]></HTML>
</DisplayPattern>
</Field>

I added the new site columns to a site content type, deployed and checked the results. The computed site column was not shown in the Site Columns Gallery, but neither are any of the ootb computed site columns. Alas, computed columns are shown when looking at content types, but cannot be added, updated or removed. Check e.g. the Picture Size field in the ootb Picture content type, which is the computed ImageSize site column. You can still see the computed site column by opening the OpenInNewWindow column and just changing the &field= name in the query string to URLNewWindowLink - or use SharePoint Manager 2007.

With the new fields added to the site content type, I created a custom list from Site Settings and changed the list content type from Item to my AppsToolsList site content type. To my surprise, the computed site column was not added to the list, and I confirmed that it had not been added to the list content type either by using SharePoint Manager 2007.

The sad thing about computed columns being hidden in all site column pickers is that you cannot add one using "Add from existing site columns" in List Settings. It seems the only way to get a computed field provisioned to a list is to add it as a CAML list definition field.

Knowing that there are quirks when provisioning new list content types from site content types (see List Content Type Fields & Forms: CAML vs Code), I used the object model to provision the computed site column by adding it using code:

// look up the computed site column in the web's field collection (by display name)
SPField urlNewWindowLink = web.Fields["URL (new window link)"];
urlNewWindowLink.Title = "URL";
list.Fields.AddFieldAsXml(urlNewWindowLink.SchemaXml, true, SPAddFieldOptions.AddToNoContentType);

The computed column is added to the list's field collection and to its default view, but not to the list content type. The DisplayPattern CAML works fine, but only as long as you don't rename the computed field using List Settings > Edit Column (using the &field= trick). Doing so will change the DisplayName of the field, but it will also remove the FieldRefs from the list field's SchemaXml. The computed field rendering will still work as long as the fields used in the DisplayPattern CAML are in the view, but remove them from the view and you'll get this error:


You can always implement a list definition instead of using code, just remember that you need to duplicate all site column field definitions in the list template due to the way CAML provisioning works.
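For reference, here is a minimal sketch of what such a list definition could look like. The attributes and child elements are abbreviated; in a real Schema.xml the field definitions must be duplicated verbatim from the site columns into the list schema's <Fields> element:

```xml
<!-- Schema.xml sketch (abbreviated): repeat the site column CAML inside <Fields> -->
<MetaData>
  <Fields>
    <Field Type="Boolean" Name="OpenInNewWindow"
           ID="{2F88664E-D2FB-40da-B962-C454C895074F}" DisplayName="Open in New Window" />
    <Field Type="Computed" Name="URLNewWindowLink"
           ID="{7CC38078-C7F4-4c1a-B6F6-D681D4079E3B}" DisplayName="URL (new window link)">
      <FieldRefs>
        <FieldRef ID="{c29e077d-f466-4d8e-8bbe-72b66c5f205c}" Name="URL" />
        <FieldRef ID="{2F88664E-D2FB-40da-B962-C454C895074F}" Name="OpenInNewWindow" />
      </FieldRefs>
      <!-- repeat the full DisplayPattern element from the site column definition here -->
    </Field>
  </Fields>
</MetaData>
```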

Friday, December 04, 2009

Wildcard People Search in SharePoint 2007

The out-of-the-box SharePoint 2007 people search lacks a central feature that most people take for granted: the ability to search for persons using partial names. The search engine itself has this capability through the "starts with" property query mechanism, as shown in People Search Content Editor Web Part by Larry Kuhn.

I've improved a bit on Larry's wildcard people search web-part by adding prefilling of the search boxes with the searched-for values from the query string. I've also added a maxlength of 32 on the text boxes to avoid overruns. Note that this new functionality is not fully working in page edit mode due to some extra encoding of the query string in edit mode.

Download the improved wildcard people search web-part here: PeopleSearchWildcard.dwp

Saturday, November 14, 2009

MOSS WCM: Publishing Images and Thumbnails

A rather nice SharePoint feature of standard image libraries (type 109) is that when you upload a picture, SharePoint automatically creates a thumbnail and a compressed, web-friendly version of the picture in the hidden _t and _w folders. Unfortunately the MOSS WCM PublishingImages list is not an image library, but rather a document library. This is so because the published start and end date/time feature depends on both versioning and approval being on, and this requires a list of type document library.

Some WCM sites try to use AssetUploader.aspx, which is used in the ootb computed field "Thumbnail" in the list content type, to generate the thumbnail; but as anonymous users have no rights to the \_layouts\ folder on secured publishing sites, this is a no-go. A solution is to code an event receiver for the list that creates _t and _w versions of added images. The no-code solution is to upload separate thumbnail images to a separate folder, and apply a naming convention when displaying thumbnails for the images in e.g. content query web-parts (CQWP).
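A hedged sketch of such an event receiver follows. The "_t" folder name, the thumbnail size and the binding of the receiver to the library are my assumptions; a real implementation should also preserve aspect ratio and skip non-image files:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using Microsoft.SharePoint;

// Sketch only: create a "_t" thumbnail for images added to the PublishingImages library,
// mimicking what type 109 image libraries do out of the box.
public class PublishingImageThumbnailReceiver : SPItemEventReceiver
{
    public override void ItemAdded(SPItemEventProperties properties)
    {
        SPFile file = properties.ListItem.File;
        using (MemoryStream source = new MemoryStream(file.OpenBinary()))
        using (Image image = Image.FromStream(source))
        using (Image thumbnail = image.GetThumbnailImage(160, 120, null, IntPtr.Zero))
        using (MemoryStream target = new MemoryStream())
        {
            thumbnail.Save(target, ImageFormat.Jpeg);
            // Assumes a manually created "_t" subfolder under the library root.
            SPFolder tFolder = properties.ListItem.Web.GetFolder(
                properties.ListItem.ParentList.RootFolder.Url + "/_t");
            tFolder.Files.Add(file.Name, target.ToArray(), true);
        }
    }
}
```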

Wednesday, October 14, 2009

Secure External SharePoint Sites for Anonymous Access

There is some blogging out there about exploiting unsecured SharePoint _layouts and _vti_bin pages, in posts that don't tell you how to actually secure those pages and prevent the exploit. So I thought, why not just post the recommended lockdown method: Locking down Office SharePoint Server sites

It is not enough just to enable the SharePoint WCM publishing lockdown mode feature, as this only limits access to /forms/ application pages and to the web-services.

Finally, read this article by Andrew Connell on delay loading core.js and removing the Microsoft Name ActiveX control from your pages.

Thursday, October 01, 2009

SharePoint Farm using Kerberos on IIS7

Kerberos is recommended as the authentication mode in SharePoint due to security, performance and not least delegation (impersonation, double-hop) reasons. I won't go into details on the benefits here, but rather point to some good resources you will need when configuring Kerberos for SharePoint. Configuration of Kerberos is not part of the SharePoint installer, except for popup boxes warning you that your system administrator must perform some extra configuration for this to work. If you're lucky, that system admin is not you.

I recommend you follow these guides to ensure a working Kerberos environment:
Follow the guidelines meticulously, taking great care when adding SPNs. Use SETSPN -S rather than -A to avoid duplicate SPNs (duplicates break Kerberos and cause a fallback to NTLM), and use SETSPN -X to check for duplicates. These are new SETSPN switches on Windows Server 2008.
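As a hedged illustration, registering and verifying SPNs could look like this (the account and host names are made-up examples, not from any real farm):

```bat
REM Register HTTP SPNs for the web-application app-pool domain account.
REM -S refuses to add an SPN that already exists on another account.
SETSPN -S HTTP/portal.contoso.com CONTOSO\svcSpAppPool
SETSPN -S HTTP/portal CONTOSO\svcSpAppPool

REM Search the domain for duplicate SPNs.
SETSPN -X
```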

After setting up all the SPNs, a strange problem affected our load balanced web-applications. We got 401 Unauthorized "the requested resource requires user authentication" and "access is denied due to invalid credentials" when browsing sites on the WFE servers.

Our servers run Windows Server 2008 and host the sites in IIS7. IIS7 uses a new "kernel mode" by default for integrated Windows authentication, so the computer account (NETWORK SERVICE) rather than the application pool identity is used by Kerberos to decrypt SPN tickets. This affects farm sites, as a domain account must be used rather than a local account to successfully decrypt SPN tickets across the farm. See Service Principal Name (SPN) checklist for Kerberos authentication with IIS 7.0 for further details.

So you have two options to make this work on IIS7: either disable kernel mode authentication, or use the undocumented useAppPoolCredentials in combination with useKernelMode in ApplicationHost.config (C:\Windows\System32\inetsrv\config\applicationHost.config). Add the attribute to the specific site's <windowsAuthentication> element so it looks like this:

<windowsAuthentication enabled="true" useAppPoolCredentials="true" useKernelMode="true">

Do not use this setting across the board; only use useAppPoolCredentials on the sites on those servers that give a Kerberos 401 Unauthorized error. And always make a backup copy of the file before editing it.
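If you prefer not to edit applicationHost.config by hand, the same setting can be applied with appcmd; the site name below is just an example:

```bat
appcmd.exe set config "SharePoint - portal80" -section:system.webServer/security/authentication/windowsAuthentication /useAppPoolCredentials:"True" /commit:apphost
```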

The last step in the configure Kerberos guide is to turn on "negotiate" authentication for the SSP:

STSADM -o setsharedwebserviceauthn -negotiate

To confirm that the SSP works correctly after enabling Kerberos, go to Central Admin > Operations > Services on Server, select a WFE server, and click "Office SharePoint Server Search" on the Query role. If the settings page opens, all went well. If you get this message, it sure didn't:

An unhandled exception occurred in the user interface. Exception Information: Could not connect to server MOSS-WFE01. This error might occur if the firewall or proxy configuration is preventing the server from being contacted.

The above test checks the root-level Office Server Web Services used for MOSS server-to-server communication.

In addition, trying to open the SSP search settings page gave this error:

The search service is currently offline. Visit the Services on Server page in SharePoint Central Administration to verify whether the service is enabled. This might also be because an indexer move is in progress.

The above test checks the Office Server Web Services used for SSP services such as search administration.

Don't panic. Diagnosing this is quite easy. SharePoint uses the Office Server Web Services when assigning roles to servers and to communicate between the servers. Browse the MOSS-WFE01 root-level services from the MOSS-ADMIN server and vice versa:
If you cannot browse these root-level Office Server Web Services (OSWS) on the WFE from the Central Admin server, then you have a problem. If you get a 404 "not found" message, check the firewalls on the servers and try pinging them. If you get a response but things still do not work, it is typically a 401 or 403 error. Never mind the untrusted certificate message; the real problem is any 401 Unauthorized messages you might get.

First, ensure that all your SPNs are correctly configured for the server and app-pool accounts, including the HOST, HTTP and MSSP service classes. HOST or HTTP is used for the root-level OSWS, when the app-pool identity is a machine account or domain account respectively. MSSP is used for the SSP services.

Then ensure that any SPN account involved in Kerberos is trusted for delegation in AD. This also applies to computer accounts used to run sites, typically as the "network service" local account. Also ensure that DCOM is correctly configured on all the servers running SharePoint in the farm.

If all else fails, change the application pool used to run the Office Server Web Services web-application. By default, this is OfficeServerApplicationPool running as "network service". But you can't really change that application pool's identity. You can, but SharePoint will automatically set it back to "network service". As you learnt above, using a local account for a farm service in IIS7 is not recommended.

So, reuse the SSP service application pool or create a new IIS "OfficeServerKerberosApplicationPool" using the domain account of the SSP service application pool, and make the Office Server Web Services run using the new app-pool. This must be done on all WFE servers and on the indexing application server. Provided that the selected domain account has a valid HTTP SPN for the OSWS machine NETBIOS name, internal MOSS server-to-server calls should now work. Test and confirm that both the root-level and SSP search server settings pages now open.

Finally, check that the security event log contains audit success entries for logon of type Kerberos on the servers. Kerberos ticket decryption failures are shown in the client's system event log with event id 4 and error code KRB_AP_ERR_MODIFIED. Remember that an app-pool account cannot decrypt the Kerberos ticket if the SPN is registered on another account.

Note that if you're following the least privilege approach and have different accounts for the SSP service application host and the SSP services, then you might not be able to directly browse the SSP web-services such as SearchAdmin.asmx, because the HTTP SPN is assigned to a different account than the account used to run the SSP service. This is not a problem because the SSP services internally use the MSSP SPN to get and validate Kerberos tickets.

Still having problems? Read Jesper M. Christensen's excellent series Troubleshooting Kerberos in a SharePoint environment Part 1 / Part 2 / Part 3.

End note: Turn off UAC on Windows Server 2008 if you get "access denied" errors when running STSADM commands.

Friday, September 18, 2009

SharePoint Site Structuring: Wide vs Deep (Part II)

In my last post, I presented a SharePoint site classification & structuring scheme for realizing the information architecture (IA) structural design of your information space to facilitate content organization. SharePoint has five main content containers that can be combined to best structure the content according to your IA:
  • Root site collection: container for portal, home sites
  • Explicit site collection: container for department, services, regional sites
  • Wildcard site collection: container for team, community, project sites
  • Top-level site (site home): container for content and subsites and channels
  • Subsite (site section): container for content and even more subsites and channels
Sites always belong to one site-collection. A site-collection is just that, a collection of sites. A site-collection contains one top-level site (TLS), that again can contain a hierarchy of subsites. Hence site-collection.

Structures of top-level sites and subsites are generally labeled as wide or deep. A wide site structure uses multiple TLS sites (and hence multiple site-collections), while a deep site structure rather uses a single TLS site with multiple subsites with even more subsites. See the SharePoint site classification & structuring post for more details on this and the above five containers.

However, not all combinations are viable due to SharePoint software boundaries for the containers. This especially applies to how to combine managed site-collections with wide and deep site structures. As outlined in my last post, each major category gives a recommendation on the usage of wide and deep site structures based on the site classification. The recommendation on how to combine wide/deep with explicit/wildcard site collections is shown here (click to enlarge):

As you can see, it is not recommended to combine explicit site-collections with a wide site structure due to the limited set of explicit site-collections per SharePoint web-application. Using wildcard site-collections gives the most flexibility, as there is almost no limit on the number of site-collections a wildcard path can hold. If you need a new top-level site, just create a new site-collection.

The SharePoint software boundaries on site structures per web-application is shown here (click to enlarge):

Each SharePoint web-application can hold about 20 explicit site-collections and about 50000 wildcard site-collections. About, as these boundaries are not hard limits - but performance will suffer significantly should your site structure push beyond them. The recommended limits on subsite structures are about 125 second-level subsites under a top-level site, each with about 2000 third-level subsites, for a theoretical maximum of 250000 subsites per site-collection. Theoretical, as the maximum recommended database size per site-collection is 100GB, and with hundreds of thousands of subsites there will be little room for content in each subsite.

Note that root or explicit site-collection storage cannot be scaled as you cannot add another content database to a site collection. If scalable storage is a requirement, using wildcard site-collections is best practice.

Respect the SharePoint boundaries, and I recommend you never get close to pushing them. I once broke these recommendations myself for a wide site structure; a couple of years later the "client site" solution started behaving erroneously (mainly timeouts) because more than 60000 clients had been provisioned by then, pushing the 50000 TLS limit. The next time I designed a site structure to hold a million client sites, we did a proper information architecture LATCH (Location, Alphabet, Time, Category, Hierarchy) analysis. This resulted in a deep+wide site structure with the one million subsites (deep) hierarchically distributed based on the 9-digit customer number, across multiple site-collections (wide).
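To give an idea of such a structure, here is a hypothetical partitioning of a 9-digit customer number; the URLs and the digit grouping are illustrative only, not the actual design:

```text
http://clients/sites/123/           wildcard site-collection per leading 3 digits (at most 1000, far below 50000)
http://clients/sites/123/456/       second-level subsite per middle 3 digits
http://clients/sites/123/456/789/   third-level subsite holding the client site content
```

A million clients spread across up to 1000 site-collections averages about 1000 client sites per site-collection, keeping each content database well within the recommended limits.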

Steve Goodyear's decision diagram in the Determining Between SharePoint Site Collections and Sub-Sites post shows other important technical aspects when deciding between wide and deep.

Sunday, September 06, 2009

Classification and Structuring of SharePoint Sites (Part I)

As part of my work as a SharePoint architect, I almost always run into poorly designed and structured SharePoint solutions. Mostly because consultants hardly ever seem to read and really understand the Technet SharePoint Site Structure Planning and the SharePoint Logical Design and Architecture guidelines, or do any information architecture (IA) analysis. What is needed is a site classification scheme based on solution domains, usage, management and governance, as shown in this article with structuring guidelines for each site category.

This post and the next are targeted at a technical audience; if back-end architecture details are not for you, jump straight to part III for a simple five step process aimed at business users, analysts and information architects.

IA defines the structural design of your information space to facilitate content organization, realized in a SharePoint site structure to store and manage your content. Skipping the IA and site structure planning will work for some time, perhaps forever, but eventually most SharePoint solutions grow into the software boundaries of the platform. The effect will be poor performance and a frustrating user experience due to the unmanageable number of sites popping up everywhere.

SharePoint supports a flexible IA structure by providing several types of content containers, and you really need to understand how to use them:
  • Root site collection: container for portal, home sites
  • Explicit site collection: container for department, services, regional sites
  • Wildcard site collection: container for team, community, project sites
  • Top-level site (site home): container for content and subsites and channels
  • Subsite (site section): container for content and even more subsites and channels
Note that it is a best practice not to use the terms "site" or "web" when classifying and structuring sites. Experience shows that these terms are so overloaded that they just cause confusion. Even the latest SharePoint guidance makes a classical MSDN mess by using the term "root site" in some parts when referring to top-level sites. Also stay away from using site-definition names such as team-sites and workspaces.

In addition, I use "managed site-collection" instead of the term "managed path" to get consistent naming in my site taxonomy. Managed paths are central when constructing your URL space, but that's a secondary goal, only important as part of your IA structural design to facilitate content organization and drive findability. It is the site-collections held at the managed path explicit and wildcard inclusion URLs that are the primary goal, plus of course the one held at the root path.
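Managed paths themselves are created per web-application, either in Central Admin or from the command line; the URLs below are examples only:

```bat
REM Explicit inclusion: exactly one site-collection at this URL.
stsadm -o addpath -url http://portal/hr -type explicitinclusion

REM Wildcard inclusion: any number of site-collections below this URL.
stsadm -o addpath -url http://portal/teams -type wildcardinclusion
```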

It is important that you look at your diverse set of information and develop a classification scheme for the content that will go into your SharePoint solution. Develop a classification of sites and follow the planning guidelines on how to structure your sites according to SharePoint recommended practices. I like to classify sites into four major categories as shown here (click to enlarge):

Sites are classified into the four major categories based on the management policies (corporate to community, controlled to uncontrolled) and the life-cycle policies (predefined and planned, predictable and ad-hoc) of the site content. The four categories are numbered 1 to 4 ranging from most specific to very generic classification of sites.

Sites always belong to one site-collection. A site-collection is just that, a collection of sites. A site-collection contains one top-level site, that again can contain a hierarchy of subsites. Hence site-collection. As there is a limited set of explicit site-collections per SharePoint web-application, these should be reserved for a planned, wellknown small number of predefined sites. Use wildcard site-collections when the predictable number of sites is too large for explicit, and for ad-hoc sites that come and go in unknown numbers. Finally, there can be only one root site-collection per SharePoint web-application.

The following figure applies the SharePoint site structure design limits to my site classification scheme:

The major categories provide a recommendation of the type of site-collection to use; and for each category and site-collection type, the site classification scheme provides a recommendation for how to use top-level sites (TLS) and subsites to provide the structural IA design. There are two major ways to structure content within a site-collection: wide and deep. A wide site structure uses multiple TLS sites (and hence multiple site-collections), while a deep site structure rather uses a single TLS site with multiple subsites with even more subsites. Both approaches can be combined to best structure the content according to your IA.

Using a wide site structure is applicable for sites like projects and team-sites, as this gives the best control over site configuration, administration and management in SharePoint. Each TLS is a separate site-collection, and each site-collection is a management boundary for users and groups, access control, content taxonomy, navigation, branding, SLA, all in all sites are separated from each other and each have their own life-cycle and governance policies.

Using a deep site structure is applicable for sites like portals, departmental or regional content, and shared communities. All subsites within the TLS site-collection share the same navigation and content taxonomy, the same members and groups, the same branding, all in all the same configuration and administration and the same governing policies and life-cycles.

You will of course also need to consider the site-definition type that you plan to apply to each site, as this will influence what should be structured together to create a well planned user experience. E.g. within an internet "Publishing portal" TLS you would most likely not add a "Team-site" subsite.

Note that root or explicit site-collection storage cannot be scaled as you cannot add another content database to a site collection. If scalable storage is a requirement, using wildcard site-collections is best practice.

I recommend making a SharePoint site diagram like this to visualize how your site structure is built by combining root, explicit and wildcard site-collections (click to enlarge):

I hope this site classification scheme will help you understand how to plan and design your next SharePoint site structure. At least it should make planning workshops more focused and less confusing, both for you and your customers.

Read part II for more details on wide vs deep site structures, including SharePoint system boundaries.

Read part III Five Steps to Structure Your SharePoint Sites if you're unsure of where to start your site classification and structuring efforts.

Monday, August 31, 2009

Puzzlepart - Exciting Times Ahead

Today was my last day as an MSFT FTE, tomorrow I'm co-founder and partner at Puzzlepart - as the second person in the company after my old colleague Mads Nissen.

At Puzzlepart we will continue our joint SharePoint adventure. Together we have delivered quite a few SharePoint and MSCRM solutions, such as this case study shown by Steve Ballmer himself at the 2005 business summit. As you can see from the video, we focus on delivering business value - and we're working on our design skills.

As they say at my old employer: I'm super excited to be here! And to my now former colleagues at MCS: Thank you for all that you do!

Monday, August 17, 2009

DDD Lessons Learned: CQRS, Fat Domain Model, Concrete Aggregate Entity

It's been five years since Eric Evans' classic book Domain-driven design: tackling complexity in the heart of software was published. Last week two interesting articles came out; What I've learned about DDD since the book by Eric himself and Domain Models: Employing the Domain Model Pattern by Udi Dahan. They both talk of lessons learnt about some of the concepts of domain-driven design (DDD).

Udi talks about a very common misconception about domain Entity objects, what I like to call the Fat Domain Model: using the persistent object model as the Domain Model. I've done this myself for many years, trying to avoid Martin Fowler's Anemic Domain Model and Fat Service Layer anti-patterns by adding the domain entity's behavior to the related data entity object.

Microsoft does a variant of the Fat Service Layer in their Three-Layered Services Application guidance: they use anemic Business Entity objects and keep the business logic in separate Business Component objects, exposed by Service Interface objects. So coming from this, it seemed natural to merge the entity and logic into one when I started practicing DDD.

The description of the Entity pattern from Eric's book is clear on what goes into the entity object:

"It is natural to think about the attributes when modeling an object, and it is quite important to think about its behavior. [...] strip the Entity object's definition down to the most intrinsic characteristics [...]. Add only behavior that is essential to the [domain] concept and attributes that are required by that behavior."

So, only the data that is required by the domain logic should be in the domain object. Eric uses the term "information about the business situation" in the book when explaining the Domain Layer, another hint that these are not traditional data entities. Querying and viewing entities is not domain logic, so there is no need to add all attributes of the ORM data entities used in the presentation layer to the DDD Entity object. Querying and viewing entities belong to the Service Layer. I suggest using two separate object models (Command and Query Responsibility Segregation, CQRS) as shown in Udi's article, but think twice before implementing your forms-over-data application using the Domain Model pattern.
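As a rough sketch of what this segregation can look like, consider the following; the classes and the business rule are made-up illustrations, not taken from Udi's article:

```csharp
using System;

// Command side: the domain object holds only the state its business rules need.
public class Customer
{
    private decimal unpaidOrdersAmount;  // maintained by the domain, not summed on demand
    private bool isPreferred;

    public void MakePreferred()
    {
        // illustrative rule: big debtors cannot become preferred customers
        if (unpaidOrdersAmount > 1000m)
            throw new InvalidOperationException("Unpaid orders amount is too high.");
        isPreferred = true;
    }
}

// Query side: a flat read model shaped for the screen, fetched outside the domain model.
public class CustomerListItem
{
    public string Name { get; set; }
    public decimal UnpaidOrdersAmount { get; set; }
    public bool IsPreferred { get; set; }
}
```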

In his QCon 2009 presentation, Eric talks about the DDD Aggregate pattern and how it can be hard to model an Aggregate Boundary and decide to which Entity object a specific behavior belongs. Here is the definition of Aggregate from the book:

"First we need an abstraction for encapsulating references within the [domain] model. An Aggregate is a cluster of associated [domain] objects that we treat as a unit for the purpose of [ensuring consistency of] data changes. Each Aggregate has a root and a boundary. The root is a single, specific Entity contained in the Aggregate."

I think the Aggregate pattern is easy to misunderstand, as it focuses on modeling the relationships and boundary of the root and its associated collection of domain objects - immediately making people think of the classical "purchase order with items collection" OO data encapsulation design, or even ER-design. In addition, the strong focus on the Aggregate being an abstract consistency mechanism leads to the misconception that Aggregate domain logic must belong to the root Entity object, and that logic involving associated Aggregate data requires that data to be modeled into the collection of domain objects.

Eric uses the example of calculating the weight of a grape cluster Aggregate. This logic doesn't belong on the grape Entity, so the only place to put it is on the stem Entity - because the Aggregate itself is not an object, just an abstract concept. He also talks about how this kind of logic typically is designed as an iteration over the associated object collection, while lessons learnt have shown that this OO legacy mindset is not mandated by the domain business rules. Udi also touches on the same problem in the "UnpaidOrdersAmount" example in his domain model article.

The fact is that the domain aggregate is not just an abstract concept; it should be modeled as what I like to call a Concrete Aggregate. This relieves the root Entity of responsibilities that don't naturally belong to it. This kind of non-entity domain logic goes into the Concrete Aggregate as its behavior, and prevents leaking domain logic into the Service Layer.

Using Concrete Aggregate objects might also lessen the need for fully modeling objects within Aggregate boundaries. E.g. having a GrapeCluster object that can calculate its weight directly without loading a Stem and its collection of Grape objects makes the domain model much simpler. Again note the focus on business rules behavior rather than OO entity-relationship modeling.
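A minimal sketch of such a Concrete Aggregate could look like this; the members and the weight formula are made up for illustration:

```csharp
// Hypothetical sketch: the aggregate is itself an object carrying the
// cross-entity behavior, so neither Stem nor Grape has to host it.
public class GrapeCluster
{
    private readonly double stemWeight;
    private readonly int grapeCount;
    private readonly double averageGrapeWeight;

    public GrapeCluster(double stemWeight, int grapeCount, double averageGrapeWeight)
    {
        this.stemWeight = stemWeight;
        this.grapeCount = grapeCount;
        this.averageGrapeWeight = averageGrapeWeight;
    }

    // Answered from summary state; no Stem entity iterating a Grape collection.
    public double Weight
    {
        get { return stemWeight + (grapeCount * averageGrapeWeight); }
    }
}
```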

I admit struggling with naming the Concrete Aggregate pattern, maybe Aggregate Entity would be a more self-explanatory name.

Thursday, July 02, 2009

List Content Type Fields & Forms: CAML vs Code

In a feature based site definition I recently made, I had a site content type hierarchy with a base content type and two child content types defined in a feature. The child types inherit their site columns and display/edit/new forms from their parent.

Another dependent feature contained a list definition and a provisioned list instance that would reference the two child site content types and thus snapshot inherit them as list content types.

The columns of the list are based on the inherited site columns, in reality a snapshot copy of them at the time the list was created. All provisioning of site columns, site content type, list definitions, list instances and list content types are defined in CAML in the features.

As you can see from the above figures, everything looked fine and dandy. Until I tried to create a new list item: all the custom fields were missing in the new item form. The same problem applied to the display and edit forms as well.

Using the "List settings" UI to remove and then add the list content types manually proved to fix the problem, so it had to be an issue related to the way the CAML <ContentTypeRef> element (within <MetaData><ContentTypes>) makes the snapshot copy.

Knowing that the fields of a form are rendered using the <ListFieldIterator>, it was time to check the field collection of the list content types using SharePoint Manager 2007.

In the above figure site content types are to the left and list content types to the right. The "News article" content type added through the "List settings" UI has inherited a full snapshot, but the "Stand-alone article" added by CAML has inherited just the ootb BaseType "Item" fields. So the <RenderingTemplate> for NewForm finds only two fields to render. And the actual problem is in the CAML inheritance.

The problem is that the list columns do not inherit the site columns correctly. In fact, the only official way of making <ContentTypeRef> work correctly using CAML is to repeat the site column definition inside the list definition <Fields> element. This is not a good approach with regards to maintenance of your solution.

There is a similar problem with list content types related to site content types, which is easily rectified by forcing a ContentType Update(push down changes) during provisioning of the site content type feature (see exercise 13-5 in the Building the SharePoint User Experience book).

The list feature is activated after the site content type feature. You might think that you could force another site content type update to rectify the inheritance again. That isn't very feasible, as updating child content types depends on having actual changes to push down; and there are none at this time - remember that the site content types have already been provisioned and pushed down. There are some "best" practices out there on Custom Fields in Schema.xml, but please read on before trying those.

Just adding the list content types from code will create correct snapshots with no fuss:

"When you add a site content type to a list using the object model, Windows SharePoint Services automatically adds any columns that content type contains that are not already on the list. This is in contrast to provisioning a list schema with content types, in which case you must explicitly add the columns to the list schema for Windows SharePoint Services to provision them."

So instead of doing this using CAML, just add them when activating the list feature. The newly created list instance is empty, so the content types can easily be added:

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    using (SPWeb web = (SPWeb)properties.Feature.Parent)
    {
        SPList list = web.Lists["News"];
        //enable management of content types
        list.ContentTypesEnabled = true;
        //inherit the list content types from the site content types
        list.ContentTypes.Add(web.ContentTypes["Stand-alone article"]);
        list.ContentTypes.Add(web.ContentTypes["Series article"]);
        //remove the dummy "Item" list content type
        list.ContentTypes["Item"].Delete();
        list.Update();
    }
}

NOTE: This will only work for list instances created by the feature. The code will not run for list instances added later on - and there are no ListAdding/ListAdded events in WSS3.0/MOSS. For lists added manually, the list content types must also be added manually. Or create an "associate [my] list content types" feature that you can re-activate at will to bind the content types to lists as applicable. Use a marker content type to look for, to avoid hardcoding the list names. Some like to use a timer job to bind the list content types.
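Such a re-activatable binding feature receiver could be sketched like this; the marker content type name and the list/content type names are hypothetical, and error handling is omitted:

```csharp
using Microsoft.SharePoint;

// Sketch of a re-activatable "associate list content types" feature receiver.
// "Article List Marker" and the article content type names are hypothetical.
public class AssociateContentTypesReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        SPWeb web = (SPWeb)properties.Feature.Parent;
        foreach (SPList list in web.Lists)
        {
            // bind only to lists tagged with the marker content type
            if (!list.ContentTypesEnabled) continue;
            if (list.ContentTypes["Article List Marker"] == null) continue;

            // add the list content types only if not already bound
            if (list.ContentTypes["Stand-alone article"] == null)
                list.ContentTypes.Add(web.ContentTypes["Stand-alone article"]);
            if (list.ContentTypes["Series article"] == null)
                list.ContentTypes.Add(web.ContentTypes["Series article"]);
        }
    }

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { }
    public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
    public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }
}
```

Reactivating the feature re-runs the loop, so newly added marker-tagged lists get the content types bound without hardcoding list names.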

The code gets a reference to the list instance, turns on content type management / content types on the new menu, adds the list content types as new snapshot copies, and finally removes a dummy "Item" content type (that is there to make the CAML list definition valid).

My changed list definition Schema.xml looks like this:
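A minimal sketch of the relevant <MetaData> <ContentTypes> section (the commented-out child content type IDs are hypothetical):

```xml
<ContentTypes>
  <!-- child site content types commented out; they are added
       from code in FeatureActivated instead -->
  <!--
  <ContentTypeRef ID="0x0100A0B1C2D3E4F5061728394A5B6C7D8E" />
  <ContentTypeRef ID="0x0100A0B1C2D3E4F5061728394A5B6C7D8F" />
  -->
  <!-- dummy reference to the standard "Item" content type (ID 0x01),
       removed again by the feature activation code -->
  <ContentTypeRef ID="0x01" />
</ContentTypes>
```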

As you can see, I have commented out the CAML <ContentTypeRef> for my two child site content types, and instead just referenced the standard "Item" content type - the dummy that is removed during feature activation. If you still would like to have those <ContentTypeRef> in the Schema.xml, then remove and re-add them from the code.

Note how you still can add your custom fields as <ViewFields> even if their content type is not referenced. Content type management for the list can also be turned on using the EnableContentTypes attribute on the root <List> element in Schema.xml.

That's it. I deleted the list and reactivated the list feature - and voila: the new, edit and display forms contain all fields of the content type selected from the new menu.

[UPDATE] SharePoint 2010 adds the Inherits attribute, which makes CAML inheritance work as expected and makes repeating FieldRefs unnecessary. See also the new 2010 feature upgrade mechanism: the UpgradeActions AddContentTypeField element with the PushDown attribute, which forces changes to a site content type down to all derived types.

Wednesday, July 01, 2009

Review: Building the SharePoint User Experience

Lately I've been reading Building the SharePoint User Experience by fellow Norwegian Bjørn Furuknap, and it is an easy read even if it covers "under the hood" aspects of SharePoint UX such as list definitions, site definitions, content types, custom fields, features, stapling, onet.xml, and the beast itself: CAML. Don't be fooled by the term UX: this book is not about doing SharePoint design using Photoshop, or design as in interaction design (master pages, navigation, CSS, etc). In fact, the book covers content types at great length without ever mentioning search and findability at all. This is indeed a hardcore book for SharePoint developers.

I absolutely recommend reading this book. Parts 1 and 2 cover the basics of the SharePoint building blocks and give a thorough walk-through of these artifacts. The nice thing about part 2 is that the text is based on actually using the stuff, as opposed to the documentation on MSDN, which is rather poor when it comes to real-life aspects.

What I liked the most is part 3, which is a series of exercises that builds a complete feature-based site definition. Doing these exercises will help you really learn the quirks of the covered SharePoint aspects; things that seem easy when reading about them, but that have a lot of gotchas in practice. Don't just read the book, do the exercises! Get the companion eBook (PDF format) to avoid all the mindless typing.

Chapter 15 contains a rather lame way of configuring the site's navigation aspects using code. Your information architect, and not least your interaction designer, will cringe at the thought of such a central aspect being buried inside code instead of designer-friendly XML.

Side note on feature-based site definitions: have a look at the SharePoint Site Configurator on CodePlex made by the MCS IW-team in Norway.

There are some quirks in the exercises which can cause you some grief. The errata page for the book is rather thin, so here is an errata list from my walkthrough:
  • Exercise 11-2, step 3: Note the TypeName here, it will bite you in exercise 11-7.
  • Exercise 11-7, step 3: Here the @Type value of the site column is not the same as the TypeName in the custom field. This will cause the general "the given key was not present in the dictionary" CreateSPField error when activating the feature. Use SPM2007 to diagnose the problem by clicking on the site's Fields node; this will tell you that “Field type ‘TimesFieldType’ is not installed properly” as there is no such custom field.
  • Chapter 11, page 253: The feature Categories list will not be created until chapter 13, so manually create a Categories custom list for use in chapter 12.
  • Exercise 11-9, step 1: The lookup column @List attribute must be removed before running the code in step 2, otherwise you will get the "Cannot change the lookup list of the lookup field" error activating the feature.
  • Exercise 11-9, step 2: Out of the blue, a reference to TimesSiteColumns.cs and some FeatureActivated code that does not exist yet. To make this code snippet work for this site scoped feature, you need code from exercise 13-5 to set the "web" C# local member.
  • Exercise 11-9, step 2: If you removed both @List and @ShowField in step 1, then you must add this code to set the show field: categoriesField.LookupField = "Title";
  • Exercise 12-5, step 8: The 20002 download is empty.
  • Exercise 12-6, step 4: You will only get the content type's defined NewForm because you manually added the list content types using the UI, this will not work in e.g. exercise 13-6. More on this later.
  • Exercise 13-1, step 1: Make this feature web scoped, not site scoped - or you will get an error in exercise 15-2.
  • Exercise 13-5, step 1: The file name should be TimesContentType.cs.
  • Exercise 13-5, step 2: The name of the content type should be "News Article" (with a space).
  • Exercise 13-5: The <MetaData> <ContentTypeRef> elements must be in the list template Schema.xml file to add the content types as list content types. In addition, the <List> element's EnableContentTypes attribute should be included and set to true. Note that this is not sufficient to get a fully working list instance. More on this later.
  • Exercise 13-6, step 3: The 20003 download is empty.
  • Exercise 13-6, step 5: The NewForm of the "News article" child list content type will not show the custom columns added in exercise 13-5 by forcing updated inheritance. Only the standard "Item" fields are shown. More on this later.
  • Exercise 14-2, step 2: There cannot be multiple <asp:Content> controls in an .ASPX page that target the same @ContentPlaceHolderID, here "PlaceHolderMain".
  • Exercise 15-2, step 1: Using the ExecuteUrl element does not work when creating a site-collection from Central Admin.
  • Exercise 15-2, step 2: Refers to a mystery FeatureAdded method and a TimesSetup.cs file, and the code refers to a missing "web" C# local member. Put this code after the code in step 4.
  • Exercise 15-3, step 2: Using "/" as the URL will cause the "Cannot open /: no such file or folder" error. This applies to the rest of the navigation exercises.
  • Exercise 15-5, step 1: Remember to add the <WebFeatures> <Feature> from the above text to the onet.xml file first.
In addition, it seems that parts of chapter 14 are missing; the title says "creating custom editing ... pages", but there is no such lesson there.

Now to the "More on this later" references: In exercise 13-3 you learned that list content types do not correctly relate to child site content types when defined in CAML. In exercise 13-5, this was corrected using code. However, the same problem applies to provisioning list columns: they do not correctly inherit the site columns if not repeated in the list's CAML definition; see the note in How to add a content type to a list.

This typically results in missing fields in new form, edit form, etc. See the comments on the MSDN How to create a custom list definition article for links to several blog posts, but don't blindly follow their advice of adding the fields explicitly to the list Schema.xml.

There is a much simpler solution to the NewForm/EditForm/ViewForm problem by using FeatureActivated code for the list. And that is the topic of my next post.

Tuesday, June 16, 2009

SharePoint jQuery: Managing Scripts

In the two previous posts, I've shown some jQuery scripts for manipulating the SharePoint user interface by adding the scripts directly as source text in a Content Editor Web Part (CEWP). This is, however, not a good way of adding the scripts to pages considering maintenance and sharing of the provided functionality across multiple pages. Just think of making a correction to the generic scrolling view with frozen header script to improve its performance, after adding it directly in CEWPs on tens or hundreds of view pages.

A better approach is to store the scripts as files that are included in CEWP just like the jQuery library itself. You can use a single document library to store the script files; but as some of your jQuery scripts will be more generic (reusable) than others, I would recommend using several managed document libraries based on script classification.

I like to classify the scripts like this:
  • Unique: the script is unique to the web page
  • Shared: the script is shared between multiple related web pages, typically for all view pages of a specific list
  • Common: the script is common for unrelated web pages, typically for view pages for more than one list
Store common scripts as include files in a "MasterScriptLibrary" in the root site-collection to make them common for all site-collections within the same SharePoint web application. This includes the jQuery script itself. Store shared scripts as include files in a "ScriptLibrary" per managed site-collection to make them shared between all sites and subsites within the site-collection. Use a document library in the top-level site of each site-collection as the script library to store your jQuery JavaScript files.

In addition, any custom CSS style sheets you use in your scripts should also be stored in the script libraries. Remember to assign "Read" permissions on the script libraries to the group "Authenticated Users".

Unique scripts need no script library as they are included directly in CEWPs. Really large companies should consider having a global script library that stores jQuery files used across all SharePoint web applications.

Add the include files in this order in a CEWP: jQuery, styles, common and then shared. Add any unique script after all included styles and script.
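As a sketch, the CEWP source following that order could look like this; the library paths and file names are hypothetical:

```html
<!-- 1: the jQuery library from the common "MasterScriptLibrary" -->
<script type="text/javascript" src="/MasterScriptLibrary/jquery.min.js"></script>
<!-- 2: custom styles used by the scripts -->
<link rel="stylesheet" type="text/css" href="/MasterScriptLibrary/CommonStyles.css" />
<!-- 3: common scripts, then shared scripts -->
<script type="text/javascript" src="/MasterScriptLibrary/CommonViews.js"></script>
<script type="text/javascript" src="/sites/sales/ScriptLibrary/SharedViews.js"></script>
<!-- 4: any unique, page-specific script goes last -->
<script type="text/javascript">
  // unique script for this page
</script>
```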

Try to avoid using more JavaScript files than strictly necessary, as many include files will impact the load time of the page.

Always use page relative (../path/file.js) or server relative (/path/file.js) URLs in the src attribute of the <script> element. See this post for how to get site relative URLs. Avoid using absolute URLs at all costs. The web application root site-collection is the "server" in SharePoint terms.

If you don't want to expose and manage the scripts in document libraries, you can always provision the script files to the [12] hive using a feature: SharePoint jQuery deployment feature. See this EndUserSharePoint post for more alternatives for managing include files.


SharePoint Site Relative CEWP Included Script

This post is about injecting site relative JavaScript include files in SharePoint using CEWP. It is not about injecting JavaScript files using the server-side ASP.NET script manager or any other server-side mechanism.

We all know that the src attribute of the <script> element is either page relative or server relative; or God forbid, absolute. You cannot use the tilde (~) to make the URL site relative, but you can achieve this using the SharePoint page L_Menu_BaseUrl variable and inject the include file into the page dynamically:

<script type="text/javascript">
document.write('<script',
' src="' + L_Menu_BaseUrl + '/ScriptLibrary/SiteSpecificScript.js"',
' type="text/javascript"><\/script>');
</script>

This actually makes the include file URL server relative, but without hardcoding the site path into the URL - making it equivalent to being site relative.

Sunday, June 07, 2009

Excel Solutions in SharePoint

In my last post, I talked about the situational solutions that users themselves have created in Excel - and how these isolated workbook islands counter knowledge management and the ability to leverage the information and people ecosystem of your company. To subject the Excel workbook sprawl to shared knowledge management, you must provide Excel-style services as components of your Enterprise 2.0 (E2) platform.

The nice thing about Excel workbooks is that they give the users great flexibility and information processing capabilities in a tool that is quite simple to learn and use. The flexibility of Excel must be preserved if you are to gain the promised E2 benefits: innovation by harvesting your company's collective intelligence through sharing and collaboration.

This post is about the state of most existing Excel 2003 and 2007 solutions I've seen, and how to incorporate them into the SharePoint 2007 platform. I will also talk about the future of Office automation (VBA, macros) and Excel Services/Excel Web Access in the upcoming SharePoint 2010 (SP2010).

Excel has good integration with SharePoint lists, and storing data in SharePoint lists has the benefit of shared data entry and collaboration on the list and collateral information. In Office 2003 a linked SharePoint list can be edited in both Excel and SharePoint and the changes then synchronized back and forth. The list can even be edited offline. This two-way link synchronization mechanism was removed in Excel 2007 - data can still be linked and the link updated, but there is no ootb way to edit that data in Excel 2007 and push it back into SharePoint. A list sync add-in exists, but this will require installation on client PCs, possibly an issue in most large companies.

Still, data can be edited in SharePoint lists and linked into Excel workbooks to use e.g. the impressive charting capabilities of Excel. The simplest way to do this is to switch the list into datasheet view and open the task pane using the action menu (see figure). Export the data as a linked list to Excel, create the chart or pivot table from the linked list, enable auto-update of the linked list, then save the workbook back into a document library. Your users can then open the workbook to do reporting based on the live SharePoint list data.

So, what if you want to have those charts and pivot tables as parts of your SharePoint sites? Wouldn't it be nice to have a unified, shared Excel-style workspace?

Excel 2003 users can try this: Publish a graph/chart created in MS Excel onto your SharePoint Site. Note that Excel 2003 uses the Office Web Components (OWC) that are deprecated. Excel 2007 users can also publish charts as web-pages as shown here. In both cases, the Page Viewer web-part is used to show the published workbook elements.

If you have Excel 2007 and MOSS enterprise edition, then the workbook can be published to Excel Services and instead be made directly available as part of SharePoint through Excel Web Access! Or so you would think - but there are pitfalls...

Excel Services (MOSS enterprise edition only) is a natural choice for enabling shared access to workbook solutions, while allowing the users to create the workbooks themselves. This combines the flexibility of Excel with shared management and publishing of the solutions. There are, however, some limitations: no data editing, no VBA macros or add-ins, and - most painful in MOSS - SharePoint Excel Services cannot consume live data stored in linked SharePoint lists! The latter limitation is a very common topic on the Excel Services forum. According to information on the forum, this problem is very likely to be fixed in SP2010. In the meantime, have a look at consuming SharePoint lists using Excel UDF.

The same problem applies to SQL Server Reporting Services (SSRS): there is no simple ootb way to report on data stored in SharePoint lists. Again, this problem is very likely to be fixed in SP2010. In the meantime, have a look at consuming SharePoint lists using SSRS. Another issue with SSRS is that there is no self-service provisioning of new report types - not exactly enabling E2 freeform, emergent social collaboration on solving business problems.

Also keep an eye on the coming Office 14 web applications and the integration with SP2010. This might give the best of both Excel and SharePoint Excel Services, with a more seamless edit-publish mechanism than today. Maybe even editable, shared, web-enabled workbooks that actually work will become a reality.

So if you somehow enable your Excel Services workbook to consume data from SharePoint lists (I won't even mention the unsupported database view approach), then you will most likely want to provide a richer user experience by allowing filtering of the information shown in the Excel Web Access (EWA) web-parts. And provided that there are multiple EWA web-parts on your page, you may also want to connect them, e.g. so that clicking a cell in one EWA web-part triggers an event that affects the other EWA web-parts.

The EWA web-parts are filter connection consumers. The MOSS filter web-parts can be used to build a powerful filtering UI that connects to Excel Services parameters that do the actual data filtering. Note that the Filter Web Parts are in the MOSS enterprise edition only, so if you're on standard MOSS, you're left with the rather crude HTML Form Web Part or 3rd party filtering web-parts.

You can also use the ExcelServicesAjax.js script in a Content Editor Web Part to connect EWA web-parts using JavaScript, and to use AJAX to access the Excel Services API. Refer to Shahar Prish's blog or his book Professional Excel Services for more details. To avoid scripting connections, whole worksheets can be published instead of multiple single elements - an approach that requires less of non-technical users.

If Excel Services is not an option for you, then have a look at Dundas Charts for SharePoint or FusionCharts. Dundas can consume data from a variety of data sources, and has ootb support for SharePoint lists. This gives the users the E2 flexibility to create mashups by connecting data to charts themselves, within the collaboration platform. The Fusion Flash components also include maps, but there seems to be no ootb "for SharePoint" integration.

The chart creation user experience with Dundas is quite good when all that is needed is configuration, but there is a disconnect from SharePoint terms that will put non-technical users off (e.g. Dundas uses "int32" instead of "number", and what happened to "choice"?). In addition, any customization needed beyond configuration will require VB.NET/C# knowledge - again a duh! for user self-service solutions.

These charting web-parts are also subject to the same filtering and connection aspects as the EWA web-parts. Dundas uses parameter filtering and supports standard web-part connections. Note the lack of wildcards or (all) or (blank) choices in the filter parameters - this makes Excel-style chart filters on text values impossible to recreate.

All in all, if you have data in SharePoint lists, then using SharePoint linked workbook lists in the Excel client may be the best approach while waiting for SP2010. If your data is in a store supported by Excel Services, then I would recommend using Excel Services as this combines user flexibility, self-service, sharing and knowledge management.

I had intended to write about what to do with Excel VBA macros and linked workbooks, and the future of VBA with relation to Excel Services, OOXML, VSTA/VSTO/.NET and Office 14-16 in this post, but due to the length that will be the topic of another post.

Saturday, May 23, 2009

SharePoint: Excel Long-Tail Knowledge Management

In these uncertain times, push systems for knowledge sharing and management are giving fewer results than before due to the limited planning visibility into the future - it is just becoming too hard to provide the users with the tools they need before they need them. The knowledge workers have turned to pull systems to create situational software solutions that assist in solving their business problems. Excel has always been business users' favorite tool for capturing situational solutions and managing their personal knowledge. Excel has become the most successful long-tail client in the enterprise.

For businesses to be successful, they must now cater for rapid innovation through collaborative knowledge sharing pull systems, known as Enterprise 2.0 (E2) systems. Thus, having knowledge locked into personal Excel worksheets on people's disks poses a threat to learning and innovation across a company. The same can be said about any knowledge that is not shared and findable, typically locked down by ultra-traditional policies such as "need to know", applying strict access control by default.

Companies want to unlock all this intrinsic knowledge by using collaboration platforms, and of these SharePoint is what most organizations decide to implement (adoption rate between 55% and 80%).

So business users now face a challenge on how to move their Excel solutions into the E2 sharing and collaboration platforms. Most of them even want to enable shared information entry into worksheets, so they welcome SharePoint as the data capture tool. But how to provide the rich knowledge visualization of Excel on the web? Enter MOSS Excel Services. I recommend reading the Introduction to Excel Services and Excel Web Access article for a good introduction to ES/EWA. Also read Business Intelligence with SharePoint and Excel on TechNet Magazine for an analysis of some usage scenarios.

It might all seem simple at this point, but there are some limitations with Excel Services and multiple other options for re-implementing Excel client-side solutions functionally on the SharePoint platform. Add to the picture VBA macro infested Excel workbooks, and there is no simple migration path. In addition, the choices you make can exclude the business "super users" from being able to create situational solutions - hindering the Web 2.0 (W2) style flexibility that you aimed to provide in the first place.

A SharePoint roadmap for Excel long-tail workbooks is needed, and that is the scope of my next post.

Sunday, April 26, 2009

SharePoint Governance Guidelines

The SharePoint Products and Technologies governance checklist guide on the Technet SharePoint Governance Resource Center is very popular among our customers. It covers aspects from architectural planning, roles and policies, to operational practices.

To complement this official checklist, I've collected some more concrete guidelines that my customers often request when getting started with MOSS. It covers additional aspects such as driving findability and user lifecycle management. The guidelines are available here: MOSS governance guidelines.pdf

If you have any suggestions for more recommended governance practices, please leave a comment.

[UPDATE] See the updated governance guidelines here, including new SharePoint 2010 aspects.

Thursday, March 26, 2009

Emergent vs Deliberate, Push vs Pull

The Web 2.0 "Emergent vs Deliberate" struggle for control is not new; the discussions are at least a couple of years old. However, it is still very applicable inside the enterprise due to the lack of Web 2.0 features in business applications. Add to this that Information Architects - strongly supported by document and records management experts - prefer controlled vocabularies over social freeform tagging for knowledge management. Imagine enforcing a centrally controlled taxonomy, instead of just suggesting tags to users based on the wisdom of crowds. The suggested tags are emergent, their quality driven by all the users that have tagged the resources.

The "Push vs Pull Systems" by John Hagel is a broader model that describes the transition from centrally decided and pushed tools and applications into a decentralized pull-based model where users themselves find and combine software to solve the business problem at hand. Companies will want to try to harvest the benefits of the freeform, emergent social collaboration tools, while being able to govern these tools and processes. The problem is that most business applications today are not designed to combine "deliberate" with the "pull model". As stated by Dion Hinchcliffe:

"The challenge will be learning how to apply these new models effectively to business while not strangling them with the traditional aspects of enterprise software that can greatly limit their potential and have led to poor outcomes and excessive structure in the past."

Typically, a central business development unit supported by a well-known management consulting company will decide on SharePoint for collaboration and management of knowledge and then use the push-model to drive the adoption across the company. I've seen this strategy in practice lately, and I can tell you that it takes a lot of effort to deliver something close to a "consumer web" experience. This combination of SharePoint, deliberate and push-model to enable Enterprise 2.0 poses several challenges as described in "Sharepoint and Enterprise 2.0: The good, the bad, and the ugly". Compare this approach with Excel, the most successful long-tail application ever inside enterprises, that empowers business users to create self-service situational software solutions.

You need to relax the idea of a central command-and-control taxonomy and governance if you're to leverage emergence in the information and people ecosystem of your company, partners and customers. What's more important? To enable users to solve business problems, or to enable them to store gold-standard records of those problems?

Wednesday, March 25, 2009

Enterprise 2.0 Zen: Emergent Crystallization of Quality

I am a huge fan of the book "Zen and the Art of Motorcycle Maintenance", which looks into the metaphysics of quality. A central part of this is "crystallization" driven by quality: you have to have a sense of what's good and what's not. Crystallization separates the useful from the meaningless in perceived information - it applies quality.

The elevator pitch of Enterprise 2.0 is that it is freeform, emergent, social collaboration that draws on our collective intelligence to solve business problems. So what exactly is "emergent"?

The idea behind the notion of emergence is simple:

Individual agents, self motivated and operating according to simple rules, can suddenly produce patterns. In certain circumstances, these patterns can evolve into behavior that is intelligent.

Emergence is what happens when the whole is smarter than the sum of its parts.

[Emergence: the connected lives of ants, brains, cities and software by Steven Johnson]

It is micro-rules leading to adaptive macro-behavior. So emergence is like crystallization: it can produce patterns based on standards for interaction that may lead to good results - it is collective quality, if you like.

Wednesday, March 11, 2009

WCF Queued Dual Router Updated

I've updated the source for the WCF Queued Dual HTTP Request Response Router to now support sending out-of-band fault and notification messages back to the consumer.

The added unified fault handling support for the router and services uses ChannelFactory<T> where T is IOutputChannel, to support generic one-way channel shapes. The same generic WCF channel shape is used for sending notification messages.

The messages sent over the IOutputChannel are created using Message CreateMessage(...) to keep things generic in the router and on the service end. The fault and notification messages on the consumer end are of course strongly typed.

Faults and notification messages are sent from service providers using a ThreadStart delegate to send messages on separate threads to avoid being blocked by ongoing processing. This code is in the RoutedResponseScope class.
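The out-of-band send pattern can be sketched as follows; the endpoint configuration name and action URI are hypothetical, and the actual RoutedResponseScope code may differ:

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.Threading;

public static class OutOfBandSender
{
    // Send a notification on a separate thread so the service thread
    // is not blocked by the ongoing request processing.
    public static void SendNotification(object notificationBody)
    {
        ThreadStart sendAction = delegate
        {
            ChannelFactory<IOutputChannel> factory =
                new ChannelFactory<IOutputChannel>("NotificationQueueEndpoint");
            IOutputChannel channel = factory.CreateChannel();
            channel.Open();
            // an untyped message keeps the router and service ends generic
            Message message = Message.CreateMessage(
                factory.Endpoint.Binding.MessageVersion,
                "urn:QueuedPubSub/Notification", notificationBody);
            channel.Send(message);
            channel.Close();
            factory.Close();
        };
        new Thread(sendAction).Start();
    }
}
```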

The sample code also shows how to write integration "unit" tests for async dual WCF request-response MEPs, using static EventHandler<T> events with anonymous delegates and ManualResetEvent thread synchronization.

Download the router example code here. The code is provided 'as-is' with no warranties and confers no rights. This example requires Windows Process Activation Service (Vista/WinServer2008), MSMQ 4.0 and .NET 3.0 WCF Non-HTTP Activation to be installed.

Wednesday, February 25, 2009

WCF Queued Dual HTTP Request Response Router

There are a lot of examples available for how to make WCF-based publish/subscribe messaging solutions, but not very many that provide a simple queued dual HTTP request-response router. IDesign provides the queued Response Service from Juval Löwy's book Programming WCF Services, which provides a feature-rich set of WCF goodies. Recommended. I've used it in combination with Sasha Goldshtein's Generic Forwarding Router to create a dual channel queued router.

The router container uses HTTP endpoints externally to interact with consumers, and MSMQ queues internally to talk to the service providers. This gives fewer queues to manage and monitor, and the consumers need not know anything about MSMQ.

The message forwarding ChannelFactory is cached for best performance. Read more about Channel and ChannelFactory caching in Wenlong Dong's post Performance Improvement for WCF Client Proxy Creation in .NET 3.5 and Best Practices.
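A minimal sketch of such caching in the router, using double-checked locking (the endpoint configuration name is hypothetical):

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;

public static class ForwardingChannelFactoryCache
{
    private static readonly object SyncRoot = new object();
    private static ChannelFactory<IOutputChannel> factory;

    // The ChannelFactory is expensive to create and open; cache it once
    // and create cheap channels from it per forwarded message.
    public static IOutputChannel CreateChannel()
    {
        if (factory == null)
        {
            lock (SyncRoot)
            {
                if (factory == null)
                {
                    ChannelFactory<IOutputChannel> created =
                        new ChannelFactory<IOutputChannel>("ServiceQueueEndpoint");
                    created.Open();
                    factory = created;
                }
            }
        }
        return factory.CreateChannel();
    }
}
```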

Some useful resources related to making this router work:

Note the poorly documented requirement that the WAS activated service endpoint path must be reflected in the MSMQ queue name:

<add key="targetQueueName" value=".\private$\ServiceModelSamples/service.svc" />

<endpoint address=

Failure to adhere to this convention will prevent messages added to the queue from automatically activating the WAS-hosted WCF service. Also ensure that the WAS app-pool identity has access rights on the queues.
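How the pieces line up can be sketched as follows; the net.msmq address is the standard WAS activation form, and the binding and contract names here are assumptions:

```xml
<!-- appSettings: the MSMQ queue name mirrors the virtual path of the .svc -->
<add key="targetQueueName" value=".\private$\ServiceModelSamples/service.svc" />

<!-- client endpoint: the net.msmq address uses the same relative path,
     so the queued message activates the matching WAS-hosted service -->
<endpoint address="net.msmq://localhost/private/ServiceModelSamples/service.svc"
          binding="netMsmqBinding"
          contract="IGenericForwarder" />
```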

Download the router example code here. The code is provided 'as-is' with no warranties and confers no rights. This example requires Windows Process Activation Service (Vista/WinServer2008), MSMQ 4.0 and .NET 3.0 WCF Non-HTTP Activation to be installed.

Friday, February 20, 2009

WCF: Message Headers and XmlSerializerFormat

Sometimes you need to use the classic XmlSerializer due to interoperability, or when doing schema first based on XSD contracts that contain e.g. XML attributes in complexTypes. I've used the [XmlSerializerFormat] switch on services many times without any problems, but recently I had to make use of a custom header - and that took me quite some time to get working.

This is the wire format of the header:

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope">
<h:CorrelationContext xmlns:h="urn:QueuedPubSub.CorrelationContext" xmlns="urn:QueuedPubSub.CorrelationContext" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
. . .

The WCF OperationContext provides access to the message headers:

CorrelationContext context = OperationContext.Current.IncomingMessageHeaders.GetHeader&lt;CorrelationContext&gt;(Constants.CorrelationContextHeaderName, Constants.CorrelationContextNamespace);
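The sending side is symmetric. A sketch, assuming the CorrelationContext and Constants types from this post: the typed header is wrapped into an untyped MessageHeader and added to the outgoing message.

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

// Sketch: wrap the typed header object and attach it to the outgoing
// message; GetUntypedHeader serializes it via the DataContractSerializer.
var context = new CorrelationContext { CorrelationId = Guid.NewGuid() };
MessageHeader untyped = new MessageHeader<CorrelationContext>(context)
    .GetUntypedHeader(Constants.CorrelationContextHeaderName,
                      Constants.CorrelationContextNamespace);
OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
```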

I had used [XmlElement] attributes to control the name and namespace of the [Serializable] class members, only to get this error:

'EndElement' 'CorrelationContext' from namespace 'urn:QueuedPubSub.CorrelationContext' is not expected. Expecting element 'CorrelationId'.

That really puzzled me. To make a long story short, the MessageHeaders class and its GetHeader&lt;T&gt; method only support serializers derived from XmlObjectSerializer, such as the DataContractSerializer - but not the XmlSerializer. To make this work, your header class must be implemented as a [DataContract] class, even for [XmlSerializerFormat] services.

My working message header contracts look like this:

[DataContract(Name = Constants.CorrelationContextHeaderName, Namespace = Constants.CorrelationContextNamespace)]
public class CorrelationContext
{
    [DataMember]
    public Guid CorrelationId { get; set; }
    [DataMember]
    public string FaultAddress { get; set; }
    [DataMember]
    public int Priority { get; set; }
    [DataMember]
    public string ResponseAddress { get; set; }
}

[MessageContract(IsWrapped = true)]
public abstract class MessageWithCorrelationHeaderBase
{
    [MessageHeader(MustUnderstand = true, Name = Constants.CorrelationContextHeaderName, Namespace = Constants.CorrelationContextNamespace)]
    [XmlElement(ElementName = Constants.CorrelationContextHeaderName, Namespace = Constants.CorrelationContextNamespace)]
    public CorrelationContext CorrelationContext { get; set; }
}

This code was made and tested on .NET 3.5 SP1.