Tuesday, November 29, 2005

MSCRM 3.0 in a multi AD forest infrastructure

MSCRM 3.0 by default supports a single AD domain (really a single AD forest) and a single Exchange 'organization'. The full spectrum of MSCRM functionality will be available to your users when your infrastructure adheres to these requirements. I will call the AD domain into which you install MSCRM, SQL Server 2000/2005 and SRS, the native domain. The same term is used for the Exchange organization of the native AD domain.

These are the infrastructure challenges you will most likely encounter outside the native domain:

  • Outlook desktop client: access to MSCRM platform services
  • Outlook laptop client: desktop + go offline and online
  • Exchange: Routing of incoming e-mails to MSCRM users and queues
  • SQL Server Reporting Services: access to SRS services for reporting
First an overview of what should work and what should not, depending on some 'unsupported' infrastructure scenarios:

If you have multiple AD forests without explicit trusts, then the users not in the native domain will get only basic MSCRM functionality: the web client over HTTPS with basic authentication. These users will be able to use neither the online nor the offline Outlook client (MSCRM desktop / laptop client), as they are not logged on to the domain. Note that such users will not get full reporting functionality with SQL Server Reporting Services (SRS) in this scenario either.

If you have multiple Exchange organizations without explicit trusts, then the users not in the native organization (forest) will get only basic send-e-mail functionality; the 'e-mail router' will not be able to automatically route incoming e-mails, as the mailboxes are not in the native organization. In addition, the router cannot access the native AD domain when it is not explicitly trusted.

If you have users in an NT4 domain with a one-way trust from the native domain, these users will be able to use both the web and desktop client. They will not be able to use the laptop client as they cannot go off/online, incoming e-mail will not be routed to them, and they will not get full reporting functionality.


Then an overview of how a multi AD forest and Exchange organization infrastructure can be configured to support full MSCRM functionality:

First of all, forget getting full MSCRM functionality for NT4 domains. Microsoft does not support NT4 anymore, so you're on your own.

The good news is that your users across several AD forests will be able to get the full spectrum of functionality available in MSCRM 3.0. This will just require some configuration of your infrastructure.

The most important aspect is that you have to add at least one-way trusts from the native MSCRM domain to the other domains. Trusting requires a LAN, WAN, or VPN connection between your domains. Support for full MSCRM over plain HTTPS is not possible.

The MSCRM Outlook client requires Windows Authentication / Kerberos against the native AD domain and usage of the default security credentials on the client PC. Thus, by adding one-way trusts, your users will be able to use both the MSCRM desktop client and the laptop client. Sending e-mail will of course work, while routing of incoming e-mails to users and queues will require some more configuration (see below).

Note that basic SRS reporting functionality will be available with just one-way trusts. For full reporting functionality, two-way trusts are needed between the AD forests. Alternatively, you need to configure a fixed identity on the clients for accessing the SRS reports (KB article to be published).

MSCRM 3.0 now supports having multiple Exchange servers in your native Exchange 'organization', including Exchange clusters. It is no longer required that you have a single Exchange server handling all incoming internet e-mails for your 'organization', as the functionality of the MSCRM e-mail router has changed in v3.0.

The v1.2 router had this limitation, which made it impossible to have one common MSCRM database in a company with multiple Exchange organizations (mail domains). E.g. I work in a company with several daughter companies and thus mail domains (itera.no, objectware.no, gazette.no, etc). This meant that with v1.2 we could not get full mail functionality in MSCRM. With MSCRM 3.0, we finally can.

The router no longer inspects all incoming mail messages, but rather a specific MSCRM mailbox.

The inspection of all incoming mails has been replaced by Exchange rules that must be deployed to each Exchange server that contains one or more mailboxes of MSCRM users and queues.

The Exchange rules, the MSCRM mailbox and the E-mail Router by default require mailboxes to be in the native domain and native 'organization', as the router must be able to access the MSCRM platform services to do its work.

You can deploy the mail routing rules and components to other Exchange organizations, provided that you configure the routing service to use an identity that has access to the MSCRM platform services. This will of course require that you have at least a one-way trust between the AD domains.

Wednesday, November 23, 2005

MSCRM 3.0 added fields - row size limitation

MSCRM 1.2 had an undocumented limit to the number of (actually, the combined size of) fields you could add to an entity. At least the MBS marketing department did not know anything but "you can add any number of custom fields as you like". This limit is imposed by the SQL Server 2000 maximum row size of 8KB, minus some overhead for replication. In addition, v1.2 used updatable SQL views with 'before triggers', which further limited the available size. Some of the entities in v1.2 are quite large to begin with, e.g. the Contact entity, and you would soon hit the roof.

In v3.0, they have raised the limit by providing an extra full row for custom fields, i.e. a separate table on each entity for the added fields. In addition, SQL replication is gone, and so are the updatable views. Thus you will be able to exploit the full range of bytes in a row as you please. The new tables are named *ExtensionBase, e.g. AccountExtensionBase. All text is of course still unicode, thus each char will take up two bytes in the database row; e.g. a 100-character text field consumes 200 bytes of the 8KB row.

Note that all new custom fields are added to the *ExtensionBase table, custom fields are no longer injected into the native table of an entity.

SharePoint has a similar mechanism for custom metadata on lists; the metadata fields all share the same database table. Although this limitation exists, it rarely imposes practical restrictions in our solutions, and I think that the same will apply to MSCRM custom fields.

Saturday, November 19, 2005

Noogen.Validation - WinForms validation made easy

After doing mostly ASP.NET and SharePoint solutions, I was quite pleased with the validation mechanisms of ASP.NET. I was very surprised and disappointed when I moved to developing WinForms solutions, and had to downgrade from the ASP.NET validator and validation summary controls to the WinForms stuff.

Gone were the validators and I had to use an ErrorProvider on my forms, as if I need something to provide me with errors. The worst was the need to iterate recursively over all controls on a form when clicking OK, to ensure that all errors had been corrected and that validation events returned success, before e.g. calling my biz-logic to save changes. I just wanted to have my Page.IsValid property back.

At my current project we agreed that the standard validation mechanism was too awkward for us, and one of the developers did some research to find a component that would make WinForms validation as simple as WebForms validation. This led us to Noogen.Validation at CodeProject, a control that we have been using for some time now.

I just love the simplicity and flexibility of the Noogen.Validation component, and I strongly recommend it. It is the best add-on component I have used since the Farpoint Spread control for VB6. Thanks, Noogen!

Wednesday, November 16, 2005

System.Transactions LTM "limitation"

I have used the 'transaction context' aspect of component services, such as [Transaction(TransactionOption.Required)] in .NET and MTSTransactionMode in VB6, in a lot of components I have implemented since the first release of MTS/COM+. I just love having declarative transactions through the context, as this gives maximum flexibility in the ways components can be instantiated, mixed and used. EnterpriseServices does, however, incur some performance overhead due to e.g. using the DTC (Microsoft Distributed Transaction Coordinator).

ADO.NET 2.0 provides a promising new mechanism that is similar to the COM+ transaction context, through the System.Transactions namespace (MSDN Mag intro article). System.Transactions provides a new Lightweight Transaction Manager (LTM) that in combination with SQL Server 2005 is capable of providing transaction context through the TransactionScope class, without the overhead of DTC. The LTM is the starting point of a SS2K5 transaction, and it can be promoted into using DTC when other resource managers are involved in a transaction.

I was very disappointed when I implemented my first nested TransactionScope and ran my unit test on the biz logic method. The code gave this exception:

"Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool."

This was caused by code that involved two TableAdapters within a single transaction. Each TableAdapter contains its own SqlConnection that it opens and closes when appropriate. All connections are identical and use the same SS2K5 database (resource manager). Still, the LTM decides that because two connections are opened, a full DTC transaction is needed. Microsoft has confirmed this "limitation", which I would rather call a design error in System.Transactions. It is, after all, the same resource (database) within the same resource manager (SS2K5). Quote MSDN: "The LTM is used to manage a transaction inside a single app domain that involves at most a single durable resource."
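A minimal sketch of the pattern that triggers the promotion; the table adapter and dataset names are illustrative, but the shape matches the scenario above: two SqlConnections opened in sequence inside one TransactionScope.

```csharp
// Sketch only: the adapter/dataset types are hypothetical, generated by the
// VS2005 DataSet designer. The point is that two separate SqlConnections
// inside one TransactionScope cause the LTM to promote the transaction to a
// full DTC transaction, even though both hit the same SS2K5 database.
using (TransactionScope scope = new TransactionScope())
{
    OrderTableAdapter orderAdapter = new OrderTableAdapter();
    OrderLineTableAdapter lineAdapter = new OrderLineTableAdapter();

    // Each adapter opens and closes its own SqlConnection internally.
    orderAdapter.Update(dataSet.Order);       // first connection: stays LTM
    lineAdapter.Update(dataSet.OrderLine);    // second connection: promotes to DTC

    scope.Complete();
}
```

With MSDTC network access disabled, the second Update is where the "Network access for Distributed Transaction Manager (MSDTC) has been disabled" exception surfaces.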

What is the point of using System.Transactions instead of System.EnterpriseServices when even the simplest real life scenario with multiple TableAdapters causes DTC to be required? This is required e.g. when updating an order and its order lines (one DataSet, two DataTables, two TableAdapters). Microsoft should really start providing samples that go beyond single connection, single class, single component, single assembly applications.

I recognize that System.Transactions is a lightweight framework as opposed to System.EnterpriseServices, but it should not cause "bad" component interface design such as passing open SqlConnection objects around as parameters to avoid DTC for a single resource.

[UPDATE] If you need to stay LTM, you should consider using the DbConnectionScope provided by the ADO.NET team. Note that this will use a single connection for the duration of the scope; staying LTM, but also keeping the connection resource open longer - which counters connection pooling advantages.

[UPDATE] Read about the LTM improvement in .NET 3.5 and SQL Server 2008 and some less known System.Transactions gotchas here.

Monday, November 14, 2005

VSTS/TFS - xcopy to latest build; assembly references

The Team Build system of the Team Foundation Server builds a solution to a drop folder $(DropLocation) with a new sub-folder for each successful build $(BuildNumber). Using a dynamic folder as the source of a project's referenced assemblies in Visual Studio is not supported, thus a post build action is needed to copy the built assemblies to a fixed location.

The task of copying the generated assemblies to a fixed 'latest build' folder is called 'Publish' in TFS. How to configure this custom action to xcopy *.* is, however, not bleeding obvious when setting up your build; in addition, the documentation seems to be incorrect. We have used this custom action configuration (see last reply) to publish to our \latestbuild\ folder. The "workaround" is to use <CreateItem> instead of an <ItemGroup>.
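The shape of such a publish action, assuming a Team Build 2005 TfsBuild.proj with an AfterDropBuild extensibility target; the paths and the item name are illustrative. The reason <CreateItem> is needed is that it evaluates at execution time, after the drop folder actually exists, while a declarative <ItemGroup> is evaluated before the build runs.

```xml
<!-- Sketch only (TfsBuild.proj): target name, paths and item name are assumptions. -->
<Target Name="AfterDropBuild">
  <!-- CreateItem enumerates the files at execution time, after
       $(DropLocation)\$(BuildNumber) exists; a static <ItemGroup>
       would be evaluated too early and match nothing. -->
  <CreateItem Include="$(DropLocation)\$(BuildNumber)\Release\**\*.*">
    <Output TaskParameter="Include" ItemName="PublishFiles" />
  </CreateItem>
  <Copy SourceFiles="@(PublishFiles)"
        DestinationFolder="\\server\latestbuild\%(RecursiveDir)" />
</Target>
```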

I really think that the Team Build wizard should include an option to specify a latest build location in the Location step.

These MSDN blog posts 'Part I' and 'Part II' explain how assembly references are resolved in Team Build. Note how the recommendations are different for intra-solution and cross-solution references:
  • Intra solution: use project references, not file references to your assemblies
  • Cross solution: use file references to your assemblies, and add an AfterBuild custom post build step in each of the assembly projects to copy the generated assemblies to the common 'binaries' location
Note that the 'post build step' custom action must be added to the assembly project, not the team build project. Ensure that you scroll down to see all the text of Manish Agarwal's part II posting on assembly references.

'Part III' is about references to a set of common assemblies, and this is where the 'xcopy' team build custom action comes in handy, e.g. to the shared location \Objectware.ShipBroker.Application\latestbuild\

Friday, November 11, 2005

VS2005 Add new data source wizard crash

Today I had a strange problem with VSTS 8.0.50727.42: I could not add a new data source of type 'object' to my Windows control library project. Adding a database or web-service data source worked fine.

The wizard would crash when trying to open the 'Select the object you wish to bind to' page of the wizard. The error message was this:

An unexpected error has occurred.
Error message: Object reference not set to an instance of an object.


If I made a new Windows control library project and added classes from my business entity assembly as object data sources, the wizard worked as expected (kind of stupid that several classes cannot be added in one go, though). After several hours of experimenting with project references, structure, and even names, I finally saw the pattern of when the wizard worked and when it crashed. The wizard is dependent on your project having at least one public class in the root folder of your project.

In my Windows control library project I have structured the source code into several folders with no classes in the root. VS by default generates a class called 'UserControl1' in the root, and if you delete this class, the wizard will fail.

I now use a dummy class 'XDummyClassForDataBinding' in my project.

Thursday, November 10, 2005

VSTS - test project location and output

As a seasoned developer, I have a legacy of project folder structure preferences. Among other things, I like to keep non-source code stuff such as solutions and setup projects separate from the actual source code. The structure typically looks like this:

\source
    \sln
        \app1
    \src
        \app1
    \test
        \app1.test
    \setup
    \latestbuild
    \references

\test
    \app1.test
        \testresults

Adding a unit test using VSTS (right click method name - "Create unit test"), however, creates the test project folder as a subfolder at the location of your .SLN file. It is easy to move the generated test project. Just remove it from the solution, move the project files and folders with Explorer, then add the test project to the solution from the new location (Add-Existing project). You should also move the localtestrun.testrunconfig file to the applicable test project folder. Note that the Test Manager file (.VSMDI) of a solution cannot be moved.

The bottom \test\ folder in the above list is the target for all output and reports created when running unit tests. I use a folder outside the \source\ folder to keep this stuff separate from the source code of the unit tests and the application itself.

VSTS produces a 'run details' file each time you run a unit test, and the test results are stored as .TRX files in a TestResults folder (yes) at the location of your .SLN file. The location of the test output was configurable in 'Edit test run configurations - Local test run - Deployment' in the VSTS betas, but this setting is now visually gone. Fear not, the setting is still in the .testrunconfig file, which is plain XML.

Open your .testrunconfig file and edit these elements:

<userDeploymentRoot type="System.String">..\..\test\app1.test\testresults\</userDeploymentRoot>
<useDefaultDeploymentRoot type="System.Boolean">False</useDefaultDeploymentRoot>


Note that the last setting must be false, not true as someone has posted on forums.microsoft.com.

With these modifications to the default VSTS unit testing structure, my solution is now the way I like it. Maybe I am fighting the Visual Studio system too much; after all, Microsoft may have done usability studies to decide that their structure is the best...

Wednesday, November 09, 2005

MSCRM: issues with one-way trusts between domains

At one of our customers we had to set up a new Active Directory domain for MSCRM 1.2, as their existing domain was NT4. Thus, all the users and their mailboxes stayed in the NT4 domain, while MSCRM and SQL Server were installed in the new AD domain. This deployment is "supported" by Microsoft, but beware of the small print and omissions.

First of all, "go offline" in Sales for Outlook (SFO) does not work when the users reside in a trusted NT4 domain. We never got to test "go online" for obvious reasons. This might be due to v1.2 using SQL Replication, which in v3.0 has been replaced by the good, old BCP tool. Note that v3.0 still uses MSDE as the offline database and not SQL Express. Both SQL Server 2000 and 2005 are supported by MSCRM 3.0 as the master database.

Then the famous "E-mail Router": setting up routing of incoming e-mails as shown in the implementation guide works, sort of. Install a new Exchange Server in the AD domain and use either a CRM subdomain or forwarding of non-CRM e-mails to the original Exchange Server. Beware of the small print, however! Only e-mails to mailboxes registered in the native AD domain of MSCRM will be processed by the router. Thus, mails to a user will not be routed, even when they are replies to MSCRM e-mails, as the users are in the NT4 domain. The only AD mailboxes we had were for queues (support@myco.com, etc.), and routing of incoming e-mails to these queues works like a charm.

We are currently deploying MSCRM 3.0 in a similar scenario, this time with five customer divisions, each with its own AD domain, and the domains are not within a single, common AD forest. Each domain (customer division) has its own Exchange server. I will post our experiences on the limitations with this infrastructure later on.

Note that an Exchange 'organization' cannot span AD forests, and that MSCRM is limited to one Exchange 'organization'. This restricts MSCRM with full Exchange e-mail functionality to a single AD forest.

Tuesday, November 01, 2005

Configuration of c360 "My workplace" add-in

Anyone who implements professional MSCRM solutions will at some point need one or more c360 add-ins. We have used their SearchPak, Email to Case (just love it), and this week the "My Workplace" add-in. The workplace add-in allows users to personalize the view of queues: selecting activity and case columns, specifying sorting, etc. The standard MSCRM queue view sorts alphabetically on subject, while sorting on received/due date is normally requested.

The installation went quite OK. As usual we had to replace our customized isv.config file with our backup copy of the original, otherwise the setup kit will not be able to modify the file. This is a bit annoying, as a diff-merge is then needed to merge the new changes into the working copy of our customized isv.config file. Make sure that every added <NavBarItem> element is on one line only, as linebreaks will prevent MSCRM from running, and lock the config file. Use iisreset.exe to release the file lock if you get syntax problems.

The added "My Workplace" module (QueueManager) would not load, responding with this error message: The request failed with HTTP status 400: Bad Request. The offending code was easily located after adding a new web.config file in the \custom\c360\ folder, then setting <customErrors mode="Off" /> and <compilation debug="true" /> to see the actual error. It was the c360 license provider that was not able to call the MSCRM web-service to get details about the authenticated user. The call to the .WhoAmI() method of the MSCRM platform proxy object resulted in a SOAP error.
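For reference, the diagnostic web.config dropped into the \custom\c360\ folder looks roughly like this (a minimal sketch; remember to revert it once you are done debugging):

```xml
<!-- Minimal diagnostic web.config for the \custom\c360\ folder (sketch). -->
<configuration>
  <system.web>
    <!-- Show the real exception instead of the generic error page. -->
    <customErrors mode="Off" />
    <compilation debug="true" />
  </system.web>
</configuration>
```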

In addition, c360 code is well behaved and writes info to the Windows application event log on the MSCRM server. The event source is "c360.Toolkit" and provides you with data such as the page URL and the URL of the web-service. The web-service URL shown in the event was wrong, using the server name instead of the IIS site name.

This is the "undocumented" way to configure the exact URL of the web-service:

  1. Open the \custom\c360\config\c360.config file
  2. Add this key to the <appSettings> element:
    <add key="WebServicesUrl" value="http://server/MSCRMServices" />
Note that for some reason it is not the c360.QueueManager.config that must be changed.

You might need to use the IP-address of your MSCRM site instead of the host name to get things working. Authentication between IIS sites on the same server can sometimes be hard to diagnose when using e.g. host headers, but using the specific IP-address of a web-service has always worked for me. This MSDN article on IIS authentication and credentials is recommended reading.

The need for some of the c360 add-ins will decrease with MSCRM 3.0, but c360 will no doubt continue to provide products that will complement and enhance the standard MSCRM functionality. They have announced support for v3.0 for all their add-ins within two weeks of MSCRM v3.0 RTM.

Objectware is the Norwegian c360 partner.