Showing posts with label Virtualization. Show all posts

Saturday, August 25, 2012

Use Azure VMs with On Premises Gateway as SP2010 Branch Office Farm

Here are some notes from my ongoing experience of setting up a SharePoint 2010 "branch office" farm in Azure, using the current preview of persistent Azure virtual machines in an Azure virtual network connected to the on premises Active Directory through an Azure gateway and a Juniper VPN device.
Useful TechEd Europe 2012 web-casts that show how things work in general:
VM+VNET in Azure: http://channel9.msdn.com/Events/TechEd/Europe/2012/AZR208
AD in Azure: http://channel9.msdn.com/Events/TechEd/Europe/2012/SIA205
SP in Azure: http://channel9.msdn.com/Events/TechEd/Europe/2012/OSP334

Step by step instructions that I followed:
How to Create a Virtual Network for Cross-Premises Connectivity using the Azure gateway.
How to Install a Replica Active Directory Domain Controller in Windows Azure Virtual Networks.

Creating a virtual network (vnet) is easy using the Azure preview management portal. I recommend creating the local network first, as the vnet wizard otherwise might fail - without giving any useful exception message. We had an issue caused by naming the local network "6sixsix", which didn't work because the name starts with a digit. Also note that the VPN gateway only supports one LAN subnet in the current preview.

Plan your subnets up front and make sure that they don't overlap with the on premises subnets. Register both the existing on premises DNS server and the planned vnet DNS server when configuring the vnet. A tip here is that the first VM created in a subnet gets .4 as the last part of its IP address, so if your ADDNSSubnet is 10.3.4.0/24, then the vnet DNS server will get 10.3.4.4 as its IP address. Note that you can't change the DNS configuration after adding the first VM to the network; this includes creating the Azure gateway, which adds devices to the gateway subnet.

After creating the Azure virtual network, we created and started the Azure gateway for connecting to the on premises LAN over a site-to-site IPsec VPN tunnel. Creating the gateway takes some time, as devices or VMs are provisioned in the gateway subnet you specified. We then sent the public IP address of the gateway, the shared key, and the configuration script for the Juniper VPN device to our network admin. The connection wouldn't come up, and to make a long story short, the VPN configuration needs the 'peerid' set to the IP address of a device in the gateway subnet. Our gateway subnet was 10.3.1.0/24, and after trying 10.3.1.4 first (see the tip above), the network admin tried 10.3.1.5 and that worked. I'll come back to this below when telling you about the incident where Microsoft deactivated our trial Azure account.

With the Azure virtual network up and running and connected to the on premises LAN, I created the AD DNS virtual machine using the preview portal's "create from gallery" option. As SP2010 is not supported on WS2012 yet, I decided to use the WS2008R2 server image for this Azure server farm. Note that you should use size "large" for hosting AD, as you need to attach two data disks for storing the AD database, log files and system state backups.

I did not use PowerShell for creating this first VM; instead, I manually changed the DNS setting on the network adapter (both IPv4 and IPv6) and then manually joined the to-be AD DNS VM to the existing domain. While you're at it, also set the advanced DNS option "Use this connection's DNS suffix in DNS registration" for both network adapters. Otherwise you will get the "Changing the Primary Domain DNS name of this computer to "" failed" error when trying to join the domain.
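On WS2008R2 the DNS change can also be scripted through WMI instead of clicking through the adapter dialogs. This is just an illustrative sketch - the 10.3.4.4 address follows the ".4" tip above and the filter assumes the Azure adapter is the only IP-enabled one; adjust both to your own environment:

```powershell
# Sketch: set the DNS servers and enable DNS-suffix registration on the
# VM's adapter (vnet DNS first, on premises DNS second - placeholders).
$nic = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = true"
$nic.SetDNSServerSearchOrder(@("10.3.4.4", "10.0.0.10")) | Out-Null
# Corresponds to "Use this connection's DNS suffix in DNS registration"
$nic.SetDynamicDNSRegistration($true, $true) | Out-Null
```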

Following the how-to for setting up a replica AD in Azure worked fine; we only had some minor issues due to the existing AD being on WS2003. For instance, we found no DEFAULTIPSITELINK when creating a new site in AD, so we had to create a new site link first, then create the site, and finally modify the site link so that it linked the Azure "CloudSite" and the LAN site. The dcpromo wizard step for AD site detection also didn't manage to resolve against our WS2003 domain controller; just click "OK" on the error message and manually select "CloudSite" on the "Select a site" page.


I really wanted to set up a read-only domain controller (RODC) to save some outgoing (egress) traffic and thus some money, as this branch farm doesn't need a full fidelity domain controller. However, it is not possible to create a RODC when the existing DC is on WS2003, because RODC is a WS2008 feature. So for "Additional Domain Controller Options" we went with DNS and "Global Catalog" (GC). GC isn't required, but without it all authentication traffic on login needs to go all the way to the on premises DC. So to save some traffic (and money), and get faster authN in the branch farm, we added the GC - even if the extra data will drive up Azure storage cost.

The next servers in the farm were added using PowerShell to ensure that 1) the VM is domain joined on boot, and 2) the DNS settings for the VM are automatically configured.

Here are some tips for using New-AzureDNS, New-AzureVMConfig and New-AzureVM:
  • You can use the Azure vnet DNS server or the on premises DNS server with New-AzureDNS. I used the former.
  • The New-AzureVMConfig Name parameter is used when naming and registering the server in AD and DNS. Make sure that the full computer name is unique across the domain.
  • The New-AzureVM ServiceName parameter is used for the cloud DNS name prefix in the .cloudapp.net domain. It is also used to provision a "Cloud Service" in the Azure preview management portal. Even if multiple VMs can be added to the same service name (shared workload, with load balancing), I used unique service names for the farm VMs (standalone virtual machines), connected through the vnet.
  • To list the built-in Azure image names, use this PowerShell and pick the image you're looking for:
           Get-AzureVMImage | Select-Object ImageName
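Putting these tips together, a provisioning sketch might look like the following. All names, sizes and credentials are placeholders (not the actual farm values), and the cmdlet shapes are from the June 2012 Azure PowerShell preview, so verify against your installed module:

```powershell
# Sketch: create a domain-joined VM in the vnet with DNS preconfigured.
$dns = New-AzureDNS -Name "vnet-dns" -IPAddress "10.3.4.4"

$vm = New-AzureVMConfig -Name "SP2010WFE1" -InstanceSize "Large" `
        -ImageName "<WS2008R2 gallery image name>" |
      Add-AzureProvisioningConfig -WindowsDomain -Password "***" `
        -Domain "contoso" -DomainUserName "spadmin" -DomainPassword "***" `
        -JoinDomain "contoso.com" |
      Set-AzureSubnet -SubnetNames "FarmSubnet"

New-AzureVM -ServiceName "pzl-sp2010-wfe1" -VMs $vm `
  -VNetName "BranchVnet" -DnsSettings $dns -AffinityGroup "branch-ag"
```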
After adding the SQL Server 2012, web server and application server VMs using PowerShell, I logged in using RDP and verified that each server was up and running, domain joined and registered in DNS. Note that the SQL Server image is not configured with separate data disks for data and log files by default. This means that the SQL Server 2012 master database etc. is stored on the OS disk in this preview. You need to add data disks and then change the SQL Server file location configuration yourself. Adding two data disks requires that the SQL Server VM is of size "large".
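Attaching the two data disks can be done from PowerShell before moving the files inside the guest. A sketch, with placeholder service/VM names and disk sizes:

```powershell
# Sketch: attach two empty data disks (data + logs) to the SQL Server VM.
Get-AzureVM -ServiceName "pzl-sp2010-sql1" -Name "SP2010SQL1" |
  Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel "SqlData" -LUN 0 |
  Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel "SqlLogs" -LUN 1 |
  Update-AzureVM
```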

The next step was to install SharePoint 2010 on the farm the next day. That's when the trial account was deactivated, because all the computing hours were spent. Even if you then reactivate the account, all your VM instances are deleted, keeping only the VHD disks. As Microsoft support says, it is easy to recreate the VMs, but they also tell you that the AD virtual machine needs a static IP address, which you can only get in Azure by never deleting the VM. Remember to recreate the VMs in the correct order so that they get the same IP addresses as before.

What is worse is that the virtual network and the gateway are also deleted. Even if these too are easy to recreate, your gateway will get a new public IP address and a new shared key, so you need to call your network provider again to have them reconfigure the VPN device.

I strongly recommend not using a spending-capped trial account for hosting your Azure branch office farm. Microsoft deleted the VMs and the network to stop incurring costs, which was fine for the non-persistent Azure VM Roles (PaaS), but not as nice for an IaaS service with a persistent server farm.

I recommend exporting your VMs using Export-AzureVM so that you don't have to recreate the VMs manually if something should happen. The exported XML will contain all the VM settings, including the attached data disks.
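The export and later re-import round-trip can be sketched like this (service/VM names and paths are placeholders):

```powershell
# Sketch: save the VM settings to XML while the instance still exists.
Export-AzureVM -ServiceName "pzl-sp2010-wfe1" -Name "SP2010WFE1" `
  -Path "C:\vms\SP2010WFE1.xml"

# ...later, after the instance is deleted but the VHDs remain:
Import-AzureVM -Path "C:\vms\SP2010WFE1.xml" |
  New-AzureVM -ServiceName "pzl-sp2010-wfe1" -VNetName "BranchVnet"
```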

How to detach Azure VMs to move them or save cost: http://michaelwasham.com/2012/06/18/importing-and-exporting-virtual-machine-settings/

When we recreated the Azure virtual network and the gateway, the VPN connection would not come back up again. The issue was that this time the gateway devices had got different IP addresses, so now the "peerid" had to be configured as 10.3.4.4 to make things work.

Now the gateway is back up again, and next week I'll restore the VMs and continue with installing SP2010 on the Azure "branch office" farm. More notes to come if I run into other issues.

- - - continued - - -

Installing the SharePoint 2010 bits went smoothly, but running the config wizard did not. First you need to allow incoming TCP traffic on port 1433 on the Azure SQL Server. Then the creation of the SharePoint_Config database failed with:

    Could not find stored procedure 'sp_dboption'.

...even though I had downloaded and installed SP2010 with SP1 bits. The sp_dboption stored procedure was removed in SQL Server 2012, which is what causes this error; so I downloaded and installed SharePoint Server 2010 SP1 and the June 2011 CU, and that fixed the problem. Got "Configuration Successful" without any further issues.
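The port-1433 step mentioned above can be done with a one-liner in an elevated prompt on the SQL Server VM:

```powershell
# Sketch: open the default SQL Server port in Windows Firewall.
netsh advfirewall firewall add rule name="SQL Server 1433" `
  dir=in action=allow protocol=TCP localport=1433
```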

Finally, I tested and verified it all by creating a SP2010 web application with a team site, creating a self-signed certificate with IIS7, and adding an Azure port mapping for SSL (virtual machine endpoint, TCP 443 to 443), allowing me to log in to the team site using my domain account over HTTPS from anywhere.
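The SSL endpoint mapping can also be added from PowerShell (service/VM names are placeholders):

```powershell
# Sketch: map public TCP 443 to port 443 on the web server VM.
Get-AzureVM -ServiceName "pzl-sp2010-wfe1" -Name "SP2010WFE1" |
  Add-AzureEndpoint -Name "https" -Protocol tcp -LocalPort 443 -PublicPort 443 |
  Update-AzureVM
```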

A note on the VM firewall config: ping is blocked by default, so you can't ping other machines in the vnet unless you configure the firewall to allow it. Also note that you can't ping addresses outside of the virtual network and the connected LAN anyway; even if you can browse to www.puzzlepart.com, you can't ping us.

Sunday, February 01, 2009

Service Compatibility - A Primer

A comment on the InfoQ article Contract Versioning, Compatibility and Composability acknowledges the problem of talking about compatibility of services, as opposed to the definition of schema compatibility, with regard to my definition of service forwards and backwards compatibility.

The "problem" is that a service version that is compatible with the specifications of older versions of the service, can be achieved using both backwards and forwards compatible schemas. That is correct, but doesn't preclude having a definition of forwards and backwards compatibility for service providers, a.k.a "services". Service compatibility is based on ability to validate a message, it is not based on using wildcards in the schema definition.
For a definition of the three types of forward compatibility, see my post on schema, service and routing compatibility.

Seen only from the service provider perspective, how it handles incoming messages is what defines whether a service is forwards or backwards compatible (or both). How consumers handle messages sent by the service is really not of any concern for the provider - wait, read on.

Thinking about this within SOAP 1.x constraints, where a WSDL operation has a request message and a response message with fixed schema definitions/version (unilateral contracts), will lead to the conclusion that operations cannot be classified as forwards or backwards compatible, only the message schemas. This is a limitation of SOAP, but not of messaging in general.

In the following examples, the v1.2 service provider is backwards compatible and interacts with a v1.1 consumer. However, the schemas are not designed to be forwards compatible - they do not support XML extensibility (schema wildcards). In this scenario, the consumer can either do XSD validation of response messages against the v1.1 schema, or do 'validation by projection' of response messages - i.e. "ignore unknown" validation. Doing 'validation by projection' is a recommended practice for compatibility and really simplifies building SOA solutions - this is also how WCF validates messages. So how to handle consumers that only do XSD validation, without relying on schema wildcards?
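The idea of 'validation by projection' can be illustrated with a tiny sketch: project a v1.2 message onto the element names a v1.1 consumer knows about, ignoring the rest. The message and element names here are invented for the example:

```powershell
# The v1.1 view of the message - everything else is "unknown".
$known = @("OrderId", "Amount")

# A v1.2 message with a new element the v1.1 consumer has never seen.
[xml]$msg = "<Order><OrderId>42</OrderId><Amount>100</Amount><Discount>10</Discount></Order>"

# Project: drop any child element the v1.1 consumer does not know.
@($msg.DocumentElement.SelectNodes("*")) |
  Where-Object { $known -notcontains $_.LocalName } |
  ForEach-Object { $msg.DocumentElement.RemoveChild($_) | Out-Null }

$msg.OuterXml   # -> <Order><OrderId>42</OrderId><Amount>100</Amount></Order>
```

The projected message now validates against the v1.1 schema even though the provider is v1.2 - without any schema wildcards.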

In REST, the consumer can put an "accept formats" header in the v1.1 request message, and the service provider can then respond with a v1.1 message even if the service version is v1.2. The service provider adheres to its obligations by being backwards compatible, and the consumer is allowed to express its version expectation - the service has bilateral contracts.

Service Virtualization is a mechanism that can help with service versioning. The task of such an abstract endpoint intermediary is to handle versioning through both service compatibility and schema compatibility. A virtual service supports multiple versions of the service on the same endpoint, and must be capable of processing older requests. The virtual endpoint must have a mechanism that allows for the latest major v1.2 provider to handle v1.1 consumers. The intermediary mediates between the schema versions by transforming or projecting/enriching the messages as needed.

Back to the example, the service provider v1.2 response message can be stripped down to a v1.1 message by the intermediary as it is sent back to the consumer. The net effect is that the service has virtual bilateral contracts.

In messaging in general, by definition there are no duplex channels, only one-way channels (see Enterprise Integration Patterns by Hohpe/Woolf). On top of this, you can have a logical two-way channel for message exchange patterns such as request-response, specified using a "reply-to" address and a "reply-format" (bilateral contracts). The message compatibility is defined by the schema constructs, but just as in REST, the version of the incoming message does not dictate the version of the response message. It is the implementation of the endpoint that processes the messages that defines the compatibility policy of the endpoint, not the schemas.

So, service compatibility does not require using forwards compatible schemas in addition to backwards compatible schemas. The message validation policy is what defines service compatibility.

It is of course much simpler to have a service compatibility policy that requires the schemas used in the services to be both forwards and backwards compatible - as shown in the "SOAP-style unilateral contracts" service compatibility figures.




This way, the service provider or consumer platform need not handle "request-format" and "reply-format" that have different versions. In such a unilateral schema compatibility policy world, services are just intrinsically compatible through schema compatibility.

Wednesday, November 12, 2008

Service+Schema Versioning: Flexible/Strict Strategy

In a few weeks time I will be giving a session at NNUG on SOA service and schema versioning strategies and practices. A central topic will be schema compatibility rules, where I will recommend creating a policy based on the "Strict", "Flexible" and "Loose" versioning strategies described in chapter 20.4 of the latest Thomas Erl series book, Web Service Contract Design and Versioning for SOA. I guess David Orchard is the author/editor of part III of the book.

I recommend using a “Flexible/Strict” compatibility policy:

  • Flexible: Safe changes to schemas are backwards compatible and cause just a point version

  • Strict: All unsafe schema changes must cause a new schema version and thus a new service version

  • Do not require forwards compatible schemas (Loose, wildcard schemas) - schemas should be designed for extensibility, not to avoid versioning

  • Service interfaces should also have a Flexible/Strict policy
Safe changes are typically additions to schemas, while unsafe changes typically modify or remove schema components.

Note that forwards compatible schemas are not required to have forwards compatible services, as service compatibility is defined by the ability to validate messages. WCF uses a variant of 'validation by projection' (ignore unknown) for forwards compatibility, but also supports schema wildcards.


As Nicolai M. Josuttis shows in the book SOA in Practice (chapter 12.2.1, Trivial Domain-Driven Versioning), even simple backwards compatible changes might cause unexpected side effects, such as response times breaking SLAs and causing problems for consumers. It is much safer to provide a new service version with the new schema version; if there is a problem, only the upgraded consumers that required the change will be affected.

Note that even adding backwards compatible schema components can be risky, but adding is typically safe. Josuttis recommends using "Strict" as it is a very simple and explicit policy, but I prefer "Flexible/Strict" as it gives more flexibility and fewer service versions to govern.

Avoid trying to implement some smart, automagical mechanism for handling schema version issues in the service logic. Rather, use backwards compatibility, explicit schema versions and support for multiple active service versions. In addition, consider applying a service virtualization mechanism.

Wednesday, August 06, 2008

Service Virtualization: MSE on Channel9

Service virtualization can be an important architectural mechanism for your service-oriented solutions, and the Managed Services Engine is a free WCF-based tool available at CodePlex.

There are two videos at Channel9 about MSE that I recommend watching to learn about the capabilities of MSE:
  • Code to live: service virtualization, versioning, etc
  • Talking about MSE: virtual services and endpoints, protocol adaption, policy enforcement, etc
A topic related to service virtualization is consumer-driven contracts, and you can do that with MSE, but the WCF LOB Adapter Kit might be an even better tool for that.

Thursday, May 15, 2008

Configure MSE UDDI Integration

The Managed Services Engine repository can be synchronized with UDDI. All service endpoints hosted on an MSE runtime server can be exported to make your virtualized services discoverable from e.g. VisualStudio. View this web-cast (slide deck) by Raul Camacho to learn about SOA lifecycle management and MSE service virtualization.

Enable and configure the UDDI sync using the <serviceCatalogUddi> element in the config file of the MSE catalog service executable on the MSE server (Microsoft.MSE.Catalog.ServiceHost.exe.config).

NOTE: In UDDI Services, data published for a service, provider, or tModel can be modified only by the current owner. Make sure that the identity used to publish MSE data to UDDI is the owner of the configured UDDI provider (BusinessKey).

The UDDI sync configuration must adhere to these rules:

  • Make sure that you use the "uddipublic" URLs for "UDDI" authN
  • Make sure that you use the "uddi" URLs for "WINDOWS" authN
  • The identity used to run the MSE catalog service must have access to the UDDI web-services
  • The identity specified for the sync must have publish rights in UDDI and own the BusinessKey
The UDDI Services Console MMC snap-in must be used to configure domain groups for the different UDDI roles, including "publisher" rights.

Using "WINDOWS" authN is strongly recommended. You can specify "UDDI" as the <UddiAuthenticationScheme> and put a username + password of an identity that has access to UDDI in the config file to test the UDDI sync. Do not do this for production systems.

<!-- DO NOT CHANGE THE ORDER OF THE ELEMENTS -->
<serviceCatalogUddi>
  <UddiIntegrationMode>true</UddiIntegrationMode>
  <UddiPublishUrl>http://***/uddipublic/publish.asmx</UddiPublishUrl>
  <UddiInquireUrl>http://***/uddipublic/inquire.asmx</UddiInquireUrl>
  <BusinessKey> UDDI PROVIDER GUID HERE </BusinessKey>
  <UddiAuthenticationScheme>UDDI</UddiAuthenticationScheme>
  <UddiUserName>***\***</UddiUserName>
  <UddiPassword>*******</UddiPassword>
</serviceCatalogUddi>


Check the UDDI config if you have problems when publishing services and also check the Application Event Log. Use the MMC UDDI Services Console to adjust the logging level to diagnose errors. Note that you must set the level to info / verbose to see SQL exceptions, e.g. when you get an error like this:

Failed to Publish EndPoint service to UDDI due to error [Exception Information Type[FaultException] Message[A UDDI specific error occurred while publishing Endpoint data to UDDI] ].

Using "WINDOWS" against "uddipublic" will give you this error in the event log: UDDI_ERROR_AUTHTOKENREQUIRED_NOTOKENPUBLISHATTEMPT

Note that the UDDI web-portal requires ASP.NET 1.x. Check that the IIS web-sites use ASP.NET 1.x if the UDDI web-pages have errors (e.g. the treeview fails).

Friday, February 15, 2008

SOA modeling: business, process, service, data

It is the last day at TechReady6 here in Seattle, and I've been to some quite interesting sessions among the ~1200 available, mostly attending SOA stuff. There have been several "Oslo" related talks about composite, service-oriented applications; one that really stood out was Hatay Tuna's on an upcoming modeling offering. As you may know, modeling is a very central aspect of "Oslo", and Hatay presented a really cool tool for modeling business capabilities (MSBA/Motion), business processes, services (process, activity/capability, entity) and finally the data that goes into these interrelated models. I cannot tell you more right now, but read more about the ideas behind this modeling tool and guidance at Hatay's blog.

This weekend I'm attending a bootcamp on Microsoft's SOA Maturity Model (SOAMM) - a technology agnostic assessment of an organization's level of maturity in developing, using and governing service-oriented systems. This methodology is based on several years of field experience from many MCS projects and consists of a standard interview process+tool across both business and IT (operations and development) roles that leads to an assessment, which can then be used to produce a roadmap for how the company can achieve its SOA goals.

PS! I spoke with William Oellermann about the XPS-only documentation of the Managed Services Engine, and he promptly made the documentation available in PDF format for you Java & Mac guys out there. I see that Anders Norås has already downloaded the documentation nine times :)

Friday, November 02, 2007

Getting started with MSE, on SQL Server Express

As you can imagine, I just had to try to install and test the Managed Services Engine for service virtualization when it was made public on CodePlex this week. For a quick intro to MSE, its relation to "Oslo" and SOA governance, and MSE future directions; read this InfoQ interview with William Oellermann.

Not bothering to read the installation guide, I just ran the installer on my Vista machine with SQL Server Express installed - and got a "Failed to create SQL database: MSE6DB" error. The cause of the error is that the installer by default does not target a named SQL Server instance when trying to create the database used by MSE and the sample services provided with the MSE6 toolkit.

The solution is documented in the install guide (but I'm repeating it here for my friend Anders Norås):

  • Ensure that SQL Server Express is running on your computer, but close SSE Management Studio
  • Open a command window (CMD.EXE) and navigate to the folder containing the mse6.msi installer
  • Run the installer (using /i):
    msiexec /i mse6.msi SQLSERVER=.\SQLEXPRESS
  • Locate the Microsoft.MSE.Repository.Service.exe.config file in the “~\Program Files\Microsoft Managed Services Engine” folder and change the data source accordingly:
    <DBConnString>Initial Catalog=MSE6DB;Data Source=.\SQLEXPRESS; . . . ;

Note that you must be an administrator on the local system if you are not in the SSE server role 'dbcreator' or equivalent.

This gives you the database MSE6DB that contains the MSE repository.
Try starting the Managed Services Engine MMC snap-in. If you get a service repository (net.pipe://localhost/ServiceCatalog/Pipe) connection error, then restart the “MSE Catalog Server” and “MSE Runtime Server” services in that order to reload the database config.

Finally, you must install some missing perf counters to make the service virtualization (hosted endpoints) work at run-time, otherwise you'll get an error when invoking the services. You must also restart the “MSE Catalog Server” and “MSE Runtime Server” services for the changes to take effect.

Now you're set to follow the 30-minute walkthrough to learn more about MSE and the wonderful world of service virtualization, and see why it is important for life-cycle management of services in SOA systems.

PS! All of the walkthrough is correct; just pay attention in the POX step: you need to keep the action as-is from the previous step, and only change what the text says you should change (binding XML, etc.). Do not rediscover or change the selected operation in the grid.