Archive for May, 2008

Custom routing for ASP .NET MVC

Monday, May 26th, 2008

Those familiar with the MVC framework for ASP .NET will know that one of its primary features is the mapping of URLs to methods on controllers. For example, /Products/Find will cause the ProductsController to be created and have its Find method invoked. It is also possible to pass arguments to methods, for instance /Products/Load/53 would call the Load method of the ProductsController, supplying 53 as the argument.
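
As a concrete illustration, here is a minimal sketch of my own showing what such a controller might look like, using the preview-era convention (also used later in this post) where actions are public void methods that call RenderView; the method bodies are placeholders.

public class ProductsController : Controller
{
    // Handles /Products/Find
    public void Find()
    {
        RenderView("Find");
    }

    // Handles /Products/Load/53 - the trailing 53 arrives as the id argument
    public void Load(int id)
    {
        // Placeholder: fetch the product with the given ID before rendering
        RenderView("Load");
    }
}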

Organising controllers

Whilst this allows the developer to structure their code better, keeping presentational logic in the view and application logic in the controller, it isn’t ideal. To continue the example, as the project grows it will provide an increasing number of features related to products, all of which will be delivered by the ProductsController. As a result, the code for searching for products will end up in the same class as the code used to edit products, and so on.

Everyone has their own take on the MVC pattern and in the past I have tended to use one controller per use case. The use cases in question here are Find Product and Edit Product and as such their functionality would be provided by the FindProductController and EditProductController, rather than living together in a single ProductsController.

A simple way to implement this pattern is to keep the ProductsController and have it delegate all its work. For example, the Load method would simply create an instance of EditProductController and call its Load method, passing any arguments as well. Whilst this is feasible, it is more of a workaround than a genuine solution. It would be far better to cut out the ProductsController altogether, and have methods on the two controllers be called directly. The routing engine in ASP .NET MVC is very flexible and, by developing a custom route, it is possible to do this.

Creating a custom route

It is the job of a route to take a URL and call the appropriate method of a controller. The default MVC route has a format of {controller}/{action}/{id}, whereas our use case route will use {useCaseNoun}/{useCaseVerb}/{action}/{id}. The key difference is that the controller token has been replaced with two new tokens, a noun and a verb. This will allow us to provide the following routes

URL                      Controller              Action      Behaviour
/Product/Find/Search     FindProductController   Search      Execute a search for products and display the results
/Product/Find/Clear      FindProductController   Clear       Reset all the fields of the search page
/Product/Find            FindProductController   [default]   Execute the default controller method (more on this shortly)
/Product/Edit/Load/17    EditProductController   Load(17)    Load the product with ID 17 and display its data for editing
/Product/Edit/Save/23    EditProductController   Save(23)    Save the supplied data against the product with ID 23
/Product/Edit/Clear      EditProductController   Clear       Clear out the edit page ready for entering a new product

A fringe benefit of adopting this strategy is that both controllers can have a method of the same name, e.g. Clear, but have the method perform a completely different task. With a single ProductsController there could only be one Clear method.

In order to register the custom route with the MVC framework, some changes need to be made to the Global.asax file. Its Application_Start method calls RegisterRoutes which, using the default MVC project template, will already set up the default route format of {controller}/{action}/{id}. To this method we need to add the following

routes.Add(new Route("{useCaseNoun}/{useCaseVerb}/{action}/{id}", new MvcRouteHandler())
{
    Defaults = new RouteValueDictionary(new { action = "Index", id = "" }),
});

Note that the default action is Index so, in the case of the /Product/Find URL in the table above, this would map to the Index method of the FindProductController.
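
To tie the table and the default action together, a bare-bones FindProductController might look like the sketch below. This is my own illustration rather than code from the sample project; the method bodies are placeholders, and how the view name is resolved is revisited later in the post.

public class FindProductController : Controller
{
    // /Product/Find - no action segment, so the default Index action is invoked
    public void Index()
    {
        RenderView("Find");
    }

    // /Product/Find/Search
    public void Search()
    {
        // Placeholder: run the search before rendering the results
        RenderView("Find");
    }

    // /Product/Find/Clear
    public void Clear()
    {
        RenderView("Find");
    }
}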

At this point we can use Phil Haack’s Url Routing Debugger to test that our URLs are being correctly routed. To do so we reference Phil’s RouteDebug.dll and add the following code after RegisterRoutes is called

RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes);

It is then possible to enter each of the URLs into a browser and see which route they match. Check out Phil’s post for further details.

Creating a handler for the route

The format of our route is such that the {controller} token is no longer present. As a result the MvcRouteHandler that is associated with the route will not be able to identify which controller to use. Typically it just extracts the value of the controller token, appends “Controller” to it, and instantiates an object of that type. To resolve this issue we need to replace MvcRouteHandler with a route handler of our own.

Fredrik Normén produced an excellent blog post, Create your own IRouteHandler, which describes how to do this. For our route, we need to create two new classes, the first of which implements IRouteHandler, as shown below

public class UseCaseRouteHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        return new UseCaseMvcHandler(requestContext);
    }
}

This class, UseCaseRouteHandler, is used in place of MvcRouteHandler, and simply creates a new IHttpHandler which will do the real work. The implementation of IHttpHandler is actually our second class, UseCaseMvcHandler. This inherits from MvcHandler and overrides the ProcessRequest method, during which the correct controller is identified and then created. It is this behaviour that we need to redefine.

To determine how our ProcessRequest should work, I downloaded the source code of the MVC framework itself, which is available from CodePlex. A quick inspection of MvcHandler’s ProcessRequest shows that the GetRequiredString method is used to extract the values of the route’s tokens. For the default routing this is just a case of getting the controller name, whereas our custom route needs to grab both the {useCaseNoun} and {useCaseVerb} tokens. I moved this logic into a separate function, GetControllerName, which is shown below

private string GetControllerName()
{
    string noun = this.RequestContext.RouteData.GetRequiredString("useCaseNoun");
    string verb = this.RequestContext.RouteData.GetRequiredString("useCaseVerb");
    return verb + noun;
}

So, if the URL is /Product/Find/Search, this method will extract a noun of “Product”, a verb of “Find” and return the value “FindProduct”.

I then copied MvcHandler’s ProcessRequest code into UseCaseMvcHandler and replaced the line extracting the controller token value with a call to the GetControllerName function. Simple. Well, almost! Unfortunately the resource strings are not available to inheriting classes, and neither is the ControllerBuilder property. I replaced the former with a hard-wired string, whilst the latter is accessible via the ControllerBuilder class’ static Current property.

At this point the code is almost ready to run. We just need to adjust the code in RegisterRoutes so that our route uses the new UseCaseRouteHandler class. This is done as follows

routes.Add(new Route("{useCaseNoun}/{useCaseVerb}/{action}/{id}", new UseCaseRouteHandler())
{
    Defaults = new RouteValueDictionary(new { action = "Index", id = "" }),
});

Identifying which view to show

Having commented out the call to the routing debugger, I then browsed to /Product/Edit/Load/17 and…BANG! An exception with the message,

The RouteData must contain an item named ‘controller’ with a non-empty string value

was shown. After some digging through the MVC source, it seems that the code responsible for identifying which view to create (the ViewEngine class does this) was also trying to find a controller token in the URL, in order to work out which subfolder of Views to look in. The Load method of EditProductController calls RenderView, passing “Edit” as the viewName argument. By altering this to “~/Views/Product/Edit.aspx” I was able to work around this issue.

This was a far from satisfactory solution, however. Fully qualifying all of the view names is a potential maintenance problem if views are later moved or folders renamed. To combat this I introduced a UseCaseControllerBase class, from which EditProductController and FindProductController now inherit. This class overrides RenderView and works out the full path to the view. The following code shows how

public abstract class UseCaseControllerBase : Controller
{
    protected override void RenderView(string viewName, string masterName, object viewData)
    {
        string noun = this.RouteData.GetRequiredString("useCaseNoun");
        string fullViewName = string.Format("~/Views/{0}/{1}.aspx", noun, viewName);
        base.RenderView(fullViewName, masterName, viewData);
    }
}
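
A controller inheriting from this base class can then go back to using short view names. The example below is an assumption of mine rather than code taken from the sample project:

public class EditProductController : UseCaseControllerBase
{
    // /Product/Edit/Load/17
    public void Load(int id)
    {
        // Placeholder: fetch the product with the given ID, then render its view.
        // "Edit" is expanded by the base class to ~/Views/Product/Edit.aspx.
        RenderView("Edit");
    }
}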

The ideal resolution would be to customise the behaviour of the ViewEngine, however that is beyond the scope of this article.

This post demonstrates the flexibility of the routing subsystem provided by ASP .NET. It also shows how to improve the separation of functionality between controllers. If you are interested, the sample code is available from CodePlex.

How to publish a web site with MSBuild

Sunday, May 18th, 2008

In part two of my MSBuild tutorial I needed to find a way to call Visual Studio’s Publish Web Site feature from MSBuild. Much trawling of the interweb failed to find anything of use, so in the end I had to produce my own target which copied the relevant files into the output folder.

The problem with this approach is that it only works for files of type dll, aspx or config. It is a simple task to add another extension, for example png; however, on larger projects this becomes impractical. Developers would have to check the build script each time they added or removed a file, just in case the Publish target needed to be updated. These are just the sort of jobs that get forgotten, which can lead to invalid builds later on.
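
For context, the kind of hand-rolled target being described would look roughly like the following. This is a reconstruction for illustration only, not the actual target from part two, and $(WebSiteFolder) is a made-up property name ($(OutputFolder) does appear later in this post).

<Target Name="Publish">
  <ItemGroup>
    <!-- Only the explicitly listed extensions get published -->
    <PublishFiles Include="$(WebSiteFolder)\**\*.dll;$(WebSiteFolder)\**\*.aspx;$(WebSiteFolder)\**\*.config" />
  </ItemGroup>
  <Copy SourceFiles="@(PublishFiles)"
        DestinationFolder="$(OutputFolder)\%(RecursiveDir)" />
</Target>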

Fortunately I came across a post on Mirosław Jedkynak’s blog showing how to use MSBuild to publish a web site. As some of you may be aware, a project file (vbproj or, in the case of my demo, csproj) contains targets of its own, as well as importing targets from other files that come as part of a Visual Studio installation. One such file is Microsoft.WebApplication.targets. This file provides the _CopyWebApplication target, which effectively replaces my home-brewed Publish target.

In order to make use of this target we need to pass it two properties, WebProjectOutputDir and OutDir, which will ensure that the files get published into the correct folder. Here is an example

<Target Name="Publish">
  <RemoveDir Directories="$(OutputFolder)"
             ContinueOnError="true" />
  <MSBuild Projects="BuildDemoSite.csproj"
           Targets="ResolveReferences;_CopyWebApplication"
           Properties="WebProjectOutputDir=$(OutputFolder);
           OutDir=$(WebProjectOutputDir)\" />
</Target>

As you can see, the ResolveReferences target is also called; this ensures that any third-party dependencies are copied over as well.

Integrating this into my demo build script was simple; however, I noticed that some files were being copied over that I didn’t want. These were the build script itself and the environment-specific config files. This was because their build action was set to Content. Once I had switched it to None and run the script again, everything was fine. The build action can be set by right-clicking on a file in the Solution Explorer and selecting Properties; Build Action is the first item in the list.
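
For reference, this change corresponds to the item type used for the file inside an ItemGroup in the csproj; the file name below is illustrative.

<!-- Before: published, because the file is marked as Content -->
<Content Include="Live.config" />

<!-- After: excluded from publishing -->
<None Include="Live.config" />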

I have posted a new version of the build script to CodePlex for those interested in taking a look.

A custom MSBuild task for merging config files

Sunday, May 11th, 2008

In the last part of my MSBuild tutorial I mentioned that the target for merging config files was less than ideal. Although we were able to use the XmlRead and XmlUpdate tasks to make life easier, the list of settings to merge still needed to be maintained in the build script.

If a developer were to add a setting to, or remove a setting from, the web.config file, they would need to mirror this change in the build script. This is exactly the type of job which is easily forgotten, leading to problems further down the line.

Ideally the list of settings to merge would be self-maintaining, i.e. any settings found in the environment-specific config file would simply be enumerated and copied into web.config, rather than being explicitly listed in the build script. I decided to create my own MSBuild task for doing this. The spec for it is as follows

  • Copy over the value of the debug attribute under the compilation node
  • Copy over the value of the mode attribute under the customErrors node
  • Enumerate each of the settings under the appSettings node and copy them over too

It has been developed to copy a setting over only if it exists in both the source and target files. As such it will not add settings missing from the target file or remove settings that are not in the source file.

It is very simple to create your own MSBuild task; just follow these steps (a skeleton pulling them together is sketched after the list)

  • Create a class which inherits from Task
  • Add properties for storing the arguments the task uses. Apply the Required attribute to make the argument mandatory
  • Use the LogMessage method to show progress and any errors in the command window
  • Override the Execute method, returning a flag to indicate success or failure
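
The sketch below pulls these steps together for the MergeConfig task described above. It is an illustrative implementation of the spec, not the code published on CodePlex; the property names match the usage shown later in the post, but the XPath choices, logging and error handling are my own simplifications.

using System;
using System.Xml;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class MergeConfig : Task
{
    // The environment-specific file to read settings from, e.g. Test.config
    [Required]
    public string SourceConfigFilename { get; set; }

    // The web.config that the settings are merged into
    [Required]
    public string TargetConfigFilename { get; set; }

    public override bool Execute()
    {
        try
        {
            XmlDocument source = new XmlDocument();
            XmlDocument target = new XmlDocument();
            source.Load(SourceConfigFilename);
            target.Load(TargetConfigFilename);

            // Copy single attributes that exist in both files
            CopyAttribute(source, target, "//compilation", "debug");
            CopyAttribute(source, target, "//customErrors", "mode");

            // Enumerate the source appSettings and update any matching keys in the target
            foreach (XmlElement setting in source.SelectNodes("//appSettings/add"))
            {
                string key = setting.GetAttribute("key");
                XmlElement match = (XmlElement)target.SelectSingleNode(
                    string.Format("//appSettings/add[@key='{0}']", key));
                if (match != null)
                {
                    match.SetAttribute("value", setting.GetAttribute("value"));
                    Log.LogMessage("Merged appSetting '{0}'", key);
                }
            }

            target.Save(TargetConfigFilename);
            return true;
        }
        catch (Exception ex)
        {
            // Returning false marks the task, and therefore the build, as failed
            Log.LogErrorFromException(ex);
            return false;
        }
    }

    private void CopyAttribute(XmlDocument source, XmlDocument target,
                               string xpath, string attributeName)
    {
        XmlElement sourceNode = (XmlElement)source.SelectSingleNode(xpath);
        XmlElement targetNode = (XmlElement)target.SelectSingleNode(xpath);

        if (sourceNode != null && targetNode != null && sourceNode.HasAttribute(attributeName))
        {
            targetNode.SetAttribute(attributeName, sourceNode.GetAttribute(attributeName));
            Log.LogMessage("Merged {0}/@{1}", xpath, attributeName);
        }
    }
}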

There are plenty of examples to follow in the Community Tasks source code and a more detailed article can be found on MSDN. It is also possible to debug a task, check out the MSBuild Team Blog for an explanation.

Consuming the custom task from a build script

Having produced the task, called MergeConfig, the next step was to integrate it with the sample solution created back in part one of the tutorial. This is done by adding this statement to the build script

<UsingTask TaskName="MergeConfig" AssemblyFile="MergeConfigTask.dll" />

This piece of XML tells MSBuild that there is a task called MergeConfig in MergeConfigTask.dll. Note that the path to the dll is relative to the build script’s directory. Having done this, we can add a target which will then call the new MergeConfig task, like so

<Target Name="MergeConfig">
  <MergeConfig SourceConfigFilename="$(Environment).config"
               TargetConfigFilename="$(OutputFolder)\Web.config" />
</Target>

In addition to achieving the main goal, which was to have a self-maintaining list of settings to merge, the use of this custom task has reduced the size of the build script by around thirty lines. If you would like to make use of this task in your own script, it can be downloaded from CodePlex. I have also updated the sample project to illustrate how the task is used.

Automating the build with MSBuild (part three)

Monday, May 5th, 2008

Welcome to part three of the MSBuild tutorial; please take a look at the previous parts if you haven’t already.

At this point there are a couple of minor issues with our build script that need to be resolved before pressing on with any new targets.

Usability

The first of these is usability. Right now there is no easy way to run all of the targets in one go. This is not very helpful if someone wants to run the full build process from start to finish. To remedy this, I’ve introduced a new target called Run, shown below

<Target Name="Run">
  <CallTarget Targets="Compile" />
  <CallTarget Targets="Publish" />
  <CallTarget Targets="SetConfig" />
</Target>

This simply uses the CallTarget task to hand the work off to the targets we created in the last two posts. Notice that it isn’t necessary to call Clean and GetConfig as they are dependencies of Compile and SetConfig respectively. MSBuild will automatically run them, in the correct order.
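
As a reminder of the mechanism from the earlier parts, those dependencies are declared with the DependsOnTargets attribute; the target bodies are omitted here.

<Target Name="Clean">
  <!-- deletes the previous build output -->
</Target>

<Target Name="Compile" DependsOnTargets="Clean">
  <!-- compiles the solution -->
</Target>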

The finishing touch is applied by adding the DefaultTargets attribute to the root Project node, with a value of Run. This tells MSBuild to call the Run target if no targets are explicitly passed in from the command line. Having added the attribute, the whole script can be executed with this simple command

msbuild Build.xml

As you can see, the /t switch has gone. Much nicer!
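
For reference, the attribute sits on the root Project element of Build.xml, along the lines of the following (the xmlns value is the standard MSBuild schema):

<Project DefaultTargets="Run"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- properties, items and targets as before -->
</Project>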

DRYing out the XML

Our second issue involves refactoring. It is important to treat our build script just like any other code and apply good practices to it. Some repetition has crept in, which violates the DRY principle, specifically

  • Each XPath to a config setting occurs twice
  • The Output folder is referred to numerous times

The best solution is to extract these strings into properties and then use the $(PropertyName) syntax to refer to them. It doesn’t take long and makes the script much easier to maintain in future. You can see the end result here.
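
As a sketch of that refactoring (the property names and values below are examples of mine, not necessarily those in the final script), the repeated strings become properties which the rest of the script then references with the $(PropertyName) syntax:

<PropertyGroup>
  <OutputFolder>Output</OutputFolder>
  <!-- One property per XPath, so XmlRead and XmlUpdate share the same string -->
  <CompilationDebugXPath>/configuration/system.web/compilation/@debug</CompilationDebugXPath>
</PropertyGroup>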

Deploying the web site

Now that the housekeeping is out of the way, we can start work on the last target, which is to deploy the web site to the correct location and configure IIS accordingly. This means copying the contents of the Output folder to another computer and either adding or updating a virtual directory to run the site.

The first part is pretty simple, and just a case of using the RemoveDir and Copy tasks in a similar way to the Publish target in part two. The following XML illustrates this

<Target Name="Deploy">
  <RemoveDir Directories="$(DeploymentFolder)"
             ContinueOnError="true" />
  <ItemGroup>
    <DeploymentFiles Include="$(OutputFolder)\**\*.*" />
  </ItemGroup>
  <Copy SourceFiles="@(DeploymentFiles)"
        DestinationFolder="$(DeploymentFolder)\%(RecursiveDir)" />
</Target>

Nothing new to see there, so we’ll move on quickly to configuring the IIS virtual directory. There is no built-in task to do this; however, the MSBuild Community Tasks Project comes to the rescue again with the WebDirectoryDelete and WebDirectoryCreate tasks. It is important to recreate the virtual directory each time to ensure we have a clean deployment. This is accomplished by adding the following XML to our Deploy target

<WebDirectoryDelete VirtualDirectoryName="$(VirtualDirectory)"
                    ContinueOnError="true" />
<WebDirectoryCreate VirtualDirectoryName="$(VirtualDirectory)"
                    VirtualDirectoryPhysicalPath="$(DeploymentFolder)" />

Something worth noting at this point is that this only works for the local machine. If you are deploying to another box, which is highly likely, then you would need to supply the ServerName, Username and Password attributes as well.

The final piece of the puzzle is to amend the Run target to ensure it calls our new Deploy target at the end. At this point it is possible to run the entire script and get a newly built and deployed web site in a matter of seconds. Very useful indeed.
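
The amended Run target simply gains one more CallTarget at the end:

<Target Name="Run">
  <CallTarget Targets="Compile" />
  <CallTarget Targets="Publish" />
  <CallTarget Targets="SetConfig" />
  <CallTarget Targets="Deploy" />
</Target>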

Environment-specific deployment

At this point we need to revisit the Environment property which was defined back in part two. This is still hard-wired to Test, as are the new DeploymentFolder and VirtualDirectory properties. As a consequence, users of the build script will have to edit it if they want to build for, say, a live environment.

To prevent this we can introduce a PropertyGroup for each of the environments, and move the DeploymentFolder and VirtualDirectory properties into each of them. Then, by adding the Condition attribute to each group, it is possible to test which environment is being used and bring its PropertyGroup into play. The following snippet shows how to do so

<PropertyGroup Condition="$(Environment) == 'Test'">
  <DeploymentFolder>C:\Temp\BuildDemoSite\Test</DeploymentFolder>
  <VirtualDirectory>BuildDemoTest</VirtualDirectory>
</PropertyGroup>

<PropertyGroup Condition="$(Environment) == 'Live'">
  <DeploymentFolder>C:\Temp\BuildDemoSite\Live</DeploymentFolder>
  <VirtualDirectory>BuildDemoLive</VirtualDirectory>
</PropertyGroup>

As you can see, the value of the Condition attribute is an expression that evaluates to true or false. In our case, we test the value of the Environment setting. In future, if further deployment environments are required, it is simple to add a new PropertyGroup.

Despite these changes, we still have that pesky Environment property, so let’s delete it. Instead, it can be specified via the command line when the script is run, as follows

msbuild Build.xml /p:Environment=Live

By using the /p switch it is possible to specify a value for the property, in a similar way to calling a function and passing in arguments. However, there is always a chance that someone may forget to do this, in which case you may wish to restore the Environment property and have it act as a default.
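
One way to provide such a default, shown here as a suggestion rather than something taken from the final script, is to declare the property with a Condition so that it only takes effect when no value is passed in:

<PropertyGroup>
  <!-- Used only when /p:Environment=... is not supplied on the command line -->
  <Environment Condition="'$(Environment)' == ''">Test</Environment>
</PropertyGroup>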

Summary

That’s the end of the tutorial. Although the final script (available on CodePlex) achieves all of the aims set out in part one, there are still a couple of areas for improvement, specifically

  • There is no MSBuild task which provides the same behaviour as Visual Studio’s Publish feature
  • A certain amount of repetition remains when merging config settings. Ideally all settings would be merged automatically, perhaps via a custom MSBuild task (something for another post)

Overall, however, the end result is still very useful, and we have only scratched the surface of what MSBuild can do. If you are interested in learning more, the following links may be of use

All of the source code for the tutorial is available from CodePlex. I hope you’ve found it useful and welcome any feedback.
