Consistent validation with ASP .NET MVC and jQuery

March 6th, 2010

Recently I have been developing a couple of small web applications with version 1 of ASP .NET MVC, using jQuery’s validation plugin to provide a better client-side experience. As some of you may be aware, the validation features in the first release of MVC were sparse, although Phil and the team have certainly corrected this with the recent second version.

One aim I had with my validation features was to deliver consistent behaviour and appearance between client and server, so that users would get the same experience whether they had scripting turned on or off. It proved a little awkward at first, but I got there in the end, so thought I would post the results to my blog in case anyone else is trying to do the same.

When creating a strongly-typed view, MVC provides templates for common scenarios, e.g. Create, Edit, Details and so on. This gives the developer a head start and removes the need for a lot of monotonous coding. The server-side markup it generates for each field in the Create and Edit views is similar to the following

<label for="FirstName">FirstName:</label>
<%= Html.TextBox("FirstName") %>
<%= Html.ValidationMessage("FirstName") %>

As you can see, each property of the model (in this case, the first name of a person) gets a label, an input control for editing its value, and any validation messages linked to the field are shown next to it. When this is rendered to the client, we get HTML like this

<label for="FirstName">FirstName:</label>
<input class="input-validation-error" id="FirstName" name="FirstName" type="text" value="" />
<span class="field-validation-error">First name must be entered.</span>

Whilst jQuery’s validation provides similar results straight out of the box, it’s not quite what I need. For starters, it uses a label to show the error message, whereas MVC uses a span. This is easily corrected by using the errorElement option of the plugin. So the script for the validator now looks like this

$(document).ready(function() {
  $('form').validate({
      errorElement: 'span',
      rules: { FirstName: { required: true } },
      messages: { FirstName: { required: 'Please enter the first name.' } }
  });
});

However that left me with a tricky problem – the input and error elements don’t have the correct classes attached to them, so the styling rules are not being applied. As you can see from the earlier markup, MVC applies the field-validation-error class to the element containing the error message, and the input-validation-error class to the element containing the invalid value. This is different to the jQuery plugin, which applies the error class to both elements.

Initially I tried playing around with the errorClass option, but could only get one or the other of the correct classes applied. In the end I used the highlight and unhighlight functions, which are called when an error message is shown or hidden, respectively. By default, highlight adds the errorClass to the input element, and also removes the validClass. The validClass (valid by default) allows you to style the element to indicate that it contains valid input. My custom implementation of highlight continues to do this, but adds another couple of lines to apply the correct classes to the input element and the error message span. The JavaScript looks like this

highlight: function(element, errorClass, validClass) {
  $(element).addClass(errorClass).removeClass(validClass);
  $(element).addClass('input-validation-error');
  $(element.form).find('span[for=' + element.id + ']')
    .addClass('field-validation-error');
}

The unhighlight function just does the reverse; a minimal sketch of it is shown below. I’ve been using this code for a while now and it seems to have had the desired effect. This is a great example of how flexible many of jQuery’s plugins are, on top of the excellent functionality they provide out of the box.
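For completeness, this is roughly what the reverse looks like, assuming the same MVC class names as above: it removes the error classes and restores the valid one.

unhighlight: function(element, errorClass, validClass) {
  $(element).removeClass(errorClass).addClass(validClass);
  $(element).removeClass('input-validation-error');
  $(element.form).find('span[for=' + element.id + ']')
    .removeClass('field-validation-error');
}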

The sample code for this is available here; I’ve also included a slight change which ensures the validClass is applied correctly if you are targeting styles for it.

See also: jQuery validation advanced options, jQuery validation home

Plugging ELMAH into ASP .NET MVC

January 22nd, 2010

Over the past year or so ELMAH has become increasingly popular, and it’s not hard to see why. It has an excellent set of error logging and reporting features, doesn’t need to be referenced directly in any application code and is simple to configure.

Recently a troublesome ASP .NET application came into my care and I decided the first course of action was to plug ELMAH in. At the most basic level, this involves dropping the ELMAH dll into bin and making a few changes to web.config; Scott Hanselman shows you how to do this on his blog.

However, because the app was built using ASP .NET MVC, its default HandleError attribute was intercepting all exceptions thrown by controllers before ELMAH could get a look at them. As a result not all exceptions were being logged. To counter this, I created a custom version of the HandleError attribute, called ElmahHandleErrorAttribute. Well actually, Atif Aziz, the author of ELMAH, did, in a post on Stack Overflow; I just copied it!
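The essence of the attribute is small enough to sketch here. This is a simplified take on the idea from Atif’s Stack Overflow answer (his full version also honours ELMAH’s error filtering), so treat it as an illustration rather than a drop-in copy:

using System.Web.Mvc;

public class ElmahHandleErrorAttribute : HandleErrorAttribute
{
    public override void OnException(ExceptionContext context)
    {
        base.OnException(context);

        // Only raise the error with ELMAH if the base attribute handled it;
        // unhandled exceptions still reach ELMAH's HTTP module as normal.
        if (context.ExceptionHandled)
        {
            Elmah.ErrorSignal.FromCurrentContext().Raise(context.Exception);
        }
    }
}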

The last piece of the puzzle is to create a base controller, from which all controllers in the application inherit, and apply the ElmahHandleErrorAttribute to it. Now all exceptions in the application will pass through the ELMAH pipeline.
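The base controller needs no members of its own; something like the following (the class name is illustrative) is all that is required:

using System.Web.Mvc;

// Every controller in the application inherits from this class, so all
// unhandled controller exceptions pass through ElmahHandleErrorAttribute.
[ElmahHandleError]
public abstract class BaseController : Controller
{
}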

It’s important to test that any custom error pages are also shown after ELMAH has done its bit. To do so, switch customErrors in web.config to On so that those pages are shown in preference to the traditional yellow screen of death (more information on customErrors can be found on MSDN). If you have error pages specific to particular HTTP error codes (e.g. one for 404s and one for the rest) then add these now.
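As a sketch, with illustrative page paths, the relevant section of web.config looks something like this:

<customErrors mode="On" defaultRedirect="~/Error.aspx">
  <!-- Show a dedicated page for 404s; everything else falls through
       to the default error page. -->
  <error statusCode="404" redirect="~/NotFound.aspx" />
</customErrors>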

Next, do something that is guaranteed to throw an exception in the app. If the app uses a database, a typo in the connection string is a simple but effective way of doing this. Afterwards, ELMAH will have logged the exception (go to /elmah.axd to check) and the friendly error page will be rendered.

The reason I suggest testing these pages is that when developing locally, most of us run with customErrors off so we get the YSOD in all its glory, stack trace and all – and so we should. As a result, any issues with custom error pages not being shown correctly tend to surface only during formal testing, at which point they are more time consuming to fix.

I’ve tripped over this myself a couple of times, particularly if the custom error page accesses the database. Imagine a situation in which the database is offline, and an exception is thrown. ELMAH logs the details, and ASP .NET MVC tries to render the custom error page, at which point another attempt is made to connect to the database! The custom error page will then blow up – not good at all.

The lesson here is that custom error pages should be as basic as possible. If they use the same master page as the rest of the app, and that master page shows a database value somewhere in the header or footer (e.g. friendly name of the current user, which is quite common), they are vulnerable to this problem. Far better to have a separate master page for all error pages which uses the same styles as the rest of the site but consists of plain, basic markup.

One last point to make is that ELMAH isn’t just for web apps, although it works best there. I recently worked on a solution that had a web front end and a console application for pumping data into the database. I made use of ELMAH in both apps, thereby having a central repository for all exception information. I’ve pointed my feed reader at ELMAH’s RSS feed (/elmah.axd/rss) and am notified as soon as any part of the solution experiences an error.

So, if you’re not yet on board the ELMAH bandwagon, now is the time to join in.

Strongly-typed access to EPiServer properties using generics and extension methods

May 2nd, 2009

When building an EPiServer template, a common task is to retrieve the value of a property before showing it on the page. This property could be of any type and may even be null if the content editor has not supplied a value. On our early EPiServer projects it was typical to find a utility method similar to that below, which would help to avoid NullReferenceExceptions

public string GetProperty(string propertyName)
{
    if (CurrentPage[propertyName] == null)
    {
        return string.Empty;
    }
    return (string)CurrentPage[propertyName];
}

This is useful up to a point, but sprinkle on some generics and it becomes very powerful

public T GetProperty<T>(string propertyName)
{
    if (CurrentPage[propertyName] == null)
    {
        return default(T);
    }
    return (T)CurrentPage[propertyName];
}

Not only does the method now return a strongly-typed value, which saves the developer having to cast, it also copes nicely with missing values. By using the default keyword we are telling .NET to create the default value for the requested type. For instance a missing integer would cause 0 to be returned.

One gotcha to be aware of is that, for missing string properties, null will be returned and not String.Empty as you may expect. This is because string is a reference type, not a value type like int, DateTime and so on. Calling default(T) on reference types will always return a null reference.
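To illustrate (the property names here are hypothetical), this is what the method returns for properties the editor has not filled in:

int pageCount = GetProperty<int>("missingInt");            // 0
DateTime published = GetProperty<DateTime>("missingDate"); // DateTime.MinValue
string title = GetProperty<string>("missingTitle");        // null, not String.Empty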

Despite this, GetProperty is a handy function to have on an EPiServer project and will likely be used tens or even hundreds of times depending on the size of the site being built.

Using extension methods

Ideally this function would form part of the EPiServer PageData class itself, rather than being an isolated helper method. Then we could call it as follows

int someValue = CurrentPage.GetProperty<int>("someProperty");

Although it might seem that inheritance can help us to achieve our aim, that is not the case. OK, we could create a new class which inherits PageData, and add GetProperty to that, but EPiServer would carry on using PageData regardless. What we need to do is add code directly to PageData.

Before version 3.5 of the .NET Framework this would have been impossible. However, C# 3.0 introduced a feature called extension methods which allows us to “spot-weld” methods onto any class at all, even System.Object.

Changing GetProperty to an extension method gives us the following

public static class PageDataExtensions
{
    public static T GetProperty<T>(this PageData page, string propertyName)
    {
        if (page[propertyName] == null)
        {
            return default(T);
        }
        return (T)page[propertyName];
    }
}

The method has been moved into a different, static, class. The key difference though, is the addition of a PageData parameter, which has the this keyword applied to it. Doing so tells .NET to extend that class with this method.

To use the extension method, add a using statement to import the PageDataExtensions class’s namespace, start typing CurrentPage.GetProperty and you will see that IntelliSense has detected it. So just call it like any other function, as follows

decimal someValue = CurrentPage.GetProperty<decimal>("someDecimalProperty");

This is a really simple technique to use and is very effective in helping to better organise code. As Nessa from Gavin and Stacey would say, “Tidy!”.

Add logging to your application using log4net (part two)

December 28th, 2008


Picking up the code from the previous post, let’s add another appender to the config file as this helps to demonstrate the relationship between loggers and appenders.

log4net provides an SmtpAppender which will send out log messages via email. This is useful for high priority messages such as errors, as the recipients are notified of a problem immediately rather than being contacted by disgruntled users. There is also no need to search the log files as the relevant information is contained in the email. The following snippet is an example of how to configure the SmtpAppender

<appender name="EmailAppender" type="log4net.Appender.SmtpAppender">
    <to value="foo@bar.com" />
    <from value="LoggingDemo website &lt;email.appender@foo.com&gt;" />
    <subject value="Message from LoggingDemo website" />
    <smtpHost value="exchange.foo.com" />
    <bufferSize value="0" />
    <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date{yyyy-MM-dd HH:mm:ss.fff} %-5level %message%newline" />
    </layout>
</appender>

The majority of the settings are self-explanatory, however it is worth noting that bufferSize is set to zero. Had this been omitted the default of 512 would have been used, meaning that an email would only be sent once 512 log messages had accumulated in the buffer, and it would contain all of them. Setting the buffer size to zero will cause each log message to be sent in a separate email.

The appender can be added to the root logger using an appender-ref node in exactly the same way that the text file appender was. If you run the sample site again a log message will be appended to the LoggingDemo.log file and sent to the specified email address.
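With the appender-ref added, the root logger from part one looks like this:

<root>
  <appender-ref ref="TextFileAppender" />
  <appender-ref ref="EmailAppender" />
</root>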

Capturing more detail about exceptions

So far we’ve just been logging simple messages. In the case of logging errors, messages such as “an error occurred”, or even the exception’s Message property, do not provide enough detail to diagnose a fault. However the logging methods (Warn, Error, and so on) on the ILog interface actually allow any object to be passed as the message – it does not have to be a string. So if we pass in an Exception object log4net will transform that into text before passing it to all of the logger’s appenders. If you try this you’ll see that properties such as the type of exception, the message and the stack trace are all recorded.

This is a reasonable solution although still short of the ideal. Other properties of the exception such as TargetSite, Source and Data (a little-known but useful dictionary in which extra error information can be stored) are not logged. Furthermore, in a web scenario you may wish to log data such as the URL of the request, the session ID, and so on.

What is needed is complete control over how an exception message is formatted before being sent to the appenders. This is where the IObjectRenderer interface comes in. We can create an exception renderer which, when registered in the config file, will be used each time log4net needs to convert an Exception into text before it is logged. So let’s do just that.

Creating and configuring the ExceptionRenderer class

Add a new class to the project called ExceptionRenderer, import the log4net.ObjectRenderer namespace, and make the class implement the IObjectRenderer interface. There is just one method to implement, RenderObject, and it provides us with a TextWriter to write our message with. From here it is pretty simple to cast the obj argument to an Exception and start writing out the properties, as shown below

using System;
using System.Collections;
using System.IO;
using log4net.ObjectRenderer;

public class ExceptionRenderer : IObjectRenderer
{
    public void RenderObject(RendererMap rendererMap, object obj, TextWriter writer)
    {
        Exception thrown = obj as Exception;
        while (thrown != null)
        {
            RenderException(thrown, writer);
            thrown = thrown.InnerException;
        }
    }

    private void RenderException(Exception ex, TextWriter writer)
    {
        writer.WriteLine(string.Format("Type: {0}", ex.GetType().FullName));
        writer.WriteLine(string.Format("Message: {0}", ex.Message));
        writer.WriteLine(string.Format("Source: {0}", ex.Source));
        writer.WriteLine(string.Format("TargetSite: {0}", ex.TargetSite));
        RenderExceptionData(ex, writer);
        writer.WriteLine(string.Format("StackTrace: {0}", ex.StackTrace));
    }

    private void RenderExceptionData(Exception ex, TextWriter writer)
    {
        foreach (DictionaryEntry entry in ex.Data)
        {
            writer.WriteLine(string.Format("{0}: {1}", entry.Key, entry.Value));
        }
    }
}

Note that the while loop ensures that all inner exceptions are also logged.

To make log4net aware of the renderer add the following line of XML under the root log4net node

<renderer renderingClass="LoggingDemo.ExceptionRenderer, LoggingDemo" renderedClass="System.Exception" />

The important point to note here is that the fully-qualified names of the classes must be used, much like when defining the appender’s type.

In order to test the renderer add a button to the default page and in its click handler add a call to a method which throws an exception. Then wrap this call in a try-catch block and, in the catch part, log the exception by passing it into the logger’s Error method. The code below should suffice

protected void ThrowButton_Click(object sender, EventArgs e)
{
    try
    {
        DoSomething();
    }
    catch (Exception ex)
    {
        ILog logger = LogManager.GetLogger(string.Empty);
        logger.Error(ex);
    }
}

private void DoSomething()
{
    ApplicationException ex = new ApplicationException("DoSomething() has failed.");
    ex.Data.Add("SomeKey", "SomeValue");
    ex.Data.Add("AnotherKey", "AnotherValue");
    throw ex;
}

The log file should now contain a well-formatted message listing all of the exception’s properties. As projects develop the ExceptionRenderer can be enhanced to record any other information you see fit.

Going further

The idea of producing custom renderers extends beyond exceptions. I mentioned logging web-specific data earlier; the best approach to this is to implement an HttpContextRenderer which would then output the URL, the query string, etc. Then the HttpContext object itself can just be passed straight into one of the logging methods of the logger.
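I haven’t included such a renderer in the sample, but a minimal sketch of the idea might look like the following; the properties written out are just examples, and it would be registered with a renderer element in exactly the same way as the ExceptionRenderer.

using System.IO;
using System.Web;
using log4net.ObjectRenderer;

public class HttpContextRenderer : IObjectRenderer
{
    public void RenderObject(RendererMap rendererMap, object obj, TextWriter writer)
    {
        HttpContext context = obj as HttpContext;
        if (context == null)
        {
            return;
        }

        // Write out whichever request details are useful for diagnosis.
        writer.WriteLine("Url: " + context.Request.Url);
        writer.WriteLine("HttpMethod: " + context.Request.HttpMethod);
        if (context.Session != null)
        {
            writer.WriteLine("SessionID: " + context.Session.SessionID);
        }
    }
}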

The sample web application for this post is available here.

Screencast: An introduction to log4net

December 15th, 2008

As a follow-up to my last post about log4net I have recorded a screencast covering the same material. This is my first attempt at producing video so I would welcome any comments you may have. The screencast can be downloaded here.

Add logging to your application using log4net

November 30th, 2008


Logging is a feature that every application will need at some point during its lifetime, if not during development and testing then certainly when it has gone live. Despite this, developers often neglect to build it in from the start and then end up hurriedly shoehorning it in at the last minute. As a result the logging infrastructure is limited in its flexibility and usefulness.

Even if logging is integrated from the start, developers often choose to build their own logging code, using classes in the System.IO namespace to simply write some timestamps and messages out to a text file. Whilst there is nothing wrong with this, employing a dedicated logging framework can both cut down on development time and offer a rich set of features to play with. This article will provide an introduction to one such framework: log4net.

Introduction

log4net is an open source logging tool produced by the Apache Logging Services project. As such it is free to use and distribute, and its source code is available for download. The framework itself is built upon four key concepts

  • an appender is responsible for writing log messages to a particular destination, e.g. database, text file, email
  • a logger is made up of one or more appenders and provides methods for logging messages and objects
  • a layout governs the formatting of log messages before they are written by the appender
  • a renderer is used to transform an object into a log message. Users of log4net can create their own renderers to ensure that objects such as exceptions are formatted to their liking

This may seem a lot to digest at first but in reality only two items need to be set up in order to use log4net: an appender and the root logger.

Configuration

To get started with log4net, fire up Visual Studio 2008 and create a new Web Application project. Add a reference to the log4net dll (download from here) and a new XML document called Logging.config, which will be used to contain log4net’s configuration. It is possible to put this directly in web.config; however, that would mean the web site restarting each time the logging settings were altered, which is impractical in a live scenario.

The configuration XML begins with a root log4net node, followed by one or more appender nodes. The simplest appender to use is the TextFileAppender, which can be set up as follows

<appender name="TextFileAppender" type="log4net.Appender.FileAppender">
  <file value="LoggingDemo.log" />
  <appendToFile value="true" />
  <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date{yyyy-MM-dd HH:mm:ss.fff} %-5level %message%newline" />
  </layout>
</appender>

As with all appenders, a friendly name (TextFileAppender) must be given, along with the full type name of the appender (log4net.Appender.FileAppender). Within the appender node though, the settings are very much specific to the particular appender. In the case of FileAppender, log4net is configured to append log messages to LoggingDemo.log, in the root of the web site. Had appendToFile been false, the file would have been overwritten each time the application started, which isn’t very useful. The minimal lock is being used so that the file can be deleted without resetting IIS to release it, although this hampers performance a little. The final item of interest is the layout, which will be covered in a later post.

The last piece of config is the most important: setting up the root logger, which is done like this

<root>
  <appender-ref ref="TextFileAppender" />
</root>

The log4net config file must contain at least one logger: the root logger. More loggers can be added underneath the root, building up a hierarchy, although that is beyond the scope of this article. In this example the root logger has been set to use the TextFileAppender, so all logged messages will be sent to the LoggingDemo.log file.

Usage

Now that log4net has been configured it’s time to add some C# to the test web site. First off an assembly-level attribute must be added to the source code, as follows

[assembly: XmlConfigurator(ConfigFile="Logging.config", Watch=true)]

The best place for this code is in AssemblyInfo.cs with all other assembly-level attributes. By adding this, log4net becomes aware of where the logging config file is, and whether or not to monitor it for changes. Using statements are also needed to import the relevant log4net namespaces.
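Specifically, these two (ILog and LogManager live in the log4net namespace; the XmlConfigurator attribute lives in log4net.Config):

using log4net;
using log4net.Config;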

The next task is to get a reference to a logger object, which implements the ILog interface. This can be done by calling LogManager.GetLogger, passing in a type or a name, which helps to identify which logger should be returned. In this example there is only the root logger, so passing in any type is sufficient to retrieve the root logger. Add the following code to the Page_Load method of the Default.aspx page

ILog log = LogManager.GetLogger(this.GetType());

Recommended practice is to pass in the current type, that way if a type-specific logger is added later on, no code changes are required.

At this point the ILog object is ready to be used. It is important to decide what logging level is appropriate for the message, choosing one from this list

  • DEBUG
  • INFO
  • WARN
  • ERROR
  • FATAL

For each level of detail there are methods for logging messages, exceptions, and passing in a format string with arguments, in a similar way to using the string.Format function. Check out the ILog interface definition for more details.

For now, just use the Info method and pass in any message; in this case, just log the fact that the event has fired, for example

log.Info("Page_Load has fired");

Now run the web site and browse to the page; a log file should be created with a single line containing the log message. Hurrah!

Adjusting the logging level

Let’s return to the concept of logging level. The idea behind this is that, by adjusting the level in the config file, it is possible to see more, or fewer, messages in the log file. A typical scenario is that an application will run with minimal logging most of the time, recording just FATAL and ERROR level information.

However if a bug is reported it is simple to switch the level to INFO or even DEBUG, which will cause many more messages to be logged and hopefully provide an insight into the problem. The rule of thumb is that messages at the configured level and above in severity are logged. For example, if log4net is told to log at the WARN level, all messages at the WARN, ERROR and FATAL levels will be logged.

To alter the logging level in the config file, change the root logger as follows

<root>
  <level value="WARN" />
  <appender-ref ref="TextFileAppender" />
</root>

Now that WARN is being used, run the test project again and this time the INFO message is not written to the LoggingDemo.log file. Note that it is also possible to set the logging level to ALL and OFF in addition to the specific levels mentioned earlier.

That’s it for now. In part two another appender will be added and I’ll also describe how to customise the format of the log messages.


Developing against cookies created in another domain

August 2nd, 2008


Recently I found myself developing a web page which needed to log a lot of cookie values before redirecting the browser to another part of the customer’s site.

The cookies I needed to access were created under www.company.com, and therefore unavailable to my page when developing under localhost. So, even though I could make sure the cookies were in my browser by visiting www.company.com, it was not possible for my code to read their values during development. Of course, once deployed to the customer’s server it would be part of their domain and the problem would go away.

In order to test the code thoroughly I really needed access to the cookies. A colleague pointed out to me that I could do so simply by modifying my HOSTS file, adding a line similar to the following

127.0.0.1 local.company.com

Having done this, and flushed the DNS cache by issuing ipconfig /flushdns from a command prompt, I could access my page by browsing to http://local.company.com/page.aspx. The cookies were now available because the code was running under the same domain in which they were created.

As a final touch, you may wish to set the Start Action for the web application project to open up the new URL. Like all good tips this one is simple to implement yet very effective.


Custom routing for ASP .NET MVC

May 26th, 2008


Those familiar with the MVC framework for ASP .NET will know that one of its primary features is the mapping of URLs to methods on controllers. For example, /Products/Find will cause the ProductsController to be created and have its Find method invoked. It is also possible to pass arguments to methods, for instance /Products/Load/53 would call the Load method of the ProductsController, supplying 53 as the argument.

Organising controllers

Whilst this allows the developer to structure their code better, keeping presentational logic in the view and application logic in the controller, it isn’t ideal. To continue the example, as the project grows it will provide an increasing amount of features related to products, all of which will be delivered by the ProductsController. As a result the code for searching for products will end up in the same class as that used to edit products, and so on.

Everyone has their own take on the MVC pattern and in the past I have tended to use one controller per use case. The use cases in question here are Find Product and Edit Product and as such their functionality would be provided by the FindProductController and EditProductController, rather than living together in a single ProductsController.

A simple way to implement this pattern is to keep the ProductsController and have it delegate all its work. For example, the Load method would simply create an instance of EditProductController and call its Load method, passing any arguments as well. Whilst this is feasible it is more of a workaround than a genuine solution. It would be far better to cut out the ProductsController altogether, and have methods on the two controllers be called directly. The routing engine in ASP .NET MVC is very flexible and, by developing a custom route, it is possible to do this.

Creating a custom route

It is the job of a route to take a URL and call the appropriate method of a controller. The default MVC route has a format of {controller}/{action}/{id}, however our use case route will use {useCaseNoun}/{useCaseVerb}/{action}/{id}. The key difference is that the controller token has been replaced with two new tokens, noun and verb. This will allow us to provide the following routes

URL                   | Controller            | Action    | Behaviour
/Product/Find/Search  | FindProductController | Search    | Execute a search for products and display the results
/Product/Find/Clear   | FindProductController | Clear     | Reset all the fields of the search page
/Product/Find         | FindProductController | [default] | Execute the default controller method (more on this shortly)
/Product/Edit/Load/17 | EditProductController | Load(17)  | Load the product with ID 17 and display its data for editing
/Product/Edit/Save/23 | EditProductController | Save(23)  | Save the supplied data against the product with ID 23
/Product/Edit/Clear   | EditProductController | Clear     | Clear out the edit page ready for entering a new product

A fringe benefit of adopting this strategy is that both controllers can have a method of the same name, e.g. Clear, but have the method perform a completely different task. With a single ProductsController there could only be one Clear method.

In order to register the custom route with the MVC framework, some changes need to be made to the Global.asax file. Its Application_Start method calls RegisterRoutes which, using the default MVC project template, will already set up the default route format of {controller}/{action}/{id}. To this method we need to add the following

routes.Add(new Route("{useCaseNoun}/{useCaseVerb}/{action}/{id}", new MvcRouteHandler())
{
    Defaults = new RouteValueDictionary(new { action = "Index", id = "" })
});

Note that the default action is Index so, in the case of the /Product/Find URL in the table above, this would map to the Index method of the FindProductController.

At this point we can use Phil Haack‘s Url Routing Debugger to test that our URLs are being correctly routed. To do so we add a reference to Phil’s RouteDebug.dll and add the following code after RegisterRoutes is called

RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes);

It is then possible to enter each of the URLs into a browser and see which route they match. Check out Phil’s post for further details.

Creating a handler for the route

The format of our route is such that the {controller} token is no longer present. As a result the MvcRouteHandler that is associated with the route will not be able to identify which controller to use. Typically it just extracts the value of the controller token, appends “Controller” to it, and instantiates an object of that type. To resolve this issue we need to replace MvcRouteHandler with a route handler of our own.

Fredrik Normén produced an excellent blog post, Create your own IRouteHandler, which describes how to do this. For our route, we need to create two new classes, the first of which implements IRouteHandler, as shown below

public class UseCaseRouteHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        return new UseCaseMvcHandler(requestContext);
    }
}

This class, UseCaseRouteHandler, is used in place of MvcRouteHandler, and simply creates a new IHttpHandler which will do the real work. The implementation of IHttpHandler is actually our second class, UseCaseMvcHandler. This inherits from MvcHandler and overrides the ProcessRequest method, during which the correct controller is identified and then created. It is this behaviour that we need to redefine.

To determine how our ProcessRequest should work, I downloaded the source code of the MVC framework itself, which is available from CodePlex. A quick inspection of MvcHandler‘s ProcessRequest shows that the GetRequiredString method is used to extract the values of the route’s tokens. For the default routing this is just a case of getting the controller name, whereas our custom route needs to grab both the {useCaseNoun} and {useCaseVerb} tokens. I moved this logic into a separate function, GetControllerName, which is shown below

private string GetControllerName()
{
    string noun = this.RequestContext.RouteData.GetRequiredString("useCaseNoun");
    string verb = this.RequestContext.RouteData.GetRequiredString("useCaseVerb");
    return verb + noun;
}

So, if the URL is /Product/Find/Search, this method will extract a noun of “Product”, a verb of “Find” and return the value “FindProduct”.

I then copied MvcHandler’s ProcessRequest code into UseCaseMvcHandler and replaced the line extracting the controller token value with a call to the GetControllerName function. Simple. Well, almost! Unfortunately the resource strings are not available to inheriting classes, and neither is the ControllerBuilder property. I replaced the former with a hard-wired string, whilst the latter is accessible via the ControllerBuilder class’ static Current property.
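The exact code depends on the MVC build you are working against; sketched against the final MVC 1.0 API (rather than the preview this post was written with), the shape of the handler is roughly this:

using System;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

public class UseCaseMvcHandler : MvcHandler
{
    public UseCaseMvcHandler(RequestContext requestContext)
        : base(requestContext)
    {
    }

    protected override void ProcessRequest(HttpContextBase httpContext)
    {
        // Build the controller name from the two route tokens instead of
        // reading a single {controller} token.
        string controllerName = GetControllerName();

        IControllerFactory factory = ControllerBuilder.Current.GetControllerFactory();
        IController controller = factory.CreateController(RequestContext, controllerName);
        if (controller == null)
        {
            // Hard-wired replacement for MvcHandler's internal resource string.
            throw new InvalidOperationException(
                "No controller was created for '" + controllerName + "'.");
        }

        try
        {
            controller.Execute(RequestContext);
        }
        finally
        {
            factory.ReleaseController(controller);
        }
    }

    // GetControllerName is the method shown above.
    private string GetControllerName()
    {
        string noun = this.RequestContext.RouteData.GetRequiredString("useCaseNoun");
        string verb = this.RequestContext.RouteData.GetRequiredString("useCaseVerb");
        return verb + noun;
    }
}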

At this point the code is almost ready to run. We just need to adjust the code in RegisterRoutes so that our route uses the new UseCaseRouteHandler class. This is done as follows

routes.Add(new Route("{useCaseNoun}/{useCaseVerb}/{action}/{id}", new UseCaseRouteHandler())
{
    Defaults = new RouteValueDictionary(new { action = "Index", id = "" })
});

Identifying which view to show

Having commented out the call to the routing debugger, I then browsed to /Product/Edit/Load/17 and…BANG! An exception with the message,

The RouteData must contain an item named ‘controller’ with a non-empty string value

was shown. After some digging through the MVC source, it seems that the code responsible for identifying which view to create (the ViewEngine class does this) was also trying to find a controller token in the URL, in order to work out which subfolder of Views to look in. The Load method of EditProductController calls RenderView, passing “Edit” as the viewName argument. By altering this to “~/Views/Product/Edit.aspx” I was able to work around this issue.

This was a far from satisfactory solution however. Fully-qualifying all of the view names is a potential maintenance problem in the future, if views are moved or folders renamed. To combat this I introduced a UseCaseControllerBase class, from which EditProductController and FindProductController now inherit. This class overrides RenderView and works out the full path to the view. The following code shows how

public abstract class UseCaseControllerBase : Controller
{
    protected override void RenderView(string viewName, string masterName, object viewData)
    {
        string noun = this.RouteData.GetRequiredString("useCaseNoun");
        string fullViewName = string.Format("~/Views/{0}/{1}.aspx", noun, viewName);
        base.RenderView(fullViewName, masterName, viewData);
    }
}

The ideal resolution would be to customise the behaviour of the ViewEngine, however that is beyond the scope of this article.

This post demonstrates the flexibility of the routing subsystem provided by ASP .NET. It also shows how to improve the separation of functionality between controllers. If you are interested the sample code is available from CodePlex.


How to publish a web site with MSBuild

May 18th, 2008


In part two of my MSBuild tutorial I needed to find a way to call Visual Studio’s Publish Web Site feature from MSBuild. Much trawling of the interweb failed to find anything of use, so in the end I had to produce my own target which copied the relevant files into the output folder.

The problem with this approach is that it only works for files of type dll, aspx or config. It is a simple task to add an extension, for example png, however on larger projects this becomes impractical. Developers would have to check the build script each time they added or removed a file just in case the Publish target needed to be updated. These are just the sort of jobs that get forgotten, which can lead to invalid builds later on.

Fortunately I came across a post on Mirosław Jedynak’s blog showing how to use MSBuild to publish a web site. As some of you may be aware, a project file (vbproj or, in the case of my demo, csproj) contains targets of its own, as well as importing targets from other files that come as part of a Visual Studio installation. One such file is Microsoft.WebApplication.targets. This file provides the _CopyWebApplication target, which effectively replaces my home-brewed Publish target.

In order to make use of this target we need to pass it two properties, WebProjectOutputDir and OutDir, which will ensure that the files get published into the correct folder. Here is an example

<Target Name="Publish">
  <RemoveDir Directories="$(OutputFolder)"
             ContinueOnError="true" />
  <MSBuild Projects="BuildDemoSite.csproj"
           Targets="ResolveReferences;_CopyWebApplication"
           Properties="WebProjectOutputDir=$(OutputFolder);
           OutDir=$(WebProjectOutputDir)\" />
</Target>

As you can see, the ResolveReferences target is also called; this ensures that any third party dependencies are copied over as well.

Integrating this into my demo build script was simple, however I noticed that some files were being copied over that I didn’t want. These were the build script itself, and the environment-specific config files. This is because their build action was set to Content. Once I had switched it to None and run the script again, everything was fine. The build action can be set by right-clicking on a file in the Solution Explorer and selecting Properties. Build Action is the first item in the list.

I have posted a new version of the build script to CodePlex for those interested in taking a look.


A custom MSBuild task for merging config files

May 11th, 2008


In the last part of my MSBuild tutorial I mentioned that the target for merging config files was less than ideal. Although we were able to use the XmlRead and XmlUpdate tasks to make life easier, the list of settings to merge still needed to be maintained in the build script.

If a developer were to add a setting to, or remove a setting from, the web.config file, they would need to mirror this change in the build script. This is exactly the type of job which is easily forgotten, leading to problems further down the line.

Ideally the list of settings to merge would be self-maintaining, i.e. any settings found in the environment-specific config file would simply be enumerated and copied into web.config, rather than being explicitly listed in the build script. I decided to create my own MSBuild task for doing this. The spec for it is as follows

  • Copy over the value of the debug attribute under the compilation node
  • Copy over the value of the mode attribute under the customErrors node
  • Enumerate each of the settings under the appSettings node and copy them over too

It has been developed to copy a setting over only if it exists in both the source and target files. As such it will not add settings missing from the target file or remove settings that are not in the source file.

It is very simple to create your own MSBuild task, just follow these steps

  • Create a class which inherits from Task
  • Add properties for storing the arguments the task uses. Apply the Required attribute to make an argument mandatory
  • Use the LogMessage method to show progress and any errors in the command window
  • Override the Execute method, returning a flag to indicate success or failure

There are plenty of examples to follow in the Community Tasks source code and a more detailed article can be found on MSDN. It is also possible to debug a task, check out the MSBuild Team Blog for an explanation.
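Putting those steps together, here is a minimal sketch of how the appSettings part of MergeConfig might look (the debug and customErrors attributes are handled in the same way and are omitted for brevity; this is an illustration, not the code on CodePlex):

using System.Xml;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class MergeConfig : Task
{
    [Required]
    public string SourceConfigFilename { get; set; }

    [Required]
    public string TargetConfigFilename { get; set; }

    public override bool Execute()
    {
        XmlDocument source = new XmlDocument();
        source.Load(SourceConfigFilename);
        XmlDocument target = new XmlDocument();
        target.Load(TargetConfigFilename);

        // Copy each appSetting value over, but only if the key exists
        // in both the source and target files.
        foreach (XmlElement setting in source.SelectNodes("//appSettings/add"))
        {
            string key = setting.GetAttribute("key");
            XmlElement match = (XmlElement)target.SelectSingleNode(
                "//appSettings/add[@key='" + key + "']");
            if (match != null)
            {
                match.SetAttribute("value", setting.GetAttribute("value"));
                Log.LogMessage("Merged appSetting '{0}'", key);
            }
        }

        target.Save(TargetConfigFilename);
        return true;
    }
}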

Consuming the custom task from a build script

Having produced the task, called MergeConfig, the next step was to integrate it with the sample solution created back in part one of the tutorial. This is done by adding this statement to the build script

<UsingTask TaskName="MergeConfig" AssemblyFile="MergeConfigTask.dll" />

This piece of XML tells MSBuild that there is a task called MergeConfig in MergeConfigTask.dll. Note that the path to the dll is relative to the build script’s directory. Having done this we can add a target which will then call the new MergeConfig task, like so

<Target Name="MergeConfig">
  <MergeConfig SourceConfigFilename="$(Environment).config"
               TargetConfigFilename="$(OutputFolder)\Web.config" />
</Target>

In addition to achieving the main goal, which was to have a self-maintaining list of settings to be merged, the use of this custom task has reduced the size of the build script, by around thirty lines. If you would like to make use of this task in your own script it can be downloaded from CodePlex. I have also updated the sample project to illustrate how the task is used.
