Saturday, August 15, 2009

ASP.NET : Using ClientID in external JavaScript files

In the past year I have been writing a lot of JavaScript code, mostly because I have been working on the same ASP.NET WebForms application for more than 10 months now. We use JavaScript for things like validation, async HTTP requests etc. To keep our code clean, we try to keep all of our JavaScript in external files. The problem with external JavaScript files is that you cannot use server tags in them, so you cannot obtain the ClientID of ASP.NET controls with the <%= control.ClientID %> syntax. My first workaround for this problem was to register JavaScript variables manually on every page by using the ClientScript.RegisterStartupScript() method. Every page needed to have a collection of controls called JSControls. In the Page_Load event of the page I would add to the JSControls collection all controls I need to access from JavaScript. The code would look like this:

private List<Control> JSControls = new List<Control>();
protected void Page_Load(object sender, EventArgs e)
{
     JSControls.Add(txtName);
     JSControls.Add(txtLastname);
}

Then I had a method that would generate JavaScript code for each control in the JSControls collection:

public string GetClientScriptBlock()
{
    // Builds a script block that declares a "<ID>ClientID" variable for each registered control.
    StringBuilder sb = new StringBuilder();
    sb.Append("<script type=\"text/javascript\">");
    foreach (Control c in JSControls)
    {
        if (c != null)
        {
            sb.Append(string.Format("var {0} = '{1}';", c.ID + "ClientID", c.ClientID));
        }
    }
    sb.Append("</script>");
    return sb.ToString();
}

Then, again in the Page_Load event, I needed to register the generated JavaScript block with RegisterStartupScript:

protected void Page_Load(object sender, EventArgs e)
{
    JSControls.Add(txtName);
    JSControls.Add(txtLastname);
    Page.ClientScript.RegisterStartupScript(this.GetType(), "myClientBlock", GetClientScriptBlock(), false);
}

This works fine: if I need to access, for example, the ClientID of a TextBox control called txtName, I just refer to the txtNameClientID variable, like this:

document.getElementById(txtNameClientID); 

While this approach works, it has some disadvantages. One problem is that there could be controls with the same ID but in different parent controls, which would produce colliding variable names. Another disadvantage is that this approach is not very reusable; it forces me to violate the DRY principle, since the same code has to be added to every page.

A better solution

There is a better solution: a reusable custom control that automates the JavaScript variable creation for us. The idea is to put the control on a page, declare which controls on the page you need ClientIDs for, and let the control do the rest of the work. The custom control would also have a Namespace property to solve the problem of identical IDs on different controls. Design-time support would also be nice to have here, so that we don't have to type control IDs manually. Let's turn the idea into reality!

Control syntax

The goal is to have a control with syntax as simple as possible. Supposing we call our control JSClientIDList, we want syntax like this:

<ReducingComplexity:JSClientIDList runat="server" ID="JSClientIDList1" Namespace="namespace1">
    <JSControls>
        <ReducingComplexity:ControlItem ControlID="btnOK" />
    </JSControls>
</ReducingComplexity:JSClientIDList>

The syntax is very straightforward. The JSClientIDList control has two important properties. The first one is Namespace, which defines a prefix for all of the generated variables. The second is JSControls, which defines the list of ASP.NET controls for which JavaScript variables will be created. We also need a class to represent a single control in the JSControls list; we will call that class ControlItem. The whole code looks like this:

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.UI;
using System.ComponentModel;
using System.Diagnostics;
using System.Web.UI.WebControls;
using System.Text;

namespace ReducingComplexity.Web.Controls
{
[PersistChildren(false)]
[ParseChildren(true)]
public class JSClientIDList : Control
{
    private string m_Namespace;
    [Browsable(true)]
    [DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
    public string Namespace
    {
        get
        {
            return m_Namespace;
        }
        set
        {
            m_Namespace = value;
        }
    }

    [Browsable(true)]
    [DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
    [PersistenceMode(PersistenceMode.InnerProperty)]
    public List<ControlItem> JSControls { get; set; }

    public JSClientIDList()
    {
        JSControls = new List<ControlItem>();
    }
    protected override void OnPreRender(EventArgs e)
    {
        Page.ClientScript.RegisterClientScriptBlock(this.Parent.GetType(),this.ClientID, GetClientScriptBlock(), true);
        base.OnPreRender(e);
    }
    protected override void AddParsedSubObject(object obj)
    {
        if (obj is ControlItem)
        {
            this.JSControls.Add((ControlItem)obj);
            return;
        }
    }
    private string GetClientScriptBlock()
    {
        StringBuilder sb = new StringBuilder();
        sb.AppendFormat("var {0}=new Object();", this.Namespace);
        foreach (ControlItem ci in this.JSControls)
        {
            Control c = this.FindControl(ci.ControlID);
            string clientId = c != null ? c.ClientID : "";
            sb.AppendFormat("{0}.{1}='{2}';", this.Namespace, ci.ControlID, clientId);
        }
        Debug.WriteLine(sb.ToString());
        return sb.ToString();
    }
}
public class ControlItem
{
    [TypeConverter(typeof(ControlIDConverter))]
    public string ControlID { get; set; }
}
}

I’ll try to explain the most important parts of the code. To enable the clean syntax we needed the PersistChildren and ParseChildren attributes. These attributes define how the nested content of the control is interpreted; more details can be found here. We also needed to override the AddParsedSubObject method, which adds the nested ControlItem elements to the JSControls collection:

protected override void AddParsedSubObject(object obj)
{
    if (obj is ControlItem)
    {
        this.JSControls.Add((ControlItem)obj);
        return;
    }
}

Another interesting part is the use of ControlIDConverter as a TypeConverter. This gives us design-time support for the ControlID property; to be more precise, it provides a dropdown list of all controls available for addition, so that we can simply choose from the list.

[Image: the design-time dropdown listing available control IDs]

When I first started implementing this functionality I didn’t know about the ControlIDConverter class. My plan was to write my own type converter providing the same functionality, but I failed. To cut a long story short, the reason I failed was that I didn’t know I could use the GetService() method of the ITypeDescriptorContext interface to get an instance of IDesignerHost. Anyway, Reflector reveals all the secrets:

[Image: ControlIDConverter internals as seen in Reflector]
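
For illustration, here is a simplified sketch (my own reconstruction, not the framework's actual code) of how a type converter can use GetService() to reach the designer host and enumerate the control IDs on the page:

using System.Collections.Generic;
using System.ComponentModel;
using System.ComponentModel.Design;
using System.Web.UI;

// Simplified design-time converter; the real ControlIDConverter does more work.
public class SimpleControlIDConverter : StringConverter
{
    public override bool GetStandardValuesSupported(ITypeDescriptorContext context)
    {
        // Offer a dropdown only when design-time services are available.
        return context != null;
    }

    public override StandardValuesCollection GetStandardValues(ITypeDescriptorContext context)
    {
        List<string> ids = new List<string>();
        if (context != null)
        {
            // The part I was missing: GetService() hands us the designer host.
            IDesignerHost host = (IDesignerHost)context.GetService(typeof(IDesignerHost));
            if (host != null)
            {
                foreach (IComponent component in host.Container.Components)
                {
                    Control control = component as Control;
                    if (control != null && !string.IsNullOrEmpty(control.ID))
                    {
                        ids.Add(control.ID);
                    }
                }
            }
        }
        ids.Sort();
        return new StandardValuesCollection(ids);
    }
}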

We can also see that the actual registration of the JavaScript code block is done in the OnPreRender event, because by that point all controls in the control tree are available.

For example, if we set the Namespace property to "namespace1" and have an ASP.NET Button control with ID "btnOK", then to get the ClientID of the btnOK button from an external JavaScript file we would use the following syntax:

document.getElementById(namespace1.btnOK);

And of course the JSClientIDList control would have to be declared like this:

<ReducingComplexity:JSClientIDList runat="server" ID="JSClientIDList1" Namespace="namespace1">
      <JSControls>
          <ReducingComplexity:ControlItem ControlID="btnOK" />
      </JSControls>
</ReducingComplexity:JSClientIDList>

With this simple control, our goal of keeping all JavaScript code in external files has been achieved.

Friday, August 14, 2009

Good design comes over time

Have you ever tried to come up with a design for the Mark IV coffee maker problem that Robert C. Martin presented in his book "Designing Object Oriented C++ Applications using the Booch Method"? Well, I have, and I did not manage to come up with any kind of elegant solution. But I was quite impressed with the solution that Uncle Bob presented for this design problem. You can find the solution here.

What I find interesting in the above document is section titled "How did I really come up with this design?". In that section Uncle Bob says:

I did not just sit down one day and develop this design in a nice straightforward manner. Indeed, my very first design for the coffee maker looked much more like Figure 11–1. However, I have written about this problem many times, and have used it as an exercise while teaching class after class. So this design has been refined over time.

This paragraph was very encouraging to me, because, as you can see, even Robert C. Martin himself did not get it right the first time. We are not going to get it right the first time either, no matter how hard we try. Good designs come over time; they are not obvious immediately. Our initial design will change, and we need to ensure that the changes that happen lead to a better design. I see great power in refactoring here: by refactoring we change the design of our software for the better while preserving clean and maintainable code. Unit testing, and TDD in general, is of great help here as well; it is the immediate feedback that makes us less afraid of making changes to our code.

Tuesday, August 11, 2009

Pseudocode Programming Process

Today I had to implement a feature on the project I'm working on. The feature was not a trivial one; it was rather complex. I had a general idea of how it could be implemented. The implementation I had in mind involved several complex data structures and some recursive function calls as well. After thinking briefly about the problem I started on the actual implementation in C#. To be more precise, I tried several implementations, but I kept failing and found myself rewriting the code over and over. I was completely lost in the complexity of the problem and all the details (complex data structures and recursive calls). I kept going like this for about 1.5 hours (maybe two), and I still had no working solution. Then I realized that I had to change my problem-solving approach.

If you have read "Code Complete" you probably remember the Pseudocode Programming Process (PPP) that Steve McConnell describes in the book. The Pseudocode Programming Process is a way of designing routines in pseudocode first. Basically, this means that an algorithm is described in a high-level, English-like way before any real code is written. You can find more about the PPP at the following links:

Anyway, after writing down the algorithm in high-level English and then turning every line of comment into a line or few of code (applying PPP practices), I had a working solution in about 20 minutes. YES, 20 minutes including both the pseudocode and the C# implementation. How does that compare to the 1.5 hours spent on literally nothing? I can only recommend the PPP as a way of handling complex algorithms.
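
To give an idea of what the process looks like, here is a small, hypothetical example (not the actual feature I was implementing): the comments are written first as pseudocode, and each one is then expanded into a line or two of code.

using System.Collections.Generic;
using System.IO;

public static class LogScanner
{
    // Hypothetical routine designed with the PPP: every comment below existed
    // before the code and was then turned into one or two lines of C#.
    public static List<string> FindLinesContaining(string path, string keyword)
    {
        // read the whole file line by line
        string[] lines = File.ReadAllLines(path);

        // collect every line that contains the keyword
        List<string> matches = new List<string>();
        foreach (string line in lines)
        {
            if (line.Contains(keyword))
            {
                matches.Add(line);
            }
        }

        // hand the matching lines back to the caller
        return matches;
    }
}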

Thursday, August 6, 2009

Bugs and bytes

Yesterday at work we had a bug in the application we’re developing. Nothing critical, but rather inconvenient. Our application has the very common functionality of file download: some users can upload files to the system, and other users can download those files. Pretty straightforward functionality, right? It’s an ASP.NET application, so it should be very easy to implement. Here's the code (not the real code, but the relevant part of it):

protected void ViewFile(int fileId)
{
    byte[] fileData = DAL.GetFile(fileId);
    Response.ContentType = "application/octet-stream";
    Response.AddHeader("Content-Disposition", "attachment; filename=test.txt");
    Response.OutputStream.Write(fileData, 0, fileData.Length);
    Response.End();
}

So what could possibly go wrong here? Not much? Except… files can be EMPTY too! An empty file is a file that is 0 bytes in size. So if you try to write an empty file with the code above, an exception is thrown at the Response.OutputStream.Write() line. How do we write an empty file to the HttpResponse then? Very simple: you DON'T write anything. Just skip the Response.OutputStream.Write() line and call Response.End(). Anyway, what am I trying to prove with this simple example? My point is that we need to be more thoughtful when writing code. We must think carefully about the code we write. What could go wrong? How does the API we’re using behave? What exceptions can be thrown? What assumptions are being made? One approach I find useful for dealing with these kinds of issues is defensive programming. Here’s a quote from Wikipedia about defensive programming:

A difference between defensive programming and normal practices is that few assumptions are made by the programmer, who attempts to handle all possible error states. In short, the programmer never assumes a particular function call or library will work as advertised, and so handles it in the code.

If the code had been written with a defensive programming mind-set, this bug would never have been introduced. I must also note that defensive programming is not the only choice for solving these kinds of issues; another approach is Design by Contract. In the end it does not really matter which approach you choose, as long as the goal is to create more robust, higher-quality software.
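
For completeness, here is a sketch of how the same method could look when written more defensively (DAL.GetFile and the hard-coded file name are just the placeholders from the example above):

protected void ViewFile(int fileId)
{
    byte[] fileData = DAL.GetFile(fileId);

    Response.ContentType = "application/octet-stream";
    Response.AddHeader("Content-Disposition", "attachment; filename=test.txt");

    // Guard against missing or empty files: only write when there is something to write.
    if (fileData != null && fileData.Length > 0)
    {
        Response.OutputStream.Write(fileData, 0, fileData.Length);
    }

    Response.End();
}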

Wednesday, August 5, 2009

Strings and performance

I really cannot stress enough the importance of using the StringBuilder class for string concatenation. The reason is obvious: PERFORMANCE! Because System.String is immutable, every concatenation allocates a brand-new string, so repeated concatenation in a loop gets slower and slower. I know that most programmers are aware of these possible performance problems, but somehow the issue is still overlooked and many make the mistake. The difference in performance between using and not using StringBuilder is HUGE, as I'll show in a simple demo application. I'm not saying that you should use StringBuilder for every string concatenation, but be very alert when you have loops that do string concatenation. Stop for a minute and think about possible performance issues. How many iterations do you expect your loop to have? If you loop through a collection that you expect to get bigger and bigger over time, or a collection that you know nothing about (such as one coming from an external system), then there is no real alternative to StringBuilder. Let's take a look at how long it takes to do, for example, 30,000 string concatenations using StringBuilder. The code looks as follows:

using System;
using System.Text;

class Program
{
    static void Main(string[] args)
    {
        StringBuilder sb = new StringBuilder();
        DateTime start = DateTime.Now;
        for (int i = 0; i < 30000; i++)
        {
            sb.Append("some string");
        }
        TimeSpan ts = DateTime.Now - start;
        Console.WriteLine("Finished!");
        Console.WriteLine(ts.TotalSeconds);
        Console.ReadLine();
    }
}

On my computer it takes 0.15625 seconds, which is very fast.

Now let's take a look at the same code, but without StringBuilder:

using System;

class Program
{
    static void Main(string[] args)
    {
        string str = string.Empty;
        DateTime start = DateTime.Now;
        for (int i = 0; i < 30000; i++)
        {
            str += "some string";
        }
        TimeSpan ts = DateTime.Now - start;
        Console.WriteLine("Finished!");
        Console.WriteLine(ts.TotalSeconds);
        Console.ReadLine();
    }
}

Without StringBuilder it takes 21.396625 seconds.

The difference in performance is very noticeable and the benefit of using StringBuilder is obvious. It's worth noting that performance issues are usually not discovered until the system has been deployed and used in production, but by that time the damage may already be done. That's why string concatenation must be handled with care.
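
As a side note, if you want to measure something like this yourself, a Stopwatch gives more precise timings than DateTime.Now. Here is a small sketch that times both approaches in one program (the numbers will of course vary from machine to machine):

using System;
using System.Diagnostics;
using System.Text;

class ConcatTiming
{
    static void Main()
    {
        const int iterations = 30000;

        // Time StringBuilder.Append in a loop.
        Stopwatch sw = Stopwatch.StartNew();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < iterations; i++)
        {
            sb.Append("some string");
        }
        sw.Stop();
        Console.WriteLine("StringBuilder: {0} ms", sw.ElapsedMilliseconds);

        // Time naive string concatenation in a loop.
        sw = Stopwatch.StartNew();
        string str = string.Empty;
        for (int i = 0; i < iterations; i++)
        {
            str += "some string";
        }
        sw.Stop();
        Console.WriteLine("Concatenation: {0} ms", sw.ElapsedMilliseconds);
    }
}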

Monday, August 3, 2009

Tip on employing the domain model pattern

Today I read Udi's latest article about the domain model pattern, published in MSDN Magazine. I want to comment on the following part of the article:

When designing a domain model, spend more time looking at the specifics found in various use cases rather than jumping directly into modeling entity relationships—especially be careful of setting up these relationships for the purposes of showing the user data. That is better served with simple and straightforward database querying, with possibly a thin layer of facade on top of it for some database-provider independence.

Udi couldn't be more right here. I had a similar dilemma a few weeks ago, and my reasoning happened to be the same as Udi's. I had an association that seemed so natural and correct. The association was as follows:

[Image: class diagram of the association in question]

But after looking deeper at the problem I was trying to solve, it turned out that this association was only needed for presentation purposes. So I dropped the association from the domain model altogether, and the presentation need was served by a simple query (using NHibernate). This shows that the domain model should be used for capturing core business behavior only. In the article Udi also writes about the Domain Events pattern, which can help solve complex problems quite elegantly. I strongly recommend reading this article.
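
Just to illustrate the idea (with hypothetical entities, not my actual model): instead of adding a collection association to the domain model purely so the UI can show related data, the presentation layer can run a simple read-only query like this:

using System.Collections.Generic;
using NHibernate;

public class OrderQueries
{
    // Hypothetical read-only query for the presentation layer; Order and Customer
    // are illustrative entities, not the ones from my real domain model.
    public IList<Order> GetOrdersForCustomer(ISession session, int customerId)
    {
        return session
            .CreateQuery("from Order o where o.Customer.Id = :customerId")
            .SetParameter("customerId", customerId)
            .List<Order>();
    }
}

public class Order
{
    public virtual int Id { get; set; }
    public virtual Customer Customer { get; set; }
    public virtual decimal Total { get; set; }
}

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    // Note: no Orders collection here; the query above replaces it.
}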