Wednesday, November 19, 2008

CallQueue: Implementing a Sequential Web Service Call Queue for AJAX applications

In AJAX-based applications, it's common for users to break your AJAX calls by clicking in numerous places within a very short interval of time. Let us assume there is a page with several hyperlinks that make WebService calls and do some work in their callbacks. If a user, impatient or maybe just having fun, clicks on five hyperlinks, five different WebService calls will be made. All of those calls had the same parameters or UI state when they were invoked. But once one or more of the calls complete, the UI state or data passed to the remaining calls may no longer exist or may have expired, resulting in inconsistent UI behavior and/or invalid data. This is one of the important scenarios an AJAX developer should consider when designing an application.

The Specification
To address this scenario, I prefer implementing a Sequential WebService Call Queue which can schedule tasks/WebService calls and help keep the UI and data consistent across AJAX calls. The queue should provide the following features:

  • Enqueue any WebService call at any time in the application.
  • Dequeue any previously queued call, regardless of the currently executing call, from anywhere in the application.
  • Each WebService call should have an identifier so that we can track the call and dequeue it at any later time by SSQ.dq(call_id).
  • Each call should have a timeout value which determines the maximum amount of time the scheduler waits for that particular call before invoking the next one; after the timeout, the call is removed from the queue.
  • A timer will act as the scheduler, but it will not run forever. It should run only when necessary.
  • Each call should be able to declare its completion at any time by notifyCompleted, so that the scheduler timer does not wait out the prior task and dequeues the next call immediately.
  • notifyCompleted should also be optional. The currently running call should automatically be dequeued from the scheduler queue after its own timeout.
  • Each call should be able to be marked as replaceIfExists so that if the user's activity has already enqueued this call, it is replaced by the current one.
  • The queue instance should be exclusively available to the user and across the whole page, meaning the same queue class serves this functionality on a one-page-per-user basis.

The Usage
You should be able to use this library as follows:

  1. Include GenericQueue.js and SequentialServiceQueue.js in your project
  2. Add references to them in the pages where you want to use them

We will name the class SequentialServiceQueue, or SSQ for short. Let us have a look at WebService calls made in Service Queue fashion:

var id1 = SSQ.nq('SomeMethod1', false, 1000,
    function()
    {
        // Do some stuff
        SomeWebService1.SomeMethod1(SomeParameters, onSomeMethodCallCompleted);
    });

function onSomeMethodCallCompleted(result)
{
    // Do stuff
    SSQ.notifyCompleted(id1);
}

var id2 = SSQ.nq('SomeMethod2', false, 1000,
    function()
    {
        // Do some stuff
        SomeWebService2.SomeMethod2(SomeParameters,
            function(result)
            {
                // Do stuff
                SSQ.notifyCompleted(id2);
            });
    });


You can queue not only WebService calls, but also regular JavaScript code blocks:



var service_id1 = SSQ.nq('Service1', false, 1000,
    function()
    {
        // Do some stuff
        SSQ.notifyCompleted(service_id1);
    });


The GenericQueue

To accomplish the SequentialServiceQueue, first of all we should define an all-purpose GenericQueue class which is able to handle any queue requirement out of the box. The queue is fairly simple, just like in an old Computer Science data structures class. Here are a few of the functions from the class:



this.nq = function(element)
{
    // Add the element to the end of the queue
    array.push(element);
    ++rear;
}

this.dq = function()
{
    var element = undefined;

    if (!this.is_empty())
    {
        // Remove and return the element at the front of the queue
        element = array.shift();
        --rear;
    }

    return element;
}

this.for_each = function(func)
{
    for (var i = 0; i < rear; ++i)
        func(i, array[i]);
}

this.delete_at = function(index)
{
    // Remove the element at index and compact the array
    array.splice(index, 1);
    --rear;
}


The SequentialServiceQueue


The following is how this class starts. You will notice that timer_id is for our scheduler timer, running_task indicates the currently executing call, and interval is the tick period for the timer, which you can set as you wish; interval is the knob for how fast or slow you want the scheduler to run. queue, as you can understand, is an instance of the GenericQueue we just created above. Note that GenericQueue is not a static class; rather, it is an instance class, unlike SSQ. The ms_when_last_call_made and ms_elapsed_since_last_call fields are pretty self-describing. get_random_id is responsible for preparing a new id for each newly enqueued call.



var SequentialServiceQueue =
{
    timer_id: null,
    ms_when_last_call_made: 0,     // milliseconds (readonly)
    ms_elapsed_since_last_call: 0, // milliseconds (readonly)
    running_task: null,
    interval: 10,                  // milliseconds
    queue: new GenericQueue(),

    get_random_id: function() {
        var min = 1;
        var max = 10;
        return new Date().getTime() + (Math.round((max - min) * Math.random() + min));
    },


In the code below, as soon as a new call is enqueued, we check whether it is allowed to replace an already queued call with the same name. If one is found, we just update it; otherwise we create a brand-new task, enqueue it, and start our updater, which is the scheduler in our case.





nq: function(name, replaceIfExists, timeout, code) {
    var id = this.get_random_id();
    var isFound = false;

    if (replaceIfExists) {
        this.queue.for_each(
            function(index, element) {
                if (element !== undefined && element.name == name) {
                    // Replace the previously queued call in place
                    element.id = id;
                    element.replaceIfExists = replaceIfExists;
                    element.timeout = timeout;
                    element.code = code;
                    isFound = true;
                }
            });
    }

    // Enqueue a new task
    if (!isFound) {
        this.queue.nq(
            {
                id: id,
                name: name,
                replaceIfExists: replaceIfExists,
                timeout: timeout,
                code: code
            });
    }

    // We have got a new task, start the updater
    this.startUpdater();

    return id;
},




The following is the core part of the class: the scheduler. Inside startUpdater, a block of code executes at every interval we defined before. Inside that loop, we check whether there is already a running task; if so, we check whether it has exceeded its timeout, otherwise we let it keep running. If there is no running task at the moment, we dequeue a task, start executing its code, and record a start time so we can keep track of how long it has been running.



detachTask: function(id) {
    this.dq(id);
    this.running_task = null;
    this.ms_when_last_call_made = 0;
    this.ms_elapsed_since_last_call = 0;

    // See if we are done with the queued tasks
    if (this.queue.is_empty())
        this.stopUpdater();
},

startUpdater: function() {
    var _self = this;
    if (this.timer_id == null) {
        this.timer_id = setInterval(
            function() {
                if (_self.running_task == null) {
                    // We don't have any running task, let's start the next one
                    _self.running_task = _self.queue.dq();
                    if (_self.running_task != null) {
                        _self.ms_when_last_call_made = new Date().getTime();
                        _self.running_task.code();
                    }
                }
                else {
                    // We have a running task already
                    _self.ms_elapsed_since_last_call = new Date().getTime() - _self.ms_when_last_call_made;

                    // Should the current task be skipped?
                    if (_self.ms_elapsed_since_last_call > _self.running_task.timeout)
                        // Time's up. Leave the task alone. Let other tasks start executing.
                        _self.detachTask(_self.running_task.id);
                }
            }, _self.interval);
    }
},

stopUpdater: function() {
    if (this.timer_id != null) {
        clearInterval(this.timer_id);
        this.timer_id = null;
    }

    this.queue = new GenericQueue();
},
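The usage examples above also call SSQ.notifyCompleted, and detachTask relies on SSQ.dq, neither of which appears in the snippets here; both are in the downloadable source. As a rough sketch of what they might look like given the code above (the bodies are my assumption, not the original implementation):

// Sketch (assumption): remove a queued call by id.
dq: function(id) {
    var found = -1;
    this.queue.for_each(
        function(index, element) {
            if (element !== undefined && element.id == id)
                found = index;
        });

    if (found >= 0)
        this.queue.delete_at(found);
},

// Sketch (assumption): a call reports its own completion, so the
// scheduler moves on without waiting for the timeout to expire.
notifyCompleted: function(id) {
    if (this.running_task != null && this.running_task.id == id)
        this.detachTask(id);
},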


Keep an eye on my blog for continued development and improvements, and download CallQueue from: http://code.msdn.microsoft.com/callqueue

Monday, November 17, 2008

My eye friendly Visual Studio dark theme

I am not sure how you feel about your IDE's look & feel. For the first few years it was alright for me. However, the more I used Visual Studio, the more trouble I had with my eyes, as well as the monotony of the same old white-background IDE. So I made a dark theme for Visual Studio that stopped it hurting my eyes. I tried to keep the syntax readable and made a careful selection of colors so that the important IDE benefits of syntax highlighting were not lost. I also avoided an absolutely black background, because that demands extra attention from your eyes, which were prepared to see nothing. So I chose a deep navy blue that does not hurt your eyes and gives them the impression that there may be a few things on the screen to read. You may not find it friendly to your own eyes, because I am kind of biased toward blue, since it's my favorite. You can download the theme from here.

[Screenshots: C# view, HTML view, and XML view]

The good thing about this theme is that it only overrides your text editor's fonts and colors; all other settings of your Visual Studio will remain the same. Before you apply this theme, you may want to export your current IDE settings so that you can get back to your old look anytime. To export your current settings and look & feel, use Tools > Import and Export Settings. Even if you forget to make a backup, you can reset your IDE settings through that wizard and get back the Visual Studio factory settings.

Friday, November 14, 2008

Building applications for Windows Azure

Windows Azure is an upcoming operating system for the cloud from Microsoft, announced on October 27 at PDC. Windows Azure provides developers with on-demand compute and storage to host, scale, and manage Web applications on the Internet through Microsoft data centers. Azure goes beyond what other providers, such as Rackspace's Mosso or Amazon's EC2, offer. First, it will be available with a complete suite of tools and technologies for building your next big cloud application. Second, the Azure platform's goal is to support all developers and their choice of IDE, language, and technology, meaning that you can use your favorite tools for all kinds of development, including Python, PHP, Ruby, Eclipse, and so on. It supports popular standards and protocols including SOAP, REST, and XML. Using the Windows Azure tools, developers can build, debug, and deploy to Windows Azure directly from their existing development environment.


I have written an article "Building applications for Windows Azure" which will walk you through the steps to build an application from scratch on the recently released Windows Azure CTP, Microsoft’s answer to cloud computing. This application will let users add tasks into their Todolists and track them at a later time. The objective of this application is not to use any local data storage like SQL database. Instead, it will store and retrieve data from the cloud which means no matter which language/platform you write your application on, it will be able to access the data. For instance, if you would like to develop an iPhone application which will be able to play with your saved tasks on the go, you will be able to do so. Hope you will enjoy the article.

Link: http://dotnetslackers.com/articles/aspnet/Building-applications-for-Windows-Azure.aspx

Client Perspective of Windows Azure Services Platform

Windows Azure was announced at PDC 2008 (Oct 27) and will hopefully be released in the middle of next year. You probably already know about Azure by now. If not, I would like to quote an intro from www.azure.com: The Azure Services Platform is an internet-scale cloud computing and services platform hosted in Microsoft data centers. The Azure Services Platform provides a range of functionality to build applications that span from consumer web to enterprise scenarios and includes a cloud operating system and a set of developer services. Fully interoperable through the support of industry standards and web protocols such as REST and SOAP, you can use the Azure services individually or together, either to build new applications or to extend existing ones.

Let us have a quick overview of the clients we usually use in our daily life, and then we will explore the potential of Azure and what is waiting for us down the road:

Windows Client

  • Full capabilities, power, rich UI, high performance
  • Ability to utilize local resources e.g. audio, camera.
  • Capable of blending different hardware and software resulting in amazing applications
  • Private data, reliable and fastest
  • Personal, trusted and full control

Web Client

  • Data accessibility, availability
  • Connect with devices, data services, friends
  • Sociability, ability to share
  • Interaction, collaboration, email, instant messaging
  • Searchable
  • Data security is questionable
  • Open formats and standards for data exchange

Mobile Client

  • Low capability compared to regular PC horsepower
  • Portable: palm reach
  • Smart device when powered by the web, e.g. search for restaurants nearby
  • Your 24/7 companion

Smart Client

Did we forget about Smart Clients? These applications are usually Windows Forms applications that take all the advantages of the Windows Client, including offline storage and utilization of local resources, as well as the goodness of internet connectivity. The term Smart Client was coined a few years ago, but for .NET developers pretty much every Windows Client is generally considered a Smart Client. The idea behind the Smart Client was to utilize full local computing capabilities and exploit the web's accessibility, availability, and openness through XML Web Services, helping developers build great software.

Mesh

An important part of Live Services is Mesh, which enables developers to build applications across the consumer devices physically close to their users. If you are at home, you can access your data through the Windows Client; if you are on the go, through the Mobile Client; and if you have Internet access only, you can still avail yourself of the service through the web. Mesh allows all those devices to get connected, exchange data, and stay synchronized. You can even add a Mac as a device in Mesh to work with. Data communication among devices through Mesh is secure, since the data transferred between them is encrypted.


The following screenshots show how you can sync your local PC using Mesh and how your friends can share files or work on files you gave them access to:

[Screenshots: sharing a file via a new post, and adding a folder to sync]

Goodness of all Clients

The Windows Azure Services Platform is actually an array of technologies: it consists of a set of tools and is extremely scalable on demand, powered by Microsoft datacenters and their ingenious virtualization technology. The core of this platform is to provide users with the best experience by taking the goodies of each client platform we use in our daily life. Combining the power of HTTP, XML, REST, and WebServices, Azure lets developers build cloud-enabled applications. The basic advantage of using open, standard protocols for communication between clients is decreased dependency on the tools, languages, or platform of the client. The client could be built in Ruby or Python, or it could be an iPhone.


Azure enables you to store your data in the cloud. No matter which client you used to work on your data, if you change your client, time, or place, you will be able to get back to the same data you worked on from other devices/OSes.

Why hosting cloud application with Microsoft?

Microsoft has 460 million Live users to date, backed by hundreds of thousands of servers in their datacenters. How many times have you seen Microsoft's sites down? Not many in my life. So their ability to host and manage super-scalable, high-traffic websites is not questionable at all. Who else could be a better host for your next big application than Microsoft?

Thursday, November 13, 2008

Cloudship: Membership Provider for the Cloud

Planning to move to the Azure Cloud, but already tied to the Membership API? I have recently written an article on Windows Azure which guides you through building a complete Membership provider library that existing applications can leverage to link to Microsoft's cloud platform Windows Azure with no friction. The goals of this project were to keep using the regular ASP.NET Login controls and existing Membership code, e.g. Membership.UpdateUser(), MembershipUser.ChangePassword().


Last but not least, one of the major goals was ease of use. No matter which suite of controls you use, ASP.NET AJAX or MVC, Cloudship can be leveraged as your Membership provider. To install it into your existing application, simply:

1. Reference the DLL
2. Insert a few lines inside web.config (see the sketch below)
3. You are good to go
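For step 2, the entry is a standard ASP.NET membership provider registration. The sketch below shows the general shape only; the provider name and type are hypothetical placeholders, so check the article for the exact entries:

<system.web>
  <membership defaultProvider="CloudshipMembershipProvider">
    <providers>
      <clear />
      <!-- The type name below is a placeholder, not the actual Cloudship type -->
      <add name="CloudshipMembershipProvider"
           type="Cloudship.MembershipProvider, Cloudship" />
    </providers>
  </membership>
</system.web>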

In this article you will learn how to implement your own Membership API, so you can start moving your application data to the Azure Cloud, the future of development and business with Microsoft. Here you go: Cloudship: Membership Provider for the Cloud.

Saturday, November 1, 2008

Fixing DevelopmentStorage's database cannot be found problem on Windows Azure

This could be a common problem for those who are not using SQL Express. If you run an Azure application, you may find that it looks for a SQL Express instance on your machine if you do not already have one. You may also see an "An error occurred while processing this request." error for this reason when you try creating tables from your models by StorageClient.TableStorage.CreateTablesFromModel. All you need to do is fire up Visual Studio and open the config file for DevelopmentStorage at C:\Program Files\Windows Azure SDK\v1.0\bin\DevelopmentStorage.exe.config. Now modify the connection string and the dbServer attribute of the service tag for Table, and save.

<connectionStrings>
  <add name="DevelopmentStorageDbConnectionString"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=DevelopmentStorageDb;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>

<appSettings>
  <add key="ClientSettingsProvider.ServiceUri" value="" />
</appSettings>

<developmentStorageConfig>
  <services>
    <service name="Blob" url="http://127.0.0.1:10000/" />
    <service name="Queue" url="http://127.0.0.1:10001/" />
    <service name="Table" url="http://127.0.0.1:10002/" dbServer="localhost\SQLExpress" />
  </services>
  ...


Restart Visual Studio and open the Azure project again; now you should be able to run DevelopmentStorage with the existing database installation on your PC.

Monday, October 27, 2008

jQuery intellisense in Visual Studio

Those of us who are excited about the news of jQuery integration into Visual Studio have started adopting jQuery to replace the ASP.NET AJAX client-side API. Microsoft also announced there will be a patch for Visual Studio which will support jQuery as well as IntelliSense for it. For the enthusiasts who just can't wait for it, here is how we can start developing with jQuery with full IntelliSense support inside Visual Studio 2008:

1. Download jquery-1.2.6-vsdoc.js

2. Inside your JavaScript files, add a reference to it by placing the following line at the top of the JavaScript file:

/// <reference path="jquery-1.2.6-vsdoc.js" />
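Once the reference is in place, Visual Studio offers IntelliSense for jQuery calls in that file. A quick sanity check (my example, not from the original post):

$(document).ready(function()
{
    // Typing $( or .hide( should now pop up jQuery documentation tooltips
    $('#divContent').hide();
});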


That's it. Enjoy!

Friday, March 28, 2008

Simple Form Validation - A Reflection based approach

Are you tired of placing multiple validation controls on a form? If, like me, you are bored with scenarios like the following, keep reading:

[Screenshot: a single TextBox with multiple validation controls attached]

A simple Email address validation can check whether:

  • The field is empty
  • The value is longer than the allowed limit
  • The Email address format is invalid
  • The Email address is already in use

The ordinary solution to this problem is placing multiple validation controls for a single TextBox. You can simplify it by replacing them all with a single CustomValidator. Our goal is to reduce the number of controls on the form to keep it simple. To do that, we have to write code for the CustomValidator that does it all. We also would like to write minimal code to validate the control without compromising manageability. Let us assume we write the following code inside the ServerValidate handler of that control:

protected void cvEmailAddress_ServerValidate(object source, ServerValidateEventArgs args)
{
    ValidationController.ValidateControl<ProfileValidator>(cvEmailAddress, ProfileValidator.Fields.EmailAddress.ToString(), args);
}

Let us declare a ValidationErrorResult object that contains error messages and text to display in the UI:

public sealed class ValidationErrorResult
{
    public string ErrorMessage { get; set; }
    public string Text { get; set; }
}

And an Attribute which would be used to tag the specific method responsible for validating a particular control:

[AttributeUsage(AttributeTargets.Method, Inherited = false, AllowMultiple = true)]
public sealed class ValidationMethodAttribute : Attribute
{
    public ValidationMethodAttribute(string fieldName)
    {
        this.FieldName = fieldName;
    }

    public string FieldName { get; private set; }
}

If you are already familiar with attribute-based programming, you will recognize that the attribute in this piece of code is ValidationMethod. We will soon see how to use it. The following is the method that checks the value and builds a list of ValidationErrorResult describing which rules failed. Notice that the ValidationMethod attribute carries the field name; no matter what the method is named, that field name is what lets the ValidationController find the method for validation.

[ValidationMethod("Email")]
public static List<ValidationErrorResult> ValidateEmail(object value)
{
    var email = value as string;
    var results = new List<ValidationErrorResult>();

    // Blank
    if (string.IsNullOrEmpty(email))
    {
        results.Add(new ValidationErrorResult()
        {
            ErrorMessage = "You did not provide an Email Address.",
            Text = "Cannot be left blank"
        });

        // No point running the remaining checks on a blank value
        return results;
    }

    // Length 128
    if (email.Length > 128)
        results.Add(new ValidationErrorResult()
        {
            ErrorMessage = "You exceeded the length limit.",
            Text = "Keep it less than 129 characters"
        });

    // Valid Email Address
    if (!Regex.IsMatch(email, "^[\\w\\.\\-]+@[a-zA-Z0-9\\-]+(\\.[a-zA-Z0-9\\-]{1,})*(\\.[a-zA-Z]{2,3}){1,2}$"))
        results.Add(new ValidationErrorResult()
        {
            ErrorMessage = "You provided an invalid Email Address.",
            Text = "Invalid Email Address"
        });

    // Is Already In Use (IsAlreadyInUse would typically query your user store)
    if (IsAlreadyInUse(email))
        results.Add(new ValidationErrorResult()
        {
            ErrorMessage = "This Email Address is already in use.",
            Text = "Already in use"
        });

    return results;
}

Here is the ValidationController, which goes through the validation class looking for the method whose attribute matches the field, and uses it to validate the control's value.

public class ValidationController
{
    public static List<ValidationErrorResult> Validate<T>(string fieldName, object value)
    {
        var type = typeof(T);
        var methods = type.GetMethods(BindingFlags.Static | BindingFlags.Public);

        // Find the single method tagged with a ValidationMethod attribute
        // whose FieldName matches the requested field.
        var method = methods.Single<MethodInfo>(delegate(MethodInfo m)
        {
            var attributes = (ValidationMethodAttribute[])m.GetCustomAttributes(typeof(ValidationMethodAttribute), false);
            return attributes.Length > 0 && attributes[0].FieldName == fieldName;
        });

        return (List<ValidationErrorResult>)method.Invoke(null, new object[] { value });
    }

    public static void ValidateControl<T>(CustomValidator validator, string fieldName, ServerValidateEventArgs args)
    {
        var results = Validate<T>(fieldName, args.Value);

        if (!(args.IsValid = !(results.Count > 0)))
        {
            validator.ErrorMessage = results[0].ErrorMessage;
            validator.Text = results[0].Text;
        }
    }
}

Sunday, March 2, 2008

Use your personal blog with Windows Live Writer

I'm very glad to tell you that your ".NET Research" personal blog is compatible with Windows Live Writer. You can compose and format posts and insert photos offline, then publish when you get online, entirely from this client without even opening the ".NET Research" site. Let us walk through the steps, assuming you have Windows Live Writer properly installed.

Step 1. Run Windows Live Writer and choose Weblog > Add Weblog account...

Step 2. Choose another weblog service and click Next:


Step 3. Now type http://dotnetbd.org as your Weblog Homepage URL and enter your credentials (here I used admin):


Now Windows Live Writer will download some necessary files to work offline and will present a blank white screen for you to write your first post! That's it. These simple steps will enable you to use this powerful tool with your ".NET Research" personal blog. Happy blogging!

Saturday, March 1, 2008

LINQ to Flickr

One of my colleagues, Mehfuz Hossain, developed a wonderful open source project which allows you to query Flickr photos with LINQ, and also lets you insert and delete photos directly to/from Flickr. Wondering how to extend LINQ in such an amazing way? It's easy by writing your own custom LINQ provider, which was not so easy until he came up with another handy open source project named LINQ Extender. He did all the expression-parsing work to ease our pain. Now you can build your own LINQ to Anything quite easily.

For a heads-up on LINQ Extender, he wrote an article here, and LINQ to Flickr, the open source project, is hosted at CodePlex.

Tuesday, February 5, 2008

A "transactional" generic DbHelper for LINQ to SQL

In LINQ to SQL, the data model of a relational database is mapped to an object model expressed in the programming language of the developer. When the application runs, LINQ to SQL translates the language-integrated queries in the object model into SQL and sends them to the database for execution. When the database returns the results, LINQ to SQL translates them back into objects that you can work with in your own programming language. You may want to make a data access layer that separates data operations from the business layer, like the following:

DbHelper.Insert<Student>(
    new Student()
    {
        FirstName = "Tanzim",
        LastName = "Saqib",
        Email = "me@TanzimSaqib.com",
        Website = "http://www.TanzimSaqib.com"
    }, true);    // Use Transaction?



To make use of such a transactional generic DbHelper, you might want to write a singleton DbHelper class like the following. Notice that the class is static, the DataContext it uses is private, and the context is only initialized from the connection string if it has not been already.



public static class DbHelper
{
    private const string CONNECTION_CONFIG_NAME = "StudentServerConnectionString";

    private static StudentServerDataContext _StudentServerDataContext = null;

    public static StudentServerDataContext GetDataContext()
    {
        if (_StudentServerDataContext == null)
            _StudentServerDataContext = new StudentServerDataContext(
                ConfigurationManager.ConnectionStrings[CONNECTION_CONFIG_NAME].ConnectionString);

        return _StudentServerDataContext;
    }

    public static void CleanUp()
    {
        _StudentServerDataContext.Dispose();
        _StudentServerDataContext = null;
    }

    // ... code edited to save space



On an application-wide error, or at application end, you can dispose of the context, which is where the CleanUp method is useful. To implement an Insert method, see the following. You will find that I have used a TransactionScope, which ensures the work inside it happens as a transaction: if any error occurs, the scope.Complete() method is never invoked, and the transaction is rolled back. TransactionScope has been available since .NET Framework 2.0.



public static void Insert<T>(T t, bool isTransactional) where T : class
{
    if (isTransactional)
    {
        using (var scope = new TransactionScope())
        {
            Insert<T>(t);

            // On any Exception, Complete() is never invoked,
            // so the transaction is automatically rolled back.
            scope.Complete();
        }
    }
    else
        Insert<T>(t);
}

public static void Insert<T>(T t) where T : class
{
    // Note: don't wrap the shared context in a using block here;
    // it is a singleton, and CleanUp() is responsible for disposing it.
    var db = GetDataContext();
    db.GetTable<T>().InsertOnSubmit(t);

    try
    {
        db.SubmitChanges();
    }
    catch (Exception)
    {
        // TODO: log the Exception
        throw;
    }
}

I did not show other methods as part of the CRUD implementation. The rest is left open for you to implement.

Monday, February 4, 2008

[New Article] 7 ways to do Performance Optimization of an ASP.NET 3.5 Web 2.0 portal

Web 2.0 applications are widely developed. These applications often work with third-party content: they aggregate it, make various uses of it, and turn it into something useful and meaningful for users. For the past few years developers have been engaged in such endeavors, and many of their websites have not addressed performance issues, resulting in an unpleasant experience for users.

Performance is a vast area and great results can never be achieved by a silver bullet. This article explores some of the key performance issues that can occur while developing a Web 2.0 portal using server side multithreading and caching. It also demonstrates model driven application development using Windows Workflow Foundation.

URL: http://dotnetslackers.com/articles/aspnet/SevenWaysToDoPerformanceOptimizationOfAnASPNET35Web20Portal.aspx

Tuesday, January 29, 2008

Write your own DOM friendly extension methods for HtmlElement in Volta


I know there are GetById and GetById<> methods on the Document object. But I often miss a method, which I feel should be in Volta, that iterates through child nodes and finds an element for me. Let us say there is HTML like the following:

<div id="divContainer">
    <b>Some text</b>
    <div id="firstDiv">
        <i>Some more text</i>
    </div>
    <div id="secondDiv">
        Okay, I gotta go now
    </div>
    <div anyAttribute="anyValue">
        Babye
    </div>
</div>

The most important thing is, I cannot get the last div by Document.GetById, because instead of an id I chose anyAttribute. So I wrote my own extension method, which can run on not only a Div but any HtmlElement, and can find the desired HtmlElement inside it by anyAttribute and anyValue. To make my intention clear, I'd like to show how I'd use that extension method:

var divContainer = Document.GetById<Div>("divContainer");
var anyDiv = divContainer.Find<Div>("anyAttribute", "anyValue");

if (anyDiv != null)
    anyDiv.InnerHtml += "guys!";

So, I'd like to call my extension method Find<> which will take the type I'm looking for (in this case a Div) and that HtmlElement should have an attribute "anyAttribute" that contains "anyValue". Here is how I make up the extension method:

public static class HtmlExtensions
{
    public static T Find<T>(this T parent, string attribute, string value)
        where T : HtmlElement
    {
        var element = parent.FirstChild;

        while (element != null)
            if (element.IsProper<T>(attribute, value))
                return element as T;
            else
                element = element.NextSibling;

        return null;
    }

    public static bool IsProper<T>(this DomNode element, string attribute, string value)
        where T : HtmlElement
    {
        if (element.GetType() == typeof(T) &&
            element.Attributes != null &&
            element.Attributes.GetNamedItem(attribute) != null &&
            element.Attributes.GetNamedItem(attribute).Value == value)
            return true;

        return false;
    }
}

This method iterates only one level deep. A multi-depth implementation can be done with a simple DFS, which is left to you; a rough sketch follows. Note one thing: I have called one extension method, IsProper, inside another extension method, and there is no harm in that. So this is how you can add your own extension methods to HtmlElement.
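Here is a minimal recursive sketch of that DFS, under the assumption that child nodes expose the same FirstChild/NextSibling API used above (this is my illustration, not part of the original post):

public static T FindDeep<T>(this HtmlElement parent, string attribute, string value)
    where T : HtmlElement
{
    var element = parent.FirstChild;

    while (element != null)
    {
        if (element.IsProper<T>(attribute, value))
            return element as T;

        // Depth-first: search this child's subtree before its siblings
        var child = element as HtmlElement;
        if (child != null)
        {
            var match = child.FindDeep<T>(attribute, value);
            if (match != null)
                return match;
        }

        element = element.NextSibling;
    }

    return null;
}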

Saturday, January 26, 2008

Appearing on Microsoft Volta team blog

Microsoft Volta team blogged about me and one of my articles: http://labs.live.com/volta/blog/Volta+How+To+Flickr+Widget.aspx


[New Article] ASP.NET AJAX Best Practices

While developing AJAX applications, we often carelessly hold on to bad practices whose effects are not very visible when the site is small in volume. But they often become severe performance issues on sites that make heavy use of AJAX technologies, such as Pageflakes or NetVibes.

With so many AJAX widgets in one page, small memory-leak issues can combine until the site crashes with the very nasty "Operation aborted" error. Lots of WebService calls and lots of iteration over collections mean that inefficient coding makes the whole site heavy: the browser eats up a lot of memory, burns costly CPU cycles, and ultimately delivers an unsatisfactory user experience. This article demonstrates many such issues in the context of ASP.NET AJAX.

http://www.codeproject.com/KB/ajax/AspNetAjaxBestPractices.aspx

Friday, January 25, 2008

HttpRequestFactory vs. XMLHttpRequest in Volta

HttpRequestFactory was designed for internal use by tiersplitting and was not supposed to be exposed as part of the Volta API, as Danny van Velzen from the Microsoft Volta team told me today. So it's better to use XMLHttpRequest instead, because the factory class might not show up in later releases. You will find this class in the Microsoft.LiveLabs.Volta.Xml namespace. Like its JavaScript counterpart, in this .NET version you can open a URL, specify the method name, and of course pass credentials. You can track the response text, XML, status code, and status text, and you can also abort.

To retrieve your content, you must subscribe to the ReadyStateChange event with an HtmlEventHandler, which you can find in the Microsoft.LiveLabs.Volta.Html namespace, and check the status code. If it is 200, which means "HTTP OK", you can take the ResponseText or ResponseXML. See this example:

string content = string.Empty;
var request = new XMLHttpRequest();

request.ReadyStateChange += delegate()
{
    if (request.Status == 200)
        content = request.ResponseText;
};

request.Open("POST", "http://tanzimsaqib.com/feed/", true);

However, you cannot fetch cross-domain content with XMLHttpRequest. The Volta compiler generates a client-side JavaScript XMLHttpRequest and lets developers write code in a .NET-friendly way. So I do not think there is any way to retrieve cross-domain content in Volta with it, which leaves us with the same old HttpRequest class.

Friday, January 18, 2008

[New Article] Building a Volta Control : A Flickr Widget

This is my first article based on the first CTP of Volta, considering its current limitations. You will see how you can create a Volta control that the compiler converts into an AJAX widget without requiring us to write a single line of JavaScript: http://dotnetslackers.com/articles/aspnet/BuildingAVoltaControlAFlickrWidget.aspx

Monday, January 14, 2008

ASP.NET AJAX Best Practices: Avoid String concatenation, use Array instead

Don't you think the following block of code was written with every possible good practice in mind? Any room for performance improvement?

function pageLoad()
{
    var stringArray = new Array();

    // Suppose there're a lot of strings in the array like:
    stringArray.push('<div>');
    stringArray.push('some content');
    stringArray.push('</div>');

    // ... code edited to save space

    var veryLongHtml = $get('divContent').innerHTML;
    var count = stringArray.length;

    for(var i=0; i<count; ++i)
        veryLongHtml += stringArray[i];
}



Well, as you see, the innerHTML of the div has been cached so that the browser does not have to access the DOM on every iteration through stringArray; the costlier DOM methods are avoided. But inside the body of the loop, the JavaScript interpreter has to perform the following operation:



veryLongHtml = veryLongHtml + stringArray[i];



And veryLongHtml contains quite a large string, which means the interpreter has to retrieve that large string and concatenate it with a stringArray element on every iteration. One very short yet efficient solution to this problem is to use the array's join method instead of looping:



veryLongHtml = stringArray.join(''); 



This is far more efficient than what we were doing, since it joins the array's smaller strings directly, which requires less memory.


How to solve: Server Controls can't be accessed in View's code-behind in ASP.NET MVC

It's still a long way to the final release of ASP.NET MVC; the one I've been using right now is just the December CTP. But like me, you might be experiencing this confusing problem: the server controls that you put in a View (ViewContentPage) cannot be found in the code-behind page. The reason is that Views don't have a backing designer code file. I believe it's just a bug, or they could not find time to fix it. I'm sure it will be fixed in one of the upcoming versions.

To work around this, switch to Solution Explorer, right-click on the View you are interested in, and choose Convert to Web Application. Now you will find the server controls in the code-behind file.

Saturday, January 12, 2008

ASP.NET AJAX Best Practices: Introduce function delegates

Take a look at the following loop. It calls a function in each iteration, and the function does some work. Can you think of any performance improvement?

for(var i=0; i<count; ++i)
    processElement(elements[i]);



Well, for a sufficiently large array, a function delegate may yield a significant performance improvement in the loop.



var delegate = processElement;

for(var i=0; i<count; ++i)
    delegate(elements[i]);



The reason for the improvement is that the JavaScript interpreter uses the function as a local variable and does not look up the function body in its scope chain on each iteration.

Friday, January 11, 2008

ASP.NET AJAX Best Practices: Introduce DOM elements and function caching

We have seen DOM caching before, and function delegation is also a kind of function caching. Take a look at the following snippet:

for(var i=0; i<count; ++i)
    $get('divContent').appendChild(elements[i]);

As you can figure out, the optimized code is going to be something like:

var divContent = $get('divContent');

for(var i=0; i<count; ++i)
    divContent.appendChild(elements[i]);

That is fine, but you can also cache a browser function like appendChild. Note that a detached appendChild loses its element context in most browsers, so the cached function must be invoked with call:

var divContent = $get('divContent');
var appendChild = divContent.appendChild;

for(var i=0; i<count; ++i)
    appendChild.call(divContent, elements[i]);

Thursday, January 10, 2008

ASP.NET AJAX Best Practices: Problem with switch

Unlike .NET languages or other compiled languages, the JavaScript interpreter cannot optimize a switch block. Especially when a switch statement is used with different types of data, it is a heavy operation for the browser due to the conversion operations that occur as a consequence, elegant as switch may be for decision branching.
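One common alternative (my suggestion, not from the original post) is to replace the switch with an object-literal lookup, which avoids the repeated comparisons and type conversions:

var handlers = {
    save:   function() { /* save action */ },
    remove: function() { /* remove action */ },
    cancel: function() { /* cancel action */ }
};

function handleAction(action)
{
    // One property lookup instead of a chain of case comparisons
    var handler = handlers[action];
    if (handler)
        handler();
}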

Wednesday, January 9, 2008

ASP.NET AJAX Best Practices: Avoid using Array.length in a loop

In one of my earlier posts, I talked about accessing DOM elements in a loop, but forgot to mention a very common performance issue in AJAX. We often write code like the following:

var items = []; // Suppose a very long array
for(var i=0; i<items.length; ++i)
    ; // Some actions

It can be a severe performance issue if the array is large. JavaScript is an interpreted language, so as the interpreter executes the code line by line, it checks the loop condition on every pass, and you end up accessing the length property every time. Where applicable, if the contents of the array do not change during the loop's execution, there is no need to access the length property each time. Store the length in a variable and use it in each iteration:

var items = []; // Suppose a very long array
var count = items.length;
for(var i=0; i<count; ++i)
    ; // Some actions

Tuesday, January 8, 2008

Make web.config work in Volta

Ever wondered how to make web.config work in the first Volta CTP release? Simply add a web.config file and add content to it? Unfortunately, that is not the case in Volta, at least in the first CTP. Five steps get it done:

  1. Add a web.config file
  2. Add content to it, simply by copying from another web.config file
  3. Right-click on web.config in the Solution Explorer, then choose Properties
  4. Set Build Action to Embedded Resource
  5. In your Volta Page Designer CS file, add the following line of code:
[assembly: VoltaFile("web.config")]



 

ASP.NET AJAX Best Practices: Avoid getters, setters

Make minimal use of setters and getters where possible. Such accessors look like .NET's beautiful properties, but they create more scopes for the JavaScript interpreter to deal with. Where applicable, try getting/setting the private variable directly rather than implementing getter and setter methods.
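A minimal illustration of the difference (my example, not from the original post):

// Accessor-based version: every get_/set_ call adds a function
// scope the interpreter must resolve.
function WidgetWithAccessors()
{
    var _title = '';
    this.get_title = function() { return _title; };
    this.set_title = function(value) { _title = value; };
}

// Direct-field version: plain property access, no extra scopes.
function Widget()
{
    this.title = '';
}

var w = new Widget();
w.title = 'Hello'; // direct access instead of w.set_title('Hello')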

Monday, January 7, 2008

ASP.NET AJAX Best Practices: Reduce scopes

It's not very common, but if you ever encounter code like the following, be sure it's a very bad practice. Introducing more scopes is a performance problem for the JavaScript interpreter: each one adds a new rung to the scope-chain ladder. See the following sample:

function pageLoad()
{
    scope1();

    function scope1()
    {
        alert('scope1');
        scope2();

        function scope2()
        {
            alert('scope2');
        }
    }
}



Introducing more scopes forces the interpreter to walk through more sections of the scope chain it maintains during code execution. So unnecessary scopes reduce performance, and they are bad design too.

Sunday, January 6, 2008

ASP.NET AJAX Best Practices: Avoid using your own method while there is one

Avoid implementing your own getElementById method, which causes script-to-DOM marshalling overhead. Each time you traverse the DOM yourself looking for a certain HTML element, the JavaScript interpreter has to marshal between script and DOM. It's always better to use the document object's getElementById. So before you write a function, make sure the same functionality can't be achieved with a built-in one.
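To make the cost concrete, here is a hand-rolled lookup next to the built-in call (my illustration, not from the original post):

// Hand-rolled: every childNodes/id access below crosses the
// script-to-DOM boundary, which is what makes it slow.
function myGetElementById(node, id)
{
    if (node.id == id)
        return node;

    for (var i = 0; i < node.childNodes.length; ++i)
    {
        var found = myGetElementById(node.childNodes[i], id);
        if (found != null)
            return found;
    }

    return null;
}

// Built-in: the browser does the same search natively in one call.
var divContent = document.getElementById('divContent');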

Friday, January 4, 2008

ASP.NET AJAX Best Practices: Careful with DOM element concatenation

It's a very common bad practice. We often iterate through an array, build HTML content, and keep concatenating it into a certain DOM element. Every time the block of code inside the loop executes, you create the HTML markup, discover a div, access its innerHTML, and for the += operator you discover the same div again, access its innerHTML again, and concatenate before assigning.

function pageLoad()
{
    var links = ["microsoft.com", "tanzimsaqib.com", "asp.net"];

    $get('divContent').innerHTML = 'The following are my favorite sites:';

    for(var i=0; i<links.length; ++i)
        $get('divContent').innerHTML += '<a href="http://www.' + links[i] + '">http://www.' + links[i] + '</a><br />';
}



However, as you know, accessing a DOM element is one of the costliest operations in JavaScript. So it's wise to concatenate all the HTML content in a string and assign it to the DOM element once, at the end. That saves the browser a lot of hard work.



function pageLoad()
{
    var links = ["microsoft.com", "tanzimsaqib.com", "asp.net"];
    var content = 'The following are my favorite sites:';

    for(var i=0; i<links.length; ++i)
        content += '<a href="http://www.' + links[i] + '">http://www.' + links[i] + '</a><br />';

    $get('divContent').innerHTML = content;
}

Thursday, January 3, 2008

ASP.NET AJAX Best Practices: Use more "var"

Skipping "var" can result in wrong calculations as well as mistakes in logic control, and the JavaScript interpreter finds it hard to determine a variable's scope when var is not used. Consider the following simple JavaScript code:

function pageLoad()
{
    i = 10;
    loop();
    alert(i); // here, i = 100
}

function loop()
{
    for(i=0; i<100; ++i)
    {
        // Some actions
    }
}



Here you see that the loop uses the same variable i used earlier in pageLoad, so it produces a wrong result. Unlike .NET code, in JavaScript variables can leak across method calls this way. So don't confuse the interpreter; use "var" in your code:



function pageLoad()
{
    var i = 10;
    loop();
    alert(i); // here, i = 10
}

function loop()
{
    for(var i=0; i<100; ++i)
    {
        // Some actions
    }
}

Make HTML controls discoverable in Volta Control

 

When a Volta control is rendered, the ID attribute of the generated HTML is changed to something like _vcId_1_DivName, which is inconvenient to find from code. The ID attribute stays the same in the case of a Volta Page, so there an element is discoverable by ID like this:

Div divContent = Document.GetById<Div>("divContent");


However, if you add HTML controls to the control in code, like the following, the ID is not changed during rendering:



public VoltaControl1() : base("VoltaControl1.html")
{
    InitializeComponent();

    Button btnClick = new Button();
    btnClick.InnerText = "Click!";
    btnClick.Id = "btnClick";
    this.Add(btnClick);
}


If you don't prefer this approach and seriously want to write your own HTML in the control's HTML page, you might find the following snippet useful. But remember, in this case you will use the name attribute of the HTML element instead of the ID.



// Usage: var element = GetElementByName(Document.GetElementsByTagName("div"), "divWidget");
private HtmlElement GetElementByName(HtmlElementCollection elements, string name)
{
    foreach (var element in elements)
    {
        DomAttribute nameAttribute = element.Attributes.GetNamedItem("name");
        if (nameAttribute != null)
            if (nameAttribute.Value == name)
                return element;
    }

    return null;
}

Wednesday, January 2, 2008

Making cross domain AJAX call using Volta

Making a cross-domain AJAX call in Volta is a piece of cake. The Volta compiler generates the necessary client code to make it work. Here is a snippet that makes an AJAX call to some URL and fetches data:

public void DownloadPhotos()
{
    IHttpRequest request = HttpRequestFactory.Create();
    request.AsyncSend("POST", URL, string.Empty,
        delegate(string response)
        {
            OnPhotosLoaded(new PhotosLoadedEventArgs(response));
        });
}



Both the IHttpRequest and HttpRequestFactory classes can be found in the Microsoft.LiveLabs.Volta.MultiTier namespace. The AsyncSend method performs the asynchronous call and calls back the delegate, where the OnPhotosLoaded event is fired to notify subscribers that the data has just arrived.

Tuesday, January 1, 2008

Namespace Alias Qualifier - to get rid of crazy coding

Let us say somebody in your company loves crazy coding and really does not bother about how his or her code affects others. He or she declares a class named System with a field named Console, and now wonders why a simple Console.WriteLine does not compile:

class System
{
    int Console = 10;

    static void Main(string[] args)
    {
        Console.WriteLine("Hello World!"); // Compile time error
        System.Console.WriteLine("Hello World!"); // Compile time error
    }
}


Making use of global::System.Console solves the problem:



class System
{
    int Console = 10;

    static void Main(string[] args)
    {
        global::System.Console.WriteLine("Hello World!");
        global::System.Console.WriteLine("Hello World!");
    }
}

However, have you ever thought of a scenario where the same class name exists under two different namespaces? Here comes the role of the Namespace Alias Qualifier. In a "using" declaration, aliases can be assigned to namespaces so that they can serve as shorthand later in the code and, most importantly, resolve ambiguity:

using sys = System;
using mine = MyProject.Process;
...
...
sys.Console.WriteLine(mine.Console["width"]);