Intro

If you are a SharePoint developer or administrator, you know it can take a long time to deploy SharePoint 2010, and it is difficult to remember everything you have to configure. An easy deployment solution is available to quickly deploy SharePoint 2010.

The Solution

The easy deployment solution is the CodePlex project AutoSPInstaller. This project uses the power of PowerShell to automate the deployment and configuration of SharePoint 2010. The PowerShell scripts allow you to deploy SharePoint 2010 with prerequisites, service packs and updates, Forefront Security, Language Packs, Office Web Apps and the PDF iFilter with icon.

For a detailed guide to using the AutoSPInstaller scripts, see Tobias Lekman's post: http://blog.lekman.com/2010/11/automated-sharepoint-2010-installations.html

These scripts allow you to configure SharePoint 2010 once according to your requirements and reuse that configuration over and over again.

Developer Setup

What about a basic SharePoint 2010 setup for development? Well, Microsoft created an easy setup script that installs the following on a machine or VM:

  • SharePoint Server 2010 + Pre-requisites (Standalone)
  • Visual Studio 2010 Ultimate Edition
  • Silverlight 4 Tools for Visual Studio
  • Expression Studio 4 Ultimate
  • Open XML SDK
  • Visual Studio SDK
  • Visual Studio SharePoint Power Tools
  • Office 2010 Professional Plus
  • SharePoint Designer 2010
  • Visio 2010

The download location for SharePoint 2010 Easy Setup Script: http://www.microsoft.com/download/en/details.aspx?id=23415

Hope these scripts make your life easier as a SharePoint 2010 Developer and Administrator.

Cheerio!

Intro
Today I’m looking at the new T-SQL function IIF that is available in SQL Server 2011 “Denali”. IIF stands for “Inline IF” and is a shorthand way of writing a CASE expression in T-SQL.
MSDN Description
Returns one of two values, depending on whether the Boolean expression evaluates to true or false.
Syntax Format: IIF ( boolean_expression, true_value, false_value )
The first parameter is a Boolean expression, e.g. Age > 21. The second parameter is the return value if the expression is true, and the third parameter is the return value if the expression is false.
TSQL Example
Basic example for IIF function:
DECLARE @age INT = 18;
SELECT IIF(@age > 21,'Allowed','Not allowed');
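
For comparison, here is the equivalent CASE expression that the IIF call above is shorthand for:
DECLARE @age INT = 18;
SELECT CASE WHEN @age > 21 THEN 'Allowed' ELSE 'Not allowed' END;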

You can also nest IIF calls, up to a maximum of 10 levels. This is because IIF is translated to a CASE expression, which has the same nesting limitation.
DECLARE @age INT = 28,
        @firsttime BIT = 1;
SELECT IIF(@age > 21,
           IIF(@firsttime = 1, 'Free drink', 'No free drink'),
           'Not allowed');
 

Practical Example

My practical examples use the AdventureWorks2008R2 database for Denali, which is available here. Let's look at a practical example of the IIF function:
SELECT pdt.Name,
       IIF(SUM(piy.Quantity) < pdt.SafetyStockLevel, 'Low Stock', 'Normal Stock') AS 'Stock Status'
  FROM [AdventureWorks2008R2].[Production].[Product] pdt
  LEFT OUTER JOIN 
       [AdventureWorks2008R2].[Production].[ProductInventory] piy ON pdt.ProductID = piy.ProductID
 GROUP BY pdt.Name, pdt.SafetyStockLevel

As you can see, the IIF function is a lot easier to use than a CASE expression and helps to make your T-SQL code shorter and more understandable. Let me know what you think about this function addition in SQL Server 2011 Denali, or if you have any questions.

Cheerio!

Intro
As we all wait in excitement for the final release of SQL Server “Denali”, I’m going to look at the new T-SQL functions it brings for developers. The first function is EOMONTH, which stands for “End Of MONTH”.
MSDN Description
Returns the last day of the month that contains the specified date, with an optional offset.
Syntax Format: EOMONTH ( start_date [, month_to_add ] )
The start date specifies the date for which to return the last day of the same month. Month to add is an optional parameter for adding months to the specified start date.
TSQL Example
Basic example for EOMONTH function:
DECLARE @start_date DATETIME = '08/10/2011'; 
SELECT EOMONTH (@start_date) AS Result; --Result: 2011-08-31 00:00:00.000

As you can see, when I specify 10 August 2011 the last date of the month is 31 August 2011. You can specify the start date as a DATETIME or as a VARCHAR value; EOMONTH will do an implicit conversion of the VARCHAR to DATETIME.
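
For example, passing the date as a VARCHAR literal gives the same result (assuming a date format setting where '08/10/2011' reads as month/day/year):
SELECT EOMONTH('08/10/2011') AS Result; --Result: 2011-08-31 00:00:00.000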

Example to get next month last date:
DECLARE @start_date DATETIME = '08/10/2011'; 
SELECT EOMONTH (@start_date, 1) AS Result; --Result: 2011-09-30 00:00:00.000

By adding the optional parameter of 1 to the EOMONTH function, you ask for the last date of the next month. As the example shows, the result is 30 September 2011 for the input of 10 August 2011. If I change the parameter to 2, I get the last day of the second month after the specified date.

Example to get previous month last date:
DECLARE @start_date DATETIME = '08/10/2011'; 
SELECT EOMONTH (@start_date, -1) AS Result; --Result: 2011-07-31 00:00:00.000

This example shows how to get the last date of the previous month from the specified input date: you simply make the optional parameter negative. The input was again 10 August 2011 and the result of the EOMONTH function was 31 July 2011.

Practical Example

My practical examples use the AdventureWorks2008R2 database for Denali, which is available here. Let's look at a practical example of the EOMONTH function:
DECLARE @begin_date DATETIME = '01/01/2007',
        @end_date DATETIME;
SELECT  @end_date = EOMONTH(@begin_date, 2); --Result: 2007-03-31 00:00:00.000

SELECT SUM(TotalDue) AS 'First Quarter Purchase Total'
  FROM [AdventureWorks2008R2].[Purchasing].[PurchaseOrderHeader]
 WHERE OrderDate >= @begin_date
   AND OrderDate <= @end_date

In this example I calculate the first-quarter purchase total for the year 2007. I specify a start date and use the EOMONTH function to calculate the last date of the quarter. It is then easy to sum up the TotalDue column for records between the begin date and the end date.

As you can see, the EOMONTH function makes it a lot easier to get the last day of a month, and it helps to make your T-SQL code shorter and more understandable. Let me know what you think about this function addition in SQL Server 2011 Denali, or if you have any questions.

Cheerio!

I found the perfect solution for UI mock-ups: PowerMockup. There is a whole bunch of offline and online tools for creating UI mock-ups. These tools are great, but clients are not always happy with them; they usually see them as something new they have to learn.


With PowerMockup everything happens inside Microsoft PowerPoint. The client is happy because it's a tool they know. You design inside PowerPoint and then just send the slides to clients so they can view your designs. Clients can also move the elements around with or without having PowerMockup installed, but if they don't have PowerMockup they cannot add new elements.

The Toolset

Let me show you a couple of screenshots of PowerMockup. Firstly, you need to go and download PowerMockup. You get a trial period, and the cost to buy as of 27 August 2011 is:

  • 1 User = $39.95
  • 5 Users = $119.95
  • 10 Users = $199.90

When you install PowerMockup you will see a new menu option in PowerPoint.

Menu

Click the Show Stencil Library button to view all the stencil items you can use for your UI mock-ups. The stencil library appears on the right-hand side of PowerPoint.

Stencils

The stencil library contains various items to use, grouped as Custom Shapes, Containers, Graphics, Icons, Markup, Navigation and Text. You can create your own items and add them to the stencil library, which you can export and import. PowerMockup also provides a nifty little search box at the top to quickly find a specific item.

Here is a rough example design I did to show some of the PowerMockup items.

BlogDesignExp

If you have a chance, take a look at this great add-on. You will definitely enjoy it, and I have had clients who enjoyed the experience of viewing their UI designs inside PowerPoint.

Cheerio!


I found myself wandering around the Microsoft Research website the other day and found the Trinity project. Trinity is a graph database and computation platform over a distributed memory cloud. Trinity currently has a release package of version 0.2 available for download. If you want to find out more details, visit the official site: http://research.microsoft.com/en-us/projects/trinity/default.aspx

Key features of Trinity

  • Uses a hypergraph data model
  • Distributed: deployed to one or more machines
  • Memory-based graph store with rich database features
  • Highly concurrent online query processing
  • ACI transaction support
  • Parallel graph processing system

The Interest

I never really took notice until I read through the documentation and understood the practical usage. Graph databases are used by all the big companies like Microsoft, Google and Facebook. A graph database is a database that uses graph structures with nodes, edges and properties to represent and store information.

GraphDatabase_PropertyGraph

On Wikipedia you will find a whole list of Graph databases implementations: http://en.wikipedia.org/wiki/Graph_database
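
To make the idea concrete, here is a tiny sketch of a property graph data model in C#. This is only an illustration of the concept, not Trinity's actual API:

using System.Collections.Generic;

// Nodes and edges both carry an arbitrary property bag.
class Node
{
    public long Id;
    public Dictionary<string, object> Properties = new Dictionary<string, object>();
    public List<Edge> Edges = new List<Edge>();
}

class Edge
{
    public Node From;
    public Node To;
    public string Label; // e.g. "knows" or "works_at"
    public Dictionary<string, object> Properties = new Dictionary<string, object>();
}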

Where to now?

You might ask yourself why I am writing this entry about Trinity. Well, for starters, all developers who use any Microsoft technology should take note of Trinity. Trinity can help to better structure information on your next project. I can see how Trinity could be used alongside SharePoint and SQL Server. It would be interesting to see how you might be able to move existing SQL Server data into Trinity. I can also see how BI developers could take various data sources and structure the data inside Trinity to do their magic more easily. I also believe that Trinity might become the service on Azure for graph database usage.

Since I’m a service integration person, I find it difficult to see how this will benefit a service integration project or be used alongside BizTalk. I might be wrong.

I think Trinity has a bright future ahead. Hope Microsoft does not kill off this research project. Let me know what you think about graph databases and specifically the Trinity project.

 

Cheerio!


My latest presentation was at the Microsoft Devs4Devs event, where I discussed some of the new T-SQL functions that are available in SQL Server 2011 “Denali”. Firstly, I want to thank Dave Russell from Microsoft for the opportunity. Thanks must also go to all the community members who attended the day and for your feedback.

My slides are available here.

My T-SQL code snippets are here.

See you guys at the next community event.

Cheerio!

I had a very interesting experience trying to set up Lync Server 2010 for my latest project, using Lync Server 2010 with a SharePoint 2010 environment. It is a very powerful combination for user collaboration.

Anyway, I got a very strange error when publishing my Lync topology:

The existing topology identifies serverA.domain as the Central Management Store, but the topology that you are trying to publish identifies serverB.domain as the Central Management Store. The Central Management Stores must match before the topology can be published.

The white blotches on the images are server FQDNs that I needed to hide.

Deployment Error

The reason for this might be that you specified the wrong FQDN in a previous attempt to publish the Lync topology. Firstly, make sure you use the correct FQDN for the server that is hosting the Central Management Store.

Open up Lync Server Management Shell and type the following command to get the currently registered Central Management Store location:

Get-CsConfigurationStoreLocation

Command Get Store

To remove the registered Central Management Store location type in the following command:

Remove-CsConfigurationStoreLocation

Command Remove Store
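
Publishing the topology again will normally register the correct location for you, but if you want to set it yourself from the shell there is also a Set-CsConfigurationStoreLocation cmdlet. Something like the following should work, where serverA.domain stands in for your actual Central Management Store server:

Set-CsConfigurationStoreLocation -SqlServerFqdn "serverA.domain"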

After running these commands you can attempt to publish your Lync Topology again and see the success message.

Publish Success

Hope this helps you fix Lync topology publishing problems.

Cheerio!

Node.js has released an exe version that can be run on Windows to execute JavaScript on the server. The current version is 0.5.2. There are various articles on the internet about what Node.js is and how to use it.

Microsoft has recently partnered with Joyent to get Node.js running on Windows. Maybe in the future it will run on the Microsoft Azure platform (my speculation).

To get started, head over to the Node.js site and download the exe. Create a directory to put the exe in.

Folder Directory

Create a folder next to the exe for your first Node.js project, like Hello. Inside the Hello folder, create a server.js file for your first web server application.

Here is a very basic example

Server Code
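
In case the screenshot is hard to read, the standard hello-world server from the Node.js documentation looks something like this (the address and port are whatever you choose to put in your source file):

var http = require('http');

// Respond to every request with a plain-text greeting.
http.createServer(function (request, response) {
    response.writeHead(200, { 'Content-Type': 'text/plain' });
    response.end('Hello World\n');
}).listen(8080, '127.0.0.1');

console.log('Server running at http://127.0.0.1:8080/');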

To run the Node.js example open up PowerShell and run the following command:

.\node.exe .\Hello\server.js

Run in Powershell

Now open your favourite browser and browse to the URL you specified in the source file.

Browser Display

That is it! Your first Node.js application running from Windows. Very easy.

Here are some additional links to read:

http://nodebeginner.org/

http://howtonode.org/

http://nodeguide.com/

https://github.com/joyent/node/wiki/

I will follow up later with a full-blown HTML application.

Cheerio!

Here are a couple of tips and tricks for Oracle VirtualBox. At the time of writing, the latest version of VirtualBox is 4.1. The VirtualBox commands that follow are run in Windows but are more or less the same in Linux environments.

Tip 1 – Clone existing Virtual disk

On the command line, go to the path where VirtualBox is installed, because you are going to need access to the program VBoxManage. Windows command to change path: cd "C:\Program Files\Oracle\VirtualBox"

Command to clone an existing vdi file:

VBoxManage clonehd "path_to_source_image.vdi" "path_to_destination_image.vdi"

From version 4.1 there is a menu option to clone existing images in the VirtualBox Manager GUI.

Tip 2 – Clone existing Virtual disk to raw format

Cloning an existing image file to raw makes a copy of the image without the compression of the vdi format.

Command to clone to raw file:

VBoxManage clonehd --format RAW "path_to_source_image.vdi" "path_to_destination_image.raw"

Tip 3 – Convert raw image to vdi format

If you have a raw image, you can convert it to the compressed vdi file format.

Command to convert from raw to vdi:

VBoxManage convertfromraw --format VDI "path_to_source_image.raw" "path_to_destination_image.vdi"

Tip 4 – Recompress a vdi file

After using an image file for a while it will keep allocating disk space if dynamic allocation is used, but you can reclaim some of that used disk space.

Inside the image you need to remove unused data and use a tool like CCleaner in Windows images to clear temp files.

Then you need to defragment the hard drive inside the image; do this twice. I use the tool Defraggler, which is much better than the default defrag tool of Windows.

After that, you can use the CCleaner drive wipe tool to wipe free space only, or use a similar tool. This will replace all free space with zero bits.

After that, shut down the image and run the following command to recompress the vdi file: VBoxManage modifyhd "path_to_target_image.vdi" --compact

This command will remove all those zero bits from the image file. After running the above command you will see that your vdi disk allocation has been reduced. I have successfully shrunk a vdi file from 70GB to 41GB.

Tip 5 – Host VMware images

Firstly, you need to uninstall the VMware Tools inside the guest image.

Inside VirtualBox, mount the VMware image (.vmdk file) as an IDE disk, not SATA (the default when using the VirtualBox wizard).

Make sure the .vmdk filename does not contain extra '.' characters.

After that you should be able to run the virtual disk.

 

Hope all these tips help you run VirtualBox more efficiently and make your life easier. If you use any other VirtualBox commands regularly, let me know!

Cheerio!

I had the pleasure of presenting at the SA Dev workshop on 27 July 2011. Thanks to everyone who attended, and please continue to support the local communities.

My discussion was about web application UI testing with open source tools and the Visual Studio testing tools. More specifically, I covered AutoTest.Net for C# unit testing, QUnit for JavaScript testing, and Web Tests and Load Tests in Visual Studio 2010.

Here are the links to my slides and demo application:

Web UI Testing Slides

World of Football Demo Application

Any feedback is welcome and thanks again!

Cheerio!

I recently worked on a project where I developed a WCF web service that is exposed via https and net.tcp. The web service does all the database calls and business logic. Here are some lessons and tips for developing with WCF over net.tcp.

When I used the net.tcp channel to communicate with the web service, I often got the following error:

The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:01:00'.

This is the default error message that you will get from WCF when something went wrong but the WCF channel does not know exactly what error occurred.

What to do when you receive this message:

  1. Increase the ReceiveTimeout and SendTimeout on the binding used by your web service to 5 minutes. The default values of the other timeout properties are almost always correct. Five minutes is very generous and will usually eliminate actual timeout issues.
  2. The usual suspects for this error are MaxBufferSize, MaxBufferPoolSize, MaxReceivedMessageSize and MaxArrayLength, whose default size (65536) is too small. Increase them by adding a 0 at the end (655360) to see if the error disappears, then reduce the size in small increments to find the correct value (see the config sketch after this list).
  3. If your web service method returns a big message, or a generic array or list with a large item count and a big object graph, you have to increase the value of the MaxItemsInObjectGraph property in the dataContractSerializer element. You need to create a service behaviour (service side) and an endpoint behaviour (client side) with the dataContractSerializer element; you will then see the MaxItemsInObjectGraph property.
  4. To speed up communication between service and client, you can switch the TransferMode on the binding from Buffered to Streamed.
  5. When you call the web service from a .NET client, avoid calling your code inside a using statement. Rather use try-finally, where you can call Close on the client object, and if the communication channel has faulted you can call Abort on the client object (see the sketch after this list).
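
Here is a rough app.config sketch covering points 1 to 3. The binding and behaviour names and the exact quota values are placeholders that you would tune for your own service:

<bindings>
  <netTcpBinding>
    <binding name="bigMessageTcp"
             receiveTimeout="00:05:00"
             sendTimeout="00:05:00"
             maxBufferSize="655360"
             maxBufferPoolSize="655360"
             maxReceivedMessageSize="655360">
      <readerQuotas maxArrayLength="655360" />
    </binding>
  </netTcpBinding>
</bindings>
<behaviors>
  <serviceBehaviors>
    <behavior name="bigGraphBehavior">
      <dataContractSerializer maxItemsInObjectGraph="655360" />
    </behavior>
  </serviceBehaviors>
</behaviors>

And here is a sketch of the close/abort pattern from point 5, assuming a generated proxy class called UserServiceClient with a DoWork operation (both names are placeholders):

var client = new UserServiceClient();
try
{
    client.DoWork();
}
finally
{
    // Never call Close on a faulted channel; Abort it instead.
    if (client.State == CommunicationState.Faulted)
        client.Abort();
    else
        client.Close();
}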

These lessons will hopefully make your net.tcp development experience easier. If you have any further tips or lessons, please leave a comment.

Cheerio!

Here is the talk that I presented at Devs4Devs. Thanks to Microsoft for the opportunity, and thanks to everyone who attended.

At the bottom are the links to my presentation and sample code. Before you can use the sample, you need to set up Windows Server AppFabric Cache and the AdventureWorks database.

Note also that the web service gets deployed to IIS when you build the project.

Here are the links:

AdventureWorks Database

http://msftdbprodsamples.codeplex.com/

Sample Application

http://dl.dropbox.com/u/4406115/Devs4Devs%202011/AdventureService.zip

Presentation Slides

http://dl.dropbox.com/u/4406115/Devs4Devs%202011/WCF%20AppFabric%20Cache.pptx

Any feedback or questions are welcome. Until the next talk!

I was busy going through some SQL adapter walkthroughs in BizTalk 2010 when something very strange happened.

I was setting up the SQL adapter (not the WCF-SQL adapter) and came to the wizard screen where I needed to select between the Select and Stored procedure options.

I happily decided on the Select option. On the next screen you get a text area to write your SQL statement; mine was just a simple select from a small table. I clicked the Next button and, bang, the whole wizard dialog disappeared. No SQLService.xsd was generated. I was confused and thought I had done something wrong. I went over the steps like a crazy man.

Finally I restarted the Visual Studio 2010 IDE and tried again. After the restart everything just worked. I still don't know why this happens, or whether it is a Visual Studio IDE or a SQL Adapter Wizard issue.

Lesson learned – if something goes wrong in the Visual Studio IDE for no reason, just restart the IDE and try again!

Cheerio!

With all the controversy about Reflector going commercial, people had to start looking for other solutions. You know that Red Gate tools are very expensive.

So, the first tool I want to show you is ILSpy from SharpDevelop. It is a new tool, which means development is still in progress. It follows Reflector's principle of keeping it simple. Here is a screenshot:

ILSpy

ILSpy Features

  • Assembly Browsing
  • IL Disassembly
  • Saving of resources
  • Search for types/methods/properties
  • Hyperlink-based type/method/property navigation
  • Base/Derived types navigation
  • Navigation History

ILSpy Roadmap

  • Improve the decompiler
  • Assembly Lists
  • Find References
  • Improve search performance
  • Save Assembly as C# Project
  • Debugger
  • Bookmarks
  • Extensibility via addins
  • Find Usage (of type/method/property)

The only shortcoming is the lack of all those nice plug-ins that Reflector had. I believe the authors of those plug-ins will port them over to ILSpy as soon as it provides extensibility via add-ins. Please download the tool and provide feedback.

On another note, if you are using ReSharper: ReSharper 6, which is currently under development, will include a decompiler. Go and check out the following article on the JetBrains site: Resharper 6 Bundles Decompiler, Free Standalone Tool to Follow

Cheerio!

I’m going to tell you about the perfect developer laptop environment setup. It does not matter whether you are a Linux or Windows developer; maybe Mac users will also find this interesting.

I’m mainly a .NET developer. Sometimes I dabble in Node.js (so cool!) on Ubuntu. I also use Ubuntu for Git. My main technology focus for this year is ASP.NET MVC, SharePoint 2010 and Node.js. I sometimes use BizTalk as well.

Firstly, let's discuss my hardware requirements. I need the following laptop spec (each item is detailed below):

  • Intel Core i7 CPU
  • 12GB RAM
  • 32GB SSD as the main hard drive
  • 17” screen
  • Nvidia GeForce 500M GPU
  • Lacie 500GB USB 3.0 external hard drive

I’m going to use this laptop to set up a minimal Ubuntu OS with VMware or VirtualBox on top. I’m going to create development VM images on the external hard drive, which will make use of that nice USB 3.0 performance.

Hardware in Detail


The Intel Core i7 CPU has great benefits such as 4 cores with Hyper-Threading, Intel Turbo Boost Technology and Intel HD Graphics. This CPU provides enough horsepower to run multiple VMs.

The 12GB of RAM is necessary to run multiple VMs. As an example, SharePoint 2010 requires 8GB of RAM to run, and running Visual Studio for development can easily consume 4GB of RAM.

The 32GB SSD is the main hard drive on which Ubuntu and the virtualization applications run. An SSD gives you very high IOPS, and you don't need a lot of capacity for a minimal installation of Ubuntu.

The 17” screen is just a personal choice that gives you enough screen real estate for development.

The Nvidia GeForce 500M GPU provides some nice features that certain graphics-intensive applications benefit from; the virtualization applications have started to support 3D acceleration inside VMs. A side interest for me is the ability to develop with Nvidia CUDA and OpenCL. Nvidia also provides Optimus technology, which intelligently switches between the integrated and discrete GPU for better battery life.


The Lacie 500GB USB 3.0 drive is where all my VMs will be saved. The drive runs at 7200 rpm and can deliver a transfer rate of up to 110MB/s via the USB 3.0 interface. The 500GB size is more than enough, and you can get an additional external 2TB drive to back up the VMs.

Minimum Ubuntu Installation

Firstly, go and download the minimal Ubuntu installation CD ISO. You can install it via CD-ROM or USB device. Install Ubuntu as required by following the installation guide. At the end of the installation you will be given a command prompt inside Ubuntu. Make sure you are connected to the internet via a network cable; this is required to download the additional packages.

Now we want to enable some of the fancier Ubuntu features, because we don't want to be stuck at the command prompt forever. Here is what we are going to install.

  • Minimal Gnome
  • Wireless Networking
  • Chrome Browser
  • Ubuntu Theme
  • VirtualBox
  • VMware Player

Type the following to install the minimal gnome. This will install a graphical environment.

sudo apt-get install gnome-panel gdm gnome-terminal

Type the following to install wireless networking. This will give you a battery monitor and an icon to configure wireless networking. Additionally, you get the hibernate option on the shutdown menu.

sudo apt-get install network-manager network-manager-gnome gnome-power-manager hibernate

Let’s install the Chromium browser (the open-source version of Chrome). Type the following.

sudo apt-get install chromium-browser flashplugin-installer

The default Gnome theme looks ugly. Type the following to enable the Ubuntu theme.

sudo apt-get install ubuntu-artwork

After that I would recommend you run the following commands.

sudo apt-get update

sudo apt-get upgrade

sudo reboot

These commands make sure the laptop has all the updates, and after that we reboot it. After the reboot you will get the login dialog that you use to log in to the desktop. Once you are on the desktop, open a terminal so that we can install VirtualBox or VMware Player. The virtualization technology you choose is up to you; I will show both.

Firstly, I recommend that you install the build essentials on Ubuntu. Some packages require that the build essentials are installed before they can install.

sudo apt-get install gcc build-essential

Let's Virtualize – VirtualBox


Type the following to install VirtualBox. In the terminal type:

sudo gedit /etc/apt/sources.list

Add the following VirtualBox repository

deb http://download.virtualbox.org/virtualbox/debian maverick contrib

Now let’s add the public key of VirtualBox to the system.

wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc -O- | sudo apt-key add -

Update the package database.

sudo aptitude update

Now to install VirtualBox type the following magic command.

sudo apt-get install virtualbox-4.0

After VirtualBox has installed you might want to install the dkms package.

sudo apt-get install dkms

Let's Virtualize – VMware Player

Let's start installing VMware Player. Go to the VMware Player download page and download the bundle. Now we need to give the VMware bundle executable privileges. Type the following in a terminal in the directory where the VMware bundle exists.

chmod +x VMware-Player*.bundle

gksudo bash ./VMware-Player*.bundle

After these commands an installer window will pop up. Just follow the wizard. Quick and easy.

Installation Summary

At this stage all the required software is installed for you to get started with creating VMs. Here is a list of some additional software you might want to install on Ubuntu.

  • 7-zip
  • Blowfish
  • VLC-Player

Just remember that you want to keep your host OS as small as possible and use as little memory as possible. I would recommend that you format the external drive with the ext4 file system; it provides very nice performance for the VMs.
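
Formatting the drive from a terminal looks something like this. The device name /dev/sdc1 is a placeholder, so double-check which device your external drive is (with sudo fdisk -l, for example) before you run it:

sudo mkfs.ext4 /dev/sdc1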

I hope you see the benefits of using VMs for your development environments. With this setup you can go crazy creating VMs, and you can easily back up your VMs and data. If you have any suggestions to improve this setup, please let me know.

Cheerio!


Today I want to show you how to develop an authentication system with JSON using ASP.NET MVC 3.0 and WCF.
Firstly, I want to discuss the system design. I'm using the ASP.NET MVC 3.0 framework to publish Html views to the client browser and to define pretty URLs in the project. There is no business logic in the MVC framework; I define all my business rules in a WCF service that only responds to JSON requests. I'm using jQuery from my Html pages to query the web services for my business data.
Asp MVC WCF Json Design
You might ask yourself why I decided on this design.
Benefits:
  • Clear separation of concerns.
  • Smaller request and response data transfers.
  • Very scalable system (quickly move the web services to Windows Azure).
  • Dynamic Html view design.
Negatives:
  • JSON data is plain text.
  • Requires SSL for secure and encrypted data transfer.
I hope you understand the benefits and negatives; more could be added to both lists. SSL is very important to enable when using this approach for authentication, otherwise the user's password will be visible in transit to the server.
Authentication Code Example:
Download the code to follow: JQueryLogin.zip
I created the default ASP.NET MVC 3.0 project with the unit test project included. For this blog I did not do any unit testing; a nice challenge for you! In the solution there are three projects: JqueryLogin, the MVC web project; JqueryLogin.WebService, the WCF service that handles the JSON requests and business logic; and JqueryLogin.Contracts, which defines the WCF contracts for each request and response.
In the MVC web project I use the Razor view engine. I also cleaned up the code to just provide views, with no logic in the controllers; especially go and look at the AccountController. I did not touch any views created by the template. I also added the JavaScript files that are required to do Ajax requests and create dynamic Html views.
In _Layout.cshtml I added a special little JavaScript resolve function to help resolve URLs in my other JavaScript files. I got the original code from another blog (I forget where) but fixed it for MVC 3.0.
<script type="text/javascript">
    Url = function () { }

    Url.prototype =
        {
            _relativeRoot: "@Url.Content("~/")",

            resolve: function (relative) {
                var resolved = relative;
                if (relative.charAt(0) == '~') resolved = this._relativeRoot + relative.substring(2);
                return resolved;
            }
        }

    $Url = new Url();
</script>

With this function I am able to resolve URLs like this in JavaScript:
window.location.href = $Url.resolve("~/Home/Index");

Now let's look at the other JavaScript files. The file ajax.js has some infrastructure and setup code that helps me with the requests to the web service. The User.js file is where the main logic lives to register, sign in and sign out users of the web application.
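
The ajax.js file is not reproduced in this post, but the serviceProxy object it defines boils down to something like the following sketch: a thin wrapper around $.ajax that posts JSON to the service. The option names follow the calls used below, and the unwrapping of the "d" property matches the way the ASP.NET AJAX script factory wraps JSON responses:

function serviceProxy(serviceUrl) {
    this._serviceUrl = serviceUrl;
}

serviceProxy.prototype.invoke = function (options) {
    $.ajax({
        url: this._serviceUrl + options.serviceMethod,
        type: "POST",
        contentType: "application/json; charset=utf-8",
        data: JSON.stringify(options.data),
        dataType: "json",
        success: function (result) {
            // ASP.NET wraps the JSON response in a "d" property.
            options.callback(result && result.d !== undefined ? result.d : result);
        },
        error: options.error
    });
};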

At the top of the file I define two objects, Register and SignIn, with properties that match the properties of the RegisterRequest and SignInRequest contracts. The code that follows defines where the web service that needs to be called lives. The web service methods are called after the Html form passes validation on submit. Here is the code for signing the user in and out:
/// <reference path="jquery-1.4.1.js" />
/// <reference path="jquery.validate.min.js" />
/// <reference path="ajax.js" />

var SignIn = {
    UserName : '',
    Password : '',
    RememberMe: ''
};

var UserServiceURL = "../WebService/UserService.svc/";
var UserServiceProxy = new serviceProxy(UserServiceURL);

$(document).ready(function () {
    $("#loginForm").validate({ submitHandler: function (form) {
        SignInUser();
    }
    });

    $('a[href="/Account/LogOff"]').click(function () {
        SignOutUser();

        return false;
    });

});

function SignInUser() {
    blockDisplay();
    var request = BuildSignInRequest();

    UserServiceProxy.invoke({
        serviceMethod: "SignIn",
        data: { request: request },
        callback: function (response) {
            $('#status').empty().html("<strong>Success: " + response.Message + "</strong>");

            $.unblockUI();
            window.location.href = $Url.resolve("~/Home/Index");       
        },
        error: function (xhr, errorMsg, thrown) {
            OnPageError(xhr, errorMsg, thrown);

            $.unblockUI();
        }
    });

    return false;
}

function SignOutUser() {
    blockDisplay();

    UserServiceProxy.invoke({
        serviceMethod: "SignOut",
        data: null,
        callback: function (response) {
            $('#status').empty().html("<strong>Success: " + response.Message + "</strong>");

            $.unblockUI();
            window.location.href = $Url.resolve("~/Home/Index");       
        },
        error: function (xhr, errorMsg, thrown) {
            OnPageError(xhr, errorMsg, thrown);

            $.unblockUI();
        }
    });

    return false;
}

function BuildSignInRequest() {
    SignIn.UserName = $('input[name="UserName"]').val();
    SignIn.Password = $('input[name="Password"]').val();
    // Use .is(':checked') for the checkbox: .val() returns the value attribute
    // whether or not the box is ticked, so comparing it to "on" is unreliable.
    SignIn.RememberMe = $('input[name="RememberMe"]').is(':checked');

    return SignIn;
}

WCF Web Service Setup

Now let's look at the web service that makes all of this work. Firstly, you define your web service in an interface file like IUserService.cs. In the UserService.svc markup you have to change the factory to be able to handle JSON requests and responses.
<%@ ServiceHost Language="C#" Service="JqueryLogin.WebService.Service.UserService" Factory="System.ServiceModel.Activation.WebScriptServiceHostFactory" %>
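
Backing up a step: the service contract in IUserService.cs looks something like this sketch. The operation names match the JavaScript calls above, but the request and response contract types, including the Message property the JavaScript reads, are my assumptions based on how they are used in this post:

[ServiceContract]
public interface IUserService
{
    [OperationContract]
    SignInResponse SignIn(SignInRequest request);   // response assumed to carry a Message property

    [OperationContract]
    SignOutResponse SignOut();

    [OperationContract]
    RegisterResponse Register(RegisterRequest request);
}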

In the UserService implementation you have to set the AspNetCompatibilityRequirements attribute. This allows cookies to be set when the user is authenticated.
[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class UserService : IUserService
{
    // Implementation Code
}

The implementation of the user authentication can be found in the AccountMembershipManager and FormsAuthenticationManager classes. The last part required for the service is the service binding; for this service I use wsHttpBinding.
<services>
  <service behaviorConfiguration="DefaultBehavior" name="UserService">
    <endpoint binding="wsHttpBinding" contract="JqueryLogin.WebService.IUserService" />
    <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
  </service>
</services>

Show Some Results

I use the Chrome browser for debugging. I will show you the traced request and response output for signing a user in and out. In the images, the important details to notice are underlined in red.

SignIn Request – Click to Enlarge

SignInRequest

SignIn Response – Click to Enlarge

SignInResponseContent


SignOut Request – Click to Enlarge

SignOutRequest

SignOut Response – Click to Enlarge

SignOutResponseContent

Summary

The original idea of using JSON and WCF to create a responsive website came when I read Chris Love's blog entries "Creating a WCF Service for JSON" and "WCF and JQuery Using JSON". The original JavaScript infrastructure comes from him. As you can see, JSON allows for speedy web development and responsive web pages, and jQuery makes it so easy to create Ajax calls.

Hope you enjoy this entry; any feedback or questions are welcome.

Cheerio!
