
How to: DISTINCT, SUM, COUNT the DataTable for a given XML?



A couple of articles ago, I wrote about the new dynamic type feature in C# 4.0.

Somewhat related to dynamic is the ExpandoObject.

Yep, I know it sounds like an eight-armed sea creature, but don’t worry – it’s just a class that ships with .NET 4.0. I don’t know what the Microsofties are thinking whenever they come up with such names.

MSDN says, ExpandoObject “represents an object whose members can be dynamically added and removed at run time.”

Add a property:

dynamic sampleObject = new ExpandoObject();
sampleObject.test = "Dynamic Property";

Console.WriteLine(sampleObject.test);
Console.WriteLine(sampleObject.test.GetType());

// This code example produces the following output:
// Dynamic Property
// System.String

Remove a property:

dynamic employee = new ExpandoObject();
employee.Name = "John Smith";

// ExpandoObject implements IDictionary<string, object>;
// cast to it to remove a member at run time.
((IDictionary<String, Object>)employee).Remove("Name");

Associate and call events dynamically:

class Program
{
    static void Main(string[] args)
    {
        dynamic sampleObject = new ExpandoObject();

        // Create a new event and initialize it with null.
        sampleObject.sampleEvent = null;

        // Add an event handler.
        sampleObject.sampleEvent += new EventHandler(SampleHandler);

        // Raise an event for testing purposes.
        sampleObject.sampleEvent(sampleObject, new EventArgs());
    }

    // Event handler.
    static void SampleHandler(object sender, EventArgs e)
    {
        Console.WriteLine("SampleHandler for {0} event", sender);
    }
}

// This code example produces the following output:
// SampleHandler for System.Dynamic.ExpandoObject event.

Cons of ExpandoObject:

Though the usage of the dynamic keyword seems quite interesting, it can make errors very hard to catch. For instance, a new developer who doesn’t know much about it adds a property that was not required; the compiler won’t show an error – only, probably, the runtime will. This means typos won’t be picked up at compile time, because you can declare just about anything anywhere.

Plus, this code gives the impression that it is type-safe, which it clearly is not.

Even tools like ReSharper would not be able to catch that error.
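Here is a minimal sketch of the kind of typo that compiles fine but blows up at run time (the property names are made up for illustration):

```csharp
using System;
using System.Dynamic;

class TypoDemo
{
    static void Main()
    {
        dynamic order = new ExpandoObject();
        order.CustomerName = "John Smith";

        // Typo: "CustomerNane" instead of "CustomerName".
        // This compiles without any warning; it only throws a
        // RuntimeBinderException when this line actually executes.
        Console.WriteLine(order.CustomerNane);
    }
}
```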

Anyway, you may also want to look into the unusual uses of ExpandoObject.

Happy coding!

Why JSON? One word: Simplicity.

More than one word: JSON is simple, simpler than Xml. I am a big fan of Xml, and that makes me love XPath as well. But this does not mean that I am against JSON, or vice versa. On my way back home the other evening, I ran into a guy who works in the IT department of one of my customers, where I had been for a project implementation. He was talking to “his” friend (colleague, probably) about JSON and Xml: if Xml is there, why would you ever use JSON? I was sort of magnetically attracted to the discussion and eventually became part of it. So here I thought I should add my two cents on how and where I would use either, and what the use cases would be.

Xml is more “document-oriented” and has its own well-defined format; you may use Xml in cases where structured documents are required, even with a large amount of data. JSON is more “data-oriented”, which makes it handy for lightweight data exchange.

The purpose of JSON and Xml may be the same – data exchange; but the usage scenario for each is different. For instance, you can’t use a mallet where a ball-peen hammer is required.

“Ajax is a technique used for building interactive web applications that provide a snappier user experience through the use of out-of-band, lightweight calls to the web server in lieu of full-page postbacks. These asynchronous calls are initiated on the client using JavaScript and involve formatting data, sending it to a web server, and parsing and working with the returned data. While most browsers can construct, send, and parse XML, JavaScript Object Notation (or JSON) provides a standardized data exchange format that is better-suited for Ajax-style web applications.” – Writes Atif Aziz

JSON is handier than Xml when browser-server communication is desired. Plus:

  • Like Xml, it is a human/machine-readable format
  • Both have support for Unicode
  • It is a “self-documenting format” – that is, it describes its own structure, fields, and values
  • It is mostly used in cases where you have to maintain lists, arrays, trees, or records

This could mean that JSON may become a handy and lightweight way of data exchange if you think about how to use it within jQuery-based AJAJ (Asynchronous JavaScript And JSON) calls.

One thing that people count as a benefit: to avoid name conflicts, Xml has namespaces, while JSON has none; instead, the URI you fetch the JSON document from implicitly becomes its namespace.

With Xml you have to think in terms of the DOM; in JSON the data is plain, simple, and easy to read.
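For example, here is the same (made-up) record in both formats:

```xml
<employee>
  <name>John Smith</name>
  <skills>
    <skill>C#</skill>
    <skill>XPath</skill>
  </skills>
</employee>
```

```json
{
  "employee": {
    "name": "John Smith",
    "skills": ["C#", "XPath"]
  }
}
```

The JSON version maps directly onto JavaScript objects and arrays, with no parsing step beyond what the interpreter already does.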

JSON does not support data validation and is not extensible.

So, conclusion – whatever you use, choose it based upon what is required.

“JSON is a data format, but one which is more naturally fit for browser data consumption. JSON is a subset to JavaScript, and by structuring a data payload as a JSON response, you are effectively bypassing the need to parse an XML document in a browser to get to the actual data. JSON uses a stripped-down syntax compliant with the native JavaScript interpreter provided on all browsers. Access and navigation to JSON data is done through the same standard JavaScript notation used to access string, array or hashtable values in a typical JavaScript application” – Regina Lynch

So if you do like JSON, chances are that you will also like YAML.

References and must reads:

Should you come across a scenario where you want to spawn threads from within a loop, you have two quick options:

Option 1: Use ThreadPool.QueueUserWorkItem

Depending on the size of the job to be processed, I always admire ThreadPool.QueueUserWorkItem; it maintains a handy pool of threads and executes the queued work whenever a thread in the pool is idle and available.

using System;
using System.Threading;

public class Example {
    public static void Main() {
        // Queue the task.
        ThreadPool.QueueUserWorkItem(new WaitCallback(ThreadProc));

        Console.WriteLine("Main thread does some work, then sleeps.");
        // If you comment out the Sleep, the main thread exits before
        // the thread pool task runs. The thread pool uses background
        // threads, which do not keep the application running. (This
        // is a simple example of a race condition.)
        Thread.Sleep(1000);

        Console.WriteLine("Main thread exits.");
    }

    // This thread procedure performs the task.
    static void ThreadProc(Object stateInfo) {
        // No state object was passed to QueueUserWorkItem, so
        // stateInfo is null.
        Console.WriteLine("Hello from the thread pool.");
    }
}

Option 2: Implement a custom thread pool using BackgroundWorker, in case you dislike ThreadPool for whatever reason.

The main worker object.

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Threading;

/// The core entity that handles a single worker thread.
public class CWorkers
{
    public int _nIndex { get; private set; }
    public BackgroundWorker bgWorker { get; private set; } // the main "culprit"

    public CWorkers(int nIndex)
    {
        _nIndex = nIndex;
        bgWorker = new BackgroundWorker();
    }
}

The manager class that manages the worker threads.

/// Manages the worker threads.
public class CWorkerManager
{
    private List<CWorkers> _lstWorkers; // list of worker threads
    private const int MAXWORKERS = 5;   // max workers you want; change/update or pull it from app.config

    public CWorkerManager()
    {
        Initialize();
    }

    /// Initializes the thread pool - a sort of customized ThreadPool.
    private void Initialize()
    {
        _lstWorkers = new List<CWorkers>(); // initialize the list

        for (int i = 0; i < MAXWORKERS; i++)
            _lstWorkers.Add(CreateAWorker(i)); // init a worker object and add it to the list
    }

    /// Looks for a free worker.
    /// Returns the worker if found, else null.
    public CWorkers RequestForWorker()
    {
        foreach (var theWorker in _lstWorkers)
            if (!theWorker.bgWorker.IsBusy)
                return theWorker;

        return null;
    }

    /// Emulates the BCL's WaitHandle.WaitOne(): blocks until some worker is free.
    public void WaitAndSignalWhenFree()
    {
        while (true)
        {
            // Loop through the list to find an idle worker.
            foreach (var theWorker in _lstWorkers)
                if (!theWorker.bgWorker.IsBusy)
                    return;

            Thread.Sleep(1); // busy-waiting is a hack; not really recommended as production code
        }
    }

    /// Inits a CWorkers object; wires up its DoWork and RunWorkerCompleted events.
    private static CWorkers CreateAWorker(int nIndex)
    {
        var theWorker = new CWorkers(nIndex);

        theWorker.bgWorker.DoWork += (sender, e) => ((Action)e.Argument).Invoke();
        theWorker.bgWorker.RunWorkerCompleted += (sender, e) => Console.WriteLine("Finished worker number: [" + theWorker._nIndex + "]");

        return theWorker;
    }
}

The test program:

class Program
{
    static CWorkerManager theManager;

    static void Main(string[] args)
    {
        theManager = new CWorkerManager();
        ProcessJobs(10000); // simulate jobs for ten seconds
    }

    /// Simulator that requests worker threads from the manager.
    private static void ProcessJobs(int nMaxTime)
    {
        Random rndRandom = new Random();
        DateTime dteStart = DateTime.Now;

        // Run till the max time.
        while (DateTime.Now - dteStart < TimeSpan.FromMilliseconds(nMaxTime))
        {
            var theWorker = theManager.RequestForWorker(); // request a worker

            if (theWorker != null)
            {
                int nTimeout = rndRandom.Next(1500, 2500); // generate a random amount of "work"
                theWorker.bgWorker.RunWorkerAsync(new Action(() => ProcessThis(theWorker._nIndex, nTimeout)));
            }
            else
            {
                Console.WriteLine("All busy, lets wait...");
                theManager.WaitAndSignalWhenFree();
            }
        }
    }

    /// Actual method that processes the job.
    static void ProcessThis(int nIndex, int nTimeout)
    {
        Console.WriteLine("Worker {1} starts to work for {0} ms", nTimeout, nIndex);
        Thread.Sleep(nTimeout); // simulate the work
    }
}


Happy threading.

So can you use EF v4.0 with .NET 3.5 in Visual Studio 2008?

Short answer, No!

Longer answer… is still no, but let’s go through the reasons why:

Just as you can’t use C# v2.0 features without at least VS 2005, and can’t use C# v3.0 features without VS 2008, you cannot use EF v4.0 in VS 2008.

Still a ‘why’? Follow on…

Because EF v4.0 requires .NET v4.0 (4.0.30319.1, to be exact – the stable release that went out approximately 38 days ago). In other words, EF v4.0 is part of .NET v4.0; and .NET 3.5 runs on CLR v2.0, while .NET 4.0 runs on CLR v4.0.

Among its many new improvements, Visual Studio 2010 introduces the much-awaited Entity Framework 4.0 and WCF Data Services 4.0 (formerly ADO.NET Data Services), which together simplify how you model, consume and produce data.

So to conclude: This means,

  • .NET v1.0 was used with VS2002 having C# v1.0
  • .NET v1.1 – VS2003
  • .NET v2.0 – VS2005 – C# v2.0
  • .NET v3.0 – Usable in VS2005
  • .NET v3.5 – Visual Studio 2008 having C# v3.0
  • .NET v4.0 – Visual Studio 2010 with C# v4.0

And according to Jon Skeet on SO, C# v5.0 may contain speculated major new features such as metaprogramming.

If this makes you interested more in v4.0, checkout what’s new in the latest .NET Framework.

1. C# in Depth by Jon Skeet:
2. .NET Frameworks:
3. Microsoft .NET Framework:
4. .NET Framework Evolution Map:

Best DI Framework: Microsoft Unity Application Block

Change is the lifeblood of software. So how does DI help with that? For that, you may want to learn about the history of DI.

Dependency Injection is a design pattern frequently used in plug-in/component-based software architectures, or in cases where the intention is to reuse existing components and wire “disparate” components together into a cohesive architecture. DI is a type of inversion of control, in which the flow of control is “inverted” to the user’s, or more specifically the framework’s, end: the framework decides what to call and what not to call.

You may have been writing manual DI unknowingly, via the hand-crafted factory pattern (adapter factory, factory method). It’s simple, the learning curve is small, there are no dependencies and no reflection, and everyone knows what calls what in the code. The two types of injection are:

1. Setter injection
2. Constructor injection
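A minimal sketch of the two injection styles (the `ILogger`/`ReportService` names are made up for illustration):

```csharp
using System;

public interface ILogger { void Log(string message); }

public class ConsoleLogger : ILogger
{
    public void Log(string message) { Console.WriteLine(message); }
}

// Constructor injection: the dependency is required up front
// and the object can never exist without it.
public class ReportService
{
    private readonly ILogger _logger;

    public ReportService(ILogger logger)
    {
        _logger = logger;
    }

    public void Run() { _logger.Log("Report generated."); }
}

// Setter injection: the dependency is assigned through a property
// and can be swapped after construction.
public class ExportService
{
    public ILogger Logger { get; set; }

    public void Run() { Logger.Log("Export finished."); }
}
```

With manual DI you wire these up yourself (`new ReportService(new ConsoleLogger())`); a container does the same wiring for you based on registrations.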

With a dependency injection framework there is consistency; on a large team you can actually “push” the team to do things in a consistent manner, mostly because of the consistent nature of frameworks. You can define functional scope, rules of instantiation, and so on, in a way that is easily understandable and changeable.

Well, instead of writing just another essay repeating `No Silver Bullet` “for the 18,000th time“, I would suggest you read more about the basic DI concepts provided by Martin Fowler at his Bliki.

Microsoft Unity Application Block, which now comes with the Enterprise Library as well.
Pico Container
Google’s GUICE
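As a taste of what framework-driven wiring looks like with Unity (a sketch; the `ILogger`/`ConsoleLogger`/`ReportService` types are hypothetical stand-ins, and Unity lives in the Microsoft.Practices.Unity assembly):

```csharp
using Microsoft.Practices.Unity;

public interface ILogger { void Log(string message); }

public class ConsoleLogger : ILogger
{
    public void Log(string message) { System.Console.WriteLine(message); }
}

public class ReportService
{
    private readonly ILogger _logger;

    // Unity sees this constructor and injects an ILogger automatically.
    public ReportService(ILogger logger) { _logger = logger; }

    public void Run() { _logger.Log("Report generated."); }
}

class Program
{
    static void Main()
    {
        var container = new UnityContainer();

        // The registration is the single place that decides "what calls what".
        container.RegisterType<ILogger, ConsoleLogger>();

        // Resolve builds ReportService and its dependencies for us.
        container.Resolve<ReportService>().Run();
    }
}
```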

Happy injecting dependencies! (0:

How to merge multiple Excel workbooks into one?

Only if you want something like: Merge(@”E:\Test”, @”E:\FinalDestination.xls”);

Use following code.

private void Merge(string strSourceFolder, string strDestinationFile)
{
    //1. Validate folder
    //2. Instantiate the Excel object
    //3. Loop through the files
    //4. Add sheets
    //5. Save and enjoy!
    try
    {
        object missing = System.Reflection.Missing.Value;
        Microsoft.Office.Interop.Excel.ApplicationClass ExcelApp = new Microsoft.Office.Interop.Excel.ApplicationClass();
        ExcelApp.Visible = false;

        //Create the destination workbook
        Microsoft.Office.Interop.Excel.Workbook objBookDest = ExcelApp.Workbooks.Add(missing);

        //Browse through all files.
        foreach (string filename in Directory.GetFiles(strSourceFolder))
        {
            if (File.Exists(filename))
            {
                //Open a source workbook
                Microsoft.Office.Interop.Excel.Workbook objBookSource = ExcelApp.Workbooks._Open
                    (filename, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing
                    , Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing);

                //Copy each sheet after the last sheet of the destination workbook
                foreach (Microsoft.Office.Interop.Excel.Worksheet sheet in objBookSource.Worksheets)
                    sheet.Copy(Type.Missing, objBookDest.Worksheets[objBookDest.Worksheets.Count]);

                objBookSource.Close(Type.Missing, Type.Missing, Type.Missing);
                objBookSource = null;
            }
        }

        objBookDest.SaveAs(strDestinationFile, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, XlSaveAsAccessMode.xlNoChange, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing);
        objBookDest.Close(Type.Missing, Type.Missing, Type.Missing);

        objBookDest = null;
        ExcelApp.Quit();
        ExcelApp = null;
    }
    catch (System.Exception e)
    {
        Console.WriteLine(e.Message);
    }
}

Btw, this was in response to a post on StackOverflow.

DISCO (read: Discovery) is a technology that Microsoft introduced in .NET that enables Web service clients to discover Web services and their associated WSDL (read: “wizdil”) files.

.DISCO and .VSDISCO files are separate from UDDI. While UDDI, which is rapidly evolving as a standard, goes beyond DISCO by defining how to interact with a full-fledged Web Service information repository, DISCO files are still in use.

The .DISCO and .VSDISCO files provide alternative ways to discover Web services that preclude the use of UDDI; and if UDDI is in use, .DISCO and .VSDISCO files are not needed.

When a web service publishes a static discovery (.DISCO) file, it enables programmatic discovery of that web service. A .DISCO file lists references to the service’s WSDL and documentation files; a dynamic discovery (.VSDISCO) file, by contrast, identifies which directories to skip during a dynamic search.

Dynamic discovery returns the services that exist at and below the virtual directory that contains the document. A .VSDISCO file is intended to be requested by browsers/clients over the web; if you do not want unintended clients to be able to discover services that were not implemented for them, do not place .VSDISCO files in the web services directory.

Sample DISCO file implementation
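A minimal static .DISCO document looks like this (the service URL is a placeholder):

```xml
<?xml version="1.0"?>
<discovery xmlns="http://schemas.xmlsoap.org/disco/">
  <!-- Points clients at the service's WSDL contract and its documentation page. -->
  <contractRef ref="http://example.com/MathService.asmx?WSDL"
               docRef="http://example.com/MathService.asmx"
               xmlns="http://schemas.xmlsoap.org/disco/scl/" />
</discovery>
```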

Using .VSDISCO files benefits developers in several ways. These files contain only a small amount of data and provide up-to-date information about a server’s available Web services. However, .VSDISCO files generate more overhead (i.e., require more processing) than .DISCO files do, because a search must be performed every time a .VSDISCO file is accessed.

A typical WSDL implementation

Well, eventually it seems there is little purpose to DISCO/.VSDISCO files, since UDDI is not actually used. It probably sounded like a good idea initially, but it is not “that” usable, because usually the way a client learns that a web service exists is that someone tells them the URL of the WSDL file.

It may help in the future, when everything is service oriented – every functionality over the web – and Grid Computing is small talk in the IT industry.

An in-depth article by Aaron Skonnard is worth the time. He also explains why .VSDISCO documents stopped working with the final release of the Microsoft .NET Framework. This tutorial is an excerpt directly from Deitel.

Seldom do we find a need to dig inside a technology and know it inside out. HttpHandlers are a similarly interesting topic, and I plan to write about them in the future; let’s see when it comes out.

Stay tuned (0:

Problem Installing MCMS SP1a, J# 3.0 is required!

There is no official (or even unofficial, semi-ignored) release of J# 3.0 or J# 3.5; so why does the Microsoft Content Management Server SP1a Installation Wizard ask for it? Moreover, the Microsoft Content Management Server SP1a installation does not install Site Manager; and if you go through the custom installation, the Site Manager check box is disabled.

It appears that MCMS tries to find the J# matching the most recent .NET Framework you have installed; it actually looks for J# 3.0, or whichever is the latest version of the framework installed on the system.

So, there are three solutions that I was able to find:

1. Manual
– Uninstall your .NET 3.0,
– Install MCMS; you will see the enabled check boxes for Site Manager
– Then reinstall your latest .NET Framework

2. Registry
– Open HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\ in regedit
– Rename the v3.0 subkey (folder) to anything else (for instance ‘zero’)
– Install MCMS, you will see the enabled check boxes for Site Manager

3. Alternatively
– Install J# 2.0 SE from the MSDN site
– Create an empty file (vjslib.dll) at the following location: C:\WINDOWS\Microsoft.NET\Framework\v3.0\vjslib.dll
– Install MCMS, you will see the enabled check boxes for Site Manager

A quick note on cryptography services: the .NET Framework provides the ability to securely transmit sensitive data. It implements a streams-based encryption layer, which allows data streams to be routed through encryption objects to produce encrypted output streams.

It supports the following symmetric (same key for encryption and decryption) algorithms: DES, 3DES, RC2, Rijndael/AES.
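A minimal sketch of that streams-based layer, using Rijndael/AES (key handling here is for illustration only; in real code, manage keys and IVs securely):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

class CryptoDemo
{
    static void Main()
    {
        using (var aes = new RijndaelManaged()) // generates a random Key and IV
        {
            byte[] plain = Encoding.UTF8.GetBytes("Sensitive data");

            // Route the data stream through an encryptor.
            byte[] encrypted;
            using (var ms = new MemoryStream())
            {
                using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                    cs.Write(plain, 0, plain.Length);
                encrypted = ms.ToArray();
            }

            // And back through a decryptor.
            using (var ms = new MemoryStream(encrypted))
            using (var cs = new CryptoStream(ms, aes.CreateDecryptor(), CryptoStreamMode.Read))
            using (var sr = new StreamReader(cs))
                Console.WriteLine(sr.ReadToEnd()); // prints "Sensitive data"
        }
    }
}
```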

It also supports the following asymmetric (two keys, one for encryption and one for decryption, called the public and private keys) algorithms: RSA and DSA.

The following hashing algorithms are supported by .NET: MD5 (now becoming obsolete), SHA1, SHA256, SHA384, SHA512, along with support for X.509 certificates.

Also, .NET’s SecureString is worth looking into.
