
IEnumerable and IEnumerator in C#

Many junior C# developers find the two IEnumerable and IEnumerator interfaces confusing. In fact, I was one of them when I first started learning C#! So, in this post, I’m going to explore these two interfaces in detail.

I’ll start by giving you a quick answer if you’re too busy to read the rest of the post, and then I’ll get into the details.

IEnumerable and IEnumerator in a Nutshell

IEnumerable and IEnumerator are implementations of the iterator pattern in .NET. I’ll explain the iterator pattern and the problem it aims to solve in detail shortly. But if you’re looking for a quick, pragmatic tip, remember that when a class implements IEnumerable, it can be enumerated. This means you can use a foreach block to iterate over that type.

In C#, all collections (e.g. lists, dictionaries, stacks, queues, etc.) are enumerable because they implement the IEnumerable interface. So are strings. You can iterate over a string using a foreach block to get every character in the string.
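For example, here is a quick sketch that iterates over a string with foreach and collects each character:

```csharp
using System;
using System.Collections.Generic;

var chars = new List<char>();

// A string implements IEnumerable, so foreach yields one character at a time
foreach (var c in "abc")
    chars.Add(c);

Console.WriteLine(string.Join(",", chars));   // prints a,b,c
```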

Iterator Pattern

Consider the following implementation of a List class. (This is an over-simplified example and not a proper/full implementation of the List class).

public class List
{
    public object[] Objects;
    public int Count;

    public List()
    {
        Objects = new object[100];
    }

    public void Add(object obj)
    {
        Objects[Count++] = obj;
    }
}

The problem with this implementation is that the List class is exposing its internal structure (object[]) for storing data. This violates the information hiding principle of object-oriented programming. It gives the outside world intimate knowledge of the design of this class. If tomorrow we decide to replace the array with a binary search tree, all the code that directly references the Objects array needs to be modified.

So, objects should not expose their internal structure. This means we need to modify our List class and make the Objects array private:

public class List
{
    private object[] _objects;
    private int _count;

    public List()
    {
        _objects = new object[100];
    }

    public void Add(object obj)
    {
        _objects[_count++] = obj;
    }
}

Note that I renamed Objects to _objects because, by convention, private fields in C# should be named using camel case prefixed with an underscore.

So, with this change, we’re hiding the internal structure of this class from the outside. But this leads to a new problem: how are we going to iterate over this list? We no longer have access to the internal array, so we cannot use it in a loop.

That’s when the iterator pattern comes into the picture. It provides a mechanism to traverse an object irrespective of how it is internally represented.

IEnumerable and IEnumerator interfaces in .NET are implementations of the iterator pattern. So, let’s see how these interfaces work, and how to implement them in our List class here.

The IEnumerable interface represents an object that can be enumerated, like our List class here. It has one method:

public interface IEnumerable
{
    IEnumerator GetEnumerator();
}

The GetEnumerator method here returns an IEnumerator object, which can be used to iterate (or enumerate) the given object. Here is the declaration of the IEnumerator interface:

public interface IEnumerator
{
    bool MoveNext();
    object Current { get; }
    void Reset();
}

With this, the client code can use the MoveNext() method to iterate the given object and use the Current property to access one element at a time. Here is an example:

var enumerator = list.GetEnumerator();
while (enumerator.MoveNext())
{
      Console.WriteLine(enumerator.Current);
}

Note that with this interface, the client of our class no longer knows about its internal structure. It doesn’t know if we have an array or a binary search tree or some other data structure in the List class. It simply calls GetEnumerator, receives an enumerator and uses that to enumerate the List. If we change the internal structure, this client code will not be affected whatsoever.

So, the iterator pattern provides a mechanism to iterate a class without being coupled to its internal structure.

Implementing IEnumerable and IEnumerator

So, now let’s see how we can implement the IEnumerable interface on our List class. First, we need to change our List class as follows:

public class List : IEnumerable
{
    private object[] _objects;
    private int _count;

    public List()
    {
        _objects = new object[100];
    }

    public void Add(object obj)
    {
        _objects[_count++] = obj;
    }

    public IEnumerator GetEnumerator()
    {
        // To be implemented shortly
        throw new NotImplementedException();
    }
}

So I added the IEnumerable interface at the declaration of the class and also created the GetEnumerator method. This method should return an instance of a class that implements IEnumerator. So, we’re going to create a new class called ListEnumerator.

public class List : IEnumerable
{
    private object[] _objects;
    private int _count;

    public List()
    {
        _objects = new object[100];
    }

    public void Add(object obj)
    {
        _objects[_count++] = obj;
    }

    public IEnumerator GetEnumerator()
    {
        return new ListEnumerator(this);
    }

    private class ListEnumerator : IEnumerator
    {
        private readonly List _list;

        public ListEnumerator(List list)
        {
            _list = list;
        }
    }
}

So, I modified the GetEnumerator method to return a new ListEnumerator. I also declared the ListEnumerator class, but I haven’t implemented the members of the IEnumerator interface yet. That will come shortly.

You might ask: “Mosh, why are you declaring ListEnumerator as a nested private class? Aren’t nested classes ugly?” The ListEnumerator class is part of the implementation of our List class. As you’ll see shortly, it’ll have intimate knowledge of the internal structure of the List class. If tomorrow I replace the array with a binary search tree, I’ll need to modify ListEnumerator to support it. I don’t want anywhere else in the code to have a reference to ListEnumerator; otherwise, the internals of the List class will be leaked to the outside again.

Alright, so let’s quickly recap up to this point. I implemented IEnumerable on our List class and defined the GetEnumerator method. This method returns a new ListEnumerator that the clients will use to iterate the List. I declared ListEnumerator as a private nested class inside List.

Now, it’s time to complete the implementation of ListEnumerator. It’s pretty easy:

// Nested inside the List class
private class ListEnumerator : IEnumerator
{
    private readonly List _list;
    private int _currentIndex = -1;

    public ListEnumerator(List list)
    {
        _list = list;
    }

    public bool MoveNext()
    {
        _currentIndex++;

        return (_currentIndex < _list._count);
    }

    public object Current
    {
        get
        {
            try
            {
                return _list._objects[_currentIndex];
            }
            catch (IndexOutOfRangeException)
            {
                throw new InvalidOperationException();
            }
        }
    }

    public void Reset()
    {
        _currentIndex = -1;
    }
}

Let’s examine this class bit by bit.

The _currentIndex field is used to maintain the position of the current element in the list. Initially, it is set to -1, which is before the first element in the list. As we call the MoveNext method, it is incremented by one.

The MoveNext method returns a boolean value indicating whether we’ve reached the end of the list. Note that here in the MoveNext method, we have a reference to the list’s internal _objects array. This is why I said our ListEnumerator has intimate knowledge of the internal structure of the List: it knows we’re using an object[] there. If we replace the array with a binary search tree, we’ll need to modify the MoveNext method, because trees require different traversal algorithms.

The Current property returns the current element in the list. I’ve used a try/catch block here in case the client of the List class accesses the Current property before calling the MoveNext method. In that case, _currentIndex will be -1, and accessing _objects[-1] will throw an IndexOutOfRangeException. I catch this exception and throw a more meaningful one (InvalidOperationException) instead. The reason is that I don’t want the clients of the List to know anything about the fact that we’re using an array with an index. So, IndexOutOfRangeException is too detailed for the clients of the List class and should be replaced with InvalidOperationException.

And finally, in the Reset method, we set _currentIndex back to -1, so we can re-iterate the List from the beginning, if we want.
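To see Reset in action without our custom List, here is a quick sketch using an array (arrays implement the non-generic IEnumerable). Note that not every enumerator supports Reset; some throw NotSupportedException instead:

```csharp
using System;
using System.Collections;

IEnumerable numbers = new[] { 1, 2, 3 };

var enumerator = numbers.GetEnumerator();
while (enumerator.MoveNext())
    Console.WriteLine(enumerator.Current);

// Reset moves the cursor back before the first element,
// so the same enumerator can iterate the collection again
enumerator.Reset();
enumerator.MoveNext();
Console.WriteLine(enumerator.Current);   // prints 1 again
```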

So, let’s review. I modified our List class to hide its internal structure by making the object[] private. With this, I had to implement the IEnumerable interface so that the clients of the List could enumerate it without knowing about its internal structure. IEnumerable interface has only a single method: GetEnumerator, which is used by the clients to enumerate the List. I created another class called ListEnumerator that knows how to iterate the List. It implements a standard interface (IEnumerator) and hides the details of how the List is enumerated.

The beauty of IEnumerable and IEnumerator is that we end up with a simple and consistent mechanism to iterate any object, irrespective of its internal structure. All we need to do is:

var enumerator = list.GetEnumerator();
while (enumerator.MoveNext())
{
      Console.WriteLine(enumerator.Current);
}

Any changes in the internals of our enumerable classes will be protected from leaking outside. So the client code will not be affected, and this means: more loosely-coupled software.

Generic IEnumerable<T> and IEnumerator<T>

In the examples in this post, I showed you the non-generic versions of these interfaces. They were originally added in .NET Framework 1.0; later, Microsoft introduced the generic versions (IEnumerable&lt;T&gt; and IEnumerator&lt;T&gt;) to avoid the additional cost of boxing/unboxing value types. If you’re not familiar with generics, check out my video on YouTube.
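As a quick sketch of why the generic versions matter: the non-generic IEnumerator exposes Current as object, so value types like int get boxed on every access, while IEnumerator&lt;int&gt; returns the int directly:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

var list = new List<int> { 1, 2, 3 };

// Non-generic interface: Current is typed as object,
// so each int is boxed when accessed
IEnumerator nonGeneric = ((IEnumerable)list).GetEnumerator();
nonGeneric.MoveNext();
object boxed = nonGeneric.Current;   // boxing happens here

// Generic interface: Current is typed as int, no boxing
IEnumerator<int> generic = list.GetEnumerator();
generic.MoveNext();
int value = generic.Current;

Console.WriteLine(boxed);   // prints 1
Console.WriteLine(value);   // prints 1
```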

Misconception about IEnumerable and Foreach

A common misconception about IEnumerable is that it exists so we can iterate over the underlying class using a foreach block. While this is true on the surface, the foreach block is simply syntactic sugar to make your code neater. IEnumerable, as I explained earlier, is an implementation of the iterator pattern and is used to give clients the ability to iterate a class without knowing its internal structure.

In the examples earlier in this post, we used IEnumerable/IEnumerator as follows:

var enumerator = list.GetEnumerator();
while (enumerator.MoveNext())
{
      Console.WriteLine(enumerator.Current);
}

So, as you see, we can still iterate the list using a while loop. But with a foreach block, our code looks cleaner:

foreach (var item in list)
{
     Console.WriteLine(item);
}

When you compile your code, the compiler translates your foreach block into code much like the earlier while loop. So, under the hood, it uses the IEnumerator object returned from the GetEnumerator method.

So, while you can use a foreach block on any type that implements IEnumerable, IEnumerable is not designed for the foreach block!

Wrapping it Up

In this post, you learned that IEnumerable and IEnumerator are used to enumerate (or iterate) a class that has a collection nature. These interfaces are the implementation of the iterator pattern. They aim to provide a mechanism to iterate an object without knowing its internal structure.

If you enjoyed this post, please share it and leave your comment below. If you have any questions, feel free to post them here. I’ll answer every question.


5 C# Collections that Every C# Developer Must Know

Finding the right collection in .NET is like finding the right camera in a camera shop! There are so many options to choose from, and each is strong in certain scenarios and weak in others. If looking for a collection in .NET has left you confused, you’re not alone.

In this post, which is the first in a series on .NET collections, I’m going to cover 5 essential collection types that every C# developer must know. These are the collections you’ll use 80-90% of the time, if not more. In future posts in this series, I’ll cover other collection types that are used in special cases where performance and concurrency are critical.

So, in this post, I’m going to explore the following collection types. For each type, I’ll explain what it is, when to use and how to use it.

  • List
  • Dictionary
  • HashSet
  • Stack
  • Queue

List<T>

Represents a list of objects that can be accessed by an index. <T> here means this is a generic list. If you’re not familiar with generics, check out my YouTube video.

Unlike arrays that are fixed in size, lists can grow in size dynamically. That’s why they’re also called dynamic arrays or vectors. Internally, a list uses an array for storage. If it becomes full, it’ll create a new larger array, and will copy items from the existing array into the new one.

These days, it’s common to use lists instead of arrays, even if you’re working with a fixed set of items.

To create a list:

var list = new List<int>();

If you plan to store a large number of objects in a list, you can reduce the cost of reallocating the internal array by setting an initial size:

// Creating a list with an initial size
var list = new List<int>(10000);
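If you’re curious, you can watch this growth through the list’s Capacity property. This sketch assumes the current growth sequence (0, 4, 8, …), which is an implementation detail and may differ between .NET versions:

```csharp
using System;
using System.Collections.Generic;

var list = new List<int>();
Console.WriteLine(list.Capacity);   // prints 0: nothing allocated yet

for (var i = 0; i < 5; i++)
    list.Add(i);

// Count is the number of items; Capacity is the size of the internal array
Console.WriteLine(list.Count);      // prints 5
Console.WriteLine(list.Capacity);   // prints 8 (grew 0 -> 4 -> 8)
```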

Here are some useful operations with lists:

// Add an item at the end of the list
list.Add(4);

// Add an item at index 0
list.Insert(0, 4);

// Remove an item from list
list.Remove(1);

// Remove the item at index 0
list.RemoveAt(0);

// Return the item at index 0
var first = list[0];

// Return the index of an item
var index = list.IndexOf(4);

// Check to see if the list contains an item
var contains = list.Contains(4);

// Return the number of items in the list 
var count = list.Count;

// Iterate over all objects in a list
foreach (var item in list)
    Console.WriteLine(item);

Now, let’s see where a list performs well and where it doesn’t.

Adding/Removing Items at the Beginning or Middle

If you add/remove an item at the beginning or middle of a list, it needs to shift one or more items in its internal array. In the worst case scenario, if you add/remove an item at the very beginning of a list, it needs to shift all existing items. The larger the list, the more costly this operation is going to be. We specify the cost of this operation using Big O notation: O(n), which simply means the cost increases linearly in direct proportion to the size of the input. So, as n grows, the execution time of the algorithm increases in direct proportion to n.

Adding/Removing Items at the End

Adding/removing an item at the end of a list is a relatively fast operation and does not depend on the size of the list. The existing items do not have to be shifted. This is why the cost of this operation is relatively constant and is not dependent on the number of items in the list. We represent the execution cost of this operation with Big O notation: O(1). So, 1 here means constant.

Searching for an Item

When using methods that involve searching for an item (e.g. IndexOf, Contains and Find), List performs a linear search. This means it iterates over all items in its internal array and, if it finds a match, returns it. In the worst-case scenario, if the item is at the end of the list, all items in the list need to be scanned before finding the match. Again, this is another example of O(n): the cost of finding a match is linear and in direct proportion to the number of elements in the list.

Accessing an Item by an Index

This is what lists are good at. You can use an index to get an item in a list and no matter how big the list is, the cost of accessing an item by index remains relatively constant, hence O(1).

List in a Nutshell

So, adding/removing items at the end of a list and accessing items by index are fast and efficient operations with O(1). Searching for an item in a list involves a linear search and in the worst-case scenario is O(n). If you need to search for items based on some criteria rather than an index (e.g. the customer with ID 1234), you may be better off using a Dictionary.

 

Dictionary<TKey, TValue>

Dictionary is a collection type that is useful when you need fast lookups by keys. For example, imagine you have a list of customers and, as part of a task, you need to quickly look up a customer by their ID (or some other unique identifier, which we call a key). With a list, looking up a customer involves a linear search and the cost of this operation, as you learned earlier, is O(n) in the worst-case scenario. With a dictionary, however, lookups are very fast with O(1), which means no matter how large the dictionary is, the lookup time remains relatively constant.

When storing or retrieving an object in a dictionary, you need to supply a key. The key is a value that uniquely identifies an object and cannot be null. For example, to store a Customer in a Dictionary, you can use CustomerID as the key.

To create a dictionary, first you need to specify the type of keys and values:

var dictionary = new Dictionary<int, Customer>();

Here, our dictionary uses int keys and Customer values. So, you can store a Customer object in this dictionary as follows:

dictionary.Add(customer.Id, customer);

You can also add objects to a dictionary during initialization:

var dictionary = new Dictionary<int, Customer>
{
     { customer1.Id, customer1 },
     { customer2.Id, customer2 }
};

Later, you can look up customers by their IDs very quickly:

// Return the customer with ID 1234 
var customer = dictionary[1234];
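One thing to keep in mind: the indexer throws a KeyNotFoundException if the key isn’t in the dictionary. When you’re not sure the key exists, TryGetValue is a safer option (a quick sketch, using string values for simplicity):

```csharp
using System;
using System.Collections.Generic;

var dictionary = new Dictionary<int, string>
{
    { 1234, "John Smith" }
};

// TryGetValue returns false instead of throwing when the key is missing
if (dictionary.TryGetValue(1234, out var name))
    Console.WriteLine(name);             // prints John Smith

var found = dictionary.TryGetValue(9999, out _);
Console.WriteLine(found);                // prints False
```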

You can remove an object by its key or remove all objects using the Clear method:

// Removing an object by its key
dictionary.Remove(1);

// Removing all objects
dictionary.Clear();

And here are some other useful methods available in the Dictionary class:

var count = dictionary.Count; 

var containsKey = dictionary.ContainsKey(1);

var containsValue = dictionary.ContainsValue(customer1);

// Iterate over keys 
foreach (var key in dictionary.Keys)
     Console.WriteLine(dictionary[key]);

// Iterate over values
foreach (var value in dictionary.Values)
     Console.WriteLine(value);

// Iterate over dictionary
foreach (var keyValuePair in dictionary)
{
     Console.WriteLine(keyValuePair.Key);
     Console.WriteLine(keyValuePair.Value);
}

So, why are dictionary lookups so fast? A dictionary internally stores objects in an array, but unlike a list, where objects are added at the end of the array (or at the provided index), the index is calculated using a hash function. So, when we store an object in a dictionary, it calls the GetHashCode method on the key of the object to calculate the hash. The hash is then adjusted to the size of the array to calculate the index at which to store the object. Later, when we look up an object by its key, GetHashCode is used again to calculate the hash and the index. As you learned earlier, looking up an object by index in an array is a fast O(1) operation. So, unlike lists, looking up an object in a dictionary does not require scanning every object, and no matter how large the dictionary is, it remains extremely fast.

As an example, suppose we store an object whose key’s GetHashCode method returns 1234, and the dictionary’s internal array has a length of 6. The remainder of dividing 1234 by 6 (in this case 4) is used as the index into the array. Later, when we need to look up this object, its key is used again to calculate the same index.

Now, this was a simplified explanation of how hashing works. There is more involved in calculation of hashes, but you don’t really need to know the exact details at this stage (unless for personal interests). All you need to know as a C# developer is that dictionaries are hash-based collections and for that reason lookups are very fast.
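For the curious, that index calculation can be sketched roughly like this. Note this is a simplified illustration, not the actual Dictionary implementation, which also handles collisions and resizing:

```csharp
using System;

int GetBucketIndex(object key, int arrayLength)
{
    var hash = key.GetHashCode();

    // Adjust the hash to the size of the internal array;
    // Math.Abs guards against negative hash codes in this sketch
    return Math.Abs(hash) % arrayLength;
}

// With a hash of 1234 and an internal array of length 6,
// the object lands at index 1234 % 6 = 4
Console.WriteLine(GetBucketIndex(1234, 6));   // prints 4
```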

 

HashSet<T>

A HashSet represents a set of unique items, just like a mathematical set (e.g. { 1, 2, 3 }). A set cannot contain duplicates and the order of items is not relevant. So, both { 1, 2, 3 } and { 3, 2, 1 } are equal.

Use a HashSet when you need super fast lookups against a unique list of items. For example, you might be processing a list of orders, and for each order, you need to quickly check the supplier code from a list of valid supplier codes.

A HashSet, similar to a Dictionary, is a hash-based collection, so lookups are very fast with O(1). But unlike a dictionary, it doesn’t store key/value pairs; it only stores values. So, every object must be unique, and this is determined by the values returned from its GetHashCode and Equals methods. If you’re going to store custom types in a set, you need to override the GetHashCode and Equals methods in your type.

To create a HashSet:

var hashSet = new HashSet<int>();

You can add/remove objects to a HashSet similar to a List:

// Initialize the set using object initialization syntax 
var hashSet = new HashSet<int>() { 1, 2, 3 };

// Add an object to the set
hashSet.Add(4);

// Remove an object 
hashSet.Remove(3);

// Remove all objects 
hashSet.Clear();

// Check to see if the set contains an object 
var contains = hashSet.Contains(1);

// Return the number of objects in the set 
var count = hashSet.Count;
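As mentioned above, storing a custom type in a set requires overriding Equals and GetHashCode. Here is a minimal sketch; SupplierCode is a made-up type for illustration:

```csharp
using System;
using System.Collections.Generic;

var codes = new HashSet<SupplierCode>
{
    new SupplierCode("ACME"),
    new SupplierCode("ACME")   // equal to the first one, so the set ignores it
};

Console.WriteLine(codes.Count);                              // prints 1
Console.WriteLine(codes.Contains(new SupplierCode("ACME"))); // prints True

// Two SupplierCode instances with the same Code are treated
// as the same element by the set
public class SupplierCode
{
    public string Code { get; }

    public SupplierCode(string code)
    {
        Code = code;
    }

    public override bool Equals(object obj) =>
        obj is SupplierCode other && Code == other.Code;

    // Equal objects must return the same hash code,
    // otherwise the set cannot find them in its buckets
    public override int GetHashCode() => Code.GetHashCode();
}
```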

HashSet provides many mathematical set operations:

// Modify the set to include only the objects present in both this set and the other set
hashSet.IntersectWith(another);

// Remove all objects in "another" set from "hashSet" 
hashSet.ExceptWith(another);

// Modify the set to include all objects included in itself, in "another" set, or both
hashSet.UnionWith(another);

var isSupersetOf = hashSet.IsSupersetOf(another);
var isSubsetOf = hashSet.IsSubsetOf(another);
var equals = hashSet.SetEquals(another);
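Here is a quick sketch of these operations in action. Note that IntersectWith, ExceptWith and UnionWith modify the set in place rather than returning a new set:

```csharp
using System;
using System.Collections.Generic;

var hashSet = new HashSet<int> { 1, 2, 3, 4 };
var another = new HashSet<int> { 3, 4, 5 };

// Keep only the elements present in both sets: { 3, 4 }
hashSet.IntersectWith(another);
Console.WriteLine(hashSet.SetEquals(new[] { 3, 4 }));      // prints True

// Add every element of "another": { 3, 4, 5 }
hashSet.UnionWith(another);
Console.WriteLine(hashSet.SetEquals(new[] { 3, 4, 5 }));   // prints True
```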

 

Stack<T>

Stack is a collection type with Last-In-First-Out (LIFO) behaviour. We often use stacks in scenarios where we need to provide the user with a way to go back. Think of your browser: as you navigate to different web sites, the addresses you visit are pushed onto a stack. When you click the back button, the current address is popped off the stack, and the address now on top is the last one you visited. The undo feature in applications is implemented using a stack as well.

Here is how you can use a Stack in C#:

var stack = new Stack<string>();
            
// Push items in a stack
stack.Push("http://www.google.com");

// Check to see if the stack contains a given item 
var contains = stack.Contains("http://www.google.com");

// Remove and return the item on the top of the stack
var top = stack.Pop();

// Return the item on the top of the stack without removing it 
var peeked = stack.Peek();

// Get the number of items in stack 
var count = stack.Count;

// Remove all items from stack 
stack.Clear();
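Putting these operations together, the browser-history scenario described earlier can be sketched like this:

```csharp
using System;
using System.Collections.Generic;

var history = new Stack<string>();

// Each page the user navigates to is pushed onto the stack
history.Push("http://www.google.com");
history.Push("http://www.google.com/search?q=stacks");

// Clicking "back": pop the current page, then peek at the previous one
history.Pop();
Console.WriteLine(history.Peek());   // prints http://www.google.com
```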

Internally, a stack is implemented using an array. Since arrays in C# have a fixed size, as you push items onto a stack, it may need to increase its capacity by allocating a larger array and copying the existing items into it. If no re-allocation is needed, Push is an O(1) operation; otherwise, with n elements on the stack, all of them need to be copied to the new array, which makes that particular push O(n).

Pop is an O(1) operation.

Contains is a linear search operation with O(n).

 

Queue<T>

Queue represents a collection with First-In-First-Out (FIFO) behaviour. We use queues in situations where we need to process items as they arrive.

The three main operations on a queue are:

  • Enqueue: adding an element to the end of a queue
  • Dequeue: removing the element at the front of the queue
  • Peek: inspecting the element at the front without removing it.

Here is how you can use a queue:

var queue = new Queue<string>();

// Add an item to the queue
queue.Enqueue("transaction1");

// Check to see if the queue contains a given item 
var contains = queue.Contains("transaction1");

// Remove and return the item on the front of the queue
var front = queue.Dequeue();

// Return the item at the front without removing it 
var peeked = queue.Peek();
            
// Remove all items from queue 
queue.Clear();

// Get the number of items in the queue
var count = queue.Count;
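Putting these operations together, processing items in arrival order can be sketched as:

```csharp
using System;
using System.Collections.Generic;

var queue = new Queue<string>();
queue.Enqueue("transaction1");
queue.Enqueue("transaction2");

// Items come out in the same order they went in (FIFO)
var processed = new List<string>();
while (queue.Count > 0)
{
    var item = queue.Dequeue();
    processed.Add(item);
    Console.WriteLine("Processing " + item);
}
```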

 

Summary

Lists are fast when you need to access an element by index, but searching for an item in a list is slow since it requires a linear search.

Dictionaries provide fast lookups by key. Keys should be unique and cannot be null.

HashSets are useful when you need fast lookups to see if an element exists in a set or not.

Stacks provide LIFO (Last-In-First-Out) behaviour and are useful when you need to provide the user with a way to go back.

Queues provide FIFO (First-In-First-Out) behaviour and are useful for processing items in the order they arrive.

 

Love your feedback!

If you enjoyed this post, please share it and leave a comment. If you got any questions, feel free to post them here. I’ll answer every question.


Difference between string and String in C#

One of the questions that many novice C# programmers ask is: “What is the difference between string and String?”

In C#, string is an alias for the String class in the .NET Framework. In fact, every C# type has an equivalent in .NET. As another example, short and int in C# map to Int16 and Int32 in .NET.

So, technically there is no difference between string and String, but it is common practice to declare variables using the C# keywords. I’ve hardly ever seen anyone declare an integer with Int32!

The only tiny difference is that if you use the String class, you need to import the System namespace at the top of your file, whereas you don’t have to do this when using the string keyword.

Many developers prefer to declare a string variable with string but use the String class when accessing one of its static members:


String.Format()

The examples on MSDN also follow this convention.


Using SqlBulkCopy for fast inserts

Problem

You need to insert a large number of records into one or more tables in a SQL Server database. By large, I mean several hundred thousand or even millions. The source of the data can be another database, an Excel spreadsheet, CSV files, XML, literally anything. Writing one record at a time using a SqlCommand with an INSERT INTO statement is very costly and slow. You need an efficient solution to insert a large number of records quickly.

Solution

In .NET, we have a class called SqlBulkCopy to achieve this. You give it a DataTable, an array of DataRows, or an instance of a class that implements the IDataReader interface. Next, you tell it the name of the destination table in the database and let it do its magic.

I used this in one of my projects, where I had to migrate data from one of our legacy models into a new model. First, I used a plain SqlCommand to insert one record at a time (about 400,000 records in total). The process took 12 minutes to run. With SqlBulkCopy, I reduced the data migration time to 6 seconds!

How to use it?

Here is the most basic way to use SqlBulkCopy. Read the code first and then I’ll explain it line by line.

var dt = new DataTable();
dt.Columns.Add("EmployeeID");
dt.Columns.Add("Name"); 

for (var i = 1; i <= 1000000; i++)
    dt.Rows.Add(i, "Name " + i);

using (var sqlBulk = new SqlBulkCopy(_connectionString))
{
    sqlBulk.DestinationTableName = "Employees";
    sqlBulk.WriteToServer(dt);
}

In this example I’m assuming we have a table in the database called Employees with two columns: EmployeeID and Name. I’m also assuming that the EmployeeID column is marked as IDENTITY.

In the first part, we simply create a DataTable that resembles the structure of the target table. That’s the reason this DataTable has two columns. So, to keep things simple, I’m assuming that the source DataTable and the target table in the database have identical schema. Later in this post I’ll show you how to use mappings if your source DataTable has a different schema.

The second part is purely for demonstration: we populate this DataTable with a million records. In your project, you’ll get the data from somewhere and put it into a DataTable, or you might use an IDataReader for more efficient reads.

And finally, we create a SqlBulkCopy and use it to write the content of our DataTable to the Employees table in the database.

Pretty simple, right? Let’s take this to the next level.

Using the identity values from the source

In the above example, I assumed that the EmployeeID column is marked as IDENTITY, hence the values are generated by the Employees table. What if you need to use the identity values in the source? It’s pretty simple. You need to use the KeepIdentity option when instantiating your SqlBulkCopy.

using (var sqlBulk = new SqlBulkCopy(_connectionString, SqlBulkCopyOptions.KeepIdentity))

With this option, the EmployeeID in our DataTable will be used.

Transactions

You can wrap all your inserts in a transaction, so either all will succeed or all will fail. This way you won’t leave your database in an inconsistent state. To use a transaction, you need to use a different constructor of SqlBulkCopy that takes a SqlConnection, options (as above) and a SqlTransaction object.

using (var connection = new SqlConnection(_connectionString))
{
    connection.Open();

    var transaction = connection.BeginTransaction();

    using (var sqlBulk = new SqlBulkCopy(connection, SqlBulkCopyOptions.KeepIdentity, transaction))
    {
        sqlBulk.DestinationTableName = "Employees";
        sqlBulk.WriteToServer(dt);
    }

    // Commit the transaction; otherwise the inserts are rolled back
    transaction.Commit();
}

Note that here we have to explicitly create a SqlConnection object in order to create a SqlTransaction. That’s why this example is slightly more complicated than the previous ones where we simply passed a string (connection string) to SqlBulkCopy. So here we need to manually create the connection, open it, create a transaction, and then pass the connection and the transaction objects to our SqlBulkCopy.

Batch size

By default, all the records in the source are written to the target table in one batch. This means that as the number of records in the source increases, so does the memory consumed by SqlBulkCopy. If you have memory limitations, you can reduce the number of records written in each batch. This way, SqlBulkCopy writes smaller batches to the database and consumes less memory. Since this means multiple round trips to the database, it can have a negative impact on performance. So, based on your circumstances, you may need to try a few different batch sizes and find a number that works for you.

To set the batch size:

using (var sqlBulk = new SqlBulkCopy(_connectionString))
{
    sqlBulk.BatchSize = 5000;
    sqlBulk.DestinationTableName = "Employees";
    sqlBulk.WriteToServer(dt);
}

This example writes 5000 records in each batch.

Notifications

You might need to write a message in the console or a log as records are written to the database. SqlBulkCopy supports notifications. So it has an event to which you can subscribe. You set the number of records to be processed before a notification event is generated.

using (var sqlBulk = new SqlBulkCopy(_connectionString))
{
    sqlBulk.NotifyAfter = 1000;
    sqlBulk.SqlRowsCopied += (sender, eventArgs) => Console.WriteLine("Wrote " + eventArgs.RowsCopied + " records.");
    sqlBulk.DestinationTableName = "Employees";
    sqlBulk.WriteToServer(dt);
}

In this example, once every 1000 records are processed, we get notified and display a message on the Console. Note that SqlRowsCopied is the event name and here we use a lambda expression to create an anonymous method as the event handler. If you’re not familiar with lambda expressions, delegates and event handlers, check out my C# Advanced course.

Column mappings

In all the examples so far, I’ve assumed our DataTable has the exact same schema as the target table. But what if the “Name” column in the target table is called “FullName” in the source? Here is how we can create a mapping between the columns in the source and the target table:

using (var sqlBulk = new SqlBulkCopy(_connectionString))
{
    sqlBulk.ColumnMappings.Add("FullName", "Name");
    sqlBulk.DestinationTableName = "Employees";
    sqlBulk.WriteToServer(dt);
}

How about multiple tables?

So far we’ve only inserted records into one table: the Employees table. What if we wanted to populate Employees and their Timesheets? We can have two DataTables, one for Employees and one for Timesheets, and then use our SqlBulkCopy to populate one table at a time:

using (var sqlBulk = new SqlBulkCopy(_connectionString))
{
    sqlBulk.DestinationTableName = "Employees";
    sqlBulk.WriteToServer(dtEmployees);

    sqlBulk.DestinationTableName = "Timesheets";
    sqlBulk.WriteToServer(dtTimesheets);
}

 

I hope you enjoyed this post and learned something new. If you enjoy my teaching style and like to learn more from me, subscribe to my blog. Also, check out my courses for more substantial learning.

 


Do Microsoft certificates make you a better developer?

I was having a chat with one of my students on Udemy. He mentioned that he took one of the Microsoft exams but didn’t pass. He had heard from a friend that many of the questions were tricky and were not indicative of how good a programmer one is.

I couldn’t agree more! Ten years ago, it was a big deal to have Microsoft exams. It was kind of a new concept, and as a 20-something-year-old, I was so excited to get all these certificates! I passed many exams and became a Microsoft Certified Technology Specialist (MCTS), Application Developer (MCAD) and Professional (MCP). But do Microsoft certificates really make one a good programmer?

First, we need to define what a good programmer is. Is there an official way to define a “good programmer”? Scott Hanselman has a list of what makes a good .NET developer. But with all due respect to Scott, in my opinion, that list is purely his personal take. The question is very subjective: ask ten developers and each would probably come up with a different list of criteria! Many of the criteria in his list really depend on the kind of applications a programmer has been exposed to.

But what is a good programmer? In my humble opinion:

  • A good programmer spends enough time to really understand the problem he is solving. Einstein said: “If I’m given 60 minutes to solve a problem, I’ll spend 55 minutes understanding it and 5 minutes solving it.” I couldn’t agree more! In my 14-year programming career, I’ve seen several examples of programmers solving the wrong problem. Sometimes it was the problem they themselves liked to solve, not the problem the business wanted solved! I’ve seen developers debate for months whether they should use SOA, BPMS, REST, RPC, etc., while no one really knew what the actual business problem was!
  • A good programmer solves a problem with a simple elegant solution without over-engineering. The old saying: keep it simple, stupid! Leonardo Da Vinci said: “Simplicity is the ultimate sophistication.”
  • A good programmer does enough design without modelling the entire universe. It’s easy to spend several months drawing 100+ pages of UML diagrams but not producing working software!
  • A good programmer writes clean, maintainable code that tells a story. That means small (10 lines or fewer), perfectly named methods, each responsible for one thing. Small classes with high cohesion and loose coupling. And no hacks!
  • A good programmer cares about the long-term maintainability of software rather than a quick fix: sometimes we just want a quick fix, but all these quick fixes and patches in the long term lead to rotten software, the kind that needs to be re-written from scratch. Isn’t it better to spend a bit more time coming up with a clean, elegant solution than having to re-write the entire software?
  • A good programmer writes tests before writing production code: this one is subjective and you may disagree with me. Some hate TDD, some love it. When I first learned about TDD, I loved the philosophy of it. Then I tried to apply it and didn’t have much success. So I hated it. A while later, I tried again and got the hang of it. Then I fell in love with it. These days, I’m neither a TDD evangelist nor anti-TDD; I take the middle ground. TDD is great for two reasons: you write just enough code to make sure the actual problem is solved, nothing more, nothing less. Also, you end up with tests that don’t lie. If you write tests after writing the production code, it’s possible that your tests pass while the production code is not working. These tests lie! They’re not trustworthy, and they’re better deleted than written in the first place. TDD helps you write tests that tell the truth at all times! But is TDD good at all times? I personally prefer to use TDD as much as I can, but if I find situations where it is affecting my productivity and wasting my time, I’m happy to break the rules.
  • A good programmer thinks of edge cases: The code he writes doesn’t blow up in rare circumstances.

So, that is my personal definition of what makes one a good programmer. Being a good programmer shouldn’t be tied to .NET or any specific platform. Good programmers can write good code on any platform. There are a million little details in .NET that you may need to know (if ever) once in a blue moon! The same goes for all other platforms out there. And interestingly, we get more programming languages, frameworks and libraries every single day. Who can know all of them in full detail anyway?


Hierarchical Views with Backbone

One of the common questions amongst developers starting with Backbone.js is: how should I implement hierarchical (master-detail / parent-child) views with Backbone? So, in this post I’m going to present a clean solution to get you started.

If you want to jump to the final solution, here is the JSBin link:

http://jsbin.com/sozaxejoji/1/edit?js

If you prefer a step-by-step approach, it’s still best to have a look at the link above first to get an idea of what we’ll be building.

Ok now, let’s get started.

Step 1: Add the containers in the HTML

We need two separate containers in our HTML: one for the master view, the other for the child view.

<div id="container">
  <div id="masters" class="region"></div>
  <div id="detail" class="region"></div>
</div>

We use the “masters” container to render a list of master items. When the user clicks on a master, the “detail” container will be refreshed showing the details for that master.

Step 2: Add a bit of CSS

To make our containers more visible, let’s add the following CSS:

.region {
  width: 50%;
  float: left;
  border: 1px solid #ccc;
  box-sizing: border-box;
  padding: 30px;
}

Step 3: Render the list of masters

To render the list of master items, we need a model, a view for rendering each model, a collection and a view to render the collection.

First, let’s start by creating a namespace for our app.

var App = App || {};

From this point on, any model, collection or view we define will hang off this App object, rather than the global window object.
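The guard in `var App = App || {};` is what makes this safe across multiple script files: if App is already defined (say, by a script loaded earlier), it is reused; otherwise a fresh object is created. Here is a quick sketch of that behavior in plain JavaScript:

```javascript
// First script file: App doesn't exist yet, so the || guard creates a new object.
var App = App || {};
App.version = "1.0";

// A later script file re-declares App the same way. Since App already exists,
// the existing object (including App.version) is reused, not overwritten.
var App = App || {};
App.Master = function () {};
```

This way, splitting your models, collections and views across several files never wipes out what an earlier file added to App.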

Now, let’s define a model and a view for our masters:

App.Master = Backbone.Model.extend();

App.MasterView = Backbone.View.extend({
  render: function(){
    this.$el.html("<a href='#'>" + this.model.get("name") + "</a>");
    
    return this;
  }
});

Next, to render a list of masters, we need a collection and a collection view:

App.Masters = Backbone.Collection.extend({
  model: App.Master
});

App.MastersView = Backbone.View.extend({
  render: function(){
    this.collection.each(function(p){
      var masterView = new App.MasterView({ model: p });
      this.$el.append(masterView.render().$el);
    }, this);
    
    return this; 
  }
});

Nothing fancy yet. In the MastersView, we simply iterate over the collection, wrap each model in a MasterView, render that view, and append it to the collection view’s DOM element.

Finally, to wire all these together:

$(document).ready(function(){
  var masters = new App.Masters([
    new App.Master({ name: "Master 1" }),
    new App.Master({ name: "Master 2" })
  ]);
  
  var mastersView = new App.MastersView({ collection: masters });
  $("#masters").html(mastersView.render().$el); 
});

So at this point we should be able to see a list of master items in the container on the left.

Step 4: Handle the click event of masters

Now, we’re going to take our code one step further and handle the click event of master items. At this point we don’t care about showing the details in the detail container. All we want to do is to handle the click event and make sure the plumbing code is working. Remember: baby steps, always!

So, we change our MasterView to the following:

App.MasterView = Backbone.View.extend({
  events: {
    "click": "onClick"
  },
  
  onClick: function(){
     alert(this.model.get("name"));
  },

  render: function(){
    this.$el.html("<a href='#'>" + this.model.get("name") + "</a>");
    
    return this;
  }
});

Note that all I did here was add the events hash and the onClick handler. Run the application and make sure that when you click a master item, you see the alert.

All good? Ok, let’s go to the next step.

Step 5: Display the details when a master is clicked

We’re almost there! All we need to do now is display the details in the detail container, instead of showing an alert. So how can we do this? We use events. Here is the idea: the master view publishes an event to which the detail view listens. This is how various views can collaborate without being coupled to each other.

To get this to work we need an event aggregator or event bus. It’s best to define it right after declaring the App namespace.

App.eventBus = _.extend({}, Backbone.Events);

The eventBus object here mixes in Backbone.Events, which provides the base functionality for publishing and subscribing to events.
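To make the mechanics concrete, here is a minimal, hypothetical event bus sketched in plain JavaScript. This is not Backbone’s actual implementation; it only illustrates the semantics we rely on: on() registers a handler (with an optional context for “this”), and trigger() calls every registered handler with the published data.

```javascript
// A minimal event bus sketch -- illustrative only, not Backbone's implementation.
function createEventBus() {
  var handlers = {}; // maps an event name to its list of { callback, context } pairs

  return {
    // Subscribe: remember the callback and the `this` context it should run with.
    on: function (event, callback, context) {
      (handlers[event] = handlers[event] || []).push({ callback: callback, context: context });
    },
    // Publish: call every callback registered for this event, passing the data along.
    trigger: function (event, data) {
      (handlers[event] || []).forEach(function (h) {
        h.callback.call(h.context, data);
      });
    }
  };
}

var bus = createEventBus();
bus.on("master:select", function (master) { console.log(master.name); });
bus.trigger("master:select", { name: "Master 1" }); // logs "Master 1"
```

Backbone.Events gives you this (and more, such as off() for unsubscribing) out of the box, which is why a one-liner is all our app needs.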

Now, we need to change our MasterView to publish an event instead of displaying an alert:

  onClick: function(){
    App.eventBus.trigger("master:select", this.model);
  }

The first argument to the trigger method is the name of the event. You can use any name you like, but it’s a good idea to use namespaces (note the colon). In this case, “master:select” simply means a master has been selected.

The second argument to this method is the data we would like to publish. The subscriber will then receive this data with the event.

Now, who is the subscriber? The detail view, which we haven’t implemented yet. So here is our DetailView:

App.DetailView = Backbone.View.extend({
  initialize: function(){
    App.eventBus.on("master:select", this.onMasterSelected, this);
  },
  
  onMasterSelected: function(master){
    this.model = master;
    this.render();
  },
  
  render: function(){
    if (!this.model) {
      this.$el.html("Please select an item from the master list.");
    } else {
      this.$el.html(this.model.get("name"));
    }
     
    return this;
  }
});

Let’s see what is happening here. In the initialize method, we use the on() method to subscribe to the “master:select” event. The second argument to this method is the event handler (onMasterSelected in this case). The third argument is the context: when onMasterSelected is called, “this” would, by default, refer to something other than the view. We want “this” inside that method to refer to the view itself, so we pass the view (“this”) as the third argument.

The onMasterSelected method has an argument, which is the data that we published with the event. In this case, it’s going to be a master model. So we can keep a reference to this object in our view’s model.

Finally, in the render() method we have a simple conditional statement to display a default message if no master is selected, or the name of the currently selected master. Rather than just displaying the name, you may want to display a whole heap of data. Or you may want to fetch another model or collection based on that and populate the detail view. It’s up to you and the project you’re working on.

One last step: we defined the DetailView but we never rendered it! So, in our $(document).ready() handler, after rendering the masters, we need to render the DetailView as well:

  var detailView = new App.DetailView();
  $("#detail").html(detailView.render().$el);
});

And that brings us to the end of this post! You can see the completed solution at:

http://jsbin.com/sozaxejoji/1/edit?js


Tell, Don’t Ask, the Pragmatic Way

Tell, Don’t Ask is a design guideline that helps us adhere to the encapsulation principle of object-oriented programming: data and functions that operate on the data belong to the same class. Dismissing this guideline often leads to an anaemic domain model like the following:

public class Account
{
    public int Id { get; set; }
    public float Balance { get; set; }
}

And the domain logic appearing in a service class like here:

public class AccountService
{
    public void Withdraw(int accountId, float amount)
    {
        var account = _repository.GetById(accountId);

        if (account.Balance < amount)
            throw new InvalidOperationException();

        account.Balance -= amount;

        _repository.SaveChanges();
    }
}

While this is not a serious problem in this example, often, in the real-world, such service classes quickly become fat, unmaintainable and hard to unit test. You’ll also often find the same logic appearing in various methods of a service class or even across different service classes.

In this example, checking the balance of an account against the given amount and updating the balance is the responsibility of the Account class itself, not AccountService.

Tell, Don’t Ask suggests: tell your objects what to do; don’t ask them questions. With Tell, Don’t Ask, this example can be modified to something like the following:

public class Account
{
    public int Id { get; set; }
    private float _balance;

    public void Withdraw(float amount)
    {
        if (_balance < amount)
            throw new InvalidOperationException();

        _balance -= amount;
    }
}

public class AccountService
{
    public void Withdraw(int accountId, float amount)
    {
        var account = _repository.GetById(accountId);
        account.Withdraw(amount);
        _repository.SaveChanges();
    }
}

Note that I’ve made the balance a private field of the Account class (as opposed to a public property with a getter/setter). So, as you see, with Tell, Don’t Ask, our objects become more about “behaviors”, as opposed to property bags. Our service layer also becomes slimmer. It’s purely responsible for orchestration, which is the actual responsibility of a service: it fetches a domain object from the repository, invokes some operation on it, and then persists it.

While this principle looks great on the surface, there are situations where it becomes impractical. For example, if you’re using Entity Framework as your O/RM, you should probably know that Entity Framework cannot map a private field to a column in the database. Even if you’re not using Entity Framework and you manually persist or hydrate your domain objects from the database, you’ll still have difficulty working with a private field in this case.

So, here is one pragmatic way to put Tell, Don’t Ask into practice:

Having a public property with a getter is not necessarily a bad thing. Sometimes (at least for display purposes) we certainly need to get the value of a field and display it. It is the public setter that is evil. When we expose a public setter, the client of that class can set any value, and this causes two issues:

1- The client (e.g. the service class) becomes responsible for controlling the business logic.

2- If the client forgets to check the business logic, the object goes into an invalid state and will eventually get persisted. This is how we create bugs in our programs!

So, with a public property with a public getter and a private setter, we can prevent these two issues from happening, yet have the flexibility of using an O/RM. We will still adhere to Tell, Don’t Ask (partially) because we’ll tell our object what to do.

Our code will eventually look like this:

public class Account
{
    public int Id { get; set; }
    public float Balance { get; private set; }

    public void Withdraw(float amount)
    {
        if (Balance < amount)
            throw new InvalidOperationException();

        Balance -= amount;
    }
}

public class AccountService
{
    public void Withdraw(int accountId, float amount)
    {
        var account = _repository.GetById(accountId);
        account.Withdraw(amount);
        _repository.SaveChanges();
    }
}

Hello world!

Yes, a true Hello World! As much as this title sounds cliché, it’s so true about me. For years, I’ve wanted to have a blog, but something always got in the way. Finally, I’ve made the commitment to keep this space updated on a regular basis.

I’m gonna keep my posts short and pragmatic, with useful content.

Hope you enjoy this blog! 🙂
