Enhanced MonoTouch.Dialog FloatElement

For .NET developers who aren’t yet familiar with the MonoTouch framework and want to write iOS apps using their existing C#/.NET skill set, it’s well worth checking out. It’s a great platform, especially if you don’t want to deal with the hassle of learning Objective-C.

MonoTouch.Dialog is a framework of classes designed to ease the pain of writing the rather repetitive code needed to get tables and other UI paradigms working in iOS. It’s a relatively new addition to the MonoTouch API, so there are still some rough edges.

The basic UI range slider (FloatElement) provided by the framework seems to be missing some obvious items, such as the ability to get an event or indication that the value has changed.

I created my own “extended” version with a few more features that I needed for a project I was working on.

Features
  • Ability to ’Lock/Unlock’ the range slider using provided UI images.
  • Inline caption that can be tied directly to the value of the slider
  • Callback functionality when the value of the element is changing
Looking ahead – Future Enhancements

The lock/unlock feature was the precursor to an additional feature I plan on building – a FloatElementGroup. The idea here is that multiple sliders are present and are used to split a given ‘whole’ value by various percentages (think audio equalizers). The sliders would auto-adjust as needed to compensate for changes to any single slider, and the user could optionally lock certain sliders at fixed values.

 

Here’s the code. Hopefully this will come in handy to someone else looking to use a FloatElement.

Source Code
public class FloatElementEx : Element
{
    static NSString skey = new NSString("FloatElementEx");
    const float LockImageWidth = 32.0f;
    const float LockImageHeight = 32.0f;

    /// <summary>
    /// Set a string to reserve a certain amount of space for the 
    /// caption used in the FloatElement. Useful when there is no
    /// initial caption to show - allows space to be reserved for 
    /// when it will be set.
    /// </summary>
    public string ReserveCaptionPlaceholderString { get; set; }
    /// <summary>
    /// Returns the locked status
    /// </summary>
    public bool IsLocked { get { return _valueLocked; } }
    public bool ShowCaption { get; set; }
    /// <summary>
    /// Ties the displayed caption to the value of the slider
    /// </summary>
    public bool UseCaptionForValueDisplay { get; set; }
    public bool Continuous { get; set; }
    public int MinValue { get; set; }
    public int MaxValue { get; set; }
    public int Value { get; private set; }
    public UIImage LockImage { get; set; }
    public UIImage UnlockImage { get; set; }

    private UIButton _lockImageView;
    private UISlider _slider;
    private Action<int> _valueChangedCallback;
    private bool _valueLocked;
    private bool _lockable = false;


    public FloatElementEx(int value, Action<int> valueChanged = null, bool continuous = true, bool lockable = false)
        : base(null)
    {
        MinValue = 0;
        MaxValue = 100;
        Value = value;
        Continuous = continuous;
        _lockable = lockable;
        _valueChangedCallback = valueChanged;
    }

    protected override NSString CellKey { get { return skey; } }

    public override UITableViewCell GetCell(UITableView tv)
    {
        var cell = tv.DequeueReusableCell(CellKey);
        if (cell == null) {
            cell = new UITableViewCell(UITableViewCellStyle.Default, CellKey);
            cell.SelectionStyle = UITableViewCellSelectionStyle.None;
        }
        else
            RemoveTag(cell, 1);

        SizeF captionSize = new SizeF(0, 0);
        if (ShowCaption && (Caption != null || ReserveCaptionPlaceholderString != null || UseCaptionForValueDisplay)) {
            if (Caption == null) {
                if (UseCaptionForValueDisplay)
                    captionSize = cell.TextLabel.StringSize(MaxValue.ToString(), 
                        UIFont.FromName(cell.TextLabel.Font.Name, UIFont.LabelFontSize));
                else if (!string.IsNullOrEmpty(ReserveCaptionPlaceholderString))
                    captionSize = cell.TextLabel.StringSize(ReserveCaptionPlaceholderString, 
                        UIFont.FromName(cell.TextLabel.Font.Name, UIFont.LabelFontSize));
            }
            else {
                captionSize = cell.TextLabel.StringSize(Caption, UIFont.FromName(cell.TextLabel.Font.Name, UIFont.LabelFontSize));
            }

            captionSize.Width += 10; // Spacing

            if (Caption != null)
                cell.TextLabel.Text = Caption;
        }

        var lockImageWidth = _lockable ? LockImageWidth : 0;

        if (_slider == null) {
            _slider = new UISlider(new RectangleF(10f + captionSize.Width, 12f, 280f - captionSize.Width - lockImageWidth, 7f)) {
                BackgroundColor = UIColor.Clear,
                MinValue = this.MinValue,
                MaxValue = this.MaxValue,
                Continuous = this.Continuous,
                Value = this.Value,
                Tag = 1
            };
            _slider.ValueChanged += delegate {
                Value = (int)_slider.Value;
                if (UseCaptionForValueDisplay) {
                    Caption = Value.ToString();
                    // force repaint/redraw
                    if (GetContainerTableView() != null) {
                        var root = GetImmediateRootElement();
                        root.Reload(this, UITableViewRowAnimation.None);
                    }
                }
                if (_valueChangedCallback != null)
                    _valueChangedCallback(Value);
            };
        }
        else {
            _slider.Value = Value;
        }

        if (_lockable) {
            if (_lockImageView == null) {
                _lockImageView = new UIButton(new RectangleF(_slider.Frame.X + _slider.Frame.Width, 2f, lockImageWidth, LockImageHeight));
                // subscribe only once - GetCell can be called repeatedly for the same element
                _lockImageView.TouchUpInside += (object sender, EventArgs e) => {
                    _valueLocked = !_valueLocked;
                    _lockImageView.SetBackgroundImage(_valueLocked ? LockImage : UnlockImage, UIControlState.Normal);
                    // enable the slider when unlocked, disable it when locked
                    _slider.Enabled = !_valueLocked;
                };
            }

            _lockImageView.SetBackgroundImage(_valueLocked ? LockImage : UnlockImage, UIControlState.Normal);
            cell.ContentView.AddSubview(_lockImageView);
        }
        cell.ContentView.AddSubview(_slider);
        return cell;
    }

    public override string Summary()
    {
        return Value.ToString();
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            if (_slider != null)
            {
                _slider.Dispose();
                _slider = null;
            }
            if (_lockImageView != null)
            {
                _lockImageView.Dispose();
                _lockImageView = null;
            }
        }
        base.Dispose(disposing);
    }

    public void SetValue(int f)
    {
        if (IsLocked)
            return;
        Value = f;
        if (_slider != null)
            _slider.SetValue(f, false);
        if (UseCaptionForValueDisplay)
            SetCaption(f.ToString());
    }

    public void SetCaption(string caption)
    {
        Caption = caption;
        // force repaint/redraw
        if (GetContainerTableView() != null) {
            var root = GetImmediateRootElement();
            root.Reload(this, UITableViewRowAnimation.None);
        }
    }
}

Usage
// Basic slider (value 50, default range [0..100]) with callback; not lockable; caption displays the slider value
var elem = new FloatElementEx(50, lockable: false, valueChanged: (val) => DoSomething())
{
    ShowCaption = true,
    UseCaptionForValueDisplay = true
};

// Basic slider (value 50, default range [0..100]) with callback; lockable (tap the image); caption displays the slider value
var elem = new FloatElementEx(50, lockable: true, valueChanged: (val) => DoSomething())
{
    ShowCaption = true,
    UseCaptionForValueDisplay = true,
    LockImage = UIImage.FromBundle("images/lock.png"),
    UnlockImage = UIImage.FromBundle("images/unlock.png")
};

// Basic slider (value 50, default range [0..100]) with callback; not lockable; caption set explicitly, space reserved
FloatElementEx elem = null;
elem = new FloatElementEx(50, lockable: false, valueChanged: (val) => elem.SetCaption(DoSomeCalc(val)))
{
    ShowCaption = true,
    UseCaptionForValueDisplay = false,
    ReserveCaptionPlaceholderString = "XXX" // space is reserved based on this string
};

// Sets the value of the slider and kicks off needed repaints internally (ie. if caption is tied to value, etc)
elem.SetValue(12);

// Sets the slider's caption. Use SetCaption() rather than the Caption property; the former refreshes the element, the latter doesn't.
elem.SetCaption("Hi!");

C# Version of DocRank

Recently I’ve been exploring Machine Learning concepts (clustering, link analysis, etc.) and discovered a great resource – Algorithms of the Intelligent Web.

The book gives a good introduction and treatment of applied Machine Learning, especially for someone new to the topic. The book is particularly good in my opinion since it talks about machine learning concepts with an emphasis on how they can be applied to various areas – improving search, providing recommendations, fraud detection, etc. The only drawback to this book as a .NET developer is that the code samples in the book and the source code are all in Java.

Not a big deal, since Java and C# are very close as far as syntax and API structure. I performed a quick and dirty translation of the code from Java to C# if anyone is interested. I haven’t yet made a pass through the code to clean it up or make it more efficient – perhaps in the near future I’ll release an updated version of the source. The C# source for the converted Java code can be downloaded here.

Caching in a Service Oriented Architecture (SOA) – Part 2

This is the second part of an article on designing a cache system for a service oriented architecture. The first part of the article dealt with the design considerations and potential approaches. This part looks at an implementation of a cache system.

 

System Overview

Before designing any system, it’s a good idea to fully understand what needs the system should satisfy. For the specific project I was working on, some of the requirements that drove the design decisions were:

  • Low latency for returning data

Access to the data, no matter where it was located, had to be fast. The system could not tolerate large amounts of latency in retrieving data, and every item type put in the cache had to perform under representative usage patterns.

  • Remove load on the database

The environment we were working in was a typical SQL-based RDBMS. We had an existing codebase and little ability to leverage some of the newer technologies such as NoSQL stores – many of which support features such as sharding – to help scale out the data layer.

  • Consuming layers should not need to know about the implementation

We decided that users of the system – in our case, business logic developers – should not know or care whether something resides in the cache. In fact, the developers of this layer shouldn’t even need to know a cache exists. We were striving to put in place a set of design patterns that would completely abstract away where data was located.

  • Must work in a multi-tenant application layer (i.e. shared web servers) and in a web-farm

Initially our approach was to leverage the .NET runtime cache on the application server, but we quickly abandoned this once the requirement for web-farms and multi-tenant application servers was added. A local cache on the server meant that a user’s request adding something to the cache on server ‘A’, followed by a subsequent request from that user served by server ‘B’, would generate cache misses.

Two possible solutions remained – move to a distributed off-server cache, or synchronize changes to local caches across the set of servers serving requests. Of these two approaches we chose the less complex one, which was to move to a distributed cache server.

Cache Selection

With a web-server-resident cache out of the running due to the need to support web-farms, for our specific implementation we chose the Windows Server AppFabric distributed cache.

We chose it because it had some nice features we would like to use in the future – such as locking and versioning of cache items. Also, being a Microsoft product helped simplify our setup, deployment and supportability concerns. There are a lot of good alternatives in this space, notably memcached, which we had considered and may look at leveraging in the future.

Another driver of our decision was the availability of features such as notification-based expiration for local caches, and the fact that the cache can be configured to use a portion of the web servers’ available memory, so a dedicated cache server is not needed for smaller deployments.

 

Implementation Details

Factory pattern and Interfaces over the Cache Layer

One of the early choices made in the design of the system was to be as defensive as possible. For that reason, we chose to implement a factory pattern around cache creation. We felt that the factory pattern, paired with interface-based programming, would provide a nice layer of abstraction over the cache details for our development team.

This will allow us to swap out providers without needing to change all of the touch points in the business logic code, since we aren’t coupled directly to the cache implementation.

Some thought needs to be given to designing the proper cache interface for your system. You probably do not want to directly mirror the operations available on any given cache implementation. Remember, the goal is to abstract away the details, so think about how you will actually be using the cache and what additional requirements you have around it. In our case we wanted to store some ‘metadata’ with each cache item (explained below), so we made sure to build that into our abstraction layer interfaces.
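
To make that concrete, here is a rough sketch of the kind of abstraction we put in front of the cache. The names and members are illustrative rather than our actual API, and the in-memory provider is only a stand-in for the real distributed-cache-backed implementation.

using System;
using System.Collections.Generic;

// Metadata stored alongside each cached value (see the metadata section below).
public class CacheItemMetadata
{
    public DateTime CreatedUtc { get; set; }
    public string[] RelatedKeys { get; set; }
}

// The interface the business layer codes against; it never sees the cache product's types.
public interface ICacheProvider
{
    void Put(string region, string key, object value, CacheItemMetadata metadata);
    bool TryGet<T>(string region, string key, out T value);
    void Remove(string region, string key);
}

// Stand-in implementation; the real one wraps the distributed cache client.
public class InMemoryCacheProvider : ICacheProvider
{
    private readonly Dictionary<string, object> _items = new Dictionary<string, object>();

    public void Put(string region, string key, object value, CacheItemMetadata metadata)
    {
        // the stand-in ignores the metadata
        _items[region + "/" + key] = value;
    }

    public bool TryGet<T>(string region, string key, out T value)
    {
        object stored;
        if (_items.TryGetValue(region + "/" + key, out stored) && stored is T)
        {
            value = (T)stored;
            return true;
        }
        value = default(T);
        return false;
    }

    public void Remove(string region, string key)
    {
        _items.Remove(region + "/" + key);
    }
}

// Factory so business code never news up a concrete provider directly.
public static class CacheProviderFactory
{
    public static ICacheProvider Create()
    {
        return new InMemoryCacheProvider();
    }
}

Swapping the cache product later then only means writing another ICacheProvider implementation and changing the factory.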

Interacting with the cache

Figuring out how the business logic code you write will interact with the caching layer is likely the most important piece of the design. The decisions here will drive much of the overall design of the system and will either allow for some flexibility or conversely limit your ability to perform certain operations.

There are a few techniques you can choose, and I encourage you to explore them before settling on a choice.

We chose the Cache Aside pattern. This is where the code that is about to request data checks for the availability of the data in the cache. If the cache has the data, the data is returned. If the cache does not have the data, the data is loaded from the data store, added to the cache and then returned.
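
A bare-bones sketch of the pattern, written against the provider interface sketched above (the region and key arguments are illustrative):

using System;

public class CacheAsideRepository
{
    private readonly ICacheProvider _cache;

    public CacheAsideRepository(ICacheProvider cache)
    {
        _cache = cache;
    }

    public T GetOrLoad<T>(string region, string key, Func<T> loadFromStore)
    {
        T value;
        if (_cache.TryGet(region, key, out value))
            return value;                                   // cache hit

        value = loadFromStore();                            // miss: go to the data store
        _cache.Put(region, key, value,
            new CacheItemMetadata { CreatedUtc = DateTime.UtcNow });
        return value;
    }
}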

Our system had a Domain Model based design in place, which allowed us to fit the cache-aside pattern nicely into the foundational mechanisms each domain object was built from. This let us keep the details of looking items up in the cache and updating items compartmentalized to a few key classes that most developers never directly interacted with.

For handling staleness of data, we were in luck due to the single path we had to get at all data, which was through our domain objects. Because of this we were able to tap into object updates and determine in the base implementation whether an object had data (direct or related) in the cache and invalidate or remove it if needed. This was a big win, allowing us to keep the complexity of dealing with the cache contained to a few classes and out of the minds of most of the business logic developers.

Store Cache Metadata

For each item in the cache we set up a custom system of storing a set of metadata alongside the cached value. This metadata contains key system information such as related domain objects, creation times, and other keys to aid in cross-item or aggregate lookups in the cache.

Regions per customer

Our system can be multi-tenant, so we needed a way to distinguish the data of one customer from another. This can be accomplished a few ways in AppFabric:

  • Part of the cache key
  • Separate region per tenant
  • Separate cache per tenant

We decided against identifying the tenant in the cache key, as this would not allow us to administer a tenant without affecting others. The reason we chose a separate region per tenant over a separate cache per tenant was the overhead of maintaining each cache, which we viewed as much heavier weight than a region. It is also possible to administer regions programmatically, which is not possible at the cache level.

The only downside of storing a tenant per region is that, currently, AppFabric will not distribute a region across multiple servers, which it does do with a separate cache. Right now this serves our needs, but we may re-evaluate it going forward.

Administrative cache

We envision a need for our operations staff to be able to administer the cache and troubleshoot issues. To that end we built a mechanism into the cache access layer that gave us visibility into what regions were active and created at any time. Ironically, Microsoft did not include a programmatic way to retrieve a list of named regions from the cache.

We came up with a simple approach: an administrative cache area, separate from the main cache. Each time a region is added or removed, we maintain a key in the administrative cache with that region’s information. This allows us to provide a real-time view of which regions are active and to query those regions for the contained cache items if needed.
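
As a rough sketch (the cache names and the "region:" key convention here are our own, not anything AppFabric mandates), the bookkeeping looks something like this:

using System;
using Microsoft.ApplicationServer.Caching;

public class TenantRegionManager
{
    private readonly DataCache _dataCache;
    private readonly DataCache _adminCache;

    public TenantRegionManager(DataCacheFactory factory)
    {
        // cache names are illustrative; they map to named caches set up ahead of time
        _dataCache = factory.GetCache("MainCache");
        _adminCache = factory.GetCache("AdminCache");
    }

    public void AddTenantRegion(string tenantId)
    {
        string region = "tenant-" + tenantId;
        _dataCache.CreateRegion(region);
        // record the region in the admin cache so operations can enumerate active regions
        _adminCache.Put("region:" + region, DateTime.UtcNow);
    }

    public void RemoveTenantRegion(string tenantId)
    {
        string region = "tenant-" + tenantId;
        _dataCache.RemoveRegion(region);
        _adminCache.Remove("region:" + region);
    }
}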

Lessons Learned

Lack of programmatic administrative capabilities stinks

I’m not sure what the reason behind some of the gaps in the AppFabric API is, but I must say that the lack of a full-featured administrative API available from code stinks. Most of the administrative functions are only available via PowerShell. I can understand that these are likely targeted at the maintainers of the system, but honestly, how is someone supposed to create an application that can administer the cache without needing PowerShell?

Local Cache with Notification based expiration – Not good enough

One of the features we were hoping to leverage with AppFabric was the local cache option. This allows a local cache to be maintained on the client accessing the cache (the web server, in our case). If the main cache gets updated, the local cache gets invalidated. This seemed like a great way to boost performance while still maintaining our support for web-farm operation.

Unfortunately the design of this feature doesn’t work quite the way we expected. The local cache option requires polling of the main cache to find out whether its items are invalid. This would introduce too much latency (or too much network traffic if the polling interval was decreased), so we had to nix it. Too bad – if only the implementers had used some type of event-driven notification so the latency was lower, this would have been great.

Your assumptions will be wrong – Test, Test, Test.

There is absolutely no way to know ahead of time whether a particular strategy will work without empirical testing.

Put some data in the database that represents your usage and test the scenarios – going direct to the database and getting data from the cache. Make sure that your data access patterns return timings that favor using the cache; otherwise, don’t use it.

This was especially true in the system I worked on, where we were storing a complete set of an entity type in the cache, but clients only ever needed a subset of the data. We had assumed that this would be a poor candidate for caching in its entirety and were considering caching the subset variations. The tests we ran refuted this assumption and showed that de-serializing the entire list of entities and then pruning it in memory with LINQ was actually fairly performant, and avoided the burden of having to manage subset pieces of the entity type in the cache.

 

References

Domain Models

http://msdn.microsoft.com/en-us/magazine/ee236415.aspx  (Employing the Domain Model Pattern)

http://martinfowler.com/eaaCatalog/domainModel.html  (P of EAA: Domain Model)

Windows Server AppFabric

http://msdn.microsoft.com/en-us/windowsserver/ee695849

Caching Patterns

http://www.alachisoft.com/resources/articles/domain-objects-caching-pattern.html  (Distributed Caching and Domain Objects Caching Pattern for .NET)

http://www.ibm.com/developerworks/webservices/library/ws-soa-cachemed/  (Cache mediation pattern specification: an overview)

http://ljs.academicdirect.org/A08/61_76.htm  (Caching Patterns and Implementation)

Error returning a DataTable from a WCF service call

I’m not debating the merits of whether it’s appropriate or sensible to return a DataTable as a response from a web service, but if you receive an error like I did, make sure to check the following:

1. The DataTable needs to be named.

    Before (causes an error):

public DataTable ExecuteDataTable()
{
    return new DataTable();
}

    After (no error):

public DataTable ExecuteDataTable()
{
    return new DataTable("Test");
}

 

2. Ensure that the message size limits configured for WCF are large enough to accommodate a serialized DataTable with the data content you have in it. Serializing a DataTable to XML results in some fairly large XML documents that can easily surpass the size limits in the default WCF configuration.
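
If the binding is being created in code, raising those limits looks roughly like the snippet below (the values are placeholders, not recommendations; the same settings are available on the binding configuration element):

// requires System.ServiceModel; mirror the same limits on the service side
var binding = new BasicHttpBinding
{
    MaxReceivedMessageSize = 10 * 1024 * 1024,   // ~10 MB
    MaxBufferSize = 10 * 1024 * 1024
};
binding.ReaderQuotas.MaxStringContentLength = 10 * 1024 * 1024;
binding.ReaderQuotas.MaxArrayLength = 10 * 1024 * 1024;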

Binding DataGrid columns to DataContext items

The WPF DataGrid is a fantastic component. It’s flexible and can serve in a lot of different scenarios. Recently I was using the DataGrid in a project and needed to dynamically hide certain columns based on a user’s selection in a configuration section.

The idea is that the user would check a box to hide the corresponding column in the DataGrid, and the column would no longer appear.

[screenshots: the configuration checkboxes and the DataGrid with the corresponding column hidden]

I implemented the XAML as below, binding the column’s Visibility to the checkbox’s IsChecked value with the appropriate converter. I was surprised to find that this didn’t work at all. Toggling the checkbox did nothing.

[screenshot: XAML binding the column’s Visibility to the CheckBox’s IsChecked via a converter]

Looking in the IDE’s Output window I noticed the following binding error message:

System.Windows.Data Error: 2 : Cannot find governing FrameworkElement or FrameworkContentElement for target element. BindingExpression:Path=IsChecked; DataItem=null; target element is ‘DataGridTextColumn’ (HashCode=19699911); target property is ‘Visibility’ (type ‘Visibility’)

After some digging around, I found that the DataGrid columns aren’t actually in the Visual Tree of the window as you can see from the screenshot below.

[screenshot: the Visual Tree, showing that the DataGrid columns are not present]

This is the source of our error: the DataGridTextColumn we are trying to bind things to isn’t participating in the Visual Tree. To work around this we can handle the DataGrid’s DataContextChanged event and “forward” the DataContext to the individual columns.

This allows the individual columns to be bound to items within the DataContext. Below is a quick and dirty implementation of how to perform this forwarding.

A more elegant way to do this would be to override the DataContextChanged handling on all DataGrids using a metadata override.

Here’s the code-behind to handle the DataContextChanged event and forward the context to the columns. Notice that I’ve also implemented a property with change notification to be used by the checkbox to store its IsChecked value so that it can be retrieved via the DataContext.

[screenshot: code-behind handling DataContextChanged and the bindable IsChecked property]
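
A minimal sketch of what that code-behind might look like follows; the property, column, and handler names here are illustrative rather than taken from the original project:

using System.ComponentModel;
using System.Windows;
using System.Windows.Controls;

public partial class MainWindow : Window, INotifyPropertyChanged
{
    private bool _showFirstColumn = true;

    public MainWindow()
    {
        InitializeComponent();
        DataContext = this;
    }

    // Property with change notification; the CheckBox's IsChecked binds to this.
    public bool ShowFirstColumn
    {
        get { return _showFirstColumn; }
        set
        {
            _showFirstColumn = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("ShowFirstColumn"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    // Wired up in XAML via DataContextChanged="Grid_DataContextChanged" on the DataGrid.
    // Forwards the DataGrid's DataContext to each column so column bindings can resolve it.
    private void Grid_DataContextChanged(object sender, DependencyPropertyChangedEventArgs e)
    {
        var grid = (DataGrid)sender;
        foreach (DataGridColumn column in grid.Columns)
            column.SetValue(FrameworkElement.DataContextProperty, e.NewValue);
    }
}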

And here’s the modified XAML showing the Checkbox now bound to the new property and the DataGrid’s first column bound to the same value using the DataContext.

[screenshot: modified XAML with the CheckBox and the first DataGrid column bound to the same property]

Because the DataGridTextColumn is not part of the visual tree I don’t believe there is any way to perform direct element bindings.

If someone figures this out, drop me a note.

WPF – Overlapped Image Control

I was recently trying to find a way to display some data that gave the user a relatively easy way to gauge the number of items in a dataset while still being functional enough to interact with those data items.

An example of this need might be a user administration module for an application where a user can have several states:

  • Suspended Logins
  • Awaiting approval
  • Requiring password resets

So I thought an interesting and appealing way to display this information would be in a graph format where each category was a point on the axis and the data points (users) would be represented graphically in an overlapped manner (think of a stack of dimes).

My requirements were simply to allow items to overlap by a certain percentage of their width or height.

Here’s what I ended up with.

[screenshot: the overlapped image panel in action]

My first attempt at creating the overlapped effect was to use a translation render transform. This didn’t work because the translation amount needed to vary per item rather than being a fixed offset.

So I decided to create my own custom panel to handle the placement of items.

I chose the StackPanel as a basis for my custom panel. Breaking it down there were 3 steps involved in creating the custom panel:

  • Add a dependency property to allow setting the percentage of overlap
  • Define the MeasureOverride() logic needed
  • Define the ArrangeOverride() logic needed

 

Add the Dependency Property

I started by creating the dependency property to allow the percentage of overlap to be specified by a caller.

/// <summary>
/// Percentage of overlap
/// </summary>
public int OverlapPercentage
{
   get { return (int)GetValue(OverlapPercentageProperty); }
   set { SetValue(OverlapPercentageProperty, value); }
}

public static readonly DependencyProperty OverlapPercentageProperty =
   DependencyProperty.Register("OverlapPercentage", typeof(int), typeof(OverlapImageStackPanel), 
   new FrameworkPropertyMetadata(0, FrameworkPropertyMetadataOptions.AffectsMeasure | FrameworkPropertyMetadataOptions.AffectsArrange));
 

Define MeasureOverride Logic

The remaining steps are to implement the MeasureOverride() and ArrangeOverride() overrides. For the measure pass we sum up the widths or heights (depending on the orientation), taking into account the percentage of overlap requested.

protected override Size MeasureOverride(Size availableSize)
{
    double overlapPercentageDouble = (double)OverlapPercentage / 100.0;
    double unoverlapPercentageDouble = 1.0 - overlapPercentageDouble;

    double totalWidths = 0;
    double totalHeights = 0;

    // get total widths of each child                       
    foreach (UIElement child in Children)
    {
        child.Measure(availableSize);
        if (child.Visibility != Visibility.Collapsed)
        {
            if (Orientation == Orientation.Horizontal)
            {
                // each item advances the layout by the un-overlapped portion of its width
                totalWidths += child.DesiredSize.Width * unoverlapPercentageDouble;
                totalHeights = (child.DesiredSize.Height > totalHeights) ? child.DesiredSize.Height : totalHeights;
            }
            else
            {
                totalHeights += child.DesiredSize.Height * unoverlapPercentageDouble;
                totalWidths = (child.DesiredSize.Width > totalWidths) ? child.DesiredSize.Width : totalWidths;
            }
        }
    }

    // the last child extends past its offset by the overlapped portion of its size
    if (Children.Count > 0)
    {
        UIElement lastChild = Children[Children.Count - 1];
        if (Orientation == Orientation.Horizontal)
            totalWidths += lastChild.DesiredSize.Width * overlapPercentageDouble;
        else
            totalHeights += lastChild.DesiredSize.Height * overlapPercentageDouble;
    }


    Size resultSize = new Size();
    resultSize.Width = double.IsPositiveInfinity(availableSize.Width) ? totalWidths : availableSize.Width;
    resultSize.Height = double.IsPositiveInfinity(availableSize.Height) ? totalHeights : availableSize.Height;
                
    return resultSize;
}

Define ArrangeOverride Logic 

For the Arrange layout pass we layout the children taking into account the percentage of overlap for each child item as set on the control.

protected override Size ArrangeOverride(Size finalSize)
{
    if (Children.Count == 0)
        return finalSize;

    double availableWidth = finalSize.Width;
    double availableHeight = finalSize.Height;
    double totalWidths = 0;
    double totalHeights = 0;

    double overlapPercentageDouble = (double)OverlapPercentage / 100.0;
    double unoverlapPercentageDouble = 1.0 - overlapPercentageDouble;

    // get total widths of each child                        
    foreach (UIElement child in Children)
    {
        if (child.Visibility != Visibility.Collapsed)
        {
            if (Orientation == Orientation.Horizontal)
            {
                totalWidths += child.DesiredSize.Width * unoverlapPercentageDouble;
                totalHeights = finalSize.Height;
            }
            else
            {
                totalHeights += child.DesiredSize.Height * unoverlapPercentageDouble;
                totalWidths = finalSize.Width;
            }
        }
    }

    if ((Orientation == Orientation.Horizontal && (availableWidth < 0 || totalWidths <= 0)) ||
        (Orientation == Orientation.Vertical && (availableHeight < 0 || totalHeights <= 0)))
    {
        return finalSize;
    }

    // account for the overlapped portion of the last child
    if (Orientation == Orientation.Horizontal)
        totalWidths += Children[Children.Count - 1].DesiredSize.Width * overlapPercentageDouble;
    else
        totalHeights += Children[Children.Count - 1].DesiredSize.Height * overlapPercentageDouble;

    double left = 0;
    double top = 0;
    foreach (UIElement child in Children)
    {
        child.Arrange(new Rect(new Point(left, top), new Size(child.DesiredSize.Width, child.DesiredSize.Height)));
        if (Orientation == Orientation.Horizontal)
        {
            left += child.DesiredSize.Width * unoverlapPercentageDouble;
        }
        else
        {
            top += child.DesiredSize.Height * unoverlapPercentageDouble;
        }
    }

    return new Size(totalWidths, totalHeights);
}
That sums up what’s needed to implement a fairly handy overlapped StackPanel. Similar logic can be used for a WrapPanel as well.
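
For completeness, here’s a rough sketch of using the panel from code (it can just as easily be declared in XAML, for example as an ItemsPanel). This assumes the panel derives from StackPanel, so that Orientation is available, and the image paths are placeholders:

// Hypothetical usage; image paths and sizes are illustrative only.
var panel = new OverlapImageStackPanel
{
    Orientation = Orientation.Horizontal,
    OverlapPercentage = 60   // each item overlaps its neighbour by 60% of its width
};

foreach (string path in new[] { "user1.png", "user2.png", "user3.png" })
{
    panel.Children.Add(new Image
    {
        Source = new BitmapImage(new Uri(path, UriKind.RelativeOrAbsolute)),
        Width = 32,
        Height = 32
    });
}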

Constructing a DAL in C#

I’ve been asked many times why someone might write their own DAL given that there exist so many frameworks that can provide this functionality already.

And the truth is – you may or may not need to.

There are a lot of available DALs and frameworks out there that provide many of the common features of database platform and type abstraction, but you (as I did) may have certain needs that aren’t fulfilled by these frameworks.

Examples of some custom tasks that aren’t readily covered by some frameworks might be:

  • Parsing and insertion of custom data in SQL statements
  • Syntax abstraction in SQL statements
  • Auto-population of application specific data (e.g. DAL knowledge of some schema or application mechanisms)

While any of the above, or other, tasks can likely be accomplished with most frameworks through some level of effort, the number one reason in my mind to write your own DAL at some point is to gain a deeper understanding of what is happening beneath the covers when using a DAL – whether a framework or an existing one on a project you are working on.

Gaining a thorough understanding of the tasks the DAL abstracts and performs on your behalf helps developers use a DAL effectively, and increases their own knowledge in this area.

And that is what this post is about – the creation of a simple DAL as a learning tool. I’ll talk about some of the work we’ve put into a DAL for a project I’ve worked on and some of the useful features you may want to consider to make your life a bit easier should you choose to write your own DAL.

I thought that some sample code for connecting to a SQL Server and an Oracle database from the same client would serve as a nice reference point to start our discussion.

Here are the current hoops we have to jump through to read some simple data and get a DataTable back.

string sqlConnectionString = "Data Source=localhost;Initial Catalog=DALDatabase;Integrated Security=SSPI";
string oraConnectionString = "Data Source=localhost;User Id=DALDatabase;Password=password;Integrated Security=no;";

string sqlCommandText =
    "SELECT Department.Name " +
    "FROM Employee " +
    " INNER JOIN EmployeeToDepartment ON Employee.ID = EmployeeToDepartment.EmployeeID " +
    " INNER JOIN Department ON EmployeeToDepartment.DepartmentID = Department.ID " +
    "WHERE " +
    "Employee.LastName = @LastName";

string oraCommandText =
    "SELECT * " +
    "FROM Employee " +
    " INNER JOIN EmployeeToDepartment ON Employee.ID = EmployeeToDepartment.EmployeeID " +
    " INNER JOIN Department ON EmployeeToDepartment.DepartmentID = Department.ID " +
    "WHERE " +
    "Employee.LastName = :LastName";

//SQL
DataSet ds = new DataSet();
SqlConnection sqlConn = new SqlConnection(sqlConnectionString);
SqlCommand sqlCmd = new SqlCommand(sqlCommandText, sqlConn);
SqlParameter param = new SqlParameter("LastName", SqlDbType.NVarChar, 32);
param.Value = "Adams";
sqlCmd.Parameters.Add(param);
SqlDataAdapter sqlDA = new SqlDataAdapter(sqlCmd);
sqlDA.Fill(ds);
DataTable dt = ds.Tables[0];

//Oracle

...
    One of the first steps to tackle is to design a way to abstractly create the different types of objects we will need, such as a Connection, Command, etc.
    So let’s get started…

     

    For our purposes we will create a set of database specific providers to manage the creation of objects specific to that database. We will also abstract the creation of these database specific providers through the use of a factory pattern to create the concrete implementations of our database specific providers.

     

    Abstracting the Database Specific Providers

    We’ll start by defining an abstract class to represent the set of services that each database provider will need to implement

    internal abstract class BaseDBProvider
    {
        // Connections
        internal abstract IDbConnection CreateConnection(
            string connectionString);
    
        // Commands
        internal abstract IDbCommand CreateCommand(IDbConnection conn);
        internal abstract IDbCommand CreateCommand(IDbConnection conn,
            string commandText);
    
        // Parameters
        internal abstract IDbDataParameter CreateParameter(
            string name, object value);
        internal abstract IDbDataParameter CreateParameter(
            string name, DALDbType dataType, object value);
        internal abstract IDbDataParameter CreateParameter
            (string name, DALDbType dataType, int size, object value);
    
        internal abstract IDbDataAdapter CreateDataAdapter();
        internal abstract IDbDataAdapter CreateDataAdapter(IDbCommand cmd);
    
        ...
    }

     

    Our base class is fairly simple at this point, defining a way to create a connection, which accepts a provider specific connection string and returns an IDbConnection interface. The IDbConnection interface is defined within the ADO.NET framework, and all ADO.NET based database connection objects must implement it.

    By returning the interface we can separate the implementation details from the set of common services that each connection object provides.

    Similarly, we have a set of methods that allow the construction of command objects, which accept an IDbConnection and optionally the SQL command text to execute. Both these methods return an IDbCommand interface, again defined by Microsoft, which all database specific ADO.NET commands must implement. This again separates the caller from the provider specific objects.

    The next step is to provide concrete implementations of the BaseDBProvider class for each database platform we want to support. Below is an example for SQL Server.

    internal class SQLDbProvider : BaseDBProvider
    {
        internal SQLDbProvider() { }
    
        internal override IDbConnection CreateConnection(string connectionString)
        {
            return new SqlConnection(connectionString);
        }
    
        internal override IDbCommand CreateCommand(IDbConnection conn,
            string commandText)
        {
            if (!(conn is SqlConnection))
                throw new ArgumentException("SqlConnection required", "conn");
    
            IDbCommand cmd = conn.CreateCommand();
            cmd.CommandText = commandText;
            return cmd;
        }
    
        internal override IDbDataAdapter CreateDataAdapter(IDbCommand cmd)
        {
            return new SqlDataAdapter(cmd as SqlCommand);
        }
    
        internal override IDbDataParameter CreateParameter(string name, DALDbType dataType, int size, object value)
        {
            SqlParameter p = new SqlParameter(name, GetDataTypeFor(dataType), size);
            p.Value = value;
            return p;
        }
    
        ...
    
        private SqlDbType GetDataTypeFor(DALDbType dbType)
        {
            SqlDbType ret = SqlDbType.NVarChar;
            switch (dbType)
            {
                case DALDbType.DateTime:
                    ret = SqlDbType.DateTime;
                    break;
                case DALDbType.Int:
                    ret = SqlDbType.Int;
                    break;
                case DALDbType.NText:
                    ret = SqlDbType.NText;
                    break;
                case DALDbType.NVarChar:
                    ret = SqlDbType.NVarChar;
                    break;
                default:
                    throw new NotSupportedException("Data type not supported");
            }
            return ret;
        }
    }

     

    One thing of interest to note is the way the SqlCommand gets created. We use the CreateCommand() method of the IDbConnection interface, which at this point is implemented by a SqlConnection.

    The same would be done for each database platform you intend to support.

    internal class OracleDbProvider : BaseDBProvider
    {
        ...
    }


    We also defined a set of database neutral types that can be used across providers and mapped to whatever database specific types we desire. For instance we can have a DALDbType.Bit map to a SQL Server ‘Bit’ or ‘TinyInt’ or ‘Char’ if we wanted. This opens up additional flexibility in mapping logical types to the physical types for each database platform.

    public enum DALDbType
    {
        DateTime,
        Int,
        NVarChar,
        NText,
    
        ...
    }

     

    Implementing the Provider Factory

    At this point we have encapsulated all the database specific code in each provider class, but we haven’t solved how to avoid instantiating these providers directly from our client code.

    To do this we will use the concept of a class factory. This is a pattern that allows a caller to create concrete implementations of objects by providing some identification of what type of object to create without the need for referencing the type directly.

    Here’s the implementation of the class factory that will create and return concrete providers for our specified database.

    public enum SupportedDatabases { SQLServer, Oracle }
    
    internal static class DbProviderFactory
    {
        public static BaseDBProvider GetDbProvider(SupportedDatabases databaseType)
        {
            BaseDBProvider provider = null;
            switch (databaseType)
            {
                case SupportedDatabases.SQLServer:
                    provider = new SQLDbProvider();
                    break;
    
                case SupportedDatabases.Oracle:
                    provider = new OracleDbProvider();
                    break;
    
                default:
                    break;
            }
            return provider;
        }
    }

     

    We define an enumeration that contains identifiers for the different databases we will support in our DAL and then through the use of the factory, we specify which type of database we would like a provider for and the factory creates the correct database specific provider for us.

    The key to the decoupling process is that the factory returns the abstract BaseDBProvider class to the caller. This allows the caller to use the same object and methods regardless of the type of database being used.
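
    To see the decoupling in action, here’s a short sketch of client code using the factory; it reuses the connection string and command text from the snippet at the top of the post:

    BaseDBProvider provider = DbProviderFactory.GetDbProvider(SupportedDatabases.SQLServer);

    using (IDbConnection conn = provider.CreateConnection(sqlConnectionString))
    {
        conn.Open();
        IDbCommand cmd = provider.CreateCommand(conn, sqlCommandText);
        cmd.Parameters.Add(provider.CreateParameter("LastName", DALDbType.NVarChar, 32, "Adams"));
        // execute the command via a data adapter, a reader, ExecuteNonQuery(), etc.
    }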

     

    Simplifying Usage

    One important goal for me when creating a DAL is to have it simplify the way I use and interact with the database. By creating a DAL on top of what ADO.NET provides, we can start to simplify patterns of usage that normally occur.

    For instance, instead of setting up command parameters using the typical Create/Set/Add pattern:

    SqlCommand cmd = new SqlCommand(cmdText, conn);
    SqlParameter param = new SqlParameter("LastName", SqlDbType.NVarChar, 32);
    param.Value = "Smith";
    cmd.Parameters.Add(param);


    We can simplify this into a single method, cleaning up the caller code as well as (in my opinion) making the code more readable.

    DALCommand cmd = new DALCommand(conn, cmdText);
    cmd.AddParameter("LastName", DALDbType.NVarChar, 32, "Smith");

     

    Other simplifications that I would include in a DAL are the use of DataAdapters to retrieve DataTable and DataSet objects. Wouldn’t it be nice to have a command object that supports ExecuteDataTable() or ExecuteDataSet() methods?

    It seems like I am very often working with DataTables, and using raw ADO.NET to retrieve them is a bit more work than it should be. I would love to be able to just do something like this to simplify my client code:

    DataTable dt = cmd.ExecuteDataTable();
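
    DALCommand itself isn’t shown in this post, but a minimal sketch of how it might wrap the provider objects to offer AddParameter() and ExecuteDataTable() could look like the following (the provider lookup is hard-coded here just to keep the sketch short):

    public class DALCommand
    {
        private readonly BaseDBProvider _provider;
        private readonly IDbCommand _command;

        public DALCommand(IDbConnection conn, string commandText)
        {
            // A fuller implementation would resolve the provider from configuration;
            // here it is inferred from the concrete connection type.
            _provider = DbProviderFactory.GetDbProvider(
                conn is SqlConnection ? SupportedDatabases.SQLServer : SupportedDatabases.Oracle);
            _command = _provider.CreateCommand(conn, commandText);
        }

        public void AddParameter(string name, DALDbType dataType, int size, object value)
        {
            _command.Parameters.Add(_provider.CreateParameter(name, dataType, size, value));
        }

        public DataTable ExecuteDataTable()
        {
            DataSet ds = new DataSet();
            IDbDataAdapter da = _provider.CreateDataAdapter(_command);
            da.Fill(ds);   // the adapter opens and closes the connection as needed
            return ds.Tables[0];
        }
    }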

     

    Summary

    To recap what we accomplished:

    • We now have a basic DAL framework to abstract the use of database specific objects behind more general ones. No more dealing with a different XXXCommand or XXXConnection per database platform.
    • We’ve also managed to abstract ourselves from database specific types as well as added the flexibility to represent data in any way we want on the database platform (Ex: Bit ==> TinyInt, etc.)
    • We’ve simplified usage of the DAL so we can write less code and let it be more readable.
    Additional Features

    While we haven’t yet completely abstracted ourselves from the differences between databases – most notably the connection string and SQL syntax (parameter syntax differences, etc.) – at this point I will leave the implementation of these to the reader.

    On projects I’ve worked on we have successfully abstracted away the connection strings as well as a lot of the SQL syntax of queries across the providers we are using.

    If there’s any interest in these techniques, drop me some comments and perhaps I’ll do a follow-up post.

    By the way, here’s the relevant portion of the client code from the beginning of the post with the DAL improvements we’ve implemented compared to the original:

    ADO.NET:

    DataSet ds = new DataSet();
    SqlCommand cmd = new SqlCommand(cmdText, conn);
    SqlParameter param = new SqlParameter("LastName", SqlDbType.NVarChar, 32);
    param.Value = "Smith";
    cmd.Parameters.Add(param);
    SqlDataAdapter da = new SqlDataAdapter(cmd);
    da.Fill(ds);
    DataTable dt = ds.Tables[0];


    DAL:

    DALCommand cmd = new DALCommand(conn, cmdText);
    cmd.AddParameter("LastName", DALDbType.NVarChar, 32, "Smith");
    DataTable dt = cmd.ExecuteDataTable();


    Seems like a winner to me!
