Nuggets from Interviewing – Tips for Developers

Having participated in numerous interviews for various companies filling positions, I’ve seen some really great candidates come through, and some not so great. There have also been those candidates that appeared to be great, but made some classic mistakes.

With a good sampling of interviews under my belt I thought it would be helpful to list a few observations in the form of tips – mostly not to do – for interviewing. I hesitated to write this as a lot of what I’m listing feels like it should be obvious to most people, but from what I’ve seen, that’s not always the case.


1. Know what’s on your resume

A big red flag goes up when a candidate cannot speak about technologies or projects they’ve listed on their resume. If you list it, make sure you can talk about it. Better yet, list your role on the project. What’s that? You don’t want to admit you had only the merest participation or a limited role on the project? That’s probably a good indication it doesn’t belong on your resume.

Interviewers aren’t impressed or looking for buzzwords for every technology out there. We want to see projects you had a significant role or involvement in and we want to know in-depth what you did. If you didn’t have a hand in something other than having been on a team that performed some work in that area, or you didn’t actually do any of the work – don’t list it!

I’ve had interviews where I’ve paraphrased a question based on a statement on a candidate’s resume, asked them if they’ve done this before, and the answer was “no”. Seriously? If you list it, make sure you have done it.

2. Read the job description and ensure you meet the basic qualifications

Not knowing exactly what technologies are being asked for in a position – both required and preferred/desired – leads us to believe you are mass emailing out resumes and applying to anything you can find. Normally a Human Resources or Recruiting department will screen candidates out if they don’t meet the job qualifications, but they shouldn’t have to – don’t apply for positions you don’t meet required qualifications for.

It’s disheartening when we hear someone say, “I didn’t know you were looking for that skill”, or “I’m not as strong in X” when the job posting clearly lists it in the context of “Proficient in X”. If you don’t know what “proficient” or “demonstrated ability” means, look it up.

Make sure you meet all of the basic requirements of the position before applying.

3. Don’t pretend to have experience in an area

This ties in with #1 above – if you list it or say you know something, make sure you know it.

We’ve had candidates list newer technologies like WPF or WCF on their resumes – technologies that were requirements for some of our open positions – and when asked what they’ve done with either one, the response is along the lines of “I’ve downloaded some of the samples and played around with them”. This is fine if the position only asked for familiarity, but it isn’t normally going to suffice if the posting says “Demonstrable experience” or “Proficient in”.

It’s perfectly acceptable to talk about the fact that in your current position you haven’t yet used a technology and that you are immersing yourself in it on your own. This is great and shows that you can learn on your own and attempt to keep abreast of new technologies, but if the technology is a requirement of the position sample work will more than likely not be enough.

More egregious is the candidate who says they’ve spent some number of years working with a technology or had a substantial project they’ve worked on using a technology and then can’t answer basic questions about it.

Think about it this way – my job is to determine what your skill level is in any of the technologies that we might be using for our position. Given that objective, any attempt to pretend to know a technology will be exposed during the interview. It’s designed that way.

Just don’t do it. It’s embarrassing for you and us.

4. Know your skill level in different technologies

This is one of those self-reflection type of items. In preparation for the interviewing process you should have taken some time to assess your skill level in different areas. We all have stronger and weaker areas. Know what yours are – it helps.

In the interview process I will usually ask a candidate to rate themselves in a few of the skills we are focusing on in the interview. Scale of 1 to 10, 10 being an expert.

The answer to this question gives me some insight into (a) whether your self-rating is consistent with what I see on your resume, (b) whether it aligns with the requirements of the position, and (c) what level of questions I should ask you.

(A) Rating yourself inconsistently with what’s on your resume leads me to ask you more about your past experiences.

If you rated yourself lower than your presented experience, we may need to explore past project work. What was your role on the project? Did you use the skill on that project? To what level of depth?

If the rating is much higher than the resume suggests, I want to know why. Was the role you had more substantial than you presented? Some resumes list so many projects that all of the explanations are brief. This may indicate that your resume isn’t presenting you as well as it could for the position.

(B) If the position requirement lists “Proficient in writing SQL queries” and you rate yourself a 2 in SQL, we have a problem. In either case – rating yourself higher or lower than required for the position – means we need to explore further.

(C) Knowing where you think you’re at experience-wise with a technology gives me a good sense of where to start asking questions. Someone rating themselves a 5 or below will get more basic questions to start with, whereas someone rating themselves higher will start with more difficult questions. We will still circle back to some basics to ensure a good foundation but expect to be asked more detailed questions as to why/how things work.

We aren’t necessarily looking for syntax in these questions as opposed to knowledge of concepts and how things work, tradeoffs, etc. As a developer myself there are a ton of times I can’t remember the syntax for something and need to look it up or use Intellisense to help.

Reflect on your skills and “where you’re at”.

5. Don’t be afraid to say you don’t know

One common mistake during interviews is the failure to say “I don’t know”. It is perfectly valid to tell an interviewer that you aren’t familiar with a concept or technology. Constant change is a fact in our industry. Given the breadth of technologies we employ and the depth of each, there will likely be corners of a technology you’ve used that you’re unfamiliar with, or even an entire technology stack for that matter.

If the interview leads down one of these less-traveled paths, an interviewer would always rather hear “I don’t know” than have a candidate guess, make things up, or say what they think the interviewer wants to hear. The former leaves an impression of honesty, while the latter leads to a perception of lack of knowledge or worse – of deception.

Most interviewers that attempt to be fair and find good candidates won’t immediately count an “I don’t know” against a candidate unless it’s in a core area that is going to be required for the position. Even then I try to weigh whether the item not understood is a conceptual issue or something that a practitioner would normally accomplish with reference material (i.e. Google, books, etc.).

When you don’t know, be honest. Don’t make things up, guess or try to give the interviewer what they want to hear.

6. Don’t talk badly about past people or positions

An obviously bad idea is to trash talk the people you’ve worked with or the companies that have employed you. This immediately leads an interviewer to believe that you are the type of person to tear people down rather than help lift the entire group.

I honestly believe most people know they shouldn’t talk badly about past employers and coworkers in an interview, but for some reason there is a subset of people that, once they get in front of you, can’t help but tear people down. I’m not sure if it’s due to nervousness, the question asked (e.g. tell me about a challenge…), or if they just really had a terrible experience with an employer.

But think about it from this perspective: you are interviewing for a position with a new company. If that company makes you an offer and further down the road things don’t work out, the interviewer is getting a good sense of how you are going to describe them or their company in your next interview. I can guarantee that the thought going through the interviewer’s mind is – PASS.

Keep it positive. Talk about challenges, not how much you can’t stand a company or coworker.

7. Don’t make assumptions

This is a very broad statement, but it’s applicable in the sense that as soon as you start making assumptions, there exists the possibility of error. Making assumptions can take many forms. Perhaps you think that a certain line of questioning by the interviewer is looking for a certain answer. This applies most often to behavioral-style interview questions, where you assume the interviewer is probing for a certain set of traits or behaviors from you. These types of assumptions can quickly lead you astray if you make the wrong one.

If in doubt it’s always better to ask for clarification. The interviewer won’t mind paraphrasing or explaining something a bit more clearly, and as a bonus you will appear more engaged than someone who makes an assumption.

Another assumption I’ve seen is a candidate believing they are a perfect fit for the position and therefore not doing any “selling” of themselves. A candidate who believes that, based on the experience listed on their resume or the positions they’ve held, the interviewer will “obviously see” they are a perfect fit is assuming a few things.

The first thing that they assume is that the interviewer is valuing something on their resume or in their background the same way that they do. Maybe – maybe not.

Another assumption being made is that because of this “obviousness”, there is no need to sell themselves or their skills to the interviewer. Remember that the candidate’s main objective is to show the interviewer that they are qualified for the position and how they can add value to the organization. I can’t imagine a worse failure than a qualified candidate who assumes the interviewer sees this and therefore doesn’t close the deal by explaining how they see themselves fitting into the position. Not to mention it gives a perception of overconfidence on the candidate’s part.

Assume nothing.

8. Be prepared to demonstrate your knowledge

There are several interviewing techniques that are employed, but when interviewing I usually insist on having the candidate demonstrate their ability in one way or another.

This can mean reading and discussing some code. Or it can be a whiteboard design problem that you are asked to work through. It can even be writing some code to demonstrate your ability to design something or solve a specific problem.

Now, unless you’ve really done your homework on a company and their hiring practices it may be difficult to determine what to be prepared for. The point is that if you are interviewing for a development position you should be able to demonstrate that you can develop something.

Many times the reason behind the exercise is not to determine if you read or write some code correctly or designed a system to a given requirement. Rather it’s usually tied to how you get to the result, and not the correctness of the result. It’s more helpful for me to see how you think – do you ask questions, do you make assumptions, what tradeoffs are being considered, etc – than whether you know how to solve a single specific problem.

Rather than tell me, show me.

9. Do share your non-work activities that are directly related to the position

Many candidates tend to try and avoid talking about non-work activities. In most cases I would agree. As an interviewer I don’t particularly care that you enjoy playing sports or sailing.

BUT, if the activity is related to the position you are applying for or the technologies being used, by all means let the interviewer know. In fact, it’s one of those “key indicators” that can show an additional level of motivation or passion from the candidate. For a developer this might range from the Exchange server you run at home to learn with, to the game you wrote, to the books you read or the clubs/organizations/user groups you’re a part of.

So if it relates to the job or technology, bring it up. If it doesn’t, don’t.

Good luck and I hope these tips help.

Related: Nuggets from Interviewing – Interviewer Techniques


Binding DataGrid columns to DataContext items

The WPF DataGrid is a fantastic component. It’s flexible and can serve in a lot of different scenarios. Recently I was using the DataGrid in a project and needed to dynamically hide certain columns via a user’s selection in a configuration section.

The idea is the user would check a box to hide a corresponding column represented in the DataGrid and the column would not appear.


I implemented the XAML as below, binding the column’s Visibility to the checkbox’s IsChecked value with the appropriate converter. I was surprised to find that this didn’t work at all. Toggling the checkbox did nothing.
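A minimal sketch of the kind of markup I mean – the element names, the bound collection and the converter here are illustrative, not the exact code from the project:

```xml
<StackPanel>
    <StackPanel.Resources>
        <BooleanToVisibilityConverter x:Key="BoolToVis" />
    </StackPanel.Resources>
    <CheckBox x:Name="ShowNameColumn" Content="Show Name column" IsChecked="True" />
    <DataGrid ItemsSource="{Binding Users}" AutoGenerateColumns="False">
        <DataGrid.Columns>
            <!-- This binding silently fails: the column never sees the checkbox -->
            <DataGridTextColumn Header="Name" Binding="{Binding Name}"
                                Visibility="{Binding IsChecked, ElementName=ShowNameColumn,
                                             Converter={StaticResource BoolToVis}}" />
        </DataGrid.Columns>
    </DataGrid>
</StackPanel>
```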


Looking in the IDE’s Output window I noticed the following binding error message:

System.Windows.Data Error: 2 : Cannot find governing FrameworkElement or FrameworkContentElement for target element. BindingExpression:Path=IsChecked; DataItem=null; target element is ‘DataGridTextColumn’ (HashCode=19699911); target property is ‘Visibility’ (type ‘Visibility’)

After some digging around, I found that the DataGrid columns aren’t actually part of the visual tree of the window.


This is the source of our error: the DataGridTextColumn we are trying to bind isn’t participating in the visual tree. To work around this we can handle the DataGrid’s DataContextChanged event and “forward” the DataContext to the individual columns.

This would allow the individual columns to be able to be bound to items within the DataContext. Below is a quick and dirty implementation of how to perform this forwarding.

A more elegant way to do this would be to override the DataContext property metadata for all DataGrids using a metadata override.

Here’s the code behind to handle the DataContextChanged event and forward the context to the columns. Notice that I’ve also implemented a property with change notification, used by the checkbox to store its “IsChecked” value so that it can be retrieved via the DataContext.
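A quick-and-dirty sketch of that code-behind – assume the hosting class implements INotifyPropertyChanged, and the property name is illustrative:

```csharp
// Forward the DataGrid's DataContext to each of its columns so that
// bindings declared on the columns have a source to resolve against.
private void DataGrid_DataContextChanged(object sender, DependencyPropertyChangedEventArgs e)
{
    DataGrid grid = (DataGrid)sender;
    foreach (DataGridColumn column in grid.Columns)
    {
        // DataGridColumn is a DependencyObject, not a FrameworkElement,
        // so we push the DataContext onto it manually.
        column.SetValue(FrameworkElement.DataContextProperty, e.NewValue);
    }
}

// Property with change notification backing the checkbox state.
private bool isNameColumnVisible = true;
public bool IsNameColumnVisible
{
    get { return isNameColumnVisible; }
    set
    {
        isNameColumnVisible = value;
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs("IsNameColumnVisible"));
    }
}
public event PropertyChangedEventHandler PropertyChanged;
```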


And here’s the modified XAML showing the Checkbox now bound to the new property and the DataGrid’s first column bound to the same value using the DataContext.
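Sketched out, the modified markup might look like the following – again with illustrative names, and assuming a BooleanToVisibilityConverter is declared in the resources as BoolToVis:

```xml
<CheckBox Content="Show Name column"
          IsChecked="{Binding IsNameColumnVisible, Mode=TwoWay}" />
<DataGrid ItemsSource="{Binding Users}" AutoGenerateColumns="False"
          DataContextChanged="DataGrid_DataContextChanged">
    <DataGrid.Columns>
        <!-- Resolves via the forwarded DataContext rather than the visual tree -->
        <DataGridTextColumn Header="Name" Binding="{Binding Name}"
                            Visibility="{Binding IsNameColumnVisible,
                                         Converter={StaticResource BoolToVis}}" />
    </DataGrid.Columns>
</DataGrid>
```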


Because the DataGridTextColumn is not part of the visual tree I don’t believe there is any way to perform direct element bindings.

If someone figures this out, drop me a note.

Nuggets from Interviewing – Interviewer Techniques

Interviewing candidates for an open job position is hard. Period.

No matter what process or methodology you use, in the end it comes down to trying to summarize a person’s skills, personality, work ethic and team fit within a relatively small allotted time. Not an easy task.

As an interviewee I’ve participated in many different interview styles, ranging from a brief single discussion with the hiring manager to a grueling eight-hour interview with standardized testing, five one-on-one interviews and a case study.

What I’ve found is that, as with most things in life, a style somewhere between the two extremes works best. Keeping in mind the damage hiring the “wrong” person can cause – lower productivity, morale problems, inter-team social issues, etc. – we can immediately see that any extra effort needed upfront to set up a process that works will yield greater results in finding the “right people” and pay us back in the long run.

The Interview Process

There are many ingredients you can choose from when thinking about what to include in your interview process. The list below contains some of the techniques I use in interviews I participate in as well as some that others might be familiar with.

  • One on one interview

An individual discussion between the candidate and a key team member. To be most effective this should be someone at least technical enough to distinguish between someone who knows the technology or area being discussed and someone who is merely good at talking around it.

My opinion is that the interview must be with at least one individual that would be considered a “peer” or have a close working relationship with whoever is hired into the position.


  Pros:

  – Lower stress than a group interview

  – Can get multiple reads on the same areas from a candidate


  Cons:

  – Time can be a factor if there are several individuals on the team you want the candidate to speak with

  – Need to reconcile each interviewer’s perception of the candidate. More difficult when there are discrepancies in those perceptions.

  • Group interview

The group interview is one of those techniques that if done correctly works well and if not done correctly is a complete disaster. The idea is to have several key team members in the interview at once all taking turns interacting with and asking the candidate questions.


  Pros:

  – Allows multiple team members to evaluate the same answers given by the candidate

  – Can leverage the experience and specialties of many individuals

  – Allows those not currently interacting with the candidate to focus more on the responses
    from the candidate

  – Less chance of difficult reconciliations since all interviewers hear the same responses


  Cons:

  – Higher stress environment for the candidate

  – Can feel like a “firing squad” or inquisition if not done right

  • Standardized testing

Standardized testing usually takes the form of a non-technical assessment of a candidate, focusing on behavioral, reasoning and logic questions.


  Pros:

  – Easy to compare results across candidates

  – Useful if your team believes there is a direct correlation between the success in the job
    position and the generalized knowledge being tested


  Cons:

  – Some people aren’t good at taking tests, especially under pressure. May result in the loss of qualified candidates

  • Written or Verbal technical assessments

Written or verbal quizzes on the applicable technology areas your team uses. These can cover a range of topics and skill levels, from basic knowledge to advanced concepts.

This should not devolve into a syntax assessment, as most programmers rely heavily on documentation, Intellisense and reference material for syntax, although the more familiar with the syntax a candidate is, the likelier they are to have been using the technology more recently and frequently.


  Pros:

  – Easy to determine whether the candidate has used the technology and has a firm understanding of the underpinnings


  Cons:

  – None

  • Reviewing of code

The intent of having the candidate read code is to determine a few things:

      – Do they understand the programming language being used?

      – Can they understand someone else’s code? A good portion of your product is likely written already. If the candidate can’t read and understand this existing code, that may be a problem. Most companies can ill afford the employee who needs to rewrite / refactor every area they work in just to understand it.

      – Can they intelligently talk about a piece of code and what some of its characteristics are – usage patterns, good things, things to be improved, etc.?


  Pros:

  – Good read on the candidate’s ability to understand code. Something not obtained through verbal/written technical assessments.


  Cons:

  – Stressful for the candidate as they will feel pressured and “on-stage”

  • Spot the bug / Troubleshooting Exercises

This technique usually involves having the candidate review a piece of code or application that has a bug in it. This exercise can be tailored to be very light or more complex.

A light version of this would be to have the candidate review a set of short code blocks with issues or bugs in them. These could range from highlighting core mistakes in the understanding of a concept to more esoteric problems. They also should not include syntax errors that would be easily picked up by a compiler, as most developers tend to rely on the compiler for this.

For a more complex exercise, the candidate could review the company’s actual product with bugs introduced into the code. Since the code would need to run for the candidate to work on it, these bugs would not be syntax related.


  Pros:

  – You get the candidate doing exactly what you are hiring them for – working with code.

  – Provides good insight into a candidate’s ability to troubleshoot, identify and correct code. What a concept!


  Cons:

  – Candidates might feel stressed having to work with code in the interview. This can be alleviated by leaving the room, or by interacting with the candidate as they work – answering questions and guiding them (although not directly to the answer!).

  – Depending on the bug introduced and the subtleties / complexities of the scenario, the candidate might not be able to identify it. Make sure the bug is relatively apparent, or better yet have multiple bugs ranging from easy to more complex.

  • Design Problems

The techniques referred to here are programming design problems, not esoteric ones. This usually involves describing a hypothetical system or problem that the candidate needs to design, giving them just enough details to start – barely.

The exercise usually involves the candidate coding the problem on a provided computer.

The premise behind this technique is that you want to be able to gauge how well a candidate can take a set of instructions and break them down into a well designed program. The main takeaway from this technique is the thought process and decisions that the candidate makes along the way and not the final outcome.

In fact we usually don’t have the candidate finish whatever problem was chosen. Observe things like:

     – Did the candidate ask clarifying questions about the requirements/premise of the problem?

     – How did the candidate approach the problem – dive right into coding, draw it out, ask questions first?

     – Did the candidate consider the appropriate tradeoffs in design – speed, performance, etc.?
Remember that the focus is not on the end result but on how the candidate approaches the problem and works through it.


  Pros:

  – Provides insight into how a candidate thinks about problems and design considerations

  – Can be used as a gauge of experience, familiarity with tools, language/project choices, etc


  Cons:

  – Picking an area that is unfamiliar to the candidate can skew the results. Make sure the problem is a fairly well-known concept so it doesn’t require an explanation aside from the intended requirements.

  – More stressful than direct interview discussions, since the candidate will feel like they are “on-stage”. Be sure to explain that the intent of the exercise is to see their approach to the problem, not a finished result.
  • “Try before you buy”

Pair the candidate up with a team member for a period of time on a current project and let them code together. This appears to be most applicable when the team follows an Agile methodology.


  Pros:

  – Useful to see the candidate in action. Coding style, thought processes, etc.

  – Able to get a rough gauge on team fit, personality.


  Cons:

  – Time. This requires a long time commitment by the candidate.

  – May not work well with a candidate who is not familiar with Agile methodology. Most non-Agile shops don’t have programmers pairing together.

  – May be difficult for candidate to participate and demonstrate anything useful due to lack
    of knowledge of the feature, code, etc.


In the end what you choose to be a part of your process must be based on what works for your team and business culture and the amount of time you can invest in each interview.

Some of the techniques I find the most useful in interviews I participate in are the “hands-on” techniques. If I am hiring for a developer, then I want someone who can read and write code. If I’m in the interview, you will need to convince me you can code.

Let me hear from you if there are techniques you find work better than others for your team, or if you use any I haven’t listed here.

Related: Nuggets from Interviewing – Tips for Developers and just about everyone

WPF – Overlapped Image Control

I was recently trying to find a way to display some data that gave the user a relatively easy way to gauge the number of items in a dataset as well as still being functional enough to interact with those data items.

An example of this need might be a user administration module for an application where a user can have several states:

  • Suspended Logins
  • Awaiting approval
  • Requiring password resets

So I thought an interesting and appealing way to display this information would be in a graph format, where each category is a point on the axis and the data points (users) are represented graphically in an overlapped manner (think of a stack of dimes).

My requirements were simply to allow items to overlap by a certain percentage of their width or height.

Here’s what I ended up with.


My first attempt at creating the overlapped effect was to use a translation render transform. This didn’t work because the translation needed to vary per item rather than being a fixed amount.

So I decided to create my own custom panel to handle the placement of items.

I chose the StackPanel as a basis for my custom panel. Breaking it down there were 3 steps involved in creating the custom panel:

  • Add a dependency property to allow setting the percentage of overlap
  • Define the MeasureOverride() logic needed
  • Define the ArrangeOverride() logic needed


Add the Dependency Property

I started by creating the dependency property to allow the percentage of overlap to be specified by a caller.

/// <summary>
/// Percentage of overlap between adjacent items (0-100)
/// </summary>
public int OverlapPercentage
{
    get { return (int)GetValue(OverlapPercentageProperty); }
    set { SetValue(OverlapPercentageProperty, value); }
}

public static readonly DependencyProperty OverlapPercentageProperty =
    DependencyProperty.Register("OverlapPercentage", typeof(int), typeof(OverlapImageStackPanel),
        new FrameworkPropertyMetadata(0, FrameworkPropertyMetadataOptions.AffectsMeasure | FrameworkPropertyMetadataOptions.AffectsArrange));

Define MeasureOverride Logic

The remaining steps are to implement our MeasureOverride() and ArrangeOverride() overrides. For the measurement pass we sum up the widths or heights (depending on the orientation), taking into account the percentage of overlap requested.

protected override Size MeasureOverride(Size availableSize)
{
    Size infiniteSize = new Size(double.PositiveInfinity, double.PositiveInfinity);
    double overlapPercentageDouble = (double)OverlapPercentage / 100.0;
    double unoverlapPercentageDouble = 1.0 - overlapPercentageDouble;

    double totalWidths = 0;
    double totalHeights = 0;
    UIElement lastVisible = null;

    // measure each child and accumulate the overlapped extents
    foreach (UIElement child in Children)
    {
        if (child.Visibility == Visibility.Collapsed)
            continue;

        child.Measure(infiniteSize);
        lastVisible = child;

        if (Orientation == Orientation.Horizontal)
        {
            // each child advances the layout by its un-overlapped portion
            totalWidths += child.DesiredSize.Width * unoverlapPercentageDouble;
            totalHeights = Math.Max(totalHeights, child.DesiredSize.Height);
        }
        else
        {
            totalHeights += child.DesiredSize.Height * unoverlapPercentageDouble;
            totalWidths = Math.Max(totalWidths, child.DesiredSize.Width);
        }
    }

    // the last child is fully visible, so add back its overlapped portion
    if (lastVisible != null)
    {
        if (Orientation == Orientation.Horizontal)
            totalWidths += lastVisible.DesiredSize.Width * overlapPercentageDouble;
        else
            totalHeights += lastVisible.DesiredSize.Height * overlapPercentageDouble;
    }

    Size resultSize = new Size();
    resultSize.Width = double.IsPositiveInfinity(availableSize.Width) ? totalWidths : availableSize.Width;
    resultSize.Height = double.IsPositiveInfinity(availableSize.Height) ? totalHeights : availableSize.Height;
    return resultSize;
}

Define ArrangeOverride Logic 

For the arrange layout pass we lay out the children, taking into account the percentage of overlap for each child item as set on the control.

protected override Size ArrangeOverride(Size finalSize)
{
    if (Children.Count == 0)
        return finalSize;

    double overlapPercentageDouble = (double)OverlapPercentage / 100.0;
    double unoverlapPercentageDouble = 1.0 - overlapPercentageDouble;

    double totalWidths = 0;
    double totalHeights = 0;
    UIElement lastVisible = null;

    // compute the total overlapped extent, mirroring MeasureOverride
    foreach (UIElement child in Children)
    {
        if (child.Visibility == Visibility.Collapsed)
            continue;

        lastVisible = child;
        if (Orientation == Orientation.Horizontal)
        {
            totalWidths += child.DesiredSize.Width * unoverlapPercentageDouble;
            totalHeights = finalSize.Height;
        }
        else
        {
            totalHeights += child.DesiredSize.Height * unoverlapPercentageDouble;
            totalWidths = finalSize.Width;
        }
    }

    if (lastVisible != null)
    {
        if (Orientation == Orientation.Horizontal)
            totalWidths += lastVisible.DesiredSize.Width * overlapPercentageDouble;
        else
            totalHeights += lastVisible.DesiredSize.Height * overlapPercentageDouble;
    }

    if ((Orientation == Orientation.Horizontal && totalWidths <= 0) ||
        (Orientation == Orientation.Vertical && totalHeights <= 0))
        return finalSize;

    // place each child, advancing by the un-overlapped portion of its extent
    double left = 0;
    double top = 0;
    foreach (UIElement child in Children)
    {
        child.Arrange(new Rect(new Point(left, top), new Size(child.DesiredSize.Width, child.DesiredSize.Height)));
        if (Orientation == Orientation.Horizontal)
            left += child.DesiredSize.Width * unoverlapPercentageDouble;
        else
            top += child.DesiredSize.Height * unoverlapPercentageDouble;
    }

    return new Size(totalWidths, totalHeights);
}
That sums up what’s needed to implement a fairly handy overlapped StackPanel. Similar logic can be used for a WrapPanel as well.
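To put the panel to work, it can be swapped in as an ItemsControl’s panel. A sketch, assuming a local XML namespace mapping for the panel and a hypothetical bound collection:

```xml
<ItemsControl ItemsSource="{Binding SuspendedUsers}">
    <ItemsControl.ItemsPanel>
        <ItemsPanelTemplate>
            <!-- overlap each item by 60% of its width, like a stack of dimes -->
            <local:OverlapImageStackPanel Orientation="Horizontal" OverlapPercentage="60" />
        </ItemsPanelTemplate>
    </ItemsControl.ItemsPanel>
</ItemsControl>
```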

Constructing a DAL in C#

I’ve been asked many times why someone might write their own DAL given that there exist so many frameworks that can provide this functionality already.

And the truth is – you may or may not need to.

There are a lot of DALs and frameworks out there that provide many of the common features of database platform and type abstraction, but you may (as I did) have certain needs that aren’t fulfilled by these frameworks.

Examples of some custom tasks that aren’t readily covered by some frameworks might be:

  • Parsing and insertion of custom data in SQL statements
  • Syntax abstraction in SQL statements
  • Auto-population of application specific data (e.g. DAL knowledge of some schema or application mechanisms)
While any of the above, or other, tasks can likely be accomplished with most frameworks through some level of effort, the number one reason in my mind to write your own DAL at some point is to gain a deeper understanding of what is happening beneath the covers when using a DAL – whether through a framework or perhaps an existing one on a project you are working on.

Gaining a thorough understanding of the tasks a DAL abstracts and performs on your behalf helps developers use a DAL effectively, as well as increases their own knowledge in this area.

And that is what this post is about – the creation of a simple DAL as a learning tool. I’ll talk about some of the work we’ve put into a DAL for a project I’ve worked on and some of the useful features that you may want to consider to make your life a bit easier should you choose to write your own DAL.

I thought that giving some sample code for connecting to a SQL Server and an Oracle database from the same client would serve as a nice reference point to start our discussion.

Here are the current hoops we have to jump through to read some simple data and get a DataTable back.

string sqlConnectionString = "Data Source=localhost;Initial Catalog=DALDatabase;Integrated Security=SSPI";
string oraConnectionString = "Data Source=localhost;User Id=DALDatabase;Password=password;Integrated Security=no;";

string sqlCommandText =
    "SELECT Department.Name " +
    "FROM Employee " +
    " INNER JOIN EmployeeToDepartment ON Employee.ID = EmployeeToDepartment.EmployeeID " +
    " INNER JOIN Department ON EmployeeToDepartment.DepartmentID = Department.ID " +
    "WHERE " +
    "Employee.LastName = @LastName";

string oraCommandText =
    "SELECT * " +
    "FROM Employee " +
    " INNER JOIN EmployeeToDepartment ON Employee.ID = EmployeeToDepartment.EmployeeID " +
    " INNER JOIN Department ON EmployeeToDepartment.DepartmentID = Department.ID " +
    "WHERE " +
    "Employee.LastName = :LastName";

DataSet ds = new DataSet();
SqlConnection sqlConn = new SqlConnection(sqlConnectionString);
SqlCommand sqlCmd = new SqlCommand(sqlCommandText, sqlConn);
SqlParameter param = new SqlParameter("@LastName", SqlDbType.NVarChar, 32);
param.Value = "Adams";
sqlCmd.Parameters.Add(param);
SqlDataAdapter sqlDA = new SqlDataAdapter(sqlCmd);
sqlDA.Fill(ds);
DataTable dt = ds.Tables[0];


One of the first steps to tackle is to design a way to abstractly create the different types of objects we will need, such as a Connection, Command, etc.

So let’s get started…


For our purposes we will create a set of database specific providers to manage the creation of objects specific to that database. We will also abstract the creation of these database specific providers through the use of a factory pattern to create the concrete implementations.


    Abstracting the Database Specific Providers

We’ll start by defining an abstract class to represent the set of services that each database provider will need to implement:

    internal abstract class BaseDBProvider
    {
        // Connections
        internal abstract IDbConnection CreateConnection(
            string connectionString);

        // Commands
        internal abstract IDbCommand CreateCommand(IDbConnection conn);
        internal abstract IDbCommand CreateCommand(IDbConnection conn,
            string commandText);

        // Parameters
        internal abstract IDbDataParameter CreateParameter(
            string name, object value);
        internal abstract IDbDataParameter CreateParameter(
            string name, DALDbType dataType, object value);
        internal abstract IDbDataParameter CreateParameter(
            string name, DALDbType dataType, int size, object value);

        // Data adapters
        internal abstract IDbDataAdapter CreateDataAdapter();
        internal abstract IDbDataAdapter CreateDataAdapter(IDbCommand cmd);
    }


    Our base class is fairly simple at this point, defining a way to create a connection: it accepts a provider specific connection string to connect to the database with and returns an IDbConnection interface. The IDbConnection interface is defined within the ADO.NET framework and all ADO.NET based database connection objects must implement it.

    By returning the interface we can separate the implementation details from the set of common services that each connection object provides.

    Similarly, we have a set of methods that allow the construction of command objects, which accept an IDbConnection and optionally the SQL command text to execute. Both these methods return an IDbCommand interface, again defined by Microsoft, which all database specific ADO.NET commands must implement. This again separates the caller from the provider specific objects.

    The next step is to provide concrete implementations of the BaseDBProvider class for each database platform we want to support. Below is an example for SQL Server.

    internal class SQLDbProvider : BaseDBProvider
    {
        internal SQLDbProvider() { }

        internal override IDbConnection CreateConnection(string connectionString)
        {
            return new SqlConnection(connectionString);
        }

        internal override IDbCommand CreateCommand(IDbConnection conn,
            string commandText)
        {
            if (!(conn is SqlConnection))
                throw new ArgumentException("SqlConnection required", "conn");
            IDbCommand cmd = conn.CreateCommand();
            cmd.CommandText = commandText;
            return cmd;
        }

        internal override IDbDataAdapter CreateDataAdapter(IDbCommand cmd)
        {
            return new SqlDataAdapter(cmd as SqlCommand);
        }

        internal override IDbDataParameter CreateParameter(
            string name, DALDbType dataType, int size, object value)
        {
            SqlParameter p = new SqlParameter(name, GetDataTypeFor(dataType), size);
            p.Value = value;
            return p;
        }

        private SqlDbType GetDataTypeFor(DALDbType dbType)
        {
            switch (dbType)
            {
                case DALDbType.DateTime:
                    return SqlDbType.DateTime;
                case DALDbType.Int:
                    return SqlDbType.Int;
                case DALDbType.NText:
                    return SqlDbType.NText;
                case DALDbType.NVarChar:
                    return SqlDbType.NVarChar;
                default:
                    throw new NotSupportedException("Data type not supported");
            }
        }

        // Remaining overloads from BaseDBProvider omitted here for brevity.
    }


    One thing of interest to note is the way the SqlCommand gets created. We use the CreateCommand() method of the IDbConnection interface, which at this point is implemented by a SqlConnection.

    The same would be done for each database platform you intend to support.

    internal class OracleDbProvider : BaseDBProvider

    We also defined a set of database neutral types that can be used across providers and mapped to whatever database specific types we desire. For instance we can have a DALDbType.Bit map to a SQL Server ‘Bit’ or ‘TinyInt’ or ‘Char’ if we wanted. This opens up additional flexibility in mapping logical types to the physical types for each database platform.

    public enum DALDbType
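    As an illustration of that flexibility, here is a minimal sketch of a provider-side type mapping. The enum members and the Bit-to-TinyInt choice below are assumptions for the example, not the actual definitions from the project:

```csharp
using System;
using System.Data;

// Illustrative only: a database-neutral type enum and one provider's mapping.
// The Bit -> TinyInt case shows how a logical type can be represented however
// we like on a given database platform.
public enum DALDbType { Bit, DateTime, Int, NText, NVarChar }

public static class SqlServerTypeMap
{
    // Map a database-neutral DALDbType to a SQL Server physical type.
    public static SqlDbType ToPhysical(DALDbType t)
    {
        switch (t)
        {
            case DALDbType.Bit:      return SqlDbType.TinyInt; // logical Bit stored as TinyInt
            case DALDbType.DateTime: return SqlDbType.DateTime;
            case DALDbType.Int:      return SqlDbType.Int;
            case DALDbType.NText:    return SqlDbType.NText;
            case DALDbType.NVarChar: return SqlDbType.NVarChar;
            default: throw new NotSupportedException("Data type not supported");
        }
    }
}
```

    An Oracle provider would carry its own version of this mapping, keeping client code written purely in terms of DALDbType.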


    Implementing the Provider Factory

    At this point we have encapsulated all the database specific code in each provider class, but we haven’t solved how to avoid instantiating these specific providers directly from our client code.

    To do this we will use the concept of a class factory. This is a pattern that allows a caller to create concrete implementations of objects by providing some identification of what type of object to create without the need for referencing the type directly.

    Here’s the implementation of the class factory that will create and return concrete providers for our specified database.

    public enum SupportedDatabases { SQLServer, Oracle }

    internal static class DbProviderFactory
    {
        public static BaseDBProvider GetDbProvider(SupportedDatabases databaseType)
        {
            BaseDBProvider provider = null;
            switch (databaseType)
            {
                case SupportedDatabases.SQLServer:
                    provider = new SQLDbProvider();
                    break;
                case SupportedDatabases.Oracle:
                    provider = new OracleDbProvider();
                    break;
            }
            return provider;
        }
    }


    We define an enumeration that contains identifiers for the different databases we will support in our DAL and then through the use of the factory, we specify which type of database we would like a provider for and the factory creates the correct database specific provider for us.

    The key to the decoupling process is that the factory returns the abstract BaseDBProvider class to the caller. This allows the caller to use the same object and methods regardless of the type of database being used.


    Simplifying Usage

    One important goal for me when creating a DAL is to have it simplify the way I use and interact with the database. By creating a DAL on top of what ADO.NET provides, we can start to simplify the patterns of usage that normally occur.

    For instance, instead of setting up command parameters using the typical Create/Set/Add pattern:

    SqlCommand cmd = new SqlCommand(cmdText, conn);
    SqlParameter param = new SqlParameter("@LastName", SqlDbType.NVarChar, 32);
    param.Value = "Smith";
    cmd.Parameters.Add(param);

    We can simplify this into a single method, cleaning up the caller code as well as (in my opinion) making the code more readable.

    DALCommand cmd = new DALCommand(conn, cmdText);
    cmd.AddParameter("LastName", DALDbType.NVarChar, 32, "Smith");


    Other simplifications that I would include in a DAL are the use of DataAdapters to retrieve DataTable and DataSet objects. Wouldn’t it be nice to have a command object that supports ExecuteDataTable() or ExecuteDataSet() methods?

    It seems like I am very often working with DataTables, and using raw ADO.NET to retrieve these is a bit more work than it should be. I would love to be able to just do something like this to simplify my client code:

    DataTable dt = cmd.ExecuteDataTable();
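    Here’s a minimal, self-contained sketch of what such a DALCommand wrapper could look like. The class and method names come from the discussion above; the internals are one plausible implementation over the ADO.NET interfaces, and I’ve used the framework’s DbType here instead of DALDbType to keep the sketch standalone:

```csharp
using System;
using System.Data;
using System.Data.SqlClient; // used by the usage example below

// A sketch of the DALCommand wrapper: one-call parameter setup plus an
// ExecuteDataTable() convenience built on IDataReader/DataTable.Load.
public class DALCommand
{
    private readonly IDbCommand _cmd;

    public DALCommand(IDbConnection conn, string commandText)
    {
        _cmd = conn.CreateCommand();
        _cmd.CommandText = commandText;
    }

    public int ParameterCount
    {
        get { return _cmd.Parameters.Count; }
    }

    // Collapse the Create/Set/Add pattern into a single call.
    public void AddParameter(string name, DbType dataType, int size, object value)
    {
        IDbDataParameter p = _cmd.CreateParameter();
        p.ParameterName = name;
        p.DbType = dataType;
        p.Size = size;
        p.Value = value;
        _cmd.Parameters.Add(p);
    }

    // Fill and return a DataTable in one step (requires an open connection).
    public DataTable ExecuteDataTable()
    {
        DataTable dt = new DataTable();
        using (IDataReader reader = _cmd.ExecuteReader())
        {
            dt.Load(reader);
        }
        return dt;
    }
}
```

    Because only the ADO.NET interfaces are used, the same wrapper works over a SqlConnection, an OracleConnection, or any other IDbConnection the provider factory hands back.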



    To recap what we accomplished:

    • We now have a basic DAL framework to abstract the use of database specific objects to more abstract ones. No more dealing with XXXCommand or XXXConnection per database platform.
    • We’ve also managed to abstract ourselves from database specific types as well as added the flexibility to represent data in any way we want on the database platform (Ex: Bit ==> TinyInt, etc.)
    • We’ve simplified usage of the DAL so we can write less code and let it be more readable.
    Additional Features

    While we haven’t yet completely abstracted ourselves from the differences in databases – most notably the connection string and SQL syntax (parameter syntax differences, etc.) – at this point I will leave the implementation of these to the reader.

    On projects I’ve worked on we have successfully abstracted away the connection strings as well as a lot of the SQL syntax of queries across the providers we are using.
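    As one illustration of what the SQL syntax abstraction could look like (an invented example, not the actual technique from those projects), queries can be written once with a neutral parameter marker that each provider rewrites to its own prefix:

```csharp
using System.Text.RegularExpressions;

// Illustrative sketch: rewrite a neutral "{p:Name}" placeholder to the
// provider's parameter prefix ("@" for SQL Server, ":" for Oracle).
public static class SqlSyntax
{
    public static string ForProvider(string sql, string parameterPrefix)
    {
        return Regex.Replace(sql, @"\{p:(\w+)\}",
            m => parameterPrefix + m.Groups[1].Value);
    }
}
```

    With something like this in place, the command text from the beginning of the post could be written once with `{p:LastName}` instead of being duplicated per database.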

    If there’s any interest in these techniques, drop me some comments and perhaps I’ll do a follow-up post.

    By the way, here’s the relevant portion of the client code from the beginning of the post, first in its original form and then with the DAL improvements we’ve implemented:


    DataSet ds = new DataSet();
    SqlCommand cmd = new SqlCommand(cmdText, conn);
    SqlParameter param = new SqlParameter("@LastName", SqlDbType.NVarChar, 32);
    param.Value = "Smith";
    cmd.Parameters.Add(param);
    SqlDataAdapter da = new SqlDataAdapter(cmd);
    da.Fill(ds);
    DataTable dt = ds.Tables[0];


    DALCommand cmd = new DALCommand(conn, cmdText);
    cmd.AddParameter("LastName", DALDbType.NVarChar, 32, "Smith");
    DataTable dt = cmd.ExecuteDataTable();

    Seems like a winner to me!


    WPF Dashboard and Custom Panels

    I’ve been toying around with creating a dashboard like display in WPF recently and figured there must be a million examples out there.

    Some interesting and/or noteworthy ideas I came across:

    The Telerik display was nice, but I wanted something free at the moment as I was working on a prototype idea. The TechEd demo was pretty close to what I was looking for, but I wanted a bit more “dynamic” functionality.

    Using the work from the aforementioned links as inspiration, I set out to fill the missing gaps and learn a bit more about WPF in the process. Below is a list of the basic features I was shooting for:

    • Generic Management of “Dashboard Parts” that are defined, discovered and created dynamically
    • Support for single instance or multi-instanced parts
    • Drag and drop support of dashboard parts to the grid layout, as well as within the grid layout between “cells”
    • Automatic grid expansion when necessary
    • Ability to Maximize, Restore and Close parts
      The basic idea was to start with the dashboard editing and setup feature, which is what you see below. So here’s what I came up with:


    To see the auto expansion of the grid and drag and drop functionality in action, here’s a video:

    With my specific needs in mind I set about my work. My first attempt was to subclass the Grid in WPF, but I quickly ran into a problem with sub-classing a component that has existing public properties – callers could still use those properties directly, bypassing my specific functionality. Instead I wanted to make the management of the grid more prominent and explicit.

    One option would have been to create a panel from the ground up that mimicked the behavior of a Grid only exposing the functionality I wanted. Instead I chose a simpler, albeit less elegant approach – the creation of a side component called GridManager.

    The GridManager component takes care of all the “heavy” lifting – processing drag and drop messages, managing the grid layout and managing part lifetime and event handling.

    Dashboard Parts


    Part Interface

    The parts for the dashboard are designed to operate in a plug-in style, where each one can be contained in one or more assemblies external to the main project. Each part needs to implement the following interface in order to be loaded and recognized by the dashboard application.

    public interface DashboardPart
    {
        event EventHandler PartControlReadyForDisplay;

        System.Windows.Controls.UserControl PartControl { get; }
        string UniqueIdentifier { get; }
        MyDBConnection DbConnection { get; set; }

        void Initialize();
        void Destroy();

        ObservableCollection<PartProperty> Properties { get; set; }
    }

    The PartControlReadyForDisplay event allows parts to signal when they are ready to be displayed in the dashboard. This allows a part to take as much time as needed to get ready for display. In the meantime, a “Loading…” placeholder is used in the part’s place.

    Each part is required to have a unique identifier – UniqueIdentifier – and can support being passed a database connection (if a data bound part). Additionally, the Properties collection allows any custom defined properties to be integrated into the part editing and setup via the “Properties Grid”.

    The part’s lifetime is managed by the application itself and the expected chain of calls to a part is:

    1. Dashboard app creates an instance of the DashboardPart interface implementation defined in the parts catalog
    2. Dashboard app sets the DbConnection property (if needed and in use)
    3. Dashboard app calls Initialize() on the part
    4. Part finishes initialization and raises the PartControlReadyForDisplay event
    5. Dashboard app retrieves the PartControl property to get the part’s visual instance and adds it to the grid
    6. When the part is no longer needed, Dashboard app calls Destroy() on the part
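    To make the sequence concrete, here is a simplified, non-WPF sketch of that lifecycle. The trimmed-down interface and FakePart are stand-ins for illustration, and PartControl is typed as object rather than UserControl so the flow can be shown without WPF:

```csharp
using System;
using System.Collections.Generic;

// Simplified lifecycle-only view of the part contract.
public interface IDashboardPartLifecycle
{
    event EventHandler PartControlReadyForDisplay;
    object PartControl { get; }
    void Initialize();
    void Destroy();
}

// Stand-in for a real catalog-defined part.
public class FakePart : IDashboardPartLifecycle
{
    public event EventHandler PartControlReadyForDisplay;
    public object PartControl { get; private set; }

    public void Initialize()
    {
        // A real part might load data asynchronously before raising the event.
        PartControl = "visual goes here";
        EventHandler handler = PartControlReadyForDisplay;
        if (handler != null) handler(this, EventArgs.Empty);
    }

    public void Destroy() { PartControl = null; }
}

public static class Host
{
    // Drives the expected chain of calls and records each step.
    public static List<string> Run(IDashboardPartLifecycle part)
    {
        var steps = new List<string>();
        steps.Add("created");
        part.PartControlReadyForDisplay += (s, e) => steps.Add("ready");
        part.Initialize();
        steps.Add("added control: " + (part.PartControl != null));
        part.Destroy();
        steps.Add("destroyed");
        return steps;
    }
}
```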


    Part Definition

    Currently the definition of parts is accomplished via a “parts catalog”. I decided on this approach because I wanted the definition of parts to be flexible as well as explicit.

    An alternative mechanism would be to scan a “plugins” directory and load the appropriate plug-in dynamically if the proper interface was present. This can be fairly easily implemented.

    <!-- The enclosing catalog/part elements below are reconstructed; their names are assumed. -->
    <PartsCatalog>
      <Part>
        <Name>Project Status</Name>
        <Construction Assembly="TestPart" Type="TestPart.TestPart1"/>
      </Part>
      <Part>
        <Name>Procedure Progress</Name>
        <Construction Assembly="TestPart" Type="TestPart.TestPart2"/>
        <Properties>
          <Property>
            <Name>Test Property</Name>
            <Value>This is my test property</Value>
          </Property>
        </Properties>
      </Part>
    </PartsCatalog>



    It’s amazing how easy and fast it can be to create visually pleasing and functional user interfaces in WPF with a minimum of effort. With a bit more work, a dashboard of the caliber of the one shown in the Telerik demo application can be achieved.

    There are a few more features left that I want to implement in the editing and setup portion of the dashboard, such as a more accurate visual of the parts when dragging / dropping and the ability to have a part span cells/rows. Both features should be pretty trivial to implement.

    Note: Use of the Canvas was also tried, although I deemed that its fairly static nature (fixed width and height) didn’t make it ideal for supporting an arbitrary number of parts.

    As an aside, the styling was very easily accomplished with the WPF Themes CodePlex project.

    Data Visualization in WPF using Graph#

    I’m currently working on a prototype for my current project to visualize a set of data in a way that can give a user a quick overview of the relationships between objects within a “project”.

    In the product I work on there is a lot of inter-related data that users are exposed to, but it is still fairly difficult for users to get the “big picture” quickly. Instead they currently rely on a set of reports and data dumps to help them visualize the data.

    I thought I would generalize this a bit (names removed to protect the innocent) and showcase a use of a great open source library – Graph# – for visualizing data. Nothing fancy, but I believe showing the data in an additional view will enable someone to better understand the relationships between the items at a glance.

    First some background is in order. For the purposes of my project I have a pseudo-hierarchical set of items that can be linked to one another. So our model can have both hierarchical relationships as well as associative relationships.


    Demo Background

    To make this a bit more concrete I will use an example right out of software development. We will have a software “project” we are working on with the following item types:

    • Project – a deliverable that “contains” features.
    • Feature – a feature “contains” tasks. A feature may also have a document associated with it.
    • Task – a task is the low level unit of work being done. A task may also have issues “associated” with it.
    • Issue – a problem or defect found in a task. A single issue can pertain to multiple tasks.
    • Document – a textual description of something. Can be associated to a task.


    The design is fairly straightforward in that I’ve defined a few abstract classes that represent a “Model” and “ProjectItem”. Deriving from these I’ve created classes for each of the above item types we will support.

    The GraphViewFactory component is the bridge between the “Model” implementation and the objects the graph needs. It will walk the model and flatten it, producing a set of vertices and links to add to the graph.
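    The flattening step can be sketched roughly like this. The class and member names here are illustrative; the actual GraphViewFactory works against the project’s real model types:

```csharp
using System;
using System.Collections.Generic;

// Illustrative stand-in for the model's item type: a named node with children.
public class ProjectItem
{
    public string Name;
    public List<ProjectItem> Children = new List<ProjectItem>();
}

public static class GraphFlattener
{
    // Walk the item tree, emitting one vertex per item and one
    // (parent, child) edge per hierarchical link.
    public static void Flatten(ProjectItem root,
                               List<ProjectItem> vertices,
                               List<KeyValuePair<ProjectItem, ProjectItem>> edges)
    {
        vertices.Add(root);
        foreach (ProjectItem child in root.Children)
        {
            edges.Add(new KeyValuePair<ProjectItem, ProjectItem>(root, child));
            Flatten(child, vertices, edges);
        }
    }
}
```

    The resulting vertex and edge lists are what get handed to the graph control; associative links (issue-to-task, document-to-task) would be added as extra edges in the same pass.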

    From there the graph control is used to lay out and render the graph. The demo application has a few tweakable items available in the UI, such as the layout algorithm. Additionally you can drill down into a node to get a rendering of that node, its sub-tree and its immediate parent (to go back up the tree).

      Incidentally, the data is loaded from an XML file – SampleData.xml. This was my first attempt at using LINQ to XML.


      The Finished Product


    While not awe inspiring perhaps, it does showcase the ease with which normal data can be visualized in a different way. The project I am working on actually has spiffy icons, tooltips and details panels for each item within the graph.

    Some of the interesting ways we are considering using visualization:

      • Give users the ability to see “covered” items.
        • For instance, in our demo we could visualize the number of issues being generated and easily see which items were getting a lot of defects, or see which items do/don’t have enough “Documentation”
      • The ability to see transitive links. Currently systems are fairly good at showing you a relationship between an item and other items in a many to many relationship. With visualization we can take them to the next level by seeing the connections of the connections of…

      Screenshot #1


      Screenshot #2


      Download the source for the demo here
