Prediction is very difficult…

Posted: February 24, 2012 in Opinion

Prediction is very difficult, especially if it’s about the future – Niels Bohr

I was reading an extremely interesting article about the Theory of Disruptive Innovation. Needless to say, Steve Jobs features prominently when we look up any material in this area.

Let me point to one such article which talks about how Steve Jobs solved the Innovator’s Dilemma.

I am not your typical iFanBoy, but you need to respect what the man achieved in his lifetime. The impact of the iPhone/iPad/iMac has been so profound that any literature about disruptive innovation is quite incomplete without understanding the workings of the mind of Steve Jobs.

At this point, I want you, my dear blog-reader, to read an interview of the person who “influenced” Steve Jobs. May I ask you to pay close attention to the last 3 questions in that interview? They bring us back to the real purpose of this article. Enjoy :)

The US economic outlook is looking bleak & the markets have been flip-flopping every day since the S&P credit-rating downgrade – all of this in response to the US national debt of over $14 trillion & counting!

As developers, do you know what the Technical Debt in your code-base is?

Have you been able to estimate, measure, track & pay back this debt before it becomes a burden & you too face a credit-rating downgrade by your customers?

Surely, popular project management tools like TFS, Jira & MS Project should be able to tell you these metrics?

No? Think again.

What is Technical Debt?

This article will not focus on the definition of Technical Debt; there are tons of well-written articles out there with more detail than I can ever hope to cover. Some of the most useful links I referred to while writing this article are:

Technical Debt – a basic primer from Wikipedia

Technical Debt – the different kinds of debts & more depth in understanding

Technical Debt Quadrant – by Martin Fowler, prepares the kind of mindset needed to address this topic

Paying down your Technical Debt – by Jeff Atwood, a good article to understand why paying down is so important

Is Technical Debt still a useful metaphor? – interesting read. I think the term is even more relevant today than before, but this article has an interestingly different view point.

With the education out of the way, my focus for this article is to put forth some practical steps & techniques I have used in my previous projects to pay down Technical Debt.

Robert McIlree has proposed a method to measure Technical Debt. But as the author mentions, some of the parts are based on approximations & we need a more solid method to estimate & project the costs.

Raj Anantharaman mentions another technique using TFS for the measurement & graphical representations.

The technique I am putting forth below is a mix of both Robert’s & Raj’s work – but generalized to work with any Agile or non-Agile process you might be using & independent of any tools.

To bell the cat

Let us consider a project (Agile/non-Agile does not matter) which has a list of business requirements.

This list can be called a Product Backlog (Agile) or a Business Requirements Document (non-Agile). For simplicity of explanation, let us use the common terminology of Product Backlog to mean both the Agile & non-Agile description of this list.

Along with the Product Backlog, create another list called the Technical Debt Backlog – which, at the start of the project, might list the Technical Debt inherited by the team: existing services or code-bases being used as part of the project, existing schemas etc. If you are lucky enough to start from scratch, this Backlog will be empty.

Now, for every Sprint/Release/Iteration, record every item identified as Technical Debt as a line item in the Technical Debt Backlog.

Estimate & prioritize the Technical Debt Backlog the same way you would your Product Backlog items & you are on your way to Technical Debt freedom!

For the Mathematically-minded

Here are some interesting metrics which can be generated from the Technical Debt Backlog

Table 1: Technical Debt Metrics

| Metric | Description | Units |
| --- | --- | --- |
| Debt per release | Technical Debt items introduced into the code-base per release | count |
| Debt per resource | Technical Debt items introduced into the code-base per resource | count |
| Debt Reduction Velocity | Technical Debt items closed/released per release | count |
| Debt Aging | How long a Technical Debt item sat in the Technical Debt Backlog before it was taken up in a release | days |
| Debt Cost | How much the debt costs to pay back | person-days |
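These metrics fall out directly once the Technical Debt Backlog is an explicit list. Here is a minimal sketch in Python (the `DebtItem` fields and the sample backlog are hypothetical, not prescribed by any particular tool or process):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DebtItem:
    opened: date                      # when the debt entered the Backlog
    release_found: int                # release that introduced it
    owner: str                        # resource who introduced it
    cost_days: float                  # estimated person-days to pay it back
    release_paid: Optional[int] = None
    closed: Optional[date] = None

def debt_per_release(items, release):
    """Debt per release: items introduced in a given release."""
    return sum(1 for i in items if i.release_found == release)

def reduction_velocity(items, release):
    """Debt Reduction Velocity: items closed in a given release."""
    return sum(1 for i in items if i.release_paid == release)

def debt_aging(item, today):
    """Debt Aging: days the item sat (or has sat so far) in the Backlog."""
    return ((item.closed or today) - item.opened).days

def outstanding_cost(items):
    """Debt Cost still owed, in person-days."""
    return sum(i.cost_days for i in items if i.closed is None)

# A hypothetical Backlog with three recorded debt items:
backlog = [
    DebtItem(date(2012, 1, 2), 1, "alice", 3, release_paid=2, closed=date(2012, 2, 1)),
    DebtItem(date(2012, 1, 9), 1, "bob", 5),
    DebtItem(date(2012, 2, 6), 2, "alice", 2, release_paid=2, closed=date(2012, 2, 20)),
]
```

The same queries map directly onto whatever work-item tracker you use; the point is that the metrics are trivial once the debt items are recorded explicitly.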

These calculations are a simple & straightforward way of looking at the Technical Debt in your project: list the items explicitly, estimate & prioritize them & have your team work on them.

Technical Debt – like all debt items – can be a very powerful tool to meet your short-term deliveries.

Similarly, like all debt, the longer Technical Debt stays in your code-base, the more interest it accrues & the more expensive it is to pay back what you owe.
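To make the interest metaphor concrete, here is a toy compound-interest calculation; the 10% per-release “interest rate” below is purely illustrative, not a measured figure:

```python
def cost_to_pay_back(principal_days, rate_per_release, releases_deferred):
    """Compound the cost of deferred debt, like interest on a loan."""
    return principal_days * (1 + rate_per_release) ** releases_deferred

# A fix worth 5 person-days today, deferred for 6 releases at a
# hypothetical 10% interest per release (workarounds, re-learning,
# coupling), costs noticeably more later:
now = cost_to_pay_back(5, 0.10, 0)    # 5.0 person-days
later = cost_to_pay_back(5, 0.10, 6)  # ~8.86 person-days
```

The exact rate is unknowable up front, which is precisely why tracking Debt Aging matters: it tells you how long each item has been accruing.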

Beware & take care!

Having worked with Agile teams since 2002, my personal experience is that the mindset of the pigs on the team is a key differentiator between the success or failure of the project.

It does not matter if you have a co-located Agile team or a global, multicultural Distributed Agile team spread across multiple time-zones – the mindset of the members of the team can be the difference between successful delivery & failure at every step.

The Agile Manifesto values:

Individuals and interactions over processes and tools

I think the authors got this spot-on – both for the succinctness of the statement & for the fact that it is the very first value of the Manifesto!

Individuals & interactions – especially within the Agile team (the Pigs) are the key.

So, what needs to be in the mind of the pig & why does it matter, given that its commitment to the endeavor is bacon?

The Mind of a Pig

For me, the ideal Agile team is one where every individual knows his area of contribution & contributes to the best of his ability there.

This expectation includes:

  • Understanding the role that individual is performing
  • Understanding the deliveries for that role
  • Understanding the measurement of “done”
  • Understanding how this role impacts the other roles & their deliveries

If an individual contributor can come into an Agile team with the above understanding clear in his mind, it is of immense help & a major contribution in itself.

Sometimes, the understanding of one’s role in the grand scheme of things can be rather humbling, other times a tad disconcerting – but all in all it is a vital mindset for the success of the project.

One might argue that having individuals with this mindset is an asset to any project, is not dependent on the execution methodology used & will indeed be a success factor in Agile & non-Agile projects alike.

But like I started – it’s all in the mind of a Pig!

“The Managed Extensibility Framework (MEF) simplifies the creation of extensible applications. MEF offers discovery and composition abilities that you can leverage to load application extensions.” – (mef.codeplex.com)

“The common service locator library contains a shared interface for service location which application and framework developers can reference” – (commonservicelocator.codeplex.com)

Wow!

Here is how to get them both working together.

NOTE: I am assuming that the reader is aware of the Dependency Injection pattern & the Common Service Locator.

The Steps:

  1. Download the CommonServiceLocator_MefAdapter.4.0.zip file & extract the MefServiceLocator class written by Glenn Block
  2. Create a new ASP.Net MVC 2.0 Web Application Project
  3. Include the MefServiceLocator class into this project
  4. Open the Global.asax.cs file
  5. Make the following code change in the Application_Start event:

    protected void Application_Start()
    {
        RegisterDIContainer();
        AreaRegistration.RegisterAllAreas();
        RegisterRoutes(RouteTable.Routes);
    }

  6. Write the RegisterDIContainer method as:

    protected void RegisterDIContainer()
    {
        // Compose parts from this assembly plus any plugin assemblies
        var catalog = new AggregateCatalog(
            new AssemblyCatalog(Assembly.GetExecutingAssembly()),
            new DirectoryCatalog(Server.MapPath("~/Plugins"))
        );

        var composition = new CompositionContainer(catalog);
        composition.ComposeParts(this);

        // Expose the MEF container through the Common Service Locator
        var mef = new MefServiceLocator(composition);
        ServiceLocator.SetLocatorProvider(() => mef);
    }

  7. With this in place, you are now good to do:  

    var controllers = ServiceLocator.Current
            .GetAllInstances<IController>();

Code on…

In part 1 of this series, we described a way to simplify our code by abstracting some of the role-based implementation which ASP.Net Memberships & Roles give us.

In this part, we will explore the extensions needed in the SQL Server database: additional tables which will store the data needed to implement this concept.

The Data Model Changes

Access Point Data Model

The above diagram shows the extension needed to the data model to support the additional functionality.

  • aspnet_AccessPoints – The master table of all the Access Points defined in the application (by ApplicationId). The “Value” column can store any value associated with that Access Point. We will see the use of this field in a future article where we will explore some of the advanced techniques of Access Points.
  • aspnet_RoleHavingAccessPoints – A simple Role to Access Point mapping table. Using this table we will be able to link a Role to one or more Access Points. The “Value” column is present here to override the default value in the aspnet_AccessPoints table with a different value for a specified Role.
  • aspnet_UserHavingAccessPoints – A table which maps users to Access Points. We will use this table to grant special permissions to a single user. For example: consider a situation where all users in the role “Editor” should be able to edit articles on your blog, but user John Doe, who is in the Editor role, also needs permission to approve comments. Usually, we would create a different role for this & assign John Doe to that role. But in the Access Point implementation, we will use this table & add an entry mapping John to that Access Point. The “Value” column in this table overrides those of the aspnet_AccessPoints & aspnet_RoleHavingAccessPoints tables. In effect, you can give specific values to Access Points at the user, role or Access Point level, in that order of priority.
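The three-level “Value” override described above (a user-level value beats a role-level value, which beats the Access Point default) can be sketched as below. The dictionaries stand in for rows of the three tables, and all names and sample data are hypothetical:

```python
def effective_value(point, user_overrides, role_overrides, defaults):
    """Resolve the effective Value of an Access Point.

    Priority: aspnet_UserHavingAccessPoints, then
    aspnet_RoleHavingAccessPoints, then aspnet_AccessPoints.
    """
    if point in user_overrides:       # user-level override wins
        return user_overrides[point]
    if point in role_overrides:       # then the role-level override
        return role_overrides[point]
    return defaults.get(point)        # finally the Access Point default

# Hypothetical rows for user John Doe, who is in the Editor role:
defaults = {"EditArticle": "own", "ApproveComment": "none"}
editor_overrides = {"EditArticle": "all"}
john_overrides = {"ApproveComment": "all"}
```

Whatever SQL you end up writing for the lookup, it should reduce to this same three-step fallback.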

With these changes in place, we are now ready to start writing some code to expose this functionality & start seeing some benefits of this technique. We will cover this in part 3 of this series.

Happy Coding!

All articles of this series:

  • Part 1 – an introduction to the Access Point concept
  • Part 2 – database changes needed to enable Access Points
  • Part 3 – code to enhance the ASP.Net Memberships and Roles
  • Part 4 – some common usage patterns and advantages
  • Part 5 – a simple UI implementation to show use of Access Points
  • Part 6 – an admin console to manage Access Points

The beauty of ASP.Net is that some of the core framework components can be replaced with custom implementations by writing the appropriate Provider classes using the Provider Model.

In the following series of articles, I will show a custom implementation of the ASP.Net Membership Providers to extend the Memberships.

Reason for extension

The current Membership implementation is role-based, i.e. the user should be in a role to be able to access a feature of your application.

For example, if we have a page with some UI elements whose access depends on the roles assigned to the user, we might write code like the below:

bool has_access_role1 = this.Page.User.IsInRole("Role1");
bool has_access_role2 = this.Page.User.IsInRole("Role2");

if(has_access_role1) { 
    // do something with Role 1...
}

if(has_access_role2){
    // do something with Role 2...
}

There are a couple of important but subtle assumptions being made in this code snippet:

  • There is an assumption that we know all the Roles, their names & their behavior in the application
  • The implementation and responsibilities of that Role are baked into the application and if there is any future change in this, the code will also need to change

While this is acceptable in most scenarios, I would prefer having another level of abstraction which will remove some of these assumptions and code dependencies.

Introducing Access Points

The implementation I am showing below, called “Access Points”, provides this abstraction and is a concept I have been working with in some of my other projects.

  • Each protected element is associated with a single entity called an “Access Point”, which is a simple boolean (TRUE|FALSE)
  • Access Points are assigned to a Role or a User
  • When an Access Point is assigned to a Role, every User in that Role gets the Access Point by virtue of being assigned to the Role
  • When an Access Point is assigned to a User, it is an additional permission which can be set on a per-user basis
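The grant rules in the list above boil down to a simple union check: a user holds an Access Point if it was granted to him directly, or to any role he belongs to. A sketch (in Python for brevity; the function and variable names are illustrative, not part of the actual implementation):

```python
def has_access_point(user, point, user_roles, role_grants, user_grants):
    """True if the Access Point is granted to the user directly,
    or to any role the user belongs to."""
    if point in user_grants.get(user, set()):
        return True
    return any(point in role_grants.get(role, set())
               for role in user_roles.get(user, ()))

# Hypothetical data: John is an Editor; Editors hold AccessPoint1,
# and John personally holds AccessPoint2.
user_roles = {"john": ["Editor"]}
role_grants = {"Editor": {"AccessPoint1"}}
user_grants = {"john": {"AccessPoint2"}}
```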

With this implementation, we should be able to re-write the above code sample as below:

var has_access_point1 = (this.Page.User as IAccessPointAware)
                .HasAccessPoint("AccessPoint1");
var has_access_point2 = (this.Page.User as IAccessPointAware)
                .HasAccessPoint("AccessPoint2");

if(has_access_point1){
    // do something...
}

if(has_access_point2){
    // do something more...
}

Some of the advantages this mechanism gives us are:

  • The application has a finite set of control points, which is governed by the developers of the application
  • The Application Administrators can create roles for the application and assign Access Points to that role as needed
  • There is no dependency between the code written in the application and the implementation of the roles for the application
  • With this, the Application Administrators can add/update/delete roles as needed without any impact on the application
  • If a single user needs a permission or access to a single element, the Application Administrator can assign that Access Point to the single user

Next steps

If this is something that interests you, please follow on to the next parts of this series, which cover:

  • Part 1 – an introduction to the Access Point concept
  • Part 2 – database changes needed to enable Access Points
  • Part 3 – code to enhance the ASP.Net Memberships and Roles
  • Part 4 – some common usage patterns and advantages
  • Part 5 – a simple UI implementation to show use of Access Points
  • Part 6 – an admin console to manage Access Points

Happy Coding!

References

4GuysFromRolla 2005, A Look at ASP.Net 2.0’s Provider Model – http://www.4guysfromrolla.com/articles/101905-1.aspx

Rob Howard, Provider Model Design Pattern and Specification – http://msdn.microsoft.com/en-us/library/ms972319.aspx

The powers that be agree that the Dependency Injection (DI) & Inversion of Control (IoC) patterns help applications become more component-based, easier to maintain & better designed, and open to unit testing & mocking.

But how does one really use DI/IoC in their application to get the advantages aforementioned?

The following series of articles provides a step-by-step guide for noob programmers: first structuring code to work with DI & then implementing IoC.

This article is the first of this series & provides an introduction to DI.

What is Dependency Injection?

Dependency Injection (DI) is a mechanism to inject dependencies into code such that the piece of code can consume the injected dependency without having knowledge of the construction or lifetime of the injected dependency.
In other words, DI in object-oriented computer programming is a technique for supplying an external dependency (i.e. a reference) to a software component – that is, indicating to a part of a program which other parts it can use (Wikipedia, 2010)

Let us break the definition down into smaller parts.

What is a dependency?

Consider the following lines of code:

public class AccountController{

    private readonly IAccountRepository _repository;

    public AccountController (
        IAccountRepository repository ) {

        if(repository == null)
            throw new ArgumentNullException("repository");

        _repository = repository;
    }

    public ActionResult Login( string userName,
        string password ){

        var isLoggedIn = _repository.DoLogin(userName,
                                 password);
        return isLoggedIn
            ? View("AccountInfo")
            : View("Login");

    }
}

In the above sample, we can say that the class AccountController has a dependency on a type implementing the IAccountRepository interface, i.e. for the AccountController to perform its functionality, it needs a valid instance of IAccountRepository to be passed in.

One key aspect to note is that we are not passing the concrete class of IAccountRepository, but instead passing in an interface. The advantage of doing this is that AccountController is not dependent on any concrete class, but only on an implementation of IAccountRepository. In other words, AccountController has a dependency on the IAccountRepository implementation.

What is Injection?

In the code sample above, we now know that the AccountController has a dependency on the IAccountRepository. The manner in which we can provide this dependency is known as injection.

One of the simplest ways to do this is:

var account_controller = new AccountController(
       new WindowsAccountRepository());

We could have a WindowsAccountRepository implemented as:

public class WindowsAccountRepository
    : IAccountRepository
{
    public bool DoLogin(
        string userName,
        string password
    ){
        // some code to do login by checking against the Active Directory
        return true; // placeholder so the sample compiles
    }
}

But if, at a later date, you decide that you would like a custom implementation for account management & want to store the information in a database, all you need to do is create a concrete class implementing IAccountRepository & inject that into the constructor of AccountController, like below:

public class CustomAccountRepository
    : IAccountRepository
{
    public bool DoLogin(
        string userName,
        string password
    ){
        // some code to validate against a database
        return true; // placeholder so the sample compiles
    }
}

Consuming this new class would look like:

var account_controller =
    new AccountController(
        new CustomAccountRepository()
    );

This way, the complexity of the implementation of IAccountRepository is abstracted away from AccountController: AccountController uses the behavior exposed via the interface, while the consumer of AccountController is free to inject any concrete implementation of IAccountRepository.

Wrap up

In this first article, we defined dependency injection & showed what it means using a simple code sample. We have seen how we can swap a dependency by injecting a different concrete implementation at the time of consumption.

In the next article, we will look at writing unit tests on the code we have just written & explore the benefits of DI in the context of unit testing.

Comments & suggestions always welcome.

References:

Wikipedia 2010, Dependency Injection – http://en.wikipedia.org/wiki/Dependency_injection