
Dependency Injection is only MOSTLY Good


Miracle Max: He probably needs a dependency on a collaborator.
Inigo Montoya: Use dependency injection. It's good.
Miracle Max: Whoo-hoo-hoo, look who knows so much. It just so happens that dependency injection is only MOSTLY good. There's a big difference between mostly good and all good. Mostly good is slightly bad.


We know that we have to manage dependencies if we want maintainable code, which implies that there are good dependencies and bad dependencies. Well, it's just as easy to inject bad dependencies as good ones unless you're paying attention. But because we're injecting them, they are sometimes not as easily noticed even if you are paying attention. In part because the dependencies become implicit. In part because they are hiding beneath the banner of "dependency injection", which we (erroneously) believe to be inherently a good thing. It's not. It's just mostly good.

Here's an example from the project I'm working on now. We have an ApplicationContext from which we can get a SaveOperationFactory, from which we can get a SaveThingOperation. So the production code inside a ThingPresenter might look like this:

public class ThingPresenter
{
    private ApplicationContext context;
    private Thing thing;

    public ThingPresenter(ApplicationContext context)
    {
        this.context = context;
    }

    public Thing Thing
    {
        set { thing = value; }
    }

    public void SaveThing()
    {
        context.SaveOperationFactory.NewSaveThingOperation(thing).Execute();
    }
}
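
For reference, the declarations behind that chain aren't shown here. Roughly, they might look something like this - a sketch whose names match the code above, but whose exact signatures are my assumptions:

public interface ApplicationContext
{
    // Hands out the factory that knows how to build save operations.
    SaveOperationFactory SaveOperationFactory { get; }
}

public interface SaveOperationFactory
{
    // Each call produces a new, single-use operation for the given Thing.
    SaveThingOperation NewSaveThingOperation(Thing thing);
}

public interface SaveThingOperation
{
    // Performs the actual save.
    void Execute();
}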


Our test looks like this (using a hand-coded mock):

public class ThingPresenterTest
{
    [Test] public void SaveShouldDelegateToSaveThingOperation()
    {
        thingPresenter.SaveThing();
        Assert.IsTrue(saveThingOperation.executeWasCalled);
    }


Seems simple, right? Not so fast. I didn't show you the setup!

public class ThingPresenterTest
{
    [SetUp] public void SetUp()
    {
        applicationContext = new MockApplicationContext();
        thingPresenter = new ThingPresenter(applicationContext);
    }


Still seems simple, right? Not so fast - let's look at MockApplicationContext:

public class MockApplicationContext : ApplicationContext
{
    private SaveOperationFactory mockSaveOperationFactory = new MockSaveOperationFactory();
    public SaveOperationFactory SaveOperationFactory { get { return mockSaveOperationFactory; } }
}


Now let's look at the MockSaveOperationFactory:

public class MockSaveOperationFactory : SaveOperationFactory
{
    public SaveThingOperation NewSaveThingOperation(Thing thing)
    {
        return new MockSaveThingOperation();
    }
}
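
The end of the chain, the hand-coded MockSaveThingOperation, isn't shown in the original, but given the executeWasCalled assertion in the test above it presumably looks something like this:

public class MockSaveThingOperation : SaveThingOperation
{
    // The test inspects this flag to verify that the presenter executed the operation.
    public bool executeWasCalled = false;

    public void Execute()
    {
        executeWasCalled = true;
    }
}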


OK - you get the idea by now. Here's an alternative using Rhino Mocks:

public class ThingPresenterTest
{
    [Test] public void SaveShouldDelegateToSaveThingOperation()
    {
        Expect.Call(saveThingOperation.Execute());
        mocks.ReplayAll();
        thingPresenter.SaveThing();
        mocks.VerifyAll();
    }


If you're familiar with Rhino Mocks this should present no problem - but let's look at the setup:

public class ThingPresenterTest
{
    [SetUp] public void SetUp()
    {
        applicationContext = (ApplicationContext) mocks.DynamicMock(typeof(ApplicationContext));
        saveOperationFactory = (SaveOperationFactory) mocks.DynamicMock(typeof(SaveOperationFactory));
        saveThingOperation = (SaveThingOperation) mocks.DynamicMock(typeof(SaveThingOperation));
        Expect.Call(applicationContext.SaveOperationFactory).Return(saveOperationFactory);
        Expect.Call(saveOperationFactory.NewSaveThingOperation(null)).IgnoreArguments().Return(saveThingOperation);

        thingPresenter = new ThingPresenter(applicationContext);
    }


And that's just the setup for this one test method. There are other methods on the ThingPresenter that get other factories from the ApplicationContext, which means there are more mocks to set up. Look at each piece and it looks innocuous, but there are two big problems here.

One is that trying to drive this from tests means a LOT of setup before you can even get the right players in your test. If I want to add an OtherThing to my application, with an OtherThingPresenter, I can't test the OtherThingPresenter until I hook everything up in the various mocks.

The other is that debugging is, well, debugging. Chasing down execution paths through several class files, etc, etc, etc. It's painful. And it sucks. It has all the testing problems we get from Service Locator. In fact, injecting dependencies on factories is just Service Locator in disguise.
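
To see why, compare the two call sites. The ServiceLocator name below is hypothetical, purely for illustration - but structurally both versions ask an intermediary for the collaborator instead of being handed it:

// Classic Service Locator: ask a well-known registry for what you need.
SaveOperationFactory factory =
    (SaveOperationFactory) ServiceLocator.Get(typeof(SaveOperationFactory));
factory.NewSaveThingOperation(thing).Execute();

// Injected factory: the registry arrives through the constructor,
// but we still ask it for what we actually need.
context.SaveOperationFactory.NewSaveThingOperation(thing).Execute();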

Here's an alternative. Just give the ThingPresenter exactly what it needs:

public class ThingPresenterTest
{
    [SetUp] public void SetUp()
    {
        thingSaver = (ThingSaver) mocks.DynamicMock(typeof(ThingSaver));
        thingPresenter = new ThingPresenter(thingSaver);
        thing = new Thing();
        thingPresenter.Thing = thing;
    }

    [Test] public void SaveShouldDelegateToThingSaver()
    {
        Expect.Call(thingSaver.SaveThing(thing));
        mocks.ReplayAll();
        thingPresenter.SaveThing();
        mocks.VerifyAll();
    }


What you see there is everything we need for this test. We can write it just like that and implement the code without depending on anything implicit. The ThingSaver can deal with getting a new SaveThingOperation, and the code that builds real ThingPresenters in production can use the ApplicationContext to get what it needs to build the ThingPresenter - but that's just wiring, which is easily tested elsewhere and which we don't need (and shouldn't need) to depend on here.
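
For completeness, here's a sketch of what that might look like. The ThingSaver interface and the OperationThingSaver name are assumptions - the original doesn't show them - but they illustrate where the factory ends up living:

public interface ThingSaver
{
    void SaveThing(Thing thing);
}

public class OperationThingSaver : ThingSaver
{
    private SaveOperationFactory factory;

    public OperationThingSaver(SaveOperationFactory factory)
    {
        this.factory = factory;
    }

    public void SaveThing(Thing thing)
    {
        // The factory dependency is hidden here, not in the presenter.
        factory.NewSaveThingOperation(thing).Execute();
    }
}

// Somewhere in the production wiring code - the only place that needs the ApplicationContext:
ThingPresenter thingPresenter =
    new ThingPresenter(new OperationThingSaver(context.SaveOperationFactory));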

I think part of the problem is the name "Dependency Injection". It's actually not injecting dependencies at all. In fact, it's protecting us from them. But that name makes it sound like it's good to inject dependencies. Perhaps "Dependency Hiding" would better describe what we're after, just like "Data Hiding" and "Implementation Hiding" - the things that lead us toward highly decoupled, highly maintainable code.


Comments:
 Wed, 28 Dec 2005 15:02:20, ,
That's why the "Mock Roles not Objects" paper recommends writing the tests with mocks FIRST and using the mocks to guide the design of the interactions between objects. You wrote (or at least showed) the implementation first and then tried to test it. The obvious smell (a massive "train-wreck" of "Ask, Don't Tell" code) would never have arisen if designed test-first.
 Wed, 28 Dec 2005 15:08:00, David Chelimsky, re: Mock Roles not Objects
Good point. Sadly, we started off on the right path. The tests were written first, and they drove these objects. As the parameter lists grew, we started to smell, well, long parameter lists. As a solution for THAT, we started to create these factories. They were introduced as an apparent design improvement, and in that context they seemed harmless. It was only after that change, when we started to introduce new business entities that participate in all of the relevant layers (model, db, gui, etc), that we learned the pain. We look at a test for another presenter as a guide, and find ourselves in the mess described above. Live and learn.
 Thu, 29 Dec 2005 00:26:35, Mike, it scares me
I don't fully understand what you're trying to accomplish, but a SaveOperationFactory class scares me, because it seems to be centered on a function (save) and it's going to have to be changed every time we add an entity. I've been very pleased with trying the ThingRepository approach from DDD.
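
(For readers who haven't seen the pattern: a repository in the DDD sense is centered on the entity rather than on the save operation - something along these lines, purely as an illustration, not code from the project:)

public interface ThingRepository
{
    Thing FindById(long id);
    void Add(Thing thing);
    void Remove(Thing thing);
}
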
 Thu, 29 Dec 2005 03:36:10, Gérard Mendes, I don't get it...
Well, for me Dependency Injection is just linking an interface type to an actual type. It's not quite like saying component A needs a BFactory to get an instance of B... which is what your example shows. In fact, you could legitimately use Dependency Injection to give your ThingPresenter a ThingSaver.

What I think you're showing, however, is how important it is to separate configuration from use: factories are often best hidden, for they are one dependency too many. I guess that this was your point...
 Thu, 29 Dec 2005 06:34:31, ,
If you have long parameter lists, the obvious refactoring is to find the single abstraction represented by multiple parameters and pass instances of that class around instead. Where did the need for a factory come from?
 Thu, 29 Dec 2005 08:15:35, David Chelimsky, re: need for factories
First of all, I agree with all of the fears and smells presented here. The whole reason I wrote about this was to expose how using one good idea, dependency injection in this case, can hide the fact that you're using other bad ideas. So I'm not defending the design. I think it's problematic. I just want to clarify how we got there, to expose pitfalls that we fell into.

The situation above evolved over several iterations. First there were "Containers" - something akin to the ThingRepository Mike mentioned - something akin to a DAO. Each one managed a collection of a given type. If a presenter needed to manage three things, it got three of these containers. From there we moved to putting all of the containers in an ApplicationContext. So now a presenter would ask the context for the containers. We were only one level deep in the path to the train-wreck, but it seemed (read: "seemed") better than having to change the parameter list on the constructor every time we added some type to a view.

Later, we discovered that we needed to save a particular object graph in a transaction (which we had not been doing). The Containers didn't easily support this because they didn't have any interface for slinging around connections to each other - each was completely autonomous. So a new concept of a SaveOperation emerged. It allowed us to wrap a transaction in an object that could interact with multiple components of an entity and their related database tables, report back errors, handle rollbacks, etc. It's actually quite useful.
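
(A sketch of the kind of interface being described - the members here are guesses based on that description, not the project's actual code:)

public interface SaveOperation
{
    // Saves the whole object graph in a single transaction, rolling back on failure.
    void Execute();

    // Reports back whether the save succeeded and any errors to show the user.
    bool Succeeded { get; }
    string[] Errors { get; }
}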

The need for a SaveOperationFactory arose because each instance of a SaveOperation represents one transaction, so we couldn't just stick a SaveOperation in the constructor of the presenter. And here is where we made what is probably the biggest mistake, the point at which we really slid down that slippery slope of "Tell, Don't Ask" violations. We put the SaveOperationFactory inside the ApplicationContext rather than adding it to the parameter list.

So we got here one step at a time.

In the final suggestion I present above, the ThingSaver probably has a SaveOperationFactory, but that's transparent to the presenter. The point being that there are ways to accomplish what we needed - but through laziness, hurriedness, blindness, time pressures, etc, we didn't see the mess that was evolving. It really became exposed, however, when a couple of new guys joined the team and they had trouble writing tests for new parallel pieces of functionality (i.e. new entities and new views to manage them). Those of us who were present during the evolution of the mess were comfortable with it because we understood it. We just accepted it. Thankfully, the new team members said "what in god's name is this?" and we are now on the road to recovery.
 Thu, 29 Dec 2005 11:47:18, ,
It sounds to me like you did *not* use dependency injection along those steps. Getting things from a context is the antithesis of dependency injection - a design using DI would just pass an object's dependencies to its constructor or as arguments to its methods.

I find "dependency injection" to be a very annoying term. As another commenter points out, it's not very descriptive. It's just plain old object oriented programming -- using a constructor to initialise an object into a valid state.

And don't get me started on the misuse of the term "inversion of control"!
 Sun, 16 Apr 2006 06:51:30, Vitali Chkliar, Defaults should reduce setup
How about having some defaults that can be overwritten by DI?
 Sun, 16 Apr 2006 18:41:26, David Chelimsky, re: Defaults should reduce setup
Personally, I prefer to have one constructor that accepts all of the necessary services. To support defaults, you either need at least two constructors (one that sets up the defaults, one that uses what it is given) or exposed setters. In my experience, both of those options lead to more confusion, though the latter (exposed setters) has proven far worse.
 Mon, 17 Apr 2006 08:19:24, David Chelimsky, clarification re: Defaults should reduce setup
Clarification: When I say that I prefer to avoid defaults I'm talking about default services - not default property values. So if there are a few chained constructors like this:

public Thing(Service_1 service_1)
    : this(service_1, default_property_value_1) {}

public Thing(Service_1 service_1, Property_1 property_value_1)
    : this(service_1, property_value_1, default_property_value_2) {}

public Thing(Service_1 service_1, Property_1 property_value_1, Property_2 property_value_2) {
    ...
}

...that's OK w/ me. Imagine, however, supporting all of those property value combinations PLUS making the services optional as well.
 Thu, 11 May 2006 00:36:32, Curt Sampson, Multiple ThingSavers?
I may be making the incorrect assumption here that the same kind of thing does not have different kinds of ThingSavers, but it seems to me that it might be more practical just to teach things how to save themselves, so that a presenter can deal with just a thing, and nothing else.
 Fri, 12 May 2006 04:24:42, BvdB, OO
Imho I would say that it's not the responsibility of the ThingPresenter to store things. It should present things - isn't that what OO is all about?
The IoC and Dependency Injection thing is about "the Caller being the Controller", so you should have some other class(es) responsible for the workflow handling.
 Tue, 16 May 2006 13:42:08, David Chelimsky, agreed
Curt and BvdB: I agree with both of you. This design has many other problems in addition to the topic of the blog.