10 Comments

  1. Ihor

    Please note, you have a header in the text

    What is Dependency Inversion Principle (DIP)

    but the next sentence starts with: “The Dependency Injection principle refers“.

    I hope you mean DIP under the header as well, not DI 🙂

    • admin

      Updated now. Thanks for visiting.

  2. Ankit Kumar

    Nice article and very helpful for understanding Dependency Injection

  3. SMandeep

    Simple yet clear understanding of the concept…Thanks

  4. Baba Mulani

    it makes the overall design looser without forcing changes

    should read

    it makes the overall design loosely coupled without forcing changes

  5. Steve Naidamast

    A very nice article describing dependency injection, which looks similar to polymorphic inheritance.

    Both cases are, unfortunately, highly inefficient and should be used with care when considering such an implementation.

    Using such techniques to “loosely couple” objects and/or sections of an application does not always align with the realistic needs of such development. The majority of applications are designed for the requirements that exist in the present.

    Devices such as dependency injection exist for the possibility of future changes that may introduce additional but similar processes, or brand-new processes, which are more easily implemented thanks to dependency injection.

    There are applications that do in fact require such possibilities in their development, but not all, and certainly not many.

    Nonetheless, many technical managers will espouse this type of development, often without a clue as to its requirements or the complexities that can arise.


    • I’m going to have to call Steve on the carpet. Dependency Injection has nothing whatsoever to do with functional requirements; it satisfies quality attributes, or non-functional requirements.

      I would like to understand the rationale for the claims of inefficiency in the post. DI costs nothing but the time required to understand its proper use and value. Please refer to the book “Dependency Injection in .NET” by Mark Seemann for some excellent additional reading.

      If you write anything more complex than “Hello World”, or expect it to last longer than the time it takes for your stakeholders to want extensions, then following the SOLID principles will serve you well. Please read about cohesion and coupling to get a feel for the value of this post and its roots. Steve, I hope you follow up with these suggestions. They’ll make your designs more useful, and your development life much easier.

  6. Morten Herman

    There is a huge problem with the service locator: the dependencies required by your class are not publicly visible, producing a code base where you have to dive into each class to understand its coherence with other objects. The service locator also makes it hard to mock out dependencies when doing automated tests. The service locator does not reduce coupling; it merely hides it.

    Consider adding a decorator for a dependency; that would have to be set up in the service locator. But what if the service locator is used from multiple places, and it is only in some cases that you need the decoration? Then suddenly you need logic about when to decorate exposed to the users of the service locator. The class where the service is needed would also have to know that it needed a special version of an interface. This violates the dependency inversion principle: a class should depend only on abstractions, not on detailed knowledge of other classes.
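    The hidden-dependency problem described above can be sketched as follows (a minimal illustration; the `ServiceLocator`, `Logger`, and `OrderProcessor` names are hypothetical, not from the article):

    ```python
    # A minimal service locator: a global registry mapping keys to instances.
    class ServiceLocator:
        _services = {}

        @classmethod
        def register(cls, key, service):
            cls._services[key] = service

        @classmethod
        def resolve(cls, key):
            return cls._services[key]


    class Logger:
        def log(self, msg):
            print(msg)


    class OrderProcessor:
        """Nothing in this class's constructor or signature reveals that it
        needs a Logger; the dependency is only discovered by reading the
        method body, and a test must mutate the global registry to fake it."""

        def process(self, order_id):
            logger = ServiceLocator.resolve("logger")  # hidden dependency
            logger.log(f"processing {order_id}")
            return order_id


    ServiceLocator.register("logger", Logger())
    OrderProcessor().process(42)
    ```

    Note that `OrderProcessor()` takes no arguments, which is exactly why the coupling is invisible from the outside.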

    However, most dependency injection frameworks use a service locator pattern for registering mappings between classes and interfaces, for use when dependencies are to be created at runtime. So in some cases it makes sense to use the pattern.

    I would always go with dependencies defined as interfaces with a well-defined purpose, using constructor injection, and dependency definitions set up with a DI container/framework, to create high coherence and low coupling along with flexible application configuration.
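    The constructor-injection approach recommended here can be sketched like this (hypothetical names; the dependency is declared as an abstract interface and is trivially replaceable with a fake in tests):

    ```python
    from abc import ABC, abstractmethod


    class Logger(ABC):
        """The abstraction the consumer depends on."""

        @abstractmethod
        def log(self, msg): ...


    class ConsoleLogger(Logger):
        def log(self, msg):
            print(msg)


    class OrderProcessor:
        # The dependency is visible in the constructor signature, so a test
        # can pass in a fake without touching any global registry.
        def __init__(self, logger: Logger):
            self._logger = logger

        def process(self, order_id):
            self._logger.log(f"processing {order_id}")
            return order_id


    class FakeLogger(Logger):
        """A hand-rolled mock for automated tests."""

        def __init__(self):
            self.messages = []

        def log(self, msg):
            self.messages.append(msg)


    fake = FakeLogger()
    OrderProcessor(fake).process(42)
    ```

    In production code a DI container would build the `OrderProcessor(ConsoleLogger())` graph; the class itself never knows which implementation it receives.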


    • I think you mean cohesion, not coherence, and your point about hiding the coupling is not true. There is no way to remove all dependency from any engineering effort; by definition, subsystems must affect the systems that incorporate them. Cohesion is generally explained by stating that “a software program designed to perform multiple tasks through multiple modules has a higher probability of having lower cohesion, which negatively affects its overall performance and effectiveness on computing machines”.

      The service locator enables the decorator that you advocate by permitting you to use a context to define the abstraction, thereby permitting you to compose it at run-time based on the purpose it is intended to serve.
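      The run-time composition described here can be sketched roughly as follows (hypothetical names; a locator-style factory uses the caller’s context to decide whether the base service gets wrapped in a decorator, so callers never carry the decoration logic themselves):

      ```python
      class PriceService:
          """The base abstraction: computes a final price."""

          def price(self, amount):
              return amount


      class DiscountDecorator:
          # Wraps any object exposing price() and applies a percentage discount.
          def __init__(self, inner, percent):
              self._inner = inner
              self._percent = percent

          def price(self, amount):
              return self._inner.price(amount) * (1 - self._percent / 100)


      def resolve_price_service(context):
          """A locator-style factory that composes the decorator only for
          the contexts that need it; callers stay unaware of the decision."""
          base = PriceService()
          if context == "frequent_flyer":
              return DiscountDecorator(base, 10)
          return base


      resolve_price_service("frequent_flyer").price(100)  # 90.0
      resolve_price_service("regular").price(100)         # 100
      ```

      Testability follows the same split the comment argues for: the discount rule is tested through `DiscountDecorator` directly, independent of how the factory composes it.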

      Consider the pattern for the MS Rules Framework. You define your vocabularies and rules at design time. They can be added dynamically and are then stored in the policy cache. When a policy is evaluated by the rules engine, it looks at the cache for the definition of the policy using the evaluate method (if the customer is a frequent flyer, then discount their flight by “rule = discount percent” calculated against the “vocabulary = subtotal”). If your ticketing application is running, its cache is updated dynamically when the new discount value is updated, or if a new rule is added for the policy that defines a new level of discount (i.e. frequent flyer and gold club member discount is discount + 1.5). This does not require exposing the logic. If you are truly “loosely coupled” then the test should be on the rule, and not the engine implementation of the rule, so testability is not impacted negatively; in fact it is enhanced greatly.

      I do agree with the assertion that the service locator is not a silver bullet, but it is definitely not a construct to be avoided. Many times you cannot anticipate additional use cases for your component and will not introduce this level of extensibility/maintainability, because you always want to balance against complexity; but we should always be seeking use cases for extending the components we have built. By observing the Open/Closed and Single Responsibility principles we can generally identify when these patterns are indicated.

      We as engineers add value to our respective organizations by providing reusable components and extensions. This not only reduces cost of goods sold; it reduces customers’ total cost of ownership by supplying fix-once, fix-everywhere, as well as improving the maturity and reliability of our components over time as we find and fix defects. When we compose our applications with these well-matured components, they invariably solidify the applications that use them. This can only lead to good things for the reputation of our customers, our company, and ourselves.

