In the discussion following my entry "Why Rails is total overkill and why I love Rack", several comments raised the issue of whether high coupling is always bad. My answer was that I believe it is, but that at the same time it can sometimes be worth it.
It seems like a point that is worth further discussion. I'm not going to go into a terrible amount of detail, as I enjoy the discussion more than expounding on a subject that should be relatively uncontroversial.
What do I mean by coupling and cohesion?
My earlier entry linked to the Wikipedia articles for these terms, because I was sure some people would misunderstand, and sure enough, some did. So let's go into some more detail:
Two components are loosely coupled when changes in one never or rarely necessitate a change in the other.
Changes that affect external interfaces will of course require changes, and so you can't completely safeguard against changes causing ripples. You can protect against it by narrowing the interface. This is why coupling and cohesion are so tightly related:
A component exhibits high cohesion when all its functions/methods are strongly related in terms of function.
The higher cohesion and lower coupling a system has, in general the more its components exhibit strong data hiding, narrow but general interfaces and a high degree of flexibility.
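To make the terms concrete, here is a tiny Ruby sketch (the class names are made up purely for illustration). The first class hides its internal representation behind one narrow method; the second leaks it, so every caller ends up depending on the implementation details:

```ruby
# Narrow interface with data hiding: callers only know a report can be rendered.
class Report
  def initialize(entries)
    @entries = entries # internal representation stays hidden
  end

  def render
    @entries.map { |e| e.to_s }.join("\n")
  end
end

# Leaky alternative: callers reach into the entries directly and now depend
# on it being an Array with a particular structure - higher coupling.
class LeakyReport
  attr_accessor :entries
end
```

Swapping the Array inside Report for some other structure touches one class; doing the same to LeakyReport touches every caller.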
Why coupling is always bad
Surely increasing dependencies on implementation details of other components isn't a good thing?
The objections I've seen don't usually imply that coupling is good, though, but rather that coupling isn't always bad because it's necessary to achieve high cohesion.
Some evils are necessary, but that doesn't make them good. I will not try to argue that increasing coupling isn't sometimes worth it - see below.
Coupling is always bad because it prevents components from being replaced or changed independently of the whole.
It's hard to see a defense against this, and indeed hard to argue for it, because it appears so self-evident.
What are some of the consequences of high coupling?
- Developers / maintenance programmers need to understand potentially the whole system to be able to safely modify a single component.
- Changing requirements that affect the suitability of some component will potentially require wide ranging changes in order to accommodate a more suitable replacement component.
- More thought needs to go into choices at the beginning of the lifetime of a software system, in order to attempt to predict its long-term requirements, because changes are more expensive.
I can't think of a single benefit of high coupling in and of itself. If anyone thinks they can actually defend why high coupling might sometimes be good (as opposed to just occasionally being a necessary evil), I'd love for you to post your comments to this post...
Cohesion vs. coupling, and why coupling is sometimes worth the cost
Cohesion is about making sure each component does one thing and does it well. The lines get blurry in a language like Ruby, where one "component" could be a library that reopens a class like Object and in effect extends every object in the system. The specifics don't really matter. What matters is whether the code is self contained.
It's generally easier to reduce coupling in a highly cohesive system.
It is easier, because a highly cohesive system will group the related functionality together, so that the need to communicate across component boundaries (whether those "components" are classes, separate processes, or methods injected into reopened classes by a library) is reduced.
The key point is that related code often shares state. Sharing state across component boundaries increases dependencies. Increased dependencies increase coupling.
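A contrived Ruby illustration of the point (the names are mine, not from any real system): in the first version, two components share a mutable hash, so each depends on the exact keys and mutation behavior of the other; in the second, the state lives in one cohesive place and the outside world only sees a value.

```ruby
# Coupled: both lambdas depend on the shared hash's structure.
shared = { count: 0 }
incrementer = -> { shared[:count] += 1 }
reporter    = -> { "seen #{shared[:count]} events" }

# Decoupled: the state is owned by one component with a narrow interface.
class Counter
  def initialize
    @count = 0
  end

  def increment
    @count += 1
  end

  def count
    @count
  end
end
```

Renaming the :count key, or changing it to a more complex structure, breaks both lambdas at once; changing Counter's internals breaks nothing outside it.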
Cohesion and coupling are thus not at odds - high cohesion and low coupling are both good, and achieving one tends to make achieving the other easier, not harder. When some people think that high coupling is sometimes excusable, it is often because they confuse cohesion with consistency and ease of use.
I am sure there are many different ideas of what the appropriate tradeoff is. I put the bar pretty high (that is not to say that I don't sometimes violate my own ideals out of laziness, but then again I've been bitten by that several times too).
What can make increased coupling worth the cost?
Sometimes a system is simply so large and complex that even if most of your components are highly cohesive, you need to break the components into pieces, and possibly need to be able to plug other code into some of those pieces, to make the system maintainable.
In those cases, there may not be a choice. You may need to scale a system across server boundaries and have to break it into server specific components. Each processing step may need access to and knowledge of the full state to be able to continue processing no matter how you try to slice and dice the tasks.
Another case where increased coupling may be worth the cost is ease of use. A few days ago I wrote a post titled "URLs do not belong in the views". One of the approaches I was pondering was to put the routing/dispatch mechanism (the front controller) in charge of generating the URLs. At the same time I wanted to tie the URL generation to model instances, not to named routes as Rails for example does (Rails also supports generating routes from model class names, but that's also not what I wanted).
Part of the motivation is the observation that there are many ways to generate URLs from model objects. My posts, for example, have a "slug" used to generate SEO friendly URLs, but the slug isn't guaranteed to be stable, and certainly isn't until the post is published. So while the slug-based URL is the right URL for a published, public view of the post, it's not appropriate for the admin interface, where one of the operations is to change the slug - I want the admin URLs to stay static. In this case the appropriate URL to use requires knowledge of the contents of the model. It's perfectly appropriate for the view to request data from the model, but I don't want it to make assumptions about the formatting of that data.
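A sketch of the idea, with made-up names rather than any real framework API: the URL helper owned by the routing layer inspects the model's state and picks the stable admin URL or the slug-based public URL, so the view never formats anything itself.

```ruby
# Minimal stand-in for a model; a real one would come from an ORM.
Post = Struct.new(:id, :slug, :published)

# Hypothetical front-controller helper: the view asks for a URL and passes
# the model; all knowledge of URL formatting stays here.
def url_for(post, context: :public)
  if context == :admin || !post.published
    "/admin/posts/#{post.id}" # id is stable, the slug is not
  else
    "/posts/#{post.slug}"
  end
end
```

The view's only dependency is "give me a URL for this model", which is a much narrower interface than "know how slugs and publication states map to paths".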
And wouldn't it be nice if the front controller could instantiate the proper model objects too?
The point isn't what Rails can or cannot do - in this case Rails certainly can do more than a lot of frameworks I've seen, and gets part way there. If you are willing to sacrifice low coupling, allowing the front controller to create a mapping is pretty straightforward, and it certainly would be trivial to make Rails support a model like that (if it doesn't already - I don't know).
Doing those things without causing a scenario where the front controller knows about the way specific models are built (i.e. how to instantiate objects with a specific ORM), or where the views depend on a specific API of a specific front controller implementation, is more work. There are many cases where lower coupling means more work.
If you, for your specific use, couldn't care less about the extra coupling because you know you'll never need to exchange a specific component (do you really know? Think long and hard about that), and the benefits in terms of reduced work start being significant, then lowering the bar and accepting higher coupling may be worth it. It's a tradeoff between the increased cost of replacing a component vs. the potentially lower cost of using the component in the first place.
My goal isn't to convince people to always strive for minimal coupling, but to make at least a few people at the very least think twice and make sure they really need to before they start adding extra dependencies to their code.
To relate this to my previous post, has Rails gotten the balance right? In my opinion it hasn't. That's not to say everything in Rails could be cut into independent reusable components without sacrificing usability.
Some thoughts on avoiding coupling
Rack is a good example. Read the Rack specification. Seriously. It's short.
There's two good things about it:
First of all, it's easy to implement Rack again, or parts of it, if you really have to. If for whatever reason the current implementation doesn't meet your needs, it's easy to satisfy the requirements of the specification.
Secondly, it's even easier for other components to plug into the Rack infrastructure. Really, a minimal piece of Rack middleware doesn't need to do much more than this (it doesn't technically need even this, as long as it responds to #call, but doing it this way lets you chain them trivially using rackup config files):
def initialize(app)
  @app = app
end

def call(env)
  @app.call(env)
end
Of course your middleware can (and likely will) access the environment provided to #call, but that interface doesn't do much more than pass on data from the request and some information about the server you're running in, just like the CGI environment. As long as you comply with the very simple Rack specification, you can build up complex behavior by layering a number of tiny classes that can be ripped out, reordered, replaced, rewritten etc. as you please.
It's an incredibly powerful model because of the low coupling. Of course, it's easy to break that by adding lots of data to the environment you pass on. It's not an automatic truth that Rack middleware components will avoid high coupling, but high coupling would kind of defeat the purpose.
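Hand-wiring a middleware onto an endpoint shows how little the contract demands. In a real rackup config file the use/run directives would do this wiring and the server would populate the env hash; everything below is a simplified stand-in with made-up names:

```ruby
# A middleware that adds one header to whatever response the inner app returns.
class AddHeader
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    [status, headers.merge('X-Powered-By' => 'rack-sketch'), body]
  end
end

# A bare Proc is enough as the innermost application.
endpoint = ->(env) { [200, { 'Content-Type' => 'text/plain' }, ['hello']] }
app = AddHeader.new(endpoint)

status, headers, body = app.call({}) # an empty env hash suffices here
```

AddHeader knows nothing about the endpoint except that it responds to #call and returns a status/headers/body triple, which is exactly why either piece can be swapped out freely.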
A few general rules to avoid high coupling:
- Make your components as cohesive as possible. If they have more than one responsibility, try to break them in two. Identify what their responsibilities actually are.
- Don't leak state when you don't have to. WHY are specific attributes exposed? Do they have to be? Do you need to tell the world which state an object is in, or is it enough to tell the world that the object is or is not in a specific state? The more you hide data, the harder it is to accidentally increase coupling.
- Simplify your interfaces. Can you easily reimplement from scratch a class that satisfies the interfaces? What ARE the interfaces that other components are allowed to depend on? (And note that interfaces can be complex even if the number of methods is low, if the data passed as arguments is complex.)
- Pick interfaces that are already satisfied by components consumers of your interfaces might use. An example of this is again Rack, where the choice of using #call means that a Proc can be used to satisfy the interface requirements. It's a tiny thing in this case, but it does increase flexibility, and makes reimplementing or replacing components, or providing a facade or decorator around an existing component that much easier.
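The last point can be demonstrated in a few lines (a toy demonstration, not real Rack code): because the interface is just #call, a Proc, a Method object and a plain class instance are all interchangeable as applications.

```ruby
# Three "applications" that satisfy the same one-method interface.
proc_app = ->(env) { [200, {}, ['from proc']] }

class ClassApp
  def call(env)
    [200, {}, ['from class']]
  end
end

def method_app(env)
  [200, {}, ['from method']]
end

# All three respond to #call, so a consumer can't tell them apart.
apps = [proc_app, ClassApp.new, method(:method_app)]
bodies = apps.map { |app| app.call({})[2].first }
```

This is what makes providing a facade or decorator so cheap: wrapping any of the three is just another object with a #call method.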
Why do you hate Rails?
Judging by some of the feedback I got, some people clearly think I hate Rails. I don't, which I hope my answers to comments etc. reflected. Rails has done a lot of really great things for Ruby and web development, and it deserves full credit for that.
I do stand by my assessment that I believe Rails is overkill, though. That doesn't mean none of the code in Rails is worthwhile - lots of it clearly is. But I do also strongly believe that Rails would be far better if it was more loosely coupled, making it easier both for alternative implementations of core components to be used, and for bits and pieces of Rails to be used by themselves. The success of ActiveRecord is a testament to the value of being able to reuse chunks of code originating in Rails, and I'm sure there's lots of other code that would benefit a wider community.
A lot of my reluctance to use Rails boils down to the fact that I prefer to pick components that fit with what I want to do rather than adapting what I want to do to how it'd be easy to do it with a specific framework. I want the flexibility to throw out components when they don't suit me without affecting other parts of my applications.
For other people that's less of a concern, and so they are happy with Rails and want to keep using it, and that's of course their right. Choice is great. Some people are happy with PHP or even ASP too. If it works for them, then that's fine. Switching from something that works perfectly fine for you just to switch is rarely a good idea, and I'd never advocate it.