
It's Time To Deprecate Final


I'm in a creepy place right now. No, not Munich (for those of you who know that I'm here working). Munich is great. No, I'm talking about where I am with my thoughts. I have a paper I've been working on for a while that I want to release. It's a rather large reasoned rant entitled 'API Design as if Unit Testing Mattered'. The creepy place I'm in is this space that is halfway between releasing it and not. With time, I can make the arguments that I make even more compelling. At least, that is what I tell myself.

On the TDD mailing list today, (Uncle) Bob Martin did a mini-rant on final in Java. Well, it wasn't a rant, really, but still he pointed out that final is a pain, and it is. My feeling right now is that final (and sealed in .NET for that matter) should really be deprecated. They are fundamentally flawed mechanisms for doing the sort of things that they are used for. Let's list those things:

  1. Keeping users of your class on "the straight and narrow" path (if you make methods final, there is no chance that someone will override them and screw up their code... at least not that way).
  2. Providing a space for framework developers so that they can change their framework code without trampling on users who've subclassed it.
  3. Security. The classic example is java.lang.String. If it wasn't final, some bad actor could subclass it and send your passwords across the internet.

That's not an exhaustive list, but it is a good sample of the reasons why people use final. Framework developers, in particular, seem to love final. Often, they'll final everything that isn't nailed down. And, that's okay, isn't it?

Well... no.. not really. Here's the problem: When you use final pervasively, you make unit testing nearly impossible. Believe me, I know. I encounter enough teams who are in that situation. Have a class A which uses a class B that is final? Well, you'd better like B when you are testing A because it's coming along for the ride.
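
To make that concrete, here's a minimal sketch; all of the names are invented for illustration:

    // B comes from a third-party jar; you can't change it.
    public final class B {
        public String fetchQuote(String symbol) {
            // ...talks to a real server somewhere...
            return "42.00";
        }
    }

    public class A {
        private final B b = new B();

        public String describe(String symbol) {
            return symbol + " is at " + b.fetchQuote(symbol);
        }
    }

Any unit test of A ends up exercising the real B, server calls and all, because there is no way to slip a substitute underneath.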

Extract an interface for B? Sure, if you own B. If you don't, you have to wrap it to supply a mock. And.. here's the kicker: if you are going to wrap B, well, what about security? All final does for a class is guarantee that the class itself won't be the point of access, but what about your wrapper? If you use the interface of the wrapper everywhere, again, because you want to test, the developers of B haven't made your software more secure, they've merely pushed the problem onto your plate: they've forced you to choose between testability and security. It's rather interesting to consider that perhaps we truly can have security, but only if we can't really be sure our software works.
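
The wrapping dance looks roughly like this. Again, the names are hypothetical; this is a sketch of the pattern, not anyone's actual API:

    // An interface you own, shaped like the slice of B you use.
    public interface QuoteSource {
        String fetchQuote(String symbol);
    }

    // The production implementation just delegates to the final class.
    public class BQuoteSource implements QuoteSource {
        private final B b = new B();
        public String fetchQuote(String symbol) {
            return b.fetchQuote(symbol);
        }
    }

    // A now depends on QuoteSource, so a test can hand it a fake:
    public class FakeQuoteSource implements QuoteSource {
        public String fetchQuote(String symbol) { return "42.00"; }
    }

And notice: nothing guarantees that the QuoteSource used in production is really backed by B. That is the security problem landing on your plate.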

The problem with final (and a couple other language mechanisms like it) is that it is too coarse a tool for what it does. It prevents everyone from subclassing or overriding. What we really need is selective access, or maybe just convention.

Imagine that you've just bought a stereo from a store. You get it home and, being a tinkerer, you want to play with the internals a bit. You try to open the case and you discover that it is welded shut. It's funny, but that is pretty much the situation we are in when we receive software that uses final or sealed. But consider this: why don't companies weld their products shut when they sell them to us? The reason is that they have found a simpler tool for the job: legal agreements. And the legal agreements can be anything. A common one is "if you open the chassis of this stereo you void your warranty." Yes, that one is rather severe, but the fact remains that the users are able to make an informed choice, and their hands aren't tied when they want to do something as normal as, I don't know, say, testing.

What would it be like if we opted for agreements? A comment in a class might say:

This class is intended to be final. XXX reserves the right to change this API in subsequent versions.

Really, that is all that is needed in many cases. From what I've heard, the Eclipse project uses a convention like this that is nicknamed "soft final."
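
In Java, that agreement could live right in the Javadoc. Here's a sketch of what such a "soft final" note might look like; the class and company names are invented, and the @noextend tag is modeled loosely on Eclipse's convention rather than being standard Javadoc:

    /**
     * Parses configuration files.
     *
     * This class is intended to be final: it is not designed for
     * subclassing, and XYZCorp reserves the right to change its
     * internals in future versions. Subclass at your own risk,
     * e.g. to substitute a fake in a test.
     *
     * @noextend
     */
    public class ConfigParser {
        // ...implementation, deliberately left open...
    }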

With luck the idea will spread. In the meantime, however, there are many people who have to jump through hoops to write unit tests for code that uses final APIs, and it is a waste.

java.lang.String? Well, yes, I don't mind that being final. I'll never want to mock it in a test. But, we have to face the fact that final, in general, is broken and that it is not the best possible mechanism for the job. In fact, in many cases no mechanism may be the best mechanism.








 Tue, 16 May 2006 23:34:23, Timo Rantalaiho, In Java, overriding methods called by constructors can be dangerous
You can get whacky bugs from overriding a method that is being called by a constructor. See http://penberg.blogspot.com/2005/05/calling-abstract-method-from.html for discussion.
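
For the curious, the trap looks roughly like this (a minimal sketch; the class names are invented):

    public class Base {
        public Base() {
            init();  // dispatches to the subclass override...
        }
        protected void init() { }
    }

    public class Derived extends Base {
        private String name = "default";

        @Override
        protected void init() {
            // ...which runs before Derived's field initializers,
            // so 'name' is still null here.
            System.out.println(name.length());  // NullPointerException
        }
    }

Marking init() final in Base would make the dangerous override impossible in the first place.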

Not that "soft final" could not work for those cases as well. But having gotten in the habit of rather easily changing a private method to protected to get legacy code under test, I'm happy that I can remind myself of the danger of calling methods from constructors by making those methods final. This is mostly a legacy code problem, because a single constructor should not do much anyway.

I totally agree with your sentiment, and that there are probably no cases when you should use "final" in anything that Fowler calls a "published API".

Yes, it's interesting that C# and Java haven't adopted C++'s rule. C++ prevents those calls at compile time when it can detect them. But, that said, maybe a warning is enough. -- MichaelFeathers
 Wed, 17 May 2006 14:20:50, Ilja Preuß, final in your own code
You are right for code you don't own (and therefore can't change). In code I can change, though, final really isn't much more than a warning - if it gets in the way, I simply remove it. I find that to be rather convenient in some cases.

Something similar could be said about private methods. More than once I've wanted to override some detail in a third-party class and found that the behaviour I wanted to change had been extracted into its own method, exactly as I needed - only to find that the method was private instead of protected. Makes me want to cry...
 Fri, 19 May 2006 23:18:13, Tim Ottinger, You can say that again.
Today it would really have been rather handy to mock a few framework objects. You guessed it-- final and not open-source so I can't change it. C# is no better. No subclassing a sealed class, you might accidentally make it useful. Heaven forbid that people make use of software components.

Sigh
 Sat, 20 May 2006 08:57:19, Daniel Steinberg, Josh Bloch's J1 session
It would be interesting to hear Joshua Bloch's response. At his JavaOne session he advised the audience that "final is the new private". He explained that what he meant by this was that just as he has advocated in the past that everything that can be made private should be, he is now also recommending that everything that can be made final should be. He was thinking from an API designer's point of view and said that the chief exception was serialization. I wonder how he'd respond to the implications for testing.

Also, although I agree with your objection from a testing point of view, this seems similar to the arguments about having to make elements non-private to support testing.
 Sat, 20 May 2006 10:28:45, David Chelimsky, scoping mechanisms
I've always found the scoping mechanisms that languages provide to be limiting. For example, imagine that you want to have a method that is only accessible to implementors of a given interface. In java we can make the method protected, but that forces us into using implementation inheritance rather than composition/delegation. We could make the method package-private, but then all of the implementors of that interface have to be in the same package. These sorts of mechanisms force us to structure classes and packages in ways that we might otherwise prefer to avoid in order to limit the scope as we wish.
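
A sketch of the dilemma (the names are invented):

    // Ideally, reportFailure would be callable only by implementors
    // of Validator, but Java has no such scope.
    public interface Validator {
        boolean validate(String record);
    }

    public abstract class ValidatorSupport implements Validator {
        // Option 1: protected -- implementors get it, but only by
        // extending this class (implementation inheritance).
        protected void reportFailure(String record) {
            System.err.println("invalid: " + record);
        }

        // Option 2: package-private -- no inheritance required, but
        // every implementor must live in this exact package.
        void reportFailureForPackageMates(String record) {
            System.err.println("invalid: " + record);
        }
    }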

I wonder if support for some sort of declarative, arbitrary scoping would help to provide the sorts of security that authors who use "final" are looking for, without making it so difficult for the rest of us to use their code.
 Sat, 20 May 2006 10:36:10, David Chelimsky, re: Josh Bloch's J1 session
"I agree with your objection from a testing point of view"

Daniel, apologies if I'm misreading your response, but it seems that you want to separate the "testing perspective" from the "development perspective". For many of us who practice TDD, there is no such distinction. There are some differences of opinion about the boundaries of coding for testability (i.e. some will expose private methods directly on their classes, while others will move privates out to new classes in order to test them), but we all agree that the provable quality of the code is proportional to its testability.
 Sun, 21 May 2006 00:52:15, Dave Astels, Final ..or.. you can't build cathedrals with butter knives
Final has no valid use, IMHO.

It's a pandering to paranoia of managers who don't trust their programmers, and the desire to use crap programmers who shouldn't be trusted.

Smalltalk doesn't have the concept of "final".. it doesn't even have the concept of "private" methods. And it's all the better for that. You tell people that certain methods are for internal consumption.. and trust that they have enough sense to take that advice... and if they decide not to.. well.. they were warned.

Get good people who know what they're doing, trust them to do the right thing, and give them the freedom to. Don't get crap people and put limits on what they can do.

Crap people + crap tools = crap software
 Mon, 22 May 2006 09:23:19, Ilja Preuß, nice things about final
To me, both final and private are basically "comments that can't lie". And if you use them that way - not as a mechanism to limit usage of the code, but to communicate its current usage - I find them to be valuable.
 Wed, 24 May 2006 08:26:40, Elliotte Rusty Harold,
Could you please provide a link to or the full text of Bob Martin's post that you reference here? Thanks.
 Wed, 24 May 2006 19:27:13, Ramon, Final isn't the problem
It's time to deprecate Java, use Smalltalk, you'll feel better!
 Thu, 25 May 2006 10:24:47, ,
If final is so much of a problem, why don't you, or someone else, build a tool that transforms compiled code into code that contains no final, and test against that? Why throw away either security or testing?
 Thu, 25 May 2006 10:58:10, Matisse Enzer, Convention compared to Implementation
Your comment "What we really need is selective access, or maybe just convention." reminds me of the Perl saying that "Perl expects you to stay out of the living room because you were not invited, not because it has a shotgun." :-)

In any case, I think you make good arguments here. The tension between declaring-everything-to-be-exactly-how-I-intend-it and ease of testing is an interesting one, worth lots more thought I think.
 Thu, 25 May 2006 11:37:42, MauHumor, final and the virtual table
The "final" keyword also indicates to the compiler (hotspot...not the java compiler) that references to that final class (and all its methods) or the final method does not need to use a virtual table, after all, none can override it.
Without a virtual table the invocation overhead is, obviously, reduced.

In some cases the tradeof between a minus increase in performance and the capacity to make a unit test is worthwhile, in other cases it is not.

And keep in mind that you can also use interfaces to build a unit test, you doesn necessarily need to override the implementation class, just wrap it.
 Thu, 25 May 2006 20:32:57, Rammstein,
"Final has no valid use, IMHO.

It's a pandering to paranoia of managers who don't trust their programmers, and the desire to use crap programmers who shouldn't be trusted."

That's the stupidest comment I have ever read. Thanks to people like that we have so many security flaws and other kinds of bugs in software nowadays.

People WILL make mistakes, just because they are people. Are you advocating that creating tools that help to prevent simple errors is bad?
 Thu, 25 May 2006 22:11:01, David Chelimsky, re: Rammstein
Agreed that people will make mistakes, but the point Michael is making is that people who want to test their code are going to wrap the final classes and make those mistakes anyhow. In that case, final buys us nothing and costs us the extra expense of wrapping the code.
 Fri, 26 May 2006 05:10:51, Mark Thornton, Memory Model and threading
Since JSR-133, "final" has additional significance when creating immutable objects which may be visible from multiple threads. In these cases, if you don't use final appropriately, your code is broken.
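
A minimal sketch of what Mark is describing (the Point class here is illustrative, not his code):

    public class Point {
        private final int x;  // final: guaranteed visible after construction
        private int y;        // non-final: a racing reader may see 0

        public Point(int x, int y) {
            this.x = x;
            this.y = y;
        }

        public int getX() { return x; }
        public int getY() { return y; }
    }

    // If a Point is published to another thread without synchronization
    // (say, through a plain static field), JSR-133 guarantees the reader
    // sees the constructed value of x -- but makes no such promise for y.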
 Fri, 26 May 2006 10:07:37, David Chelimsky, re: Memory Model and threading
The meaning of "final" is overloaded here. The keyword is the same, but in the case you describe (if I understand you correctly), it is used to identify immutable objects at runtime. That strikes me as a completely separate issue from classes (not objects) that are immutable at "dev-time".
 Fri, 26 May 2006 11:23:16, Gabriel Belingueres, don't think it will be useful
I have never used the final keyword on classes/methods before, but I don't think that deprecating the final keyword would be of any use.

As I see it, keywords acting on classes/interfaces/methods act pretty much like constraints on databases: they just reflect a design intent, and they are the means to "hardwire" it into the code.

Whether that design intent is of any use to you, or whether it is right or wrong, is another issue entirely.

I can think of one fully useful use of final: a pre-release optimization step on a product which would magically put the final keyword in all allowed places to generate more optimized code (thus avoiding polymorphism whenever it could).

Finally, I would rather try to deprecate the "super" keyword, since it usually is a code smell.

Regards,
Gabriel
 Fri, 26 May 2006 17:26:45, Tom Ball, Final isn't the problem, bad designs are
I share what appears to be the minority opinion that final is very useful and under-used. As someone who has tried to design APIs with robust implementations for many years now, I've found that designs that just assume inheritance, instead of explicitly designing for it, tend to be very difficult to extend or improve, and thus less useful than more "restrictive" ones. As Josh Bloch and many other experienced public API designers have stated, it is easy to increase access in an initially restricted API, but you cannot later restrict it (without breaking compatibility). The more transparent a class is, the less flexibility you have to improve it without breaking its clients.

And nothing needs to be accessed in a good API when testing other than its public contract. Many test-writers complain that black-box testing a particular API is impossible and that they need white-box access, but what they are really having problems with is the API itself, not a lack of access to the implementation internals. If you need access to some internal information, say to verify class invariants, then write a package-private method that tests that invariant and write a test with the same package name. Show me code that you think requires subclassing of final methods or access to private ones, and I am confident that a better solution can be found by fixing the API rather than violating the implementation's access.
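
For what it's worth, a sketch of the technique Tom describes (all of the names are invented):

    package com.example.accounts;

    public class Account {
        private long balanceCents;
        private long holdCents;

        public void placeHold(long cents) {
            holdCents += cents;
        }

        // Package-private hook: visible to a test that declares
        // 'package com.example.accounts;', but not part of the
        // public API.
        boolean invariantHolds() {
            return holdCents >= 0 && holdCents <= balanceCents;
        }
    }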

It's not that people need to test the API (although that would be nice at times) but rather they need to write tests for code that uses the API but doesn't exercise the code behind it. Too often the people who've written the API are inaccessible. I doubt I'd be able to get anyone at Sun or Microsoft to change an API for me :) at least not in a timeframe that matters. -- MichaelFeathers
 Fri, 26 May 2006 19:18:20, David Chelimsky, the real point here is testing
This is about TESTING, not extensibility. And it's not about testing the software that's implementing final classes, it is about testing the software that uses the vendor provided software.

Here's a concrete example (no pun intended), though in a different language. I am currently working on a .NET desktop application which has to be able to print receipts from a collection of transactions. We're using the GDI API provided by .NET, which centers around a class named System.Drawing.Graphics. Graphics is a sealed, concrete class. It does not implement an IGraphics interface. It offers no means of serialization, exporting to a stream, or anything that would let me capture its contents in a test.

So in order to test that my class that is generating the receipt does so correctly, I've had to wrap the parts of the API in a GraphicsWrapper interface. The receipt generator uses an implementation of GraphicsWrapper, which is then mocked in the tests. The code that runs in production, not in tests, has to use an implementation of this wrapper as well. The implementation simply delegates to the real class, but it is extra code that I chose to write because I DO CARE about testing my code.

OK, so let's drop the "final" argument for a second and imagine that we all agree that sealing this class is a good idea. Even then, if the designers of this library really cared about me being able to test my software, they could have made Graphics an interface instead of a concrete class and provided a factory from which I could acquire a concrete instance of an implementor whose real name I don't even need to know. This would have allowed me to mock the library API and use the provided implementation directly. And then I could have headed home at the end of a productive day for a nice refreshing beer in the spring sun. Instead, I'm heading straight for the bottle of whisky!
 Fri, 26 May 2006 21:02:36, Rogerio Liesenfeld, The use of final is purely a design issue, so writing unit tests shouldn't disallow it
If the issue with "final" is the ability to write unit tests for classes that call final methods, you can use this new tool: jmockit.dev.java.net.
It makes it easy to mock final methods or final classes, and also static methods; or even a constructor in an object directly instantiated with "new" (that is, no need for dependency injection).

That sounds great. I'll have to check it out. One can only wonder, though, whether it really should be necessary to write a tool to do something like this, and what the implications for security are when it's so easy to write tools like this. It's like nailing all of your windows shut for security and then saying to someone "See? We have security. Oh, and if you want to go inside, here's the key to the front door." -- MichaelFeathers
 Sat, 27 May 2006 12:07:06, Isaac Gouy, insinuations
"That sounds great. I'll have to check it out. ... It's like nailing all of your windows shut for security and then saying to someone 'See? We have security. Oh, and if you want to go inside, here's the key to the front door.'"

There's nothing wrong with not knowing about something.
There's a lot wrong with insinuating that jmockit undermines Java security when you know nothing about jmockit.

Isaac, come on guy, I was just asking a question. I'm going to try JMockit tomorrow when this hellish travel day ends for me. -- mf

 Sat, 27 May 2006 18:19:58, Isaac Gouy,
"Isaac, come on guy, I was just asking a question. I'm going to try JMockit tomorrow ..."

Really? Please show what question you asked.
The way it reads to me is uninformed speculation, with a strong baseless suggestion that "tools like this" undermine security.

I think we need to understand java.lang.instrumentation usage before commenting on how it might change Java security.

The question was right where you put ellipses when you quoted me. :) If you think -that- was a strong suggestion from me, we ought to meet sometime. :) - mf
 Sat, 27 May 2006 20:22:06, Anonymous Coward, Gosling regrets not having gone pure interface
Hi there,

what is your OO background? Do you realize that concrete (aka implementation) inheritance is not mandatory for OO programming? For some OO-minded people, separating specification from implementation is an elementary principle. Some cornerstones of OO are: encapsulation, polymorphism, the ability to express ADTs (abstract data types), and the ability to define hierarchies. Implementation inheritance isn't a cornerstone of OO. Do you realize that there are today programs written in several languages (including Java) that *never* rely on implementation inheritance? I've seen several code bases heavily tested, with 100% code coverage, without *ever* using implementation inheritance. Your analogy is flawed. Argumentum-ad-defective-APIs (not easily testable APIs) is a logical fallacy.

James Gosling himself has several times expressed regret at not having gone "pure interface" (including in online interviews at artima.com in 1999 and 2001; the articles are still available): we'd have a Java without the abstract keyword, without concrete inheritance, and without the protected keyword. It would have been simpler and *much* cleaner from an OO point of view.

I think you really need to brush up your skills on what OO programming is: what you may be discovering is that concrete inheritance, whether you use final or not, very often complicates testing. Concrete inheritance is flawed. All that counts is expressing your ADTs correctly. But go tell that to millions of Java programmers writing procedural code, using procedural APIs, and thinking they're doing OO programming...
 Mon, 29 May 2006 20:21:51, OO is OO, it's time to deprecate implementation inheritance
Hi,

there are issues, as you've realized. However, they don't come from the fact that a class can be made "final" but from the fact that implementation inheritance is deeply problematic. Both James Gosling ( http://www.artima.com/intv/gosling13.html ) and Bjarne Stroustrup ( http://www.artima.com/intv/modern.html ) consider that implementation inheritance is broken, and both regret that "OO" programmers don't separate definition from implementation (Gosling went as far as saying he regretted not having gone "pure interface"). Stroustrup really sums it up in one paragraph:

"In 1987, I supported pure interfaces directly in C++ by saying a class is abstract if it has a pure virtual function, which is a function that must be overridden. Since then I have consistently pointed out that one of the major ways of writing classes in C++ is without any state, that is, just an interface... Some new ideas are hard to get across, and part of the problem is a lot of people don't want to learn something genuinly new. They think they know the answer. And once we think we know the answer, it's very hard to learn something new.

Think about it: the notion of state is superfluous when defining abstract data types. No state, no need for implementation inheritance, cleaner OO by only defining interfaces. And at that moment an implementation becomes what it was meant to be: a detail.

The problems you're ranting about would vanish not if API designers were to stop using "final", but if they were to stop using implementation inheritance.

Call me a rebel, but I don't think that implementation inheritance should go away. Yes, I was around when everyone started talking about its problems: the fragile base class problem, easier LSP violations, etc. I saw the advice given by the GoF to prefer delegation, etc. However, I've noticed that if you have tests in place, refactoring from implementation inheritance to delegation is pretty easy. I've also noticed that implementation inheritance offers some clear advantages when it comes to making incremental additions to software without overcommitting to a structural change. I have an example of this in the TDD chapter of my book.

The fact is, we have played with object systems that don't have implementation inheritance. Remember COM? There were many times when implementation inheritance would've been preferable to all of the delegation chaining.

Aside from all this, the problem that I'm talking about really has nothing to do with implementation inheritance. Anything you do to make it impossible to replace or override a method can make testing harder to do. So it isn't just final: non-virtuals (in languages which support them) present the same issues if they are overused. -- MichaelFeathers

 Mon, 29 May 2006 21:33:20, Chaz Haws, Umm, I think you're taking Stroustrup's words out of context.
I've read that Stroustrup interview before, but I went and checked after this comment, and I don't think he's saying at all that implementation inheritance is a bad thing. Yeah, he likes interface inheritance too. But he clearly still also thinks that multiple implementation inheritance is a good thing. Specifically the paragraph with "People quite correctly say that you don't need multiple inheritance, because anything you can do with multiple inheritance you can also do with single inheritance ... But why would you want to do that?" Bjarne Stroustrup is a guy who believes in options. I like options.

I agree "interface only" is clean in some respects, but in other respects your classes can contain a lot more duplication by eliminating implementation inheritance. In fact, I would have to say what you describe has only polymorphism, and not inheritance at all - in short, it's not OO. I used to hear the term "object-based" as opposed to "object-oriented" to distinguish the VB6 feature set from those languages with actual inheritance. I think that's what you're describing. Can you enlighten me as to what I'm missing?

On the actual topic: Makes sense to me to avoid final/sealed, but that would only help so much if one's libraries don't make the same choice. One hopes - and suspects, after other comments - that the mocking tools are in fact up to the job of working around it.

It looks like they are getting there in Java. I haven't seen anything similar for .NET yet and a friend at MS tells me he thinks that with assembly signing, it may not be as easy. Then, there's C++. C++ (and C#) present the same issue with non-virtual functions, and I don't see any tool in sight for C++. -- MichaelFeathers
 Tue, 30 May 2006 18:44:39, Anonymous Coward #2, GDI
Keep in mind that GDI is a set of user-level entry points for kernel-level display drivers. It really wouldn’t make sense for the operating system to (by default) add a dynamic dispatch that sends each call back to a (per-process) user-space wrapper driver so it can inspect the calls and decide how it wants to call the routines that dive into the kernel-level driver. If you want that kind of functionality, you have to implement it yourself. As others have pointed out, you do so by creating your own wrapper around GDI.

Why not supply a wrapper? -- MichaelFeathers
 Tue, 30 May 2006 20:40:26, Anonymous Coward #2, solution to your problem with final
Reading this article and the comments, it becomes apparent that the primary objection to final relates to input/output condition testing. Fortunately there's a solution: Sun ships a link editor for Java. I'll leave it as an exercise for the reader to find it and use it to eliminate the need for making 'mock objects'.

Hint: Ask your favorite search engine how to "wrap" malloc in C. The same method will work with the Java link editor.

Or you could use JMockit. The fact is, this is a problem across languages. What you're talking about above is something I call the "link seam" and it's one of several places where people can transparently substitute behavior. It's nice when your back is against the wall, but I'm sure you'll agree that the number of teams who unit test their code and the number of teams who will resort to link editing for all their API calls will never be the same. I think it's a high barrier to entry for run-of-the-mill unit testing. In my opinion, it means that languages have to do better as well as API designers. And, as I mentioned in the blog, you can wrap, but it does make security concerns look a bit ridiculous too. We'd all be better off if API designers were thinking about testing, or at least following the Golden Rule of API Design. -- MichaelFeathers
 Wed, 31 May 2006 08:38:19, David Chelimsky, re: GDI
I have to agree w/ Michael's comment: "Why not supply a wrapper?".

Here's how I see it. Developers are direct customers of API designers (vendors). As more and more developers are unit testing their code, whether it's before (TDD) or after, it becomes more and more important for API providers to make unit testing easier. Eventually, APIs that are easier to test will win out over those that are not - just like things work in any other market.
 Wed, 31 May 2006 12:35:03, Isaac Gouy, java.lang.instrumentation and the -javaagent JRE flag
"The question was right where you put ellipses when you quoted me. :) If you think -that- was a strong suggestion from me, we ought to meet sometime. :)" Michael Feathers

Here's where I put the ellipsis: "One can only wonder, though, whether it really should be necessary to write a tool to do something like this, and what the implications for security are when it's so easy to write tools like this."

It isn't a question - it's a statement. It suggests JMockit might undermine Java security. It was a statement based on no knowledge of JMockit.

Now that you've looked at JMockit, can you say if using java.lang.instrumentation and the -javaagent JRE flag during testing undermines Java security in deployed applications?

I was just wondering aloud. For what it's worth, I looked at the instrumentation API a while ago. Do I think that j.l.i undermines security? Probably no more than any of the other mechanisms used to gain special access to classes. Each point of access to a system is a potential vulnerability, and not all points of access are programmatic. -- MichaelFeathers.
 Fri, 2 Jun 2006 11:56:50, JM, final
I have to agree with this, though I'm not so sure final is broken; rather, it's just getting a bad rap for being used too often. Is its overuse bad programming? Probably not. It's simply overused, and in my case I see us as a team having to keep classes open for modification instead of closed. It seems to inhibit using the Decorator pattern in Java and forces more code changes. It seems that Elliotte Rusty Harold is of the reverse opinion (as well as others) and that final should be the default. That change would cause a lot of backward compatibility problems, but it would force the perspective to change; perhaps many more classes would then purposefully have a subclassable keyword.