
Java inheritance is a kludge

The real reason inheritance gets heavily used is that it’s the only way to do polymorphism in languages with really naive static nominal typing (like Java).

With structural typing, you’ve basically separated out interface from implementation, so you don’t need any preplanned association between different implementations (let alone a hierarchy). With derived types, you don’t need to actually declare which types you take, so you don’t need to abuse the type system to allow more than you’re technically allowed to pass (by creating a do-nothing interface to group together types unrelated except in what you want them for). Duck typing is basically the same thing as structural typing, only with type errors identified at runtime rather than at compile time. So, again, if you have duck typing then polymorphism comes for free, and you don’t need inheritance to get it.
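
To make the “do-nothing interface” kludge concrete, here’s a minimal Java sketch (all names invented for illustration): a marker interface with no methods exists only so that otherwise-unrelated classes can flow through one method signature, and the method still has to downcast to do anything useful.

```java
// Illustrative sketch (invented names): a "do-nothing" marker interface whose
// only job is to let unrelated types pass through one method signature.
interface Printable { }  // no methods: exists purely to appease the type checker

class Invoice implements Printable {
    String asText() { return "Invoice #42"; }
}

class Logo implements Printable {
    String asText() { return "<svg>...</svg>"; }
}

class Printer {
    // Every caller's type must have opted into the hierarchy in advance,
    // yet the interface says nothing about what print() actually needs --
    // so we end up downcasting anyway.
    static void print(Printable p) {
        if (p instanceof Invoice) {
            System.out.println(((Invoice) p).asText());
        } else if (p instanceof Logo) {
            System.out.println(((Logo) p).asText());
        }
    }
}
```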

The same flaws in type systems that make formalized inheritance useful for external code make it less useful as a code reuse mechanism. Types end up just being tags (meaningless on their own) associated with data, that must be manipulated in prescribed ways to appease the compiler. And the need to appease the compiler means — well, subclasses are liable to need to take different arguments even to shared functions! And, if you aren’t defining your own set of types for that, you can’t control their hierarchy in a language like this, so you can’t share code (even though the actual steps being performed are, likely, nigh-identical). A non-issue in derived-type or structural-type languages.
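
A hedged sketch of what that looks like in practice (class names invented): two third-party classes with nigh-identical shapes, no common supertype we’re allowed to give them, and therefore the same logic written twice.

```java
// Illustrative sketch (invented names): two vendor classes we don't control.
// Their shapes are nigh-identical, but we can't place them under a common
// supertype, so the "shared" logic gets duplicated per type.
final class VendorAUser {
    String name() { return "alice"; }
}

final class VendorBUser {
    String getName() { return "bob"; }
}

class Greeter {
    // The same steps, written twice, purely to satisfy the type checker.
    static String greet(VendorAUser u) { return "Hello, " + u.name(); }
    static String greet(VendorBUser u) { return "Hello, " + u.getName(); }
}
```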

Compare this to a prototype-based language with duck typing, like JavaScript.

Inheritance in JavaScript is fantastic, because your functions take basically anything & you can modify them after the fact if necessary. It’s actually good for code reuse, because of type flexibility. But, it has approximately none of the ‘formal’ elements of inheritance (the way we think of them from Java or C++) — no clear hierarchy, you can mix stuff from multiple parents together in free-form ways & patch functions at runtime, and intermediate ‘types’ might not even have names. In other words, it is ‘inheritance’ in only the literal sense — not in the strange, abstract sense that any term commonly used in the software development industry will eventually accrue.

[As Hillel Wayne notes](https://buttondown.email/hillelwayne/archive/if-inheritance-is-so-bad-why-does-everyone-use-it/), the way we usually think of inheritance mixes together concerns for historical reasons — we have inherited habits developed in an era when theoretical distinctions had not yet been made and therefore could not affect praxis. At the same time, these distinctions are not hard to make accidentally!

We have contracts distinct from the legacy of SIMULA, in the form of C’s headers; when C++ and Objective-C decided to import ideas about object orientation from Smalltalk, they implemented Smalltalk’s idea of inheritance using C’s contract mechanism (headers). We know in practice that C headers can be used pseudo-structurally — we can swap out binaries under the linker’s nose and mix and match headers from different sources, so long as the names and types match (and sometimes, even if they don’t). We can cast to void pointer and then recast to anything, and if we’re clever, it works. In other words, contracts existed in a different form than inheritance, and the accidents of this form of contract (like static nominal typing) were combined with the accidents of contracts implemented via inheritance (shared code & a known hierarchy), producing a new, stricter form — and the details of this form are mostly accidents of history, stemming from the particular ways people chose to frame the key elements at the time.

Abstract classes (and, over in java-land, interfaces) tried to claw back some of the strictness: much of the time, the inherited code would not be useful (because of strict & explicit types), so why write it at all? But they again didn’t separate out concerns properly: when you write everything in-house, it’s relatively straightforward to manipulate the type hierarchy however you like and change the position of already-existing classes; when you deal with third party code (or first party code in a large organization), wrapper classes proliferate because positions in type hierarchy are functionally immutable. Java interfaces are a half-measure because they must be declared (as opposed to Go interfaces, which are applied structurally).
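
A minimal sketch (names invented) of the wrapper-class pattern this paragraph describes: a third-party class already has exactly the right shape for our interface, but because Java interfaces are nominal, it doesn’t count until some class declares the relationship.

```java
// Illustrative sketch (invented names): the wrapper-class workaround.
interface TimeSource {
    long now();
}

// Stand-in for a third-party class we can't edit. It already has the right
// shape for TimeSource, but it never declared the relationship.
final class LegacyClock {
    long now() { return System.currentTimeMillis(); }
}

// The only recourse: a shim whose sole job is to restate, nominally, a fact
// that a structural checker (like Go's) would have verified for free.
final class LegacyClockAdapter implements TimeSource {
    private final LegacyClock inner = new LegacyClock();
    @Override public long now() { return inner.now(); }
}
```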

Ultimately, most of the problems we complain about with ‘OOP’ are not specific to OOP, but are the result of the juxtaposition of explicit & nominal typing.

Nominal typing can be useful, when type derivation is powerful: the type system, even stripped of meaningful explicit data, can be used as a declarative logic language; it can be relied upon to optimize away impossible paths at compile time; it allows users to concern themselves with types when it matters, and ignore those types when it doesn’t.
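
A rough approximation of that idea, even in today’s Java (an illustrative sketch with invented names, not a claim about how far Java’s derivation actually goes): phantom type parameters carry a door’s state declaratively, so an impossible sequence of operations is rejected at compile time and needs no runtime check.

```java
// Illustrative sketch: nominal types used declaratively. A door's state lives
// only in the type parameter, so impossible transitions simply don't compile.
final class Open   { private Open() {} }
final class Closed { private Closed() {} }

final class Door<State> {
    private Door() {}
    static Door<Closed> newDoor()            { return new Door<>(); }
    static Door<Open>   open(Door<Closed> d) { return new Door<>(); }
    static Door<Closed> close(Door<Open> d)  { return new Door<>(); }
}

class Demo {
    public static void main(String[] args) {
        Door<Open> d = Door.open(Door.newDoor());  // fine
        // Door.open(d);  // does not compile: the impossible path is gone
    }
}
```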

Explicit typing can be useful when it’s structural: you can optimize away impossible paths without analysing execution (because a type corresponds to the attributes necessary to perform all supported operations), and every declaration is therefore meaningful.

The combination of nominal and explicit typing produces a world in which one must declare types and cast to them — a world where we work to please the type checker even while knowing the type checker is wrong. It’s a world where the compiler is our adversary and even simple operations require clever hacks. Abstraction layers proliferate because the type system is concerned with covering its own ass and executing someone else’s ill-considered rules rather than helping developers write high-quality software. The ugly, unmaintainable hacks that developers are forced into (the great sins we must commit in order to do our jobs in the presence of eager enforcement of only minor sins) are considered externalities: the compiler as jobsworth.

As an example of an alternative, imagine a version of Java where a third party class could be declared, outside its own definition, to have implemented some interface — an assertion checked at compile time. Such a feature (along with best practices around naming) would lead to code that looks very similar, but without the hundreds of empty wrapper classes, overloaded methods, and dangerous runtime casts we see in nontrivial Java code today. Adding this feature would not break any existing Java codebases. All it would do is remove one aspect of nominalism (the class author’s exclusive ownership of a class’s position in the type hierarchy) and replace the awkward/dangerous existing workarounds for the problems this constraint causes — allowing developers to trivially combine classes from different sources with similar internal structures under a single supertype. This is still a half-measure, and no substitute for a well-thought-out type system, but it’s an indication of how much benefit even a little bit of thought about programmer usability can bring!

By John Ohno on May 18, 2020.

[Canonical link](https://medium.com/@enkiv2/java-inheritance-is-a-kludge-8fc8f7bbf5c9)

Exported from Medium on September 18, 2020.
