We now have default methods, which were also referred to as defender methods and 'virtual extension methods' while they were being designed.
While I appreciate the tremendous value of default methods (which in some respects are even more powerful than their C# counterparts), I wonder what drove the decision against allowing existing interfaces to be extended without access to their source code.
In one of his answers here on SO, Brian Goetz mentioned that default methods were designed as much for convenience as for interface evolution. So if we write an interface, we can put all kinds of utility methods in it that we would otherwise have to place in a separate class. Why not go the extra mile and allow the same for interfaces not under our control?
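For illustration, here is a minimal sketch of what default methods already let us do for interfaces under our control (the Shape and Circle names are hypothetical):

    // A hypothetical interface we control: the utility method lives on
    // the interface itself instead of in a separate ShapeUtils class.
    interface Shape {
        double area();

        // Default method: every implementation inherits it for free
        // and may override it with something smarter.
        default boolean isLargerThan(Shape other) {
            return this.area() > other.area();
        }
    }

    class Circle implements Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

The question is why the language does not let us add such a method externally to, say, java.util.List, whose source we cannot edit.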
This was driven by a philosophical belief: API designers should control their APIs. While externally injecting methods into APIs is surely convenient, it undermines an API designer's control over their own API. (This is sometimes called "monkey-patching".)
On the terminology: what C# calls "extension methods" is merely one form of extension method, not the definition of extension method; Java's default methods are also extension methods. The main differences are: C# extension methods are static and injected at the use site, while Java's are virtual and declared at the declaration site. Secondarily, C# extension methods are injected into types, whereas Java's default methods are members of classes. (This allows you to inject a sum() method into List<int> in C# without affecting other List instantiations.)
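To make the virtual-dispatch difference concrete, here is a minimal sketch (the Greeter names are hypothetical). A default method participates in overriding like any other instance method, which a statically dispatched C# extension method cannot do:

    interface Greeter {
        // Default method: virtual, so implementing classes may override it.
        default String greet() { return "Hello"; }
    }

    class FrenchGreeter implements Greeter {
        // Ordinary virtual dispatch selects this override, even through
        // a reference typed as Greeter.
        @Override
        public String greet() { return "Bonjour"; }
    }

    // Greeter g = new FrenchGreeter();
    // g.greet()  // "Bonjour", not "Hello"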
It's natural, if you have gotten used to the C# approach, to assume that this is the "right" or "normal" or "real" way to do it, but really, it's just one of many possible ways. As other posters have indicated, C# extension methods have some serious drawbacks compared to Java's default methods: poor reflective discoverability, poor discoverability through documentation, no ability to be overridden, and the need for ad-hoc conflict-management rules. So the Java glass is well more than half full here by comparison.
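As one concrete illustration of the discoverability point: because default methods are ordinary virtual members, core reflection reports them (Method.isDefault() has been part of java.lang.reflect since Java 8), whereas a C# extension method is invisible on the type it appears to extend. A short, self-contained sketch:

    import java.lang.reflect.Method;
    import java.util.List;

    public class DiscoverDefaults {
        public static void main(String[] args) {
            // Default methods such as List.sort and List.replaceAll show up
            // as regular public methods of the interface.
            for (Method m : List.class.getMethods()) {
                if (m.isDefault()) {
                    System.out.println(m.getName() + " is a default method");
                }
            }
        }
    }

A C# extension method, by contrast, is just a static method on an unrelated class, so reflecting over the "extended" type never reveals it.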