This is the eighteenth part of the Types and Programming Languages series. For your convenience you can find other parts in the table of contents in Part 1 — Do not return in finally
The title seems obvious, right? Your language dictates the rules and you need to play by the book. However, the consequences are much bigger than you may think. It’s not only that we build solutions which are simple to use in a given language. It’s also that we consider some solutions “bad in general” because they can create issues in a particular language. However, we need to remember that concepts are never good or bad. It’s the way we use them that makes them so.
Dependency Injection Containers
Java programmers often claim (I generalize here, obviously) that DI containers are bad and we shouldn’t use them. You can find multiple blog posts, conference talks, and discussions about why we shouldn’t use DI, and why Spring should be abandoned. At the same time, Microsoft supports a DI container directly in the .NET Core platform. How is that possible?
It’s not that DI containers are good or bad. It’s how we use them. All instance methods in Java are virtual. This makes overriding super simple and possible almost all the time. This led to massive use of cglib to create dynamic proxies at runtime. Java’s DI containers allow injecting transient-scoped services into singletons — something very rarely done in .NET. The same goes for application servers — they are much more popular and much more powerful in the Java world than in .NET.
Since it’s easier to use code generation, it’s easier to build more powerful and more dangerous solutions. And this leads to problems — to use powerful solutions, you need to understand them and know how not to hurt yourself. And since we don’t have much time to learn our tools, we often make wrong changes in our code base. This leads to subtle issues with transactions, data loss, and the like. Java programmers noticed that and decided not to use these features. However, it’s not that “DI containers are bad”. It’s that “the way we use DI containers in the Java world is too dangerous and leads to multiple issues”. Just learn your tools, and you will be way safer.
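To make the transient-into-singleton problem concrete, here is a minimal, container-free sketch of the so-called captive dependency pitfall. All class names are invented for illustration; a real container would do the wiring, but the lifetime bug is the same:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

// Meant to be transient: a fresh instance per use.
class RequestContext {
    private static final AtomicLong SEQ = new AtomicLong();
    final long id = SEQ.incrementAndGet();
}

class SingletonService {
    private final RequestContext captured;        // BUG: one transient captured forever
    private final Supplier<RequestContext> fresh; // FIX: resolve a new one per call

    SingletonService(RequestContext captured, Supplier<RequestContext> fresh) {
        this.captured = captured;
        this.fresh = fresh;
    }

    long capturedId() { return captured.id; }     // same id on every call
    long freshId()    { return fresh.get().id; }  // new id on every call
}

public class CaptiveDependencyDemo {
    public static void main(String[] args) {
        SingletonService s = new SingletonService(new RequestContext(), RequestContext::new);
        System.out.println(s.capturedId() == s.capturedId()); // true  — stale shared state
        System.out.println(s.freshId() == s.freshId());       // false — truly transient
    }
}
```

The field-injection version looks perfectly innocent in a Spring-style configuration, which is exactly why this class of bug is so easy to ship.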
Partial Mocks and virtual methods
A similar thing happens in tests. Java allows overriding nearly any instance method, so creating a partial mock is allowed and supported by default. On the other hand, you need to explicitly mark your method as virtual in C#, so you typically don’t do that. If you do, then people may start asking questions in code review like “why is it virtual? Do you override it anywhere?”. So you don’t make it virtual, and you can’t use partial mocks. Completely different than in Java.
Now, since it’s allowed in Java, it’s used more often. The reasoning is “if it’s allowed by default, then it must be good”. Again, we shouldn’t generalize that way.
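The mechanics are easy to show without any mocking library. Because every instance method in Java is virtual, a test can override exactly one method of a real class — a hand-rolled partial mock. The class names below are illustrative:

```java
class PriceService {
    double fetchRate() {               // imagine a slow remote call here
        throw new IllegalStateException("network unavailable in tests");
    }
    double priceInEur(double usd) {    // the logic under test, uses fetchRate()
        return usd * fetchRate();
    }
}

public class PartialMockDemo {
    public static void main(String[] args) {
        // Override only fetchRate(); priceInEur() stays real.
        PriceService partial = new PriceService() {
            @Override double fetchRate() { return 0.5; }
        };
        System.out.println(partial.priceInEur(10.0)); // 5.0
    }
}
```

The same trick fails to compile in C# unless `fetchRate` was declared `virtual` up front — which is precisely the asymmetry described above.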
Multiple Identity Inheritance
Now comes a hot topic. C++ allows multiple identity inheritance, and people got scared of the diamond problem. Java decided to avoid that entirely, so it was disallowed. However, at the same time Java blocked multiple state inheritance and multiple implementation inheritance. The only allowed form was multiple interface inheritance. Unfortunately, the Java language has allowed the diamond problem (through interfaces) since the very beginning anyway.
So people got scared of inheritance. At the same time, languages like Scala allow for nearly full multiple inheritance, and it’s not a problem there. Even Java introduced default interface methods, which can be used to implement mixins and a form of multiple implementation inheritance. So, is it okay now?
Once again, it’s not that the multiple identity inheritance is good or bad per se. It’s the way we use it.
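A short sketch of the diamond with Java 8+ default methods (interface and class names invented): two interfaces provide the same default method, and the language forces the class to resolve the conflict explicitly rather than guessing:

```java
interface Swimmer {
    default String move() { return "swim"; }
}
interface Runner {
    default String move() { return "run"; }
}

class Triathlete implements Swimmer, Runner {
    // Without this override the class does not compile — javac reports that it
    // inherits unrelated defaults for move() from Swimmer and Runner.
    @Override public String move() {
        return Swimmer.super.move() + "+" + Runner.super.move();
    }
}

public class DiamondDemo {
    public static void main(String[] args) {
        System.out.println(new Triathlete().move()); // swim+run
    }
}
```

The diamond exists, but the explicit `Swimmer.super` / `Runner.super` syntax makes the resolution visible — an example of the problem being manageable when the language design accounts for it.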
Inheritance vs Composition
Which one should we use? “It’s obvious — go with composition. Inheritance is bad”. Again, easy to generalize, but not necessarily correct.
Features of OOP were heavily used in the early days of Java. Inheritance was a blast, and deep class hierarchies were not surprising. Oddly enough, god objects were popular as well. Time passed, and people realized it may not be a good idea to put everything in an 8,000-line class. So, what should we do? “Ban inheritance, go with composition”.
This is actually a much bigger discussion. Should we use transaction scripts and an anemic domain model? Is that good? Or should we go with stateful classes and Domain-Driven Design? Actually, did you notice that DDD is just another name for doing OOP?
Inheritance is not wrong. You need to understand the concept and think about where it works well. Don’t ban it just because you once saw a god class with 8,000 lines of code.
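For a concrete case where composition genuinely wins, here is a sketch in the spirit of the classic "don't extend `ArrayList`" example (names invented). The wrapper owns a list instead of being one, so internal self-calls can't surprise us:

```java
import java.util.ArrayList;
import java.util.List;

class EventLog {
    private final List<String> entries = new ArrayList<>(); // has-a, not is-a
    private int added = 0;

    void add(String e) { entries.add(e); added++; }
    void addAll(List<String> es) { es.forEach(this::add); } // no hidden super-call surprises
    int addedCount() { return added; }
}

public class CompositionDemo {
    public static void main(String[] args) {
        EventLog log = new EventLog();
        log.addAll(List.of("a", "b", "c"));
        // In the classic extends-ArrayList version, an overridden addAll() that
        // increments the counter AND delegates to super.addAll() (which calls the
        // overridden add()) double-counts. With composition, the count is simply 3.
        System.out.println(log.addedCount()); // 3
    }
}
```

This doesn't mean inheritance is banned — it means you should know which failure mode each tool has before reaching for it.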
Goto
Sure, there’s no way you can use goto. It should be banned, disallowed, removed from the language. Now, go to (pun intended) GitHub and look for its usages. You may be surprised.
Premature Optimization
This saying is heavily overused. The full quote goes “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.” However, this refers to micro-optimizations, like replacing i++ with ++i. And yes, this kind of optimization is probably unneeded, as our code is slow due to other things.
At the same time, we shouldn’t neglect optimization entirely. Think about the algorithm, not about the cycles under the hood. Remove unneeded collections and data structures, but don’t obsess over bit alignment.
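An example of the algorithm-level optimization meant here (as opposed to micro-optimization): picking the right data structure turns an O(n²) duplicate check into O(n). Method names are illustrative:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateCheck {
    // O(n^2): list.contains() scans the list linearly on every iteration.
    static boolean hasDuplicateSlow(List<Integer> xs) {
        List<Integer> seen = new ArrayList<>();
        for (int x : xs) {
            if (seen.contains(x)) return true;
            seen.add(x);
        }
        return false;
    }

    // O(n): HashSet.add() reports membership in amortized constant time.
    static boolean hasDuplicateFast(List<Integer> xs) {
        Set<Integer> seen = new HashSet<>();
        for (int x : xs) {
            if (!seen.add(x)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasDuplicateFast(List.of(1, 2, 3, 2))); // true
        System.out.println(hasDuplicateFast(List.of(1, 2, 3)));    // false
    }
}
```

Swapping `++i` for `i++` here would change nothing measurable; swapping the list for a set changes the complexity class.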
But it’s super easy to decrease performance by using things the language enables by default. LINQ in .NET is a great example — one of the very first things you do to speed things up is remove LINQ calls. Yet LINQ is enabled by default.
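The Java analogue of the LINQ point is the Streams API: a pipeline is convenient but builds lambdas, boxed values, and iterator machinery, while a plain loop does the same work with less overhead. Both methods below compute the same result, so the choice is purely about cost versus readability (method names are illustrative):

```java
import java.util.List;

public class StreamVsLoop {
    static int sumOfEvensStream(List<Integer> xs) {
        return xs.stream()
                 .filter(x -> x % 2 == 0)          // allocates a lambda, boxes values
                 .mapToInt(Integer::intValue)
                 .sum();
    }

    static int sumOfEvensLoop(List<Integer> xs) {
        int sum = 0;                               // same logic, no pipeline machinery
        for (int x : xs) if (x % 2 == 0) sum += x;
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3, 4, 5, 6);
        System.out.println(sumOfEvensStream(xs)); // 12
        System.out.println(sumOfEvensLoop(xs));   // 12
    }
}
```

Neither form is "bad"; the point is that the convenient default has a cost you should know about before it shows up in a hot path.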
Green Threads
This is super interesting. Green threads were popular, but at some point they were heavily criticized. Microsoft wrote a nice paper about that, and others made similar claims. C# introduced async/await, which was later copied to other languages (Python, JavaScript, C++, Kotlin).
However, Java implemented asynchronous execution in a different way in Project Loom, using virtual threads under the hood. And then Microsoft immediately decided to experiment with green threads in .NET.
Again, it’s not that the concept is bad. It’s the way we use it. Green threads pose issues as well, especially around blocking operations, locking primitives, and interop calls. But again, it’s about the implementation, not about the idea.
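A minimal sketch of what Loom-style virtual threads buy you (requires Java 21+). Blocking calls like `sleep()` are cheap because the runtime unmounts the virtual thread from its carrier OS thread instead of blocking it; the failure modes mentioned above (pinning inside `synchronized` blocks, native calls) are deliberately not shown here:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {
    static int runBlockingTasks(int n) throws InterruptedException {
        AtomicInteger done = new AtomicInteger();
        Thread[] ts = new Thread[n];
        for (int i = 0; i < n; i++) {
            ts[i] = Thread.startVirtualThread(() -> {
                try { Thread.sleep(10); }             // blocking, but no OS thread is parked
                catch (InterruptedException ignored) { }
                done.incrementAndGet();
            });
        }
        for (Thread t : ts) t.join();
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Ten thousand concurrently blocking tasks would be expensive with OS
        // threads; with virtual threads it is routine.
        System.out.println(runBlockingTasks(10_000)); // 10000
    }
}
```

The same program written against platform threads would work too, just at a very different memory and scheduling cost — which is exactly the trade-off the green-threads debate keeps circling around.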