Object-Oriented Programming
"I invented the term 'object oriented' and C++ was not what I had in mind" --Alan Kay, inventor of OOP
Object-oriented programming (OOP, also object-obsessed programming and objectfuscated programming) is a programming paradigm that tries to model reality as a collection of abstract objects that communicate with each other and obey some specific rules. While the idea itself isn't bad and can be useful in certain cases, OOP has become extremely overused, extremely badly implemented and downright forced in programming languages which apply this abstraction to every single program and concept, creating anti-patterns, unnecessary issues and of course bloat. We therefore see OOP as a cancer of software development. Many others oppose it, e.g. Bitreich voices criticism in their manifesto, saying we rather need subject oriented programming; the idea of OOP being real bad is leaking even into the mainstream, so it's becoming less and less controversial to shit on it.
Ugly examples of OOP gone bad include Java and C++ (which at least doesn't force it). Other languages such as Python and Javascript include OOP but have lightened it up a bit and at least allow you to avoid using it.
You should learn OOP but only to see why it's bad (and to actually understand 99% of code written nowadays).
Principles
Bear in mind that OOP doesn't have a single, crystal clear definition. It takes many forms and mutations depending on language and it is practically always combined with other paradigms such as the imperative paradigm, so things may be fuzzy.
Generally OOP programs solve problems by having objects that communicate with each other. Every object is specialized to do some thing, e.g. one handles drawing text, another one handles caching, another one handles rendering of pictures etc. Every object has its data (e.g. a human object has weight, race etc.) and methods (object's own functions, e.g. a human may provide methods getHeight, drinkBeer or petCat). Objects may send messages to each other: e.g. a human object sends a message to another human object to get his name (in practice this means the first object calls a method of the other object just like we call functions, e.g.: human2.getName()).
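To make this concrete, below is a minimal sketch of what this looks like stripped of all magic, written in plain C (the names Human, humanGetName, alice, bob etc. are just made up for illustration, this is not any specific library's API):

  #include <stdio.h>
  #include <string.h>

  /* an "object" is just data ... */
  typedef struct
  {
    char name[32];
    int weightKg;
  } Human;

  /* ... and "methods" are just functions taking the object as a parameter */
  const char *humanGetName(const Human *h) { return h->name; }
  void humanDrinkBeer(Human *h) { h->weightKg += 1; }

  int main(void)
  {
    Human alice, bob;

    strcpy(alice.name,"Alice"); alice.weightKg = 60;
    strcpy(bob.name,"Bob");     bob.weightKg = 80;

    humanDrinkBeer(&bob);

    /* "sending a message" to bob, i.e. what OOP writes as bob.getName(): */
    printf("%s\n",humanGetName(&bob));

    return 0;
  }

In an OOP language the last call would be written as bob.getName(), but underneath it's the same thing: a function invoked with the object passed to it as a (possibly hidden) parameter.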
Now many OO languages use so called class OOP. In these we define object classes, similarly to defining data types. A class is a "template" for an object, it defines methods and types of data to hold. Any object we then create is created based on some class (e.g. we create objects alice and bob of class Human, just as normally we create a variable x of type int). We say an object is an instance of a class, i.e. an object is a real manifestation of what a class describes, with specific data etc.
The more "lightweight" type of OOP is called classless OOP which is usually based on having so called prototype objects instead of classes. In these languages we can simply create objects without classes and then assign them properties and methods dynamically at runtime. Here instead of creating a Human
class we rather create a prototype object that serves as a template for other objects. To create specific humans we clone the prototype human and modify the clone.
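Sketched again in C (hedged the same way, all names are made up): the prototype approach roughly corresponds to copying a template object and then changing the clone's data, or even its per-object function pointers, at runtime:

  #include <stdio.h>

  typedef struct Human Human;

  struct Human
  {
    int weightKg;
    void (*greet)(const Human *); /* per-object "method", swappable at runtime */
  };

  void normalGreet(const Human *h) { printf("hello, I weigh %d kg\n",h->weightKg); }
  void rudeGreet(const Human *h)   { printf("go away\n"); }

  int main(void)
  {
    Human prototypeHuman = { 70, normalGreet }; /* the prototype, no class involved */

    Human alice = prototypeHuman; /* clone the prototype ... */
    Human bob   = prototypeHuman;

    bob.weightKg = 90;            /* ... and modify the clones */
    bob.greet = rudeGreet;

    alice.greet(&alice);
    bob.greet(&bob);

    return 0;
  }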
OOP furthermore comes with some basic principles such as:
- encapsulation: Objects should NOT be able to access other objects' data directly -- they may only use their methods. For example an object shouldn't be able to access the height attribute of a Human object directly, it should only be able to access it via methods of that object such as getHeight. (This leads to the setter/getter antipattern.)
- polymorphism: Different objects (e.g. of different classes) may have methods with the same name which behave differently for either object and we may just call that method without caring what kind of object it is (the correct implementation gets chosen at runtime). E.g. objects of both Human and Bomb classes may have a method setOnFire, which with the former will kill the human and with the latter will cause an explosion killing many humans. This is good e.g. in a case when we have an array of GUI components and want to perform e.g. resize on every one of them: we simply iterate over the whole array and call the method resize on each object without caring whether the object is a button, checkbox or a window (see the sketch below this list).
- inheritance: In class OOP classes form a hierarchy in which parent classes can have child classes, e.g. a class LivingBeing will have Human and Animal subclasses. Subclasses inherit stuff from the parent class and may add some more. However this leads to other antipatterns such as the diamond problem. Inheritance is nowadays regarded as bad even by normies and is being replaced by composition.
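For illustration, here is a hedged C sketch (reusing the made up Human and Bomb example from above, not any real API) of how the polymorphic setOnFire call can be emulated with function pointers, which is also roughly the machinery OOP compilers generate behind the scenes:

  #include <stdio.h>

  /* every "object" starts with a pointer to its own setOnFire implementation */
  typedef struct
  {
    void (*setOnFire)(void *self);
  } Flammable;

  typedef struct { Flammable base; char name[16]; } Human;
  typedef struct { Flammable base; int kilotons;  } Bomb;

  void humanSetOnFire(void *self) { printf("%s dies\n",((Human *) self)->name); }
  void bombSetOnFire(void *self)  { printf("%d kt explosion\n",((Bomb *) self)->kilotons); }

  int main(void)
  {
    Human h = { { humanSetOnFire }, "Alice" };
    Bomb  b = { { bombSetOnFire  }, 50 };

    Flammable *objects[] = { (Flammable *) &h, (Flammable *) &b };

    for (int i = 0; i < 2; ++i)
      objects[i]->setOnFire(objects[i]); /* caller doesn't care what the object is */

    return 0;
  }

Note that embedding the Flammable struct as the first member is also how inheritance tends to be faked in C: the "child" struct simply contains its "parent" at the beginning.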
Why It's Shit
- OOP is just a bad abstraction for many problems that by their nature aren't object-oriented. OOP is not a silver bullet, yet it tries to behave as one. The greatest issue of OOP is that it's trying to solve everything. For example it forces the idea that data and algorithms should always come together, but that's simply a stupid statement in general, there is no justification for it, some data is simply data and some algorithms are simply algorithms. You may ask what else to use instead of OOP then -- see the section below.
- For simple programs (which most programs should be) such as many Unix utilities OOP is simply completely unnecessary.
- OOP languages make you battle artificial restrictions rather than focus on solving the problem at hand.
- A great number of the supposed "features" and design patterns (setters/getters, singletons, inheritance, ...) turned out to actually be antipatterns and burdens -- this isn't a controversial statement, even OOP proponents usually agree with it.
- OOP, as any higher abstraction, very often comes with overhead, a bigger memory footprint and performance loss (bloat), as well as more complex compilers, language specifications, more dependencies etc.
- The relatively elegant idea of pure OOP didn't catch on and the practically used OOP languages are abomination hybrids of imperative and OOP paradigms that just take more head space, create friction and unnecessary issues to solve. Sane languages now allow the choice to use OOP fully, partially or avoid it completely, which leads to a two-in-one overcomplication.
- The naive idea of OOP that the real world is composed of nicely defined objects such as Humans and Trees also turned out to be completely off, we instead see shit like AbstractIntVisitorShitFactory etc.
- The idea that OOP would lead to code reusability also completely failed, it's simply not the case at all, implementation code of specific classes is typically burdened with internal and external dependencies just like any other bloated code. OOPers believed that their paradigm would create a world full of reusable blackboxes, but that wasn't the case, OOP is neither necessary for blackboxing, nor has practice shown it would contribute to it -- quite on the contrary, e.g. simple imperative header-only C libraries are much more reusable than those we find in the OOP world.
- Good programmers don't need OOP because they know how to program -- OOP doesn't invent anything, it is merely a way of trying to force good programming mostly on incompetent programmers hired in companies, to prevent them from doing damage. However this of course doesn't work, a shit programmer will always program shit, he will find his way to fuck up despite any obstacles and if you invent obstacles good enough for stopping him from fucking up, you'll also stop him from being able to program something that works well as you tie his hands. Yes, good programmers write shit buggy code too, but that's more of a symptom of bad, overcomplicated bloated capitalist design of technology that's just asking for bugs and errors -- here OOP is trying to cure symptoms of an inherently wrong direction, it is not addressing the root cause.
- OOP just mostly repeats what other things like modules already do.
- If you want to program in object-oriented way and have a good justification for it, you don't need an OOP language anyway, you can emulate all aspects of OOP in simple languages like C. So instead of building the idea into the language itself and dragging it along forever and everywhere, it would be better to have optional OOP libraries.
- It generalizes and simplifies programming into a few rules of thumb such as encapsulation, again for the sake of inexperienced noobs. However there are no simple rules for how to program well, good programming requires a huge amount of experience and, as in any art, a good programmer knows when breaking the general rules is good. OOP doesn't let good programmers do this, it preaches things like "global variables bad", which is just too oversimplified and hurts good programming.
So Which Paradigm To Use Instead Of OOP?
After many people realized OOP is kind of shit, there has been a boom of "OOP alternatives" such as functional, traits, agent oriented programming, all kinds of "lightweight"/optional OOP etc etc. Which one to use?
In short: NONE, by default use the imperative paradigm (here also many times interchangeably called "procedural"). Remember this isn't to say you shouldn't ever apply a different paradigm, but imperative should be the default, most prevalent and suitable one to use in solving most problems. There is no need to invent anything new to "beat" OOP.
But why imperative? Why can't we simply improve OOP or come up with something ultra genius to replace it? Why do we say OOP is bad because it's forced and now we are forcing the imperative paradigm? The answer is that the imperative paradigm is special because it is how computers actually work: it is not made up, it is the natural low level paradigm with minimum abstraction that reflects the underlying nature of computers. You may say this is just bullshit arbitrary rationalization but no, these properties make the imperative paradigm special among all other paradigms because:
- Its implementation is simple and suckless/LRS because it maps nicely and naturally to the underlying hardware -- basically commands in a language simply translate to one or more instructions. This makes construction of compilers easy.
- It's predictable and efficient, i.e. a programmer writing imperative code can see quite clearly how what he's writing will translate to the assembly instructions (see the sketch below this list). This makes it possible to write highly efficient code, unlike high level paradigms that perform huge amounts of magic for translating foreign concepts to machine instructions -- and of course this magic may differ between compilers, i.e. what's efficient code under one compiler may be inefficient under another (a similar situation arose e.g. in the world of OpenGL where driver implementation started to play a huge role, which led to the creation of the more low level API Vulkan).
- It doesn't force high amounts of unnecessary high level abstraction. This means we MAY use any abstraction, even OOP, if we currently need it, e.g. via a library, but we aren't FORCED to use weird high level concepts on problems that can't be described easily in terms of those concepts. That is if you're solving a non-OOP problem with OOP, you waste effort on translating that problem to OOP and the compiler then wastes more effort on un-OOPing it again to translate it to instructions. With the imperative paradigm this can't happen because you're basically writing the instructions that have to be executed either way.
- It is generally true that the higher the abstraction, the smaller its scope of application should be, so the default abstraction (paradigm) should be low level. This works e.g. in science: psychology is a high level abstraction but can only be applied to study human behavior, while quantum physics is a low level abstraction which applies to the whole universe.
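As a rough, hedged illustration of the first two points: the comments below only sketch what a typical compiler for a common register machine might emit, they are not the output of any specific compiler.

  #include <stdio.h>

  int main(void)
  {
    int x = 0;      /* roughly: reserve a slot (register or memory), store 0   */

    x = x + 5;      /* roughly: load x, add 5, store x                          */

    if (x > 3)      /* roughly: compare x with 3, conditionally jump over ...   */
      x = 0;        /* ... this store                                           */

    while (x < 10)  /* roughly: compare, conditionally jump past the loop, ...  */
      x = x + 1;    /* ... add 1, jump back to the comparison                   */

    printf("%d\n",x);
    return 0;
  }

Each statement maps more or less directly to a handful of instructions, which is why a programmer can eyeball the cost of imperative code: there is no hidden dispatch, garbage collection or other machinery inserted behind his back.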
Once computers start fundamentally working on a different paradigm, e.g. functional -- which BTW might happen with new types of computers such as quantum ones -- we may switch to that paradigm as the default, but until then imperative is the way to go.
History
TODO