From 0bbafe778edbfd66145b9e57134c9496d4cec389 Mon Sep 17 00:00:00 2001 From: Miloslav Ciz Date: Wed, 17 Nov 2021 16:02:35 -0600 Subject: [PATCH] Update --- coc.md | 6 ++++-- oop.md | 20 +++++++++++++++--- optimization.md | 56 +++++++++++++++++++++++++++++-------------------- 3 files changed, 54 insertions(+), 28 deletions(-) diff --git a/coc.md b/coc.md index c2d7be2..8fc4d3f 100644 --- a/coc.md +++ b/coc.md @@ -1,5 +1,7 @@ # Code of Conduct -Code of conduct (COC) is a shitty invention of [SJW](sjw.md)s that dictates how development of specific software should be conducted, generally pushing toxic woke concepts such as forced inclusivity or use of politically correct language. COC is typically placed in the software repository as a `CODE_OF_CONDUCT` file. In practice COCs are used to kick people out of development because of their political opinions expressed anywhere, inside or outside the project, and to push political opinions through software projects. +Code of conduct (COC) is a shitty invention of [SJW](sjw.md) fascists that dictates how development of specific software should be conducted, generally pushing toxic woke concepts such as forced inclusivity or use of politically correct language. COC is typically placed in the software repository as a `CODE_OF_CONDUCT` file. In practice COCs are used to kick people out of development because of their political opinions expressed anywhere, inside or outside the project, and to push political opinions through software projects. -Based software must never include any COC, with possible exceptions of anti-COC or parody style COCs, not because we dislike genuine inclusivity, but because we believe COCs are bullshit and mostly harmful as they support bullying, censorship and exclusion of people. 
\ No newline at end of file +**[LRS](lrs.md) must never include any COC**, with possible exceptions of anti-COC (such as NO COC) or parody style COCs, not because we dislike genuine inclusivity, but because we believe COCs are bullshit and mostly harmful as they support bullying, censorship and exclusion of people. + +Anyway it's best to avoid any kind of COC file in the repository; it just takes up space and doesn't serve anything. We may simply ignore this shitty concept completely. You may ask: why don't we ignore e.g. [copyright](copyright.md) in the same way and just not use any [licenses](license.md)? The situation with copyright is different: it exists by default; without a license file the code is proprietary and our neighbors don't have the legal safety to exercise basic freedoms, they may be bullied by the state -- for this we are forced to include a license file to get rid of copyright. With COC there simply aren't any such implicit issues to be solved (because COCs simply invent their own issues), so we just don't try to solve non-issues. \ No newline at end of file diff --git a/oop.md b/oop.md index 4ac24b5..3f6c818 100644 --- a/oop.md +++ b/oop.md @@ -1,6 +1,6 @@ # Object-Oriented Programming -Object-oriented programming (OOP, also object-obsessed programming) is a [programming paradigm](paradigm.md) that tries to model reality as a collection of abstract objects that communicate with each other and obey some specific rules. While the idea itself isn't bad and can be useful in certain cases, OOP has become extremely overused and downright built into programming languages which often force users to apply this abstraction to every single program which creates [anti-patterns](anti_pattern.md), unnecessary issues and of course [bloat](bloat.md). We therefore see OOP as a [cancer](cancer.md) of software development.
+Object-oriented programming (OOP, also object-obsessed programming) is a [programming paradigm](paradigm.md) that tries to model reality as a collection of abstract objects that communicate with each other and obey some specific rules. While the idea itself isn't bad and can be useful in certain cases, OOP has become extremely overused, extremely badly implemented and downright forced in programming languages which apply this abstraction to every single program and concept, creating [anti-patterns](anti_pattern.md), unnecessary issues and of course [bloat](bloat.md). We therefore see OOP as a [cancer](cancer.md) of software development. Ugly examples of OOP gone bad include [Java](java.md) and [C++](cpp.md) (which at least doesn't force it). Other languages such as [Python](python.md) and [Javascript](javascript.md) include OOP but have lightened it up a bit and at least allow you to avoid using it. @@ -8,12 +8,26 @@ You should learn OOP but only to see why it's bad (and to actually understand 99 ## Principles +Bear in mind that OOP doesn't have a single, crystal clear definition. It takes many forms and mutations depending on language and it is practically always combined with other paradigms such as the [imperative](imperative.md), so things may be fuzzy. + +Generally OOP programs solve problems by having **objects** that communicate with each other. Every object is specialized to do some thing, e.g. one handles drawing text, another one computes some specific equation etc. Every object has **data** (e.g. a human object has weight, race etc.) and **methods** (object's own [functions](function.md), e.g. a human may provide methods `getHeight` or `drinkBeer`). Objects may send **messages** to each other: e.g. a human object sends a message to another human object to get his name (in practice this means the first object calls a method of the other object just like we call functions). + +Now many OO languages use so-called **class OOP**.
In these we define object [classes](class.md), similarly to defining [data types](data_type.md). A class is a "template" for an object: it defines its methods and what kind of data it will hold. Any object we then create is based on some class (e.g. we create the objects `alice` and `bob` of class `Human`). We say the object is an **instance** of that class. + +OOP furthermore comes with some basic principles such as: + +- **[encapsulation](encapsulation.md)**: Objects should NOT be able to access other objects' data directly -- they may only use their methods. (This leads to the setter/getter antipattern.) +- **[polymorphism](polymorphism.md)**: Different objects (e.g. of different classes) may have methods with the same name which behave differently for either object and we may just call that method without caring what kind of object it is (the correct implementation gets chosen at runtime). E.g. objects of both `Human` and `Bomb` classes may have a method `setOnFire`, which with the former will kill the human and with the latter will cause an explosion killing many humans. +- **[inheritance](inheritance.md)**: In class OOP classes form a hierarchy in which parent classes can have child classes, e.g. a class `LivingBeing` will have `Human` and `Duck` subclasses. Subclasses inherit stuff from the parent class and may add some more. However this leads to other antipatterns such as the [diamond problem](diamond_problem.md). Inheritance is nowadays regarded as bad even by normies and is being replaced by [composition](composition.md). + ## Why It's Shit - For simple programs (which most programs should be) OOP is an unnecessarily high and overly complex abstraction. +- OOP is just a bad abstraction for many problems that by their nature aren't object-oriented. OOP is not a [silver bullet](silver_bullet.md), yet it tries to behave as one. - Great number of the supposed "features" and design-patterns (setters/getters, singletons, inheritance, ...)
turned out to actually be anti-patterns and burdens. -- OOP as any higher abstraction very often comes with overhead and performance loss as well as more complex [compilers](compiler.md). -- The relatively elegant idea of pure OOP didn't catch up and the practically used OOP languages are abomination hybrids of imperative and OOP paradigms. +- OOP as any higher abstraction very often comes with overhead, memory footprint and performance loss ([bloat](bloat.md)) as well as more complex [compilers](compiler.md). +- The relatively elegant idea of pure OOP didn't catch on and the practically used OOP languages are abomination hybrids of imperative and OOP paradigms that just take more head space, create friction and unnecessary issues to solve. +- The naive idea of OOP that the real world is composed of nicely defined objects such as `Humans` and `Trees` also turned out to be completely off; we instead see shit like `AbstractIntVisitorShitFactory`. - TODO ## History diff --git a/optimization.md b/optimization.md index f0776f5..2c1d469 100644 --- a/optimization.md +++ b/optimization.md @@ -1,27 +1,37 @@ # Optimization -Optimization means making a program more efficient (in terms of some metric such as speed or memory usage) while preserving its functionality. +Optimization means making a program more efficient in terms of some metric such as speed or memory usage (but also others such as power consumption, network usage etc.) while preserving its functionality. -## Rules & Tips +Unlike [refactoring](refactoring.md), optimization changes the behavior of the program to a more optimal one (but again, it doesn't change its functionality). -- Tell your compiler to actually optimize (`-O3`, `-Os` etc.). -- gprof is a utility you can use to profile your code. -- `<stdint.h>` has types such as `uint_fast32_t` which picks the fastest type of at least given width on given platform. -- Keywords such as `inline`, `static` and `const` can help compiler optimize well. -- Optimize the bottlenecks!
Optimizing in the wrong place is a complete waste of time. If you're optimizing a part of code that's taking 1% of your program's run time, you will never speed up your program by more than that 1% even if you speed up the specific part by 10000%. -- You can almost always trade space (memory usage) for time (CPU demand) and vice versa and you can also fine-tune this. You typically gain speed by precomputation (look up tables, more demanding on memory) and memory with compression (more demanding on CPU). -- Avoid branches (ifs). They break prediction and instruction preloading and are often source of great performance losses. Don't forget that you can compare and use the result of the operation without using any branching (e.g. `x = (y == 5) + 1;`). -- Use iteration instead of recursion if possible (calling a function is pretty expensive). -- You can use good-enough approximations instead of completely accurate calculations, e.g. taxicab distance instead of Euclidean distance, and gain speed or memory without trading. -- Operations on static data can be accelerated with accelerating structures (indices for database lookups, spatial grids for collision checking, ...). -- Use powers of 2 whenever possible, this is efficient thanks to computers working in binary. Not only may this help nice utilization and alignment of memory, but mainly multiplication and division can be optimized by the compiler to mere bit shifts which is a tremendous speedup. -- Write cache-friendly code (minimize long jumps in memory). -- Compare to 0 if possible. There's usually an instruction that just checks the zero flag which is faster than loading and comparing two arbitrary numbers. -- Consider moving computation from run time to compile time. E.g. 
if you make a resolution of your game constant (as opposed to a variable), the compiler will be able to partially precompute expressions with the display dimensions and so speed up your program (but you won't be able to dynamically change resolution). -- On some platforms such as ARM the first arguments to a function are passed via registers, so it's better to have few parameters in functions. -- Optimize when you already have a working code. As Donald Knuth put it: "premature optimization is the root of all evil". -- Use your own caches, for example if you're frequently working with some database item you better pull it to memory and work with it there, then write it back once you're done (as opposed to communicating with the DB there and back). -- Single compilation unit (one big program without linking) can help compiler optimize because it can see the whole code at once, not just its parts. -- Search literature for algorithms with better complexity class (sorts are a nice example). -- For the sake of embedded platforms consider avoiding floating point as that is often painfully slowly emulated in software. -- Early branching can bring a speed up (instead of branching inside the loop create two versions of the loop and branch in front of them). \ No newline at end of file +## General Tips'n'Tricks + +- **Tell your compiler to actually optimize** (`-O3`, `-Os` etc.). +- **gprof is a utility you can use to profile your code**. +- **`<stdint.h>` has fast type nicknames**, types such as `uint_fast32_t` which pick the fastest type of at least given width on given platform. +- **Keywords such as `inline`, `static` and `const` can help the compiler optimize well**. +- **Optimize the bottlenecks!** Optimizing in the wrong place is a complete waste of time. If you're optimizing a part of code that's taking 1% of your program's run time, you will never speed up your program by more than that 1% even if you speed up the specific part by 10000%.
+- **You can almost always trade space (memory usage) for time (CPU demand) and vice versa** and you can also fine-tune this. You typically gain speed by precomputation (look up tables, more demanding on memory) and memory with compression (more demanding on CPU). +- **Avoid branches (ifs)**. They break prediction and instruction preloading and are often a source of great performance losses. Don't forget that you can compare and use the result of the operation without using any branching (e.g. `x = (y == 5) + 1;`). +- **Use iteration instead of [recursion](recursion.md)** if possible (calling a function is pretty expensive). +- **You can use good-enough [approximations](approximation.md) instead of completely accurate calculations**, e.g. taxicab distance instead of Euclidean distance, and gain speed or memory without trading. +- **Operations on static data can be accelerated with accelerating structures** ([look-up tables](lut.md) for functions, [indices](indexing.md) for database lookups, spatial grids for collision checking, ...). +- **Use powers of 2** whenever possible, this is efficient thanks to computers working in binary. Not only may this help nice utilization and alignment of memory, but mainly multiplication and division can be optimized by the compiler to mere bit shifts which is a tremendous speedup. +- **Write cache-friendly code** (minimize long jumps in memory). +- **Compare to [0](zero.md) if possible**. There's usually an instruction that just checks the zero flag which is faster than loading and comparing two arbitrary numbers. +- **Consider moving computation from run time to compile time**. E.g. if you make the resolution of your game constant (as opposed to a variable), the compiler will be able to partially precompute expressions with the display dimensions and so speed up your program (but you won't be able to dynamically change resolution).
+- On some platforms such as ARM the first **arguments to a function may be passed via registers**, so it may be better to have fewer parameters in functions. +- **Optimize when you already have working code**. As Donald Knuth put it: "premature optimization is the root of all evil". Nevertheless you should get used to simple no-brainer efficient patterns by default and just write them automatically. +- **Use your own caches where they help**, for example if you're frequently working with some database item you'd better pull it into memory and work with it there, then write it back once you're done (as opposed to communicating with the DB there and back). +- **[Single compilation unit](single_compilation_unit.md) (one big program without linking) can help the compiler optimize better** because it can see the whole code at once, not just its parts. It will also make your program compile faster. +- Search literature for **algorithms with better [complexity class](complexity_class.md)** (sorts are a nice example). +- For the sake of embedded platforms **avoid [floating point](floating_point.md)** as that is often painfully slowly emulated in software. Use [fixed point](fixed_point.md). +- **Early branching can create a speed up** (instead of branching inside the loop create two versions of the loop and branch in front of them). This is a kind of space-time tradeoff. + +## When to Actually Optimize? + +Nubs often ask this. Generally, fine, sophisticated optimization should come as one of the last steps in development, when you actually have a working thing. These are optimizations requiring significant energy/time to implement -- you don't want to spend resources on this at a stage when they may well be dropped in the end, or won't matter because they'll be outside the bottleneck. However there are two "exceptions". + +The highest-level optimization is done as part of the initial design of the program, before any line of code gets written.
This includes the choice of data structures and mathematical models you're going to be using, the very foundation around which you'll be building your castle. This happens in your head at the time you're forming an idea for a program, e.g. you're choosing between [server-client](server_client.md) or [P2P](p2p.md), [monolithic or micro kernel](kernel.md), [raytraced](ray_tracing.md) or [rasterized](rasterization.md) graphics etc. These choices greatly affect the performance of your program but can hardly be changed once the program is completed, so they need to be made beforehand. **This requires wide knowledge and experience**. + +Another kind of optimization done during development is just automatically writing good code, i.e. being familiar with specific patterns and using them without much thought. For example if you're computing some value inside a loop and this value doesn't change between iterations, you just automatically put the computation of that value **before** the loop. Without this you'd simply end up with shitty code that would have to be rewritten line by line at the end. \ No newline at end of file