Update
parent 8c64eca41b
commit 690e441493
7 changed files with 134 additions and 23 deletions
books.md | 5

@@ -5,10 +5,11 @@ Here there will be a constantly WIP list of [books](book.md) that might be of in

{ Let's aim for quality rather than quantity here, don't put any book that has some connection to our cause here, but rather the ones you've read and which you judge as a quality book that enriched you in some way. ~drummyfish }

- **Blackout** (2017, Elsberg): Fiction telling the story of a large blackout in Europe that turns out to really be caused by [bloated](bloat.md) tech. For [collapse](collapse.md) enjoyers this is an interesting read, if only for the detailed description of the consequences of a sudden loss of electric power.
- **Computer Science: An Overview** (J. Glenn Brookshear): Cool [bachelor](bachelor.md) level overview of the whole of [computer science](compsci.md), including things like [history](history.md) of computers, their architecture, [computer graphics](graphics.md), [compression](compression.md), [encryption](encryption.md), [AI](ai.md), [operating systems](operating_system.md), [complexity](computational_complexity.md) etc., with explanations that are neither too simplified nor too long and overcomplicated with equations, i.e. there is a nice balance; probably most useful to fresh university computer science students.
- **Day of the Triffids** (1951, Wyndham): Excellent sci-fi in which civilization comes to an end due to a disaster (won't spoil), very nice for collapse preps or just people enjoying a great story narrated in a captivating way :-) The movie is a joke, don't even search for it.
- **[Einstein](einstein.md): His Life and Universe** (Isaacson, 2008): [Einstein](einstein.md)'s biography, quite a nice read about a pretty awesome man whose image has been so distorted by the mainstream shit.
- **Encyclopedia Britannica 11th edition** (1911): Extremely large, old, uncensored [encyclopedia](encyclopedia.md), mostly digitized and fulltext searchable, also completely [public domain](public_domain.md), with very long articles on all topics up to the date of its publication. Great source of lesser known information and an alternative to modern censored sources. Also check out other similar encyclopedias.
- **[Flatland](flatland.md)** (Abbott, 1884): Absolutely amazing fantasy story set in a two dimensional land with characters being geometric shapes; while being to a large degree a critique of society, it discusses practical and mathematical aspects of actually living in two dimensions, how the characters see, how they build their houses etc. It is now absolutely [public domain](public_domain.md)!
- **Free as in Freedom** (Sam Williams, 2002): Free-licensed official biography of [Richard Stallman](rms.md), contains many historical details about how [free software](free_software.md) came to be, how [open source](open_source.md) spoiled it etc.
- **Free Culture** (Lessig, 2004): Creative-commons licensed (non-free but gratis) book by the founder of [Creative Commons](creative_commons.md) and [free culture](free_culture.md), goes into detail on how copyright became abused by capitalism, why public domain is being smothered and why we must support free culture.
- **Game Engine Black Book: Doom** (Sanglard, 2019): Gratis, very nice book dissecting all the details about the legendary [Doom](doom.md) engine and its internals -- how it worked, why it was so fast, what hacks went into it, written so that a reader of any programming skill (even none) will find something interesting. A must read for fans of oldschool game programming.

complexity.md | 7 (new file)

@@ -0,0 +1,7 @@

# Complexity
Complexity may stand for several things, for example:
- opposite of [simplicity](kiss.md), general complexity of technology, see e.g. [bloat](bloat.md)
- [computational complexity](computational_complexity.md), mathematical study of computer resource usage
- ...

computational_complexity.md | 99 (new file)

@@ -0,0 +1,99 @@

# Computational Complexity
Computational complexity is a formal ([mathematical](math.md)) study of resource usage (usually time and memory) by [computers](computer.md) as they're solving various types of problems. For example when using computers to [sort](sorting.md) arrays of numbers, computational complexity can tell us which [algorithm](algorithm.md) will be fastest as the size of the array grows, which one will require the least amount of memory ([RAM](ram.md)) and even what's generally the fastest way in which this can be done. While time ("speed", number of steps) and memory (also *space*) are generally the resources we are most interested in, others can be considered too, e.g. network or power usage. Complexity theory is extremely important and one of the most essential topics in [computer science](compsci.md); it is also immensely important practically as it helps us [optimize](optimization.md) our programs and teaches us useful things such as that we can trade time and space complexity (i.e. make a program run faster at the cost of more memory and vice versa) etc.
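
To illustrate the time/space tradeoff on a toy example, here is a sketch (in plain [C](c.md), all names made up) of two ways of counting 1 bits in a byte: the first recomputes the count on every call and needs no extra memory, the second spends 256 extra bytes on a precomputed table to answer each query in a single lookup:

```
// time vs space: two ways of counting 1 bits in a byte

int popcount8(unsigned char x) // no extra memory, loops every time
{
  int result = 0;

  while (x)
  {
    result += x & 1;
    x >>= 1;
  }

  return result;
}

unsigned char popcountTable[256]; // 256 extra bytes of memory...

void initPopcountTable(void) // ...filled once at startup...
{
  for (int i = 0; i < 256; ++i)
    popcountTable[i] = (unsigned char) popcount8(i);
}

// ...after which popcountTable[x] answers in constant time
```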
Firstly we have to distinguish between two basic kinds of complexity:
- **[algorithm](algorithm.md) complexity**: Complexity of a specific algorithm. For example [quick sort](quick_sort.md) and [bubble sort](bubble_sort.md) are both algorithms for sorting arrays but quick sort has better time complexity. (Sometimes we may extend this meaning and talk e.g. about memory complexity of a [data structure](data_structure.md) etc.)
- **problem complexity**: Complexity of the best algorithm that can solve particular problem; e.g. time complexity of sorting an array is given by time complexity of the algorithm that can sort arrays the fastest.
## Algorithm Complexity
Let us now focus on algorithm complexity, as problem complexity follows from it. OK, so **what really is the "algorithm complexity"?** Given resource *R* -- let's consider e.g. time, or the number of steps the algorithm needs to finish solving a problem -- let's say that complexity of a specific algorithm is a [function](function.md) *f(N)* where *N* is the size of input data (for example length of the array to be sorted), which returns the amount of the resource (here number of steps of the algorithm). However we face issues here, most importantly that the number of steps may depend not only on the size of input data but also on the data itself (e.g. sorting an already sorted array may take a shorter time) and on the computer we use (for example some computers may be unable to perform multiplication natively and will emulate it with SEVERAL additions, increasing the number of steps), and also the exact complexity function will be pretty messy (it likely won't be a nice smooth function but rather something that jumps around a bit). So this kind of [sucks](suck.md). We have to make several steps to get a nice, usable theory.
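
For a small taste of this messiness consider the following trivial function (a made up example) that sums an array of *N* numbers -- even here the exact step count already depends on what we choose to count as one step:

```
int sum(int *array, int N)
{
  int result = 0;             // 1 step (initialization)

  for (int i = 0; i < N; ++i) // 1 init, N + 1 comparisons, N increments
    result += array[i];       // N additions (plus N memory reads?)

  return result;              // 1 step
}
```

Depending on the chosen definitions we get something like *f(N) = 3 * N + 4*, and on another machine (say one where array indexing costs extra steps) we'd get yet another function -- the asymptotic view introduced below throws exactly these irrelevant details away.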
The **solution** to the above issues will be achieved in several steps.
FIRSTLY let's make it more clear what *f(N)* returns exactly -- when computing algorithm complexity we will always be interested in one of the following:
- **best case** scenario: Here we assume *f(N)* always returns the best possible value for given *N*, usually the lowest (i.e. least number of steps, least amount of memory etc.). So e.g. with array sorting for each array length we will assume the input array has such values that the given algorithm will achieve its best result (fastest sorting, best memory usage, ...). I.e. this is the **lower bound** for all possible values the function could give for given *N*.
- **average case** scenario: Here *f(N)* returns the average, i.e. taking all possible inputs for given input size *N*, we just average the performance of our algorithm and this is what the function tells us.
- **worst case** scenario: Here *f(N)* returns the worst possible value for given *N*, i.e. the opposite of the best case scenario. This is the **upper bound** for all possible values the function could give for given *N*.
This just deals with the fact that some algorithms may perform vastly differently for different data -- imagine e.g. linear searching of a specific value in a list; if the searched value is always at the beginning, the algorithm always performs just one step, no matter how long the list is, on the other hand if the searched value is at the end, the number of steps will increase with the list size. So when analyzing an algorithm **we always specify which kind of case we are analyzing** (WATCH OUT: do not confuse these cases with differences between big O, big Omega and big Theta defined below). So let's say from now on we'll be implicitly examining worst case scenarios.
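
For illustration here is a minimal sketch (in C, names made up) of the linear search just described:

```
// returns index of the first occurrence of value, or -1 if not present
int linearSearch(const int *list, int N, int value)
{
  for (int i = 0; i < N; ++i)
    if (list[i] == value)
      return i; // best case: value sits at index 0, 1 step regardless of N

  return -1;    // worst case: value at the end or missing, N steps
}
```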
SECONDLY rather than being interested in PRECISE complexity functions we will rather focus on so called **asymptotic complexity** -- this kind of complexity is only concerned with how fast the resource usage generally GROWS as the size of input data approaches big values (infinity). So again, taking the example of array sorting, we don't really need to know exactly how many steps we will need to sort any given array, but rather how the time needed to sort bigger and bigger arrays will grow. This is also aligned with practice in another way: we don't really care how fast our program will be for a small amount of data, it doesn't matter if it takes 1 or 2 microseconds to sort a small array, but we want to know how our program will [scale](scalability.md) -- if we have 10 TB of data, will it take 10 minutes or half a year to sort? If this data doubles in size, will the sorting time also double or will it increase 1000 times? This kind of complexity also no longer depends on what machine we use, the rate of growth will be the same on a fast and slow machine alike, so we can conveniently just consider some standardized computer such as the [Turing machine](turing_machine.md) to mathematically study complexity of algorithms.
Rather than the exact value of resource usage (such as exact number of steps or exact number of bytes in RAM) asymptotic complexity tells us a **[class](class.md)** into which our complexity falls. These classes are given by mathematical functions that grow as fast as our complexity function. So basically we get kind of "tiers", like *constant*, *linear*, *logarithmic*, *quadratic* etc., and our complexity simply falls under one of them. Some common complexity classes, from "best" to "worst", are the following (note this isn't an exhaustive list):
- **constant**: Given by function *f(x) = 1* (i.e. complexity doesn't depend on input data size). Best.
- **[logarithmic](log.md)**: Given by function *f(x) = log(x)*. Note the base of logarithm doesn't matter.
- **linear**: Given by function *f(x) = x*.
- **linearithmic**: Given by function *f(x) = x * log(x)*.
- **quadratic**: Given by function *f(x) = x^2*.
- **cubic**: Given by function *f(x) = x^3*.
- **exponential**: Given by function *f(x) = b^x* for some base *b > 1* (e.g. *2^x*). This is considered very bad, practically unusable for larger amounts of data.
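
To get a feel for how hugely these classes differ, here is a small sketch (plain C, link with `-lm` for the math library; it just prints the raw functions, nothing algorithm specific) tabulating their values for growing *N*:

```
#include <stdio.h>
#include <math.h>

int main(void)
{
  printf("%4s %8s %10s %12s %18s\n", "N", "log2(N)", "N*log2(N)", "N^2", "2^N");

  for (double n = 10; n <= 50; n += 10) // print each function's value
    printf("%4.0f %8.1f %10.0f %12.0f %18.0f\n",
      n, log2(n), n * log2(n), n * n, pow(2.0, n));

  return 0;
}
```

Already at *N = 50* the exponential function gives a 16 digit number while the quadratic one gives just 2500 -- this is why exponential algorithms are considered practically unusable.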
Now we just put all the above together and introduce the formalization and notation computer scientists use to express algorithm complexity -- you will see it anywhere this topic is discussed. There are the following notations:
- **big O (Omicron) notation**, written as *O(f(N))*: Says the algorithm complexity (for whatever we are measuring, i.e. time, space etc. and also the specific kind of case, i.e. worst/best/average) is asymptotically bounded from ABOVE by function *f(N)*, i.e. says the **upper bound** of complexity. This is probably the most common information regarding complexity you will encounter (we usually want this "pessimistic" view). More mathematically: complexity *f(x)* belongs to class *O(g(x))* if from some *N0* on (we ignore any initial oscillations before this value) the function always stays under function *g* multiplied by some non-zero constant *C*. Formally: *f(x) belongs to O(g(x)) <=> exists C > 0 and N0 > 0 such that for all n >= N0: 0 <= f(n) <= C * g(n)*.
- **big Omega notation**, written as *Omega(f(N))*: Says the algorithm complexity **lower bound** is given by function *f(N)*. Formally: *f(x) belongs to Omega(g(x)) <=> exists C > 0 and N0 > 0 such that for all n >= N0: 0 <= C * g(n) <= f(n)*.
- **big Theta notation**, written as *Theta(f(N))*: This just means the complexity is both *O(f(N))* and *Omega(f(N))*, i.e. the complexity is tightly bounded by given function.
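
As a worked example take *f(N) = 5 * N^2 + 3 * N + 7*: choosing *C = 6* and *N0 = 5* we have *5 * N^2 + 3 * N + 7 <= 6 * N^2* for all *N >= 5* (e.g. for *N = 5*: *147 <= 150*), so *f* belongs to *O(N^2)*; choosing *C = 5* we have *5 * N^2 <= f(N)* for all *N >= 1*, so *f* also belongs to *Omega(N^2)*, and therefore to *Theta(N^2)*.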
Please note that big O/Omega/Theta are a different thing than analyzing best/worst/average case! We can compute big O, big Omega and big Theta for all best, worst and average case, getting 9 different "complexities".
Now notice (also check by the formal definitions) that we simply don't care about additive and multiplicative constant and we also don't care about some initial oscillations of the complexity function -- it doesn't matter if the complexity function is *f(x) = x* or *f(x) = 100000000 + 100000000 * x*, it still falls under linear complexity! If we have algorithm *A* and *B* and *A* has better complexity, *A* doesn't necessarily ALWAYS perform better, it's just that as we scale our data size to very high values, *A* will prevail in the end.
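
For example if algorithm *A* performs *100 * N* steps and algorithm *B* performs *N^2* steps, *B* is actually faster for all *N < 100* despite *A* having the better (linear) complexity; only from *N = 100* on does *A* start to win, but from there on it wins more and more.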
Another thing we have to clear up: **what does input size really mean?** I.e. what exactly is the *N* in *f(N)*? We've said that e.g. with array sorting we saw *N* as the length of the array to be sorted, but there are several things to additionally talk about. Firstly it usually doesn't matter if we measure the size of input in bits, bytes or number of items -- note that as we're now dealing with asymptotic complexity, i.e. only growth rate towards infinity, we'll get the same complexity class no matter the units (e.g. a linear growth will always be linear, no matter if our *x* axis measures meters or centimeters or light years). SECONDLY however it sometimes DOES matter how we define the input size, take e.g. an algorithm that takes a square image with resolution *R * R* on the input, iterates over all pixels and finds the brightest one; now we can define the input size either as the total number of pixels of the image (i.e. *N = R * R*) OR the length of the image side (i.e. *N = R*) -- with the former definition we conclude the algorithm to have linear time complexity (for *N* input pixels the algorithm makes roughly *N* steps), with the latter definition we get QUADRATIC time complexity (for image with side length *N* the algorithm makes roughly *N * N* steps). What now, how to solve this? Well, this isn't such an issue -- we can define the input size however we want, we just have to **stay consistent** so that we are able to compare different algorithms (i.e. it holds that if algorithm *A* has better complexity than algorithm *B*, it will stay so under whatever definition of input size we set), AND when mentioning complexity of some algorithm we should mention how we define the input size so as to prevent confusion.
With memory complexity we encounter a similar issue -- we may define memory consumption either as the EXTRA memory used or the TOTAL memory that includes the memory storing the input data. For example with array sorting, if an algorithm works [in situ](in_situ.md) (in place, needing no extra memory), considering the former we conclude memory complexity to be constant (extra memory needed doesn't depend on size of input array), considering the latter we conclude the memory complexity to be linear (total memory needed grows linearly with the size of input array). The same thing as above holds: whatever definition we choose, we should just mention which one we chose and stay consistent in using it.
## Problem Complexity
*See also [P vs NP](p_vs_np.md).*
As said, problem complexity is tied to algorithm complexity; the complexity of a specific problem (e.g. sorting an array, factorization of a number, searching a sorted list etc.) is determined by the best possible [algorithm](algorithm.md) that solves the problem (best in terms of analyzed complexity). Traditionally we use [Turing machines](turing_machine.md) and [formal languages](formal_language.md) to analyze problem complexity. Here we'll stay a bit informal and only mention some ideas.
Similarly to algorithm complexity, with problems we again define **[classes](class.md)** that are only about "how quickly the resource usage grows as we increase input size". The main difference is we are examining problems, so the classes we get are classes of PROBLEMS (usually classes of formal languages, like e.g. in Chomsky's language hierarchy), not classes of functions (seen in algorithm complexity). Some common classes are:
- **DTime(f(x))**: Problems for whose solution a [DETERMINISTIC](determinism.md) Turing machine has an algorithm with time complexity *O(f(x))*.
- **NTime(f(x))**: Problems for whose solution a NON-DETERMINISTIC Turing machine has an algorithm with time complexity *O(f(x))*.
- **DSpace(f(x))**: Same as DTime but for space complexity.
- **NSpace(f(x))**: Same as NTime but for space complexity.
- **[P](p.md)**: Union of all classes *DTime(n^k)*, i.e. all problems that can be solved by a DETERMINISTIC Turing machine with [polynomial](polynomial.md) time complexity.
- **[NP](np.md)**: Union of all classes *NTime(n^k)*, i.e. the same as above but for a NON-DETERMINISTIC Turing machine. It is currently not known [if classes P and NP are the same](p_vs_np.md), though it's believed not to be the case -- in fact this is probably the most famous open problem of computer science.
- **EXP**: Union of all classes *DTime(2^n^k)*.
- ...
## Examples
Practically analyzing time complexity of algorithms mostly involves looking at the loops in our algorithm, as these are what makes the number of steps variable. Linear sequences of instructions (such as initializations) don't interest us, no matter how long, as these have no effect on asymptotic complexity. But remember that looping may also be achieved with [recursion](recursion.md) etc., so just look carefully.
Let's consider the simple [bubble sort](bubble_sort.md) array sorting algorithm, with the simple optimization that it ends as soon as the array is sorted; here it is written in [C](c.md) (with a fixed array length *N* and a small helper function for swapping two values):
```
#define N 16 // length of the array to be sorted, known at compile time

void swap(int *x, int *y)
{
  int tmp = *x;
  *x = *y;
  *y = tmp;
}

void bubbleSort(int *array)
{
  for (int i = 0; i < N - 1; ++i)
  {
    int end = 1; // will stay 1 if no swap happens during this pass

    for (int j = 0; j < N - 1 - i; ++j)
      if (array[j] > array[j + 1])
      {
        swap(&array[j],&array[j + 1]);
        end = 0;
      }

    if (end) // no swap happened => sorted, end
      break;
  }
}
```
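
A minimal usage sketch (the array content is of course made up; it assumes the `N` and `bubbleSort` definitions from the block above):

```
#include <stdio.h>

int main(void)
{
  int array[N] = {5, 2, 9, 1, 7, 3, 8, 0, 6, 4, 15, 12, 11, 14, 10, 13};

  bubbleSort(array);

  for (int i = 0; i < N; ++i)
    printf("%d ", array[i]); // prints 0 1 2 ... 15

  putchar('\n');
  return 0;
}
```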
The size of input data is the length of the input array (`N`); for memory we consider only the extra memory used. Let's see about the different complexities now:
- **asymptotic TIME complexity for WORST case**: Worst case happens when the optimizing condition (`if (end)`) never triggers and so both loops (outer and inner one) in our algorithm run all their iterations; the outer loop is performed approximately *N* times (actually *N - 1* times but remember that asymptotic complexity ignores the -1 here as an additive constant) and for each of its iterations the inner loop runs approximately *N - i* times. So e.g. for *N = 5* we get approximately 5 + 4 + 3 + 2 + 1 steps, so for given *N* we basically have to sum up numbers from 1 to *N* -- there is a formula for computing such a sum and that is *(N * (N + 1)) / 2 = N^2 / 2 + N / 2*. With asymptotic complexity we just take the biggest term and ignore any multiplication constant (division by two), so we just see *N^2* here and conclude that the time complexity of worst case for bubble sort is quadratic, i.e. *O(N^2)*. Notice that technically we can also say the complexity belongs to any "worse" class, e.g. *O(N^3)* (as *O(N^2)* is its subclass), which is technically true but doesn't tell us as much. So here it's better to further more precisely say the complexity of worst case also belongs to *Omega(N^2)* (the lower bound) and therefore (by belonging to both *O(N^2)* and *Omega(N^2)*) also belongs to *Theta(N^2)*, i.e. it "won't be slower BUT NOR faster than *N^2*".
- **asymptotic TIME complexity for BEST case**: Best case happens when the input array is already sorted -- here the algorithm enters the outer loop, then runs the inner loop -- approximately *N* iterations -- and then (since no swap happened) ends at the final condition. So the time complexity is linear -- we can say that the upper asymptotic bound on best case scenario is *O(N)*. Again we see the complexity is linearly bound from the bottom too so it's better to say the complexity of best case belongs to *Theta(N)*.
- **space (memory) complexity**: Bubble sort works in place and though it uses extra variables, the size of those variables doesn't depend on the size of input array, so (as we only count extra memory requirements) we can say memory complexity is constant, i.e. *O(1)*, and also *Theta(1)*.
TODO: also something simpler? problem complexity?

corporation.md

@@ -1,6 +1,6 @@

# Corporation

Corporation is basically a huge company that doesn't have a single owner but is rather managed by many shareholders. Corporations are one of the most powerful, dangerous and unethical entities that ever came into existence -- their power is growing, sometimes even beyond the power of states, and their sole goal is to make as much profit as possible without any sense of morality. Existence of corporations is enabled by [capitalism](capitalism.md). Examples of corporations are [Micro$oft](microsoft.md), [Apple](apple.md), [Amazon](amazon.md), [Walmart](walmart.md), [Te$la](tesla.md), [McDonald$](mcdonalds.md), [Facebook](facebook.md) etc.

The most basic fact to know about corporations is that **100% of everything a corporation ever does is done 100% solely for maximizing its own benefit at any cost, with no other reason, with 0 morality and without any consideration of consequences**. If a corporation could make 1 cent by raping 1000000000 children and get away with it, it would do so immediately without any hesitation and any regret. This is very important to keep in mind. Now try to not get depressed at the realization that corporations are those to whom we gave power and who are in almost absolute control of the world.

jokes.md | 2

@@ -33,6 +33,8 @@ Please do NOT post lame "big-bang-theory"/[9gag](9gag.md) jokes like *sudo make

- `int randomInt(void) { int x; return x; }`
- Boss: "We're going to need to store additional information about gender of all 1600 people in our database." Me: "OK, that's only 200 extra bytes." Diversity department: "You're fired."
- the [downto](downto.md) operator
- I find it much more pleasant to browse the web on a 1 bit display, it can't display a [rainbow](lgbt.md).
- There's a new version of Debian Bull's Eye that's compiled exclusively with [Rust](rust.md). Its code name is [Bull's Shit](bullshit.md). { Again credit goes to my friend <3 ~drummyfish }
## See Also

wiki_stats.md

@@ -3,7 +3,7 @@

This is an auto-generated article holding stats about this wiki.

- number of articles: 532
- total size of all texts in bytes: 2670984

longest articles:

@@ -23,6 +23,25 @@

latest changes:

```
Date: Tue Jan 2 15:42:07 2024 +0100
acronym.md
bloat.md
chaos.md
collision_detection.md
communism.md
fantasy_console.md
how_to.md
jargon_file.md
less_retarded_society.md
lrs.md
modern.md
open_console.md
smol_internet.md
suckless.md
temple_os.md
vector.md
wiki_stats.md
zen.md
Date: Mon Jan 1 15:33:55 2024 +0100
acronym.md
fight_culture.md

@@ -37,24 +56,5 @@ programming_tips.md
usa.md
wiki_stats.md
Date: Sun Dec 31 01:01:53 2023 +0100
books.md
democracy.md
encyclopedia.md
git.md
jesus.md
marxism.md
needed.md
political_correctness.md
prime.md
selflessness.md
usa.md
wiki_stats.md
wikipedia.md
Date: Fri Dec 29 13:37:40 2023 +0100
ascii_art.md
cc.md
dog.md
free_software.md
git.md
```
work.md | 2

@@ -4,6 +4,8 @@

Work, better known as slavery, is an unpleasant effort that one is required to suffer, such as harvesting crops or [debugging](debugging.md) computer programs. Work hurts living beings and takes away the meaning of their lives, it destroys their bodies and minds. Work makes us slaves, it wastes our lives and is a cause of a large number of [suicides](suicide.md) -- many consider it better to die than to work. One of the main goals of civilization is to eliminate any need for work, i.e. create machines that will do all the work for humans (see [automation](automation.md)).

**[Fun](fun.md) fact**: the Spanish (also Portuguese etc.) word for work, "trabajo", comes from *tripalium*, a device made of three sticks used to force slaves to work. { Thanks to my friend who told me about this <3 ~drummyfish }

While good society tries to eliminate work, [capitalism](capitalism.md) aims for the opposite, i.e. artificially creating bullshit jobs and bullshit needs so as to keep everyone enslaved to the system. Fortunately movements such as the [antiwork](antiwork.md) movement try to oppose this, however the masses have already been brainwashed to be hostile to such movements and instead demand their own enslavement.

[We](lrs.md) see it as essential to start educating people about the issue as well as starting to eliminate jobs immediately with things such as [automation](automation.md) and [universal basic income](ubi.md).