master
Miloslav Ciz 5 months ago
parent 01b86872e4
commit 750346cd35

@ -67,3 +67,4 @@ The following is a list of some notable esoteric languages.
## See Also
- [conlang](conlang.md)
- [micronation](micronation.md)

@ -4,6 +4,8 @@
Feminism, also feminazism or femifascism, is a [fascist](fascism.md) [terrorist](terrorism.md) [pseudoleftist](pseudoleft.md) movement aiming for establishing [female](woman.md) as the superior gender, for social revenge on men and gaining political power, e.g. that over [language](political_correctness.md). Similarly to [LGBT](lgbt.md), feminism is violent, [toxic](toxic.md) and [harmful](harmful.md), based on [brainwashing](brainwashing.md), mass hysteria, [bullying](bullying.md) (e.g. the [metoo](metoo.md) campaign) and [propaganda](propaganda.md).
A quite nice article on feminism can also be found on the [incel](incel.md) wiki at https://incels.wiki/w/Feminism.
{ [LMAO](lmao.md), **a supposed woman writer who won 1 million euro prize turned out to actually be three men writers**, see Carmen Mola :) Also the recent "historically first all female space walk" during which they managed to lose $100K worth of equipment :D ~drummyfish }
If anything's clear, then that feminism is not at all about gender equality but about hatred towards men. Firstly feminism is not called *gender equality movement* but *feminism*, i.e. for-female, and as we know, [name plays a huge role](name_is_important.md). To a feminist man is what a [jew](jew.md) was to the Nazi; the whole story is repeated again, we have yet again not learned a bit from our history. Indeed, women have historically been oppressed and needed support, but once women reach social equality -- which has basically already happened a long time ago now -- feminist movement will, if only by [social inertia](social_inertia.md), keep pursuing more advantages for women (what else should a movement called *feminism* do?), i.e. at this point the new goal has already become female superiority. In the age of capital no one is going to just dissolve a movement because it has already reached its goal, such a movement present political capital one will simply not throw out of window, so feminists will forever keep saying they're being oppressed and will forever keep inventing new bullshit issues to keep [fighting](fight_culture.md). Note for example that feminists care about things such as wage gap but of course absolutely don't give a damn about opposite direction inequality, such as men dying on average much younger than women etc. -- feminism cares about women, not equality. And of course, when men establish "men rights" movements, suddenly feminists see those as "fascist", "toxic" and "violent" and try to destroy such movements.

@ -39,3 +39,115 @@ Now it's pretty clear this description gets a bit tedious, it's better, especial
| D |0.25 |0.25 | 0.5 | 0 |
We can see a few things: the NPC can't immediately attack from cover, it has to search for a target first. It also can't throw two grenades in succession etc. Let's note that this model will now be yielding random sequences of actions such as [*cover*, *search*, *shoot*, *shoot*, *cover*] or [*cover*, *search*, *search*, *grenade*, *shoot*] but some of them may be less likely (for example shooting 3 bullets in a row has a probability of 0.1%) and some downright impossible (e.g. two grenades in a row). Notice a similarity to for example natural language: some words are more likely to be followed by some words than others (e.g. the word "number" is more likely to be followed by "one" than for example "cat").
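To make the model concrete, here is a minimal sketch in [C](c.md) of sampling from such a transition table. Note the probabilities below are invented just for illustration (they only respect the constraints mentioned above, e.g. no attacking straight from cover and no two grenades in a row); they are not the exact values from the table:

```
#include <stdio.h>
#include <stdlib.h>

#define STATES 4

const char *stateNames[STATES] = { "cover", "search", "shoot", "grenade" };

// Transition probabilities in percent, transitions[from][to]; each row
// sums to 100. The numbers are made up for this sketch.
const int transitions[STATES][STATES] =
{ // to: cover search shoot grenade
  {       25,   75,    0,    0  }, // from cover
  {       25,   25,   25,   25  }, // from search
  {       40,   25,   10,   25  }, // from shoot
  {       50,   25,   25,    0  }  // from grenade
};

int nextState(int current)
{
  int r = rand() % 100; // uniform random percent

  for (int i = 0; i < STATES; ++i)
  {
    if (r < transitions[current][i])
      return i;

    r -= transitions[current][i];
  }

  return 0; // unreachable if rows sum to 100
}

int main(void)
{
  srand(123456);

  int state = 0; // start in cover

  for (int i = 0; i < 12; ++i) // generate a random action sequence
  {
    printf("%s ", stateNames[state]);
    state = nextState(state);
  }

  putchar('\n');
  return 0;
}
```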
## Code Example
Let's write an extremely primitive Markov bot that will work on the level of individual text characters. It will take a training text on input, for example a book, and learn the probabilities with which any letter is followed by another letter. Then it will generate a random output according to these probabilities, something that should resemble the training text. Yes, you may say we are doing a super simple [machine learning](machine_learning.md).
Keep in mind this example is really extremely simple, it only looks one letter back and makes some further simplifications, for example it only approximates the probabilities with kind of a [KISS](kiss.md) hack -- we won't record any numeric probability, we'll only hold a table of letters, each one having a "bucket" of letters that may possibly follow; during training we'll always throw a preceding letter's follower to a random place in the preceding letter's bucket, with the idea that once we finish training, any bucket will statistically contain more copies of the letters that more often follow the given letter, just because we simply threw more such letters in. Similarly when generating the output text we will choose a letter to follow the current one by looking into the table and pulling out a random follower from that letter's bucket, again hoping that letters with greater presence in the bucket will be more likely to be randomly selected. This approach has issues, for example regarding the question of ideal bucket size, and it introduces statistical biases (maximum probability is limited by bucket size, order matters, later letters are kind of privileged), but it kind of works. Try to think of how we could make a better text generator -- for starters it might work on the level of words and could take into account a history of let's say three words, i.e. it would record triplets of words and then a list of the words likely to follow, along with each one's probability recorded as an actual number to make the probabilities accurate (a small sketch of one such improvement appears after the sample outputs below).
Anyway with all this said, below is a [C](c.md) code implementing the above described text generator. To use it just pipe some input ASCII text to it, however make it reasonably sized (a few thousand lines maybe; please don't feed it the whole Britannica, the output won't get any better). Keep in mind the program always trains itself from scratch (in practice we might separate training from generation, as serious training might take very long, i.e. we would have a separate training program that would output a trained model, i.e. the learned probabilities, and then a generator that would only take the trained model and generate the output text). Here is the code:
```
#include <stdio.h>
#include <stdlib.h>

#define OUTPUT_LEN 10000 // length of generated text
#define N 16 // bucket size for each letter
#define SEED 123456
#define IGNORE_NEWLINES 1

unsigned char charFollowers[256][N];

int main(void)
{
  srand(SEED);

  for (int i = 0; i < 256; ++i) // initialize all buckets to spaces
    for (int j = 0; j < N; ++j)
      charFollowers[i][j] = ' ';

  unsigned char prevChar = 0;

  while (1) // training: record followers of each letter
  {
    int c = getchar();

    if (c == EOF)
      break;

#if IGNORE_NEWLINES
    if (c == '\n')
      c = ' ';
#endif

    charFollowers[prevChar][rand() % N] = c; // put char at random place
    prevChar = c;
  }

  prevChar = ' ';

  for (int j = 0; j < OUTPUT_LEN; ++j) // now generate the output
  {
    prevChar = charFollowers[prevChar][rand() % N]; // take random follower
    putchar(prevChar);
  }

  puts("\n");
  return 0;
}
```
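For example, assuming the code is saved as `markov.c`, it might be compiled with `cc markov.c -o markov` and run as `./markov < training_text.txt` (the file names here are of course just illustrative).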
Trying it out on the text of [this wiki](LRS_wiki.md) may output something like this:
```
Ther thellialy oris threstpl/pifragmediaragmagoiby s agmexes, den
atss pititpenaraly d thiplio s ts gs, tis wily NU gmarags gos
aticel/EEECTherixed atstixedells, s s ores agolltixes tixe. TO: N
s, s, TOpedatssth NUCAPorag: puffrits, pillly ars agmen No tpix abe
aghe. aragmed ssh titixen plioix ag: Th tingoras TOD s wicipixe d
tpllifr.edarenexeramed Thecospix ts ts s osth s pes ovipingor
g: agors agass s TOnamand s aghech th wopipistalioiaris agontibuf
ally Thrixtply tiaceca th oul/EEEEEEEECPU), wicth NU athed wen
aragag athichipl Thechixthass s gmelliptilicex th ostunth gmagh
atictpixe. ar Th on wipixexepifrag gman g: sthabopl/te.
```
We see at first glance it looks a bit like English text, even with some quirks specific to this wiki, for example here and there having FULL CAPS words (due to acronyms and also the rants that often appear here). It even generated the word "CPU". Notice the algorithm correctly learned punctuation, i.e. it knows that after commas and periods there is almost always a space and that after a space there is usually not another space. For comparison, here is a Spanish-like text generated from Don Quixote (with accents removed):
```
Diloma Dadro hacaci gua usta lesano strore sto do diaco; ma ro
hiciso stue ue dita. do que menotamalmeci ma quen do gue lo;
denestajo qucos rdo horor Da que qunca. quadombuce que queromiderbre
hera ha rlabue F de querdos Dio macino; dombidrompo mi ste derdiba
l, mbiolo Ferbes l ste s lolo que ha Du hano quenore Dio ueno que
hala F uano he Dorame de qus rl, ha didesa que halanora Fla quco
dil qucio ue do mestostaloste hados de gusta querana. stuce F s s
Do lo dre s hal Fro gue sa sa. la sido la dico; hado mbuno Do.
mororo; rdenaja. qunolole Diba. do. Fa gor stamestamo ha quno
unostabro quero mue s Diado Didota. quencoralor dio sotomo Fuen
que halora. gunore quabrbe rol gostuno hadolmbe Da que unendor
que le di so; qunta rajos s F de qucol
```
We see some shorter words like *lo*, *le*, *de*, *he*, *que* and *sido* are real Spanish words. Though the punctuation is quite nice, the algorithm fails to learn that after a period the first word of the next sentence should start with a capital letter (it only does so sometimes by pure chance) -- this is due to the algorithm only seeing one character back; after a period there always comes a space, which already makes the algorithm forget about the period. This would be addressed by keeping a longer history, as said above. Now let's try a different kind of text altogether, let's feed in the source code of [Anarch](anarch.md):
```
2 camechererea = 20;
#erereppon.xereponioightFuaighe16_ARABEIUnst
chtreraySqua->rarepL_RCL_CL_PE;
caminsin.yDINeramaxer = costRCL_PERCL_ditsins->pL_ime1
= 0;
* = RCL_dime1,y 1)
0;
}
}
ck;
camererayDimameaxSqua ca = ca->ra caininin.xS_UAME;
caminstFua-> 0 0;
} ca->ponstramiomereaxSquts chts 154;
1)
```
Here it's pretty clear the code won't work but its structure really does resemble the original source: curly brackets and semicolons are correctly followed by newlines, assignments look pretty correct as well, dereference arrows (`->`) appear too -- the code even generated the `RCL_` prefix of the [raycastlib](raycastlib.md) functions that's widely seen in the original code.
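Finally, as suggested earlier, here is a hedged sketch of the first obvious improvement: recording exact follower counts instead of the random bucket hack, which makes the probabilities accurate. For brevity it still works on single characters with a one-character history; word-level generation with a longer history would work the same way, just with a bigger follower table:

```
#include <stdio.h>
#include <stdlib.h>

#define OUTPUT_LEN 10000
#define SEED 123456

unsigned long followerCount[256][256]; // [prev][next]: times seen in training
unsigned long totalCount[256];         // total followers seen for each char

int main(void)
{
  srand(SEED);

  int c, prevChar = ' ';

  while ((c = getchar()) != EOF) // training: count exact frequencies
  {
    if (c == '\n')
      c = ' ';

    followerCount[prevChar][c]++;
    totalCount[prevChar]++;
    prevChar = c;
  }

  prevChar = ' ';

  for (int i = 0; i < OUTPUT_LEN; ++i) // generation
  {
    if (totalCount[prevChar] == 0)
    {
      prevChar = ' ';

      if (totalCount[prevChar] == 0)
        break; // no training data at all
    }

    // weighted random pick, proportional to the recorded counts
    // (for simplicity this assumes counts fit in the range of rand())
    unsigned long r = rand() % totalCount[prevChar];

    for (int j = 0; j < 256; ++j)
    {
      if (r < followerCount[prevChar][j])
      {
        prevChar = j;
        break;
      }

      r -= followerCount[prevChar][j];
    }

    putchar(prevChar);
  }

  putchar('\n');
  return 0;
}
```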

@ -43,6 +43,7 @@ These are mainly for [C](c.md), but may be usable in other languages as well.
- **What's fast on one platform may be slow on another**. This depends on the instruction set as well as on the compiler, operating system, available hardware, [driver](driver.md) implementation and other details. In the end you always need to test on the specific platform to be sure about how fast it will run. A good approach is to optimize for the weakest platform you want to support -- if it runs fast on a weak platform, a "better" platform will most likely run it fast too.
- **Prefer preincrement over postincrement** (typically e.g. in a for loop), i.e. rather do `++i` than `i++`, as the latter conceptually has to keep a copy of the old value. For plain integers an optimizing compiler will usually generate identical code for both, but for heavier types (e.g. C++ iterators) preincrement can genuinely be cheaper.
- **Mental calculation tricks**, e.g. multiplying by one less or more than a power of two is equal to multiplying by the power of two and subtracting/adding the number once, for example *x * 7 = x * 8 - x*; the latter may be faster, as a multiplication by a power of two (a bit shift) plus an addition/subtraction may be cheaper than a single general multiplication, especially on some primitive platform without hardware multiplication (a sketch illustrating this appears after this list). However this needs to be tested on the specific platform. Smart compilers perform these optimizations automatically, but not every compiler is high level and smart.
- **Use switch instead of if branches** -- it should be common knowledge but some newcomers may not know that switch is fundamentally different from if branches: a switch statement can compile to a jump table that branches to one of many case labels in constant time, as opposed to a series of if statements which keeps checking conditions one by one; however switch only supports conditions of exact comparison. So prefer switch when you have many conditions to check. Switch also allows hacks such as case fall-through which may enable some optimizations.
- **Else should be the less likely branch** -- try to write if conditions so that the if branch is the one with the higher probability of being executed; this can help branch prediction.
- Similarly **order if-sequences and switch cases from most probable**: if you have a sequence of ifs such as `if (x) ... else if (y) ... else if (z) ...`, make it so that the condition most likely to hold gets checked first, then the second most likely and so on. The compiler most likely can't know the probabilities of the conditions, so it can't automatically help with this. Do the same with the `switch` statement -- even though switch typically gets compiled to a table of jump addresses, in which case the order of the cases doesn't matter, it may also get compiled similarly to the if sequence (e.g. as part of size optimization if the cases are sparse) and then the order matters again.
- **Variable aliasing**: if in a function you are often accessing a variable through some complex dereference of multiple pointers, it may help to load it into a local variable at the start of the function and then work with that variable, as dereferencing pointers costs something (also shown in the sketch after this list). { from *Game Programming Gurus* -drummyfish }
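
Below is a small sketch illustrating two of the tricks above -- the power-of-two multiplication rewrite and variable aliasing. The `World` struct and function names are made up purely for demonstration, and whether the rewrites actually pay off must of course be measured on the target platform:

```
#include <stdint.h>

// x * 7 rewritten as x * 8 - x, i.e. a shift and a subtraction; this may
// (or may not) beat a general multiply, especially without hardware mul
uint32_t times7(uint32_t x)
{
  return (x << 3) - x;
}

typedef struct // hypothetical game state, just for the demo
{
  struct { int x, y; } *monsters;
  int monsterCount;
} World;

void moveMonsters(World *world, int dx)
{
  int count = world->monsterCount; // alias the count in a local variable

  for (int i = 0; i < count; ++i)  // no repeated pointer dereference here
    world->monsters[i].x += dx;
}
```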
