This commit is contained in:
Miloslav Ciz 2025-01-27 01:29:42 +01:00
parent 1cc86672ff
commit a0e1f37d3e
11 changed files with 2005 additions and 1955 deletions

log.md

@ -111,19 +111,20 @@ Other important formulas with logarithms include these:
To get back to the **logarithmic scales** for a moment: these are scales whose value at each step increases not by a constant added number, but by multiplying the value of the previous step by some fixed factor. In graphs such a scale may be used on the *x* or *y* axis or both, depending on the need -- imagine for instance we were about to plot some exponentially increasing phenomenon, i.e. something that over each period of time (such as a year) grows by some fixed PERCENTAGE (fraction). An example may be [Moore's law](moores_law.md), stating that the number of [transistors](transistor.md) in integrated circuits doubles every two years. Plotting this with linear scales we'll see a curve that very quickly shoots up, turning steeper and steeper, creating a very inconvenient, hard to read graph. If instead we used a logarithmic scale on the *y* axis (number of transistors), we'd get a nice straight line! This is because now as we're moving by years on the *x* axis, we are jumping by orders of magnitude on the *y* axis, and since that axis is logarithmic, a jump by an order of magnitude shifts us a constant step up. This is therefore very useful for handling phenomena that "up close" need higher resolution and "further away" rather need more space and a bigger "zoom out" to the detriment of resolution, such as the map of our Universe perhaps.
## Programming
## Programming And Approximations
It won't come as a surprise that we'll find the logarithm function built into most popular [programming languages](programming_language.md), most often as part of the standard math [library](library.md)/module. Make sure to check which base it uses etc. [C](c.md) for example has the functions *log(x)* (natural logarithm), *log10(x)* and *log2(x)* under *math.h* -- if you need a logarithm with a different base, the simple formula given somewhere above will let you convert between arbitrary bases (also shown in an example below).
Should you decide for any reason to implement your own logarithm, consider first your requirements. If integer logarithm [is enough](good_enough.md), the straightforward "[brute force](brute_force.md)" way of searching for the correct result in a for loop is quite usable since the number of iterations can't get too high (as by repeated exponentiation we quickly cover the whole range of even 64 bit integers). In C this may be done as follows:
Should you decide for any reason to implement your own logarithm, consider first your requirements. If integer logarithm [suffices](good_enough.md), the straightforward "[brute force](brute_force.md)" way of searching for the correct result in a for loop is quite usable since the number of iterations can't get too high (as by repeated exponentiation we quickly cover the whole range of even 64 bit integers). In C this may be done as follows (we have to watch out for [overflows](overflow.md) that could get us stuck in an infinite loop; this could also be addressed by using division instead of multiplication, but division can be very slow):
```
int logIntN(int base, int x)
unsigned int logIntN(unsigned int base, unsigned int x)
{
int r = 0, n = base;
unsigned int r = 0, n = base, nPrev = 0; // nPrev to detect overflow
while (n <= x)
while ((n <= x) && (n > nPrev))
{
nPrev = n;
n *= base;
r++;
}
@ -132,16 +133,16 @@ int logIntN(int base, int x)
}
```
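For a quick sanity check, a self-contained version of the above may be tried as follows (assuming the part elided from the diff is simply *return r;*, and writing the loop condition with logical *&&*):

```
// self-contained copy of logIntN for testing; the elided ending is
// assumed to just be "return r;"
unsigned int logIntN(unsigned int base, unsigned int x)
{
  unsigned int r = 0, n = base, nPrev = 0; // nPrev to detect overflow

  while ((n <= x) && (n > nPrev))
  {
    nPrev = n;
    n *= base;
    r++;
  }

  return r;
}
```

Then e.g. *logIntN(10,1000)* gives 3 and *logIntN(2,1023)* gives 9 (integer logarithm rounds down), and inputs near the top of the *unsigned int* range terminate instead of looping forever.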
If we don't insist on having the base a variable, the function will probably [get faster](optimization.md), and especially so in the case of *log2* where multiplication can be replaced by a bit shift:
If we don't insist on having the base as a variable, it's better to make it a constant: the function will most likely [get faster](optimization.md) (one fewer argument to pass, the compiler can optimize expressions involving the constant etc.) -- *log2* especially can be optimized by using a bit shift, allowing us to simplify everything like this:
```
int logInt2(int x)
unsigned int logInt2(unsigned int x)
{
int r = 0, n = 2;
unsigned int r = 0;
while (n <= x)
while (x > 1)
{
n <<= 1;
x >>= 1;
r++;
}
@ -182,4 +183,10 @@ double logFloatN(double base, double x)
}
```
If you have the *pow* function at hand, you can probably implement floating point logarithm also through [binary search](binary_search.md) with delta.
If your inventory includes the *[pow](pow.md)* function, you can probably use it to also implement a floating point logarithm through [binary search](binary_search.md) with a delta.
As for [approximations](approximation.md): unfortunately good ones are often plagued by a narrow interval of convergence. An attempt at constructing a function resembling the logarithm may perhaps start with a similarly shaped function *1 - 1/x*, then continue by pimping it up and adding correcting expressions until it looks cool. This may lead for example to the following expression: { Made by me. ~drummyfish }
*log10(x) ~= 3.0478 + 0.00001 * x - 205.9 / (x + 100) - (1233 * x + 10) / (625 * (x + 1) * x)*
The advantage here is that it looks reasonable on a wide interval, from 0 up to many thousands: before *x* gets to the higher hundreds the error stays somewhere around 3%, then around 2000 it reaches some 10% and around 10000 approximately 20%, where it then seems to stay for a very long time.