Update
parent f920115b91, commit 75f99e7b81
69 changed files with 2008 additions and 1791 deletions
@@ -8,7 +8,7 @@ Even though neural networks absolutely aren't the only possible model used in [machine learning](machine_learning.md) ...
**Currently neural networks seem to be bringing back [analog](analog.md) computing.** As of 2023 most neural networks are still simulated with [digital](digital.md) computers, but because such networks are analog and parallel in nature, the digital approach is both inelegant (we make digital devices out of analog circuits and then try to make them behave like analog devices again) and inefficient (in terms of energy consumption). Therefore analog is making a comeback and researchers are experimenting with analog implementations, most notably electronic (classic electronic circuits) and photonic (optics-based) ones. Keep in mind that digital and analog networks are compatible: you can for example train a network digitally and then, once you've found a satisfying network, implement it as analog so that it can e.g. be put in a cellphone without draining too much energy (see the sketch below). Analog networks may of course be embedded in digital devices (we don't need to go full analog).

-**[Hardware](hw.md) acceleration of neural networks is being developed.** Similarly to how [GPUs](gpu.md) arised to accelerate computer [graphics](graphics.md) during the 90s video game boom, similar hardware is arising for accelerating neural network computations -- these are called **[AI accelerators](ai_accelerator.md)**, notably e.g. Google's [TPU](tpu.md) (tensor processing unit). Currently GPUs are still mostly used for neural networks -- purely software networks are too slow. It is possible that future neural network hardware will be analog-based, as mentioned above.
+**[Hardware](hw.md) acceleration of neural networks is being developed.** Similarly to how [GPUs](gpu.md) appeared to accelerate computer [graphics](graphics.md) during the 90s video game boom, similar hardware is appearing for accelerating neural network computations -- these are called **[AI accelerators](ai_accelerator.md)**, notably e.g. Google's [TPU](tpu.md) (tensor processing unit). Currently GPUs are still mostly used for neural networks -- purely software networks are too slow. It is possible that future neural network hardware will be analog-based, as mentioned above.

## Details
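
To make the "train digitally, then deploy in a coarser form" idea above concrete, here is a minimal C sketch (an editorial illustration, not part of the original article; the tiny two-input network, its weight values and the 8-bit scale are all made up for the example). It performs no training: it takes weights assumed to come from some digital training process, quantizes them to 8-bit integers (the kind of precision reduction one would perform before mapping a net onto analog or other low-energy hardware) and then compares the outputs of the float and quantized versions:

```c
/* Minimal sketch: a digitally trained network's float weights are
   quantized to a coarser representation for low-energy deployment.
   The network and its weight values are made up for illustration. */
#include <stdio.h>
#include <math.h>

#define N_IN 2

/* weights found by some (digital) training process -- hypothetical values */
float weightsF[N_IN] = { 0.73f, -0.41f };
float biasF = 0.12f;

/* quantized copy: 8-bit signed integers, scale maps [-1,1] to [-127,127] */
signed char weightsQ[N_IN];
signed char biasQ;
#define SCALE 127.0f

float activation(float x) { return 1.0f / (1.0f + expf(-x)); } /* sigmoid */

float forwardFloat(const float in[N_IN])
{
  float sum = biasF;
  for (int i = 0; i < N_IN; ++i)
    sum += weightsF[i] * in[i];
  return activation(sum);
}

float forwardQuantized(const float in[N_IN])
{
  float sum = biasQ / SCALE; /* de-quantize on the fly */
  for (int i = 0; i < N_IN; ++i)
    sum += (weightsQ[i] / SCALE) * in[i];
  return activation(sum);
}

int main(void)
{
  const float input[N_IN] = { 0.5f, 0.9f };

  for (int i = 0; i < N_IN; ++i) /* quantize the trained weights */
    weightsQ[i] = (signed char) roundf(weightsF[i] * SCALE);
  biasQ = (signed char) roundf(biasF * SCALE);

  printf("float net:     %f\n", forwardFloat(input));
  printf("quantized net: %f\n", forwardQuantized(input));
  return 0;
}
```

On real analog hardware the quantized weights wouldn't sit in RAM as integers but would be realized physically, e.g. as resistances or optical attenuations; the point is just that weights obtained by digital training carry over to the low-precision implementation with little loss.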