- #1
rjbeery
In Information Theory, entropy is defined as the unpredictability of information content and, as such, the entropy of the output of pseudorandom number generators (PRNGs) is often measured as a test of their "randomness". An interesting paradox arises with this definition...
Start with a suitable seed, [itex]S_n[/itex], consisting of an array of n previously generated pseudorandom numbers from the function [itex]PRNG(seed)[/itex]. In order to maximize the randomness of [itex]PRNG[/itex], we want [itex]PRNG(S_n)[/itex] to return a result such that the entropy of [itex]S_{n+1}[/itex] (the array [itex]S_n[/itex] with the new output appended) is also maximized. For a suitable seed, that entropy-maximizing result is uniquely determined. However, in our effort to maximize the randomness of [itex]PRNG[/itex], we have now created a process which is deterministic and completely predictable, which directly contradicts the claim that high entropy implies unpredictability of information content.