How can we transmit data using only 2 bits?

In summary: this thread discusses a way to send data almost without sending bits, by instead encoding the data in the delay between two transmitted bits. The receiver recovers the data by measuring that delay in its own clock cycles. Real networks have delays too long and too variable for this to be practical, but the idea is an interesting way of trading time for space.
  • #1
-Job-
Science Advisor
Since this is a Computer Science forum now, I thought I'd post this impractical yet interesting take on data transfer.

If machine A has to send n bits over to machine B, and A knows the clock speed of B, then rather than sending the n bits, A can instead always send just two bits, bit-1 and bit-2. B calculates the delay between the arrival of bit-1 and bit-2 (in B's clock cycles), which gives a number whose binary representation is the data that A sent.

So if A wanted to send a 5-bit string to B, for example 10011 (which is 19 in decimal), and knowing that B's clock speed is 1 GHz, A would send two bits, the second one sent 19 billionths of a second after the first. B receives the two bits, computes the delay (which amounts to 19 clock cycles) and knows that A sent the value 19, i.e. 10011.
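
To make the scheme concrete, here is a minimal Python sketch of the encode/decode steps, under the idealized assumption of a perfectly stable, jitter-free link; the names CLOCK_HZ, encode and decode are my own, not part of any real protocol:

```python
# Idealized simulation of the two-bit delay encoding. Assumes zero jitter,
# which no real network provides; this is a sketch, not a usable protocol.

CLOCK_HZ = 1_000_000_000  # B's clock speed: 1 GHz, i.e. one cycle per nanosecond

def encode(value: int) -> float:
    """A sends bit-1 at t = 0 and bit-2 `value` cycles later.
    Returns the delay in seconds between the two bits."""
    return value / CLOCK_HZ

def decode(delay_seconds: float) -> int:
    """B counts the clock cycles elapsed between the two arrivals."""
    return round(delay_seconds * CLOCK_HZ)

# The example from above: the 5-bit string 10011, i.e. decimal 19.
value = 0b10011
delay = encode(value)                  # 1.9e-08 s = 19 ns
assert decode(delay) == 19
print(f"delay = {delay * 1e9:.0f} ns, decoded bits = {decode(delay):05b}")
```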

This isn't practical for many reasons: networks have long and highly variable delays (bits can take different routes, through faster or slower routers and shared mediums). But it's interesting, I think, that we can transmit any quantity of data between two machines using only 2 bits, given enough time. The time is exponential in the number of bits to be sent, which makes this completely impractical: even 50 bits could require about 2^50 cycles, roughly two weeks at 1 GHz, and 100 bits would take far longer than the age of the universe.

What's actually happening is that, if we were to look at the two bits in transit, we'd see that they are separated by a distance d at all times, and the value of d in binary is the data being sent. So we're not breaking any rules: physically there is a valid encoding of the data, it's just encoded in a distance.
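
To put rough numbers on that distance (the arithmetic below is mine, not from the post): at B's clock frequency, the 19-cycle delay from the earlier example becomes a physical separation of a few metres:

```latex
\Delta t = \frac{19}{10^9\,\text{Hz}} = 19\,\text{ns}, \qquad
d = c\,\Delta t \approx (3 \times 10^8\,\text{m/s}) \times (19\,\text{ns}) \approx 5.7\,\text{m}
```

So the value 19 is, quite literally, stored as about 5.7 m of vacuum between the two bits (less in a real cable, where signals travel below c).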

It's the same idea as encoding data in a long shoelace, for example. We can take a long shoelace and cut it to a length s, where s in binary is the data we want to store. Or, instead of a shoelace, we can take two baseballs and throw them into the air, one after the other. The amount of time between when the two baseballs hit the ground (in some time unit) gives the encoded data. By delaying the throw of the second ball, we can encode as much data as we want, without the need for a storage medium.

We can also choose to split the data we want to send into groups and use the two-bit technique for each group. Doing this gives a process that sends n bits from A to B using only 1 + n/log n physical bits and a total wait of n^2/log n clock cycles.
 
  • #2
Interesting.
It occurs to me that in this case data transmission would be subject to the rules of SR.
You would only observe the correct delay if the transmitter and receiver are at rest relative to each other in the same frame.

I would have to think a while about this, but I wonder if this might constitute a simplified explanation of how GPS works.
 
  • #3
Of course in all that time, the receiving computer's bandwidth is monopolized by the sender, since any other transmission in the meantime will ruin the communication. Unless, that is, a b-bit identifier is sent instead of a single bit. In that case the bandwidth taken is 2b+k (for some small k, whatever the network requires, perhaps 2) and the protocol is limited to 2^b members communicating to a single computer.
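
A rough Python sketch of that identifier variant (the event framing and helper names are invented for illustration; a real link layer would need much more): each pulse carries its sender's b-bit ID, so the receiver can pair up the two pulses of each sender and time each pair independently.

```python
# Sketch of the b-bit identifier variant: each pulse is tagged with its
# sender's ID, so up to 2**b senders can share one receiver. The framing
# here is illustrative, not a real protocol.

CLOCK_HZ = 1_000_000_000   # receiver's clock: 1 GHz
B_BITS = 4                 # identifier width: at most 2**4 = 16 senders

def receive(pulses):
    """pulses: list of (arrival_time_seconds, sender_id) events.
    Pairs the first and second pulse of each sender and decodes the gap."""
    first_arrival = {}
    decoded = {}
    for t, sender in sorted(pulses):              # process in arrival order
        assert sender < 2 ** B_BITS, "sender ID must fit in b bits"
        if sender not in first_arrival:
            first_arrival[sender] = t             # this sender's bit-1
        else:
            gap = t - first_arrival[sender]       # bit-2: measure the delay
            decoded[sender] = round(gap * CLOCK_HZ)
    return decoded

# Two senders interleaved on the same link: sender 3 encodes 19, sender 7 encodes 5.
events = [(0.0, 3), (2e-9, 7), (7e-9, 7), (19e-9, 3)]
print(receive(events))   # {3: 19, 7: 5}
```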
 
  • #4
It's possible to compromise and be able to send n-bit strings in less than exponential time at the cost of sending more bits.

For example, suppose we split the n bits into n/log n groups of log n bits each. We send an initial bit that signals the start of transmission; then every subsequent bit establishes a delay which encodes one of the n/log n bit groups. This means that we send 1 + n/log n bits in total.

For sending each bit group, we must wait for up to 2^log n = n cycles to elapse. Since there are n/log n groups, we wait a total of n^2/log n cycles, which is feasible from an implementation perspective. This particular approach would be for one-time use only, but it shows that we can reach a balance between sent bits and elapsed cycles.
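
Here is a minimal Python sketch of that grouping (the zero-padding of the last group and the helper names are my own choices; a real transmission would also need the initial start bit and a jitter-free channel):

```python
# Sketch of the grouped variant: split the n-bit payload into groups of
# g = floor(log2(n)) bits and encode each group as a delay in clock cycles.

import math

def encode_groups(bits: str):
    """Split `bits` into g-bit groups; return one delay per group, plus g."""
    n = len(bits)
    g = max(1, int(math.log2(n)))                 # group size in bits
    padded = bits + "0" * (-n % g)                # zero-pad to a multiple of g
    delays = [int(padded[i:i + g], 2) for i in range(0, len(padded), g)]
    return delays, g

def decode_groups(delays, g, n):
    """Rebuild the original bit string from the measured per-group delays."""
    return "".join(format(d, f"0{g}b") for d in delays)[:n]

payload = "1001110101"                            # n = 10, so g = 3 bits per group
delays, g = encode_groups(payload)
print(delays)                                     # [4, 7, 2, 4]
assert decode_groups(delays, g, len(payload)) == payload
```

Each delay is at most 2^g ≈ n cycles and there are about n/log n of them, which is exactly where the 1 + n/log n bits and the n^2/log n total cycles come from.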

So for example, if we have any 100-bit string and B is running at 1 GHz, then we can send all the data over by:
- sending approximately 1 + 100/6 ≈ 18 bits (taking log 100 ≈ 6), and
- waiting a total of about 100·100/6 ≈ 1667 billionths of a second, i.e. roughly 1.7 microseconds.

So overall we can get away with sending fewer bits. This brings up interesting questions about the relationship between time and space. Superficially we might say that we're encoding some data in time, so we can get away with using fewer space bits.
 

