Distance between memory and processor

  • #1
Chemp_93

Homework Statement


This problem is from Spacetime Physics by Taylor

In one second some desktop computers can carry out one million instructions in sequence. Assume that carrying out one of these instructions requires transmission of data from the memory to the processor and transmission of the result back to the memory for storage.

a) What is the maximum average distance between memory and processor in a "one-megaflop" computer? Is this maximum distance increased or decreased if the signal travels through conductors at one half the speed of light in a vacuum?

Homework Equations



1 mega-flop = 1 sec / 10^6 instructions

c = 3 × 10^8 m/s


The Attempt at a Solution



For the first question asked;
With all that is given, I was not sure how to go about the problem so I just did some dimensional analysis to get a unit of meters.

c * 1 sec / 10^6 instructions ≈ 300 m/instruction of light travel time.

This turns out to be incorrect, my TA said that I must take into account that light takes a round trip.
My guess would be to multiply by 2, but don't the units of m/instruction already account for the round trip (since it is 'per instruction')?

Very little is given so I am almost positive that the calculations should not be mathematically complicated.

Can someone tell me what is wrong with my logic?
 
  • #2
One million instructions per second means one instruction takes 1 microsecond. That leaves 0.5 µs to get the data and 0.5 µs to put it back. How far does light travel in 0.5 µs?
 
  • #3
0.5 μs * c = 150 meters traveled.

But that makes no sense, because the memory and processor can't be 150 meters apart.
 
  • #4
Chemp_93 said:

1 mega-flop = 1 sec / 10^6 instructions
The units on the right are upside-down.
1 megaflop = 10^6 instructions/second

BTW, the "flop" part means "floating-point" instructions, which are more processor intensive than integer instructions.
 
  • #5
Mark44 said:
The units on the right are upside-down.
1 megaflop = 10^6 instructions/second

BTW, the "flop" part means "floating-point" instructions, which are more processor intensive than integer instructions.

Okay... that doesn't really help though.
 
  • #6
150m is the maximum distance ignoring any time required by the memory or processor to do their thing.
 
  • #7
CWatters said:
150m is the maximum distance ignoring any time required by the memory or processor to do their thing.

I figured that 150 m is most likely the solution. So if the memory and processor took more time 'to do their thing', wouldn't the distance between them increase?
 
  • #8
Think about it a bit more...

The system has 1 µs to get the data from memory to the CPU and back to the memory. If the memory and CPU "waste" some of that time, less time will be available to send the data over the wires. Therefore the maximum distance they can be apart will ...?
 
  • #9
Thanks CWatters, I see what you're saying.

But the math isn't telling me the same thing.
If L is the distance between memory and processor, then:

2L = c·t (where t is the time for each instruction)

L = c·t/2

If I increase the time for each instruction (t), wouldn't that increase L?
 
  • #10
Chemp_93 said:
If I increase the time for each instruction (t) wouldn't that increase L?

It would increase the permissible MAXIMUM length. There is no requirement that computer manufacturers put the memory in the next room. With the actual (much shorter) length used, the signals arrive with time to spare, so the computer COULD run faster. And the slower you make each instruction, the farther apart the memory and CPU COULD be.
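To put rough numbers on it (the instruction times below are arbitrary, chosen just to show the trend):

```python
# Maximum permissible CPU-memory separation for a given per-instruction time,
# assuming the entire time is available for the round trip (2L = c*t).
c = 3e8  # speed of light in vacuum, m/s

for t in (0.5e-6, 1e-6, 2e-6):   # arbitrary per-instruction times, in seconds
    L_max = c * t / 2            # half the time budget for each direction
    print(f"t = {t*1e6:.1f} us  ->  maximum L = {L_max:.0f} m")
```

Any time the memory or CPU spend doing their own work comes out of t before the division by two, which is why accounting for it (CWatters' point) shrinks the usable maximum distance rather than growing it.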
 
  • #11
Alright that makes more sense. Thanks for the help.
 
  • #12
Chemp_93 said:
0.5 μs * c = 150 meters traveled.

But that makes no sense because the distance between memory and processor can't be 150 meters apart

150 m is indeed huge compared to the size of a typical computer today, but remember that 1 million (mega) operations per second is also REALLY slow compared to a typical computer today. For example, my PC has a 3.47 GHz processor, which means it's doing up to 3.47 billion operations per second. That turns into around a 4 cm maximum limit on the distance that signals can travel. Add in all the other delays, and suddenly you may have to worry quite a lot about speed-of-light propagation delays.

Of course, as you can imagine, dealing with signal delays in a computer is MUCH more complicated than your question makes it out to be. But it's interesting to note that we actually have sort of hit a "maximum frequency" (at around 2-4 GHz) in the last decade or so, and it's become difficult to push past that. I think it's mostly due to heat dissipation issues at high frequency, but speed of light delay might very well play a part in that. The tendency now is to add multiple processors running in parallel ("dual core," "quad core," etc.) rather than boosting the frequency.
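If you want to play with the numbers, here's a quick sketch in Python. It keeps the textbook's simplification of one full memory round trip per operation, with signals at the speed of light; real memory buses don't work like this:

```python
# Round-trip-limited CPU-memory separation for a given operation rate,
# using the textbook simplification: one full memory round trip per operation.
c = 3e8  # speed of light in vacuum, m/s

for rate, label in [(1e6, "1 megaflop (the problem)"),
                    (3.47e9, "3.47 GHz (a modern CPU)")]:
    period = 1.0 / rate       # time per operation, s
    L_max = c * period / 2    # half the period for each direction
    print(f"{label}: L_max = {L_max:.3g} m")
```

That reproduces the 150 m figure from earlier in the thread and the roughly 4 cm limit I mentioned above.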
 
  • #13
thegreenlaser said:

Add in all the other delays, and suddenly you may have to worry quite a lot about speed-of-light propagation delays. ... The tendency now is to add multiple processors running in parallel ("dual core," "quad core," etc.) rather than boosting the frequency.

What exactly is speed-of-light propagation delay?

Also, are there limits on the physical size of these parallel processors that the speed of light imposes?
 
  • #14
Chemp_93 said:
What exactly is speed of light propagation delay?
It's what you've been calculating.

Being pessimistic, here's what could happen inside a poorly designed computer:
  1. The CPU starts processing an operation and determines it needs an item from memory.
  2. The CPU sends a fetch command to memory to retrieve those items.
  3. The fetch command travels at a speed that is at most the speed of light from the CPU to memory.
  4. The memory module receives the fetch command.
  5. The memory module looks up the requested items.
  6. The memory module sends the requested items to the CPU.
  7. The retrieved data travels at a speed that is at most the speed of light from memory to the CPU.
  8. The CPU computes the result of the operation.
  9. The CPU sends the result and the address in which to store the result to memory.
  10. The store command travels at a speed that is at most the speed of light from the CPU to memory.
  11. The memory module receives the store command.
  12. The memory module processes the store command.
  13. The memory module sends an acknowledgment to the CPU.
  14. The acknowledgment travels at a speed that is at most the speed of light from memory to the CPU.
  15. The CPU receives the acknowledgment.
  16. The CPU starts processing the next operation (repeat step #1).

In this pessimistic architecture, the speed of light propagation delay rears its ugly head four different times (steps 3, 7, 10, and 14). This is pessimistic due to items 10 to 15. The CPU can bypass those last six steps and start with the next command if we can assume that the memory module will store the data as requested. That still leaves two propagation delays. Note well: omitting those steps might not be a good idea if a computer has multiple CPUs.

The calculation in Taylor only looks at steps 3 and 7. It ignores that the other steps take time, too, and it ignores that there is a time delay in the signal itself. It implicitly assumes a zero-width signal. The amount of information that can be transmitted in a zero-width signal is of course zero. The calculation in Taylor gives an upper limit on the distance between CPU and memory.

After taking all those other considerations into account, you'll find that a modern computer is physically impossible. (Steps 5 and 12 in particular are extremely slow.) There is no way to achieve the speeds a modern computer attains with that pessimistic architecture, or even with the optimistic architecture that results from bypassing steps 10 to 15.

Modern computers attain their ridiculously high operating speeds by not going to memory if at all possible. They keep a local copy (cache) of parts of memory on the CPU, and only go to memory when that local cache does not contain the needed information. Even that isn't good enough. Modern computers have a hierarchy of memory caches.
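To get a feel for the relative sizes, here's a rough sketch; the latency numbers below are ballpark assumptions for illustration, not measurements of any particular machine:

```python
# Compare speed-of-light propagation delay over a plausible CPU-to-DRAM
# distance with ballpark access latencies (illustrative numbers only).
c = 3e8                      # m/s, upper bound on signal speed
distance = 0.10              # m, assumed CPU-to-memory trace length

prop_delay = 2 * distance / c          # round trip at the speed of light
dram_latency = 60e-9                   # ~60 ns main-memory access (assumed)
l1_cache_latency = 1e-9                # ~1 ns on-die cache hit (assumed)

print(f"round-trip propagation over {distance*100:.0f} cm: {prop_delay*1e9:.2f} ns")
print(f"assumed DRAM access latency: {dram_latency*1e9:.0f} ns")
print(f"assumed L1 cache hit latency: {l1_cache_latency*1e9:.0f} ns")
```

Even at the speed of light the round trip over ~10 cm costs under a nanosecond, while the memory lookup itself (step 5 above) costs tens of nanoseconds, which is why keeping data in on-die caches matters so much.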
 

Related to Distance between memory and processor

1. What is the distance between memory and processor?

The distance between memory and processor is the physical distance between the two components in a computer system. This distance can vary depending on the design and layout of the system, but is typically very small, measured in millimeters or centimeters.

2. Why is the distance between memory and processor important?

The distance between memory and processor is important because it affects the speed and efficiency of data transfer between the two components. A shorter distance allows for faster communication and processing, while a longer distance can lead to delays and slower performance.

3. How does the distance between memory and processor impact computer performance?

The distance between memory and processor can have a significant impact on computer performance. As mentioned before, a shorter distance allows for faster data transfer and processing, resulting in better overall performance. A longer distance can cause delays and bottlenecks, leading to slower performance.

4. Can the distance between memory and processor be changed?

The physical distance between memory and processor cannot be changed after a computer system has been built. However, the design and layout of the system can be optimized to minimize the distance between these components and improve performance.

5. How does the distance between memory and processor differ in different types of computers?

The distance between memory and processor can vary between different types of computers. For example, in a desktop computer, the memory and processor are usually located on the same circuit board, resulting in a very short distance between them. In a laptop or mobile device, the components may be spread out more, resulting in a slightly longer distance. Additionally, in larger and more complex systems, such as servers, the distance between memory and processor may be greater due to the need for more memory and processing power.
