Memory Location of C Variables: Random or Type-Specific?

In summary: int and float variables are not grouped into type-specific regions of memory, nor are their addresses truly random. Where a variable lives depends on its storage duration: static data is typically built into the executable, automatic (local) data lives on the call stack, and malloc'd data comes from the heap. The exact addresses are implementation dependent.
  • #1
kidsasd987
If we declare variables with different type specifiers, is the memory location (address) assigned according to the type specifier, or is it just random?

For example:

1. int variables are assigned locations close to each other, or in a specific region of memory, while float and double variables get different locations.

2. If 1 is wrong, the locations are randomly assigned.

Thank you.
 
  • #2
Where separate variables are stored in a language like C is implementation dependent. There are usually directives that can place a variable at a memory address of your choice, but even then, on a system with virtual memory, that address could be completely different from its physical memory address on the hardware.
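
If you want to see what your own implementation does, a quick (implementation dependent) experiment is to print the addresses of a few locals of different types; the values you get, and their ordering, are entirely up to the compiler and platform, and may even change from run to run:

Code:
#include <stdio.h>

int main(void) {
    int    i = 1;
    float  f = 2.0f;
    double d = 3.0;
    int    j = 4;

    /* Print wherever the compiler happened to put these locals. */
    printf("i at %p\n", (void *) &i);
    printf("f at %p\n", (void *) &f);
    printf("d at %p\n", (void *) &d);
    printf("j at %p\n", (void *) &j);
    return 0;
}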
 
  • #3
kidsasd987 said:
1. int variables are assigned location closer to each other or specific region of memory, and floats, double have different locations.

That would make no sense since memory allocation is a run-time activity and variables come and go based not on what kind they are but on the calling order of the subroutines in which they are created. When the run-time code requests a memory block from the OS, it is just that ... a block of memory. The OS does not care what it is going to be used for.

All the variables for a subroutine, regardless of type, are created when that subroutine is called (unless they are declared static), and then if/when that sub calls another sub, the 2nd sub's variables are allocated memory. When a sub is exited, it tells the OS to de-allocate the memory it was given (although actual de-allocation may be delayed depending on the kind of garbage collection system being used).
 
  • #4
What post #2 and #3 said.

But on most types of computer, the hardware works faster if variables are "aligned" to start at preferred positions in memory. For example, a variable that holds a 32-bit integer, which takes up 4 bytes of memory, will be allocated so its starting address is always a multiple of 4. 64-bit floating point variables might also start on multiples of 4 bytes, or on some systems it would be faster if they started on multiples of 8 bytes.

So, if your program allocates variables of different sizes, there may be small unused "holes" in memory between them.

That is not quite the same as your example 1, but it might be what gave you the idea.
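
One place where those alignment "holes" are easy to observe is inside a struct. Here is a small illustration; the exact amount of padding is implementation dependent:

Code:
#include <stdio.h>

/* On a typical platform where double must be 8-byte aligned, the compiler
   inserts padding after each char, so sizeof(struct padded) is usually 24
   rather than 1 + 8 + 1 = 10. */
struct padded {
    char   c1;   /* 1 byte, then (typically) 7 bytes of padding        */
    double d;    /* 8 bytes, aligned to a multiple of 8                */
    char   c2;   /* 1 byte, then trailing padding so arrays stay aligned */
};

int main(void) {
    printf("sizeof(struct padded) = %zu\n", sizeof(struct padded));
    return 0;
}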
 
  • #5
phinds said:
That would make no sense since memory allocation is a run-time activity and variables come and go based not on what kind they are but on the calling order of the subroutines in which they are created.

A common arrangement is a "run-time stack" or "call stack." Suppose the following sequence of calls takes place:

Function A calls function B.
Function B calls function C.
Function C returns (to B).
Function B calls function D.
Function D returns (to B).
Function B returns (to A).
Function A calls function E.

Call the block of memory associated with each function (its variables, parameters, and other housekeeping information) A, B, C, etc. Then the stack grows and shrinks as follows, as the functions are called and return:

Code:
      C     D
   B  B  B  B  B     E
A  A  A  A  A  A  A  A

http://en.wikipedia.org/wiki/Call_stack

Within each block, the variables and parameters might be arranged in the sequence in which the function declares them, possibly with "padding" as needed so each one starts on a 4-byte boundary (or whatever is most efficient for the processor).
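
One way to watch those stack frames come and go is to print the address of a local variable at different call depths. This is only a rough sketch: the direction of growth and the spacing between frames are implementation details, not something the C standard guarantees.

Code:
#include <stdio.h>

/* Each call to frame() gets its own block of automatic storage, so the
   address of its local variable changes with the call depth. */
static void frame(int depth) {
    int local = depth;
    printf("depth %d: local at %p\n", depth, (void *) &local);
    if (depth < 3)
        frame(depth + 1);
}

int main(void) {
    frame(0);
    return 0;
}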
 
  • #6
In modern computers the memory address of a static variable is just a convenient way to reference it. The data at that address is shifted through several levels of memory, so that a small amount of very fast memory holds the data currently being used while a large amount of slower memory keeps data that has not been accessed for a while. There can be several levels of cache memory (each with a different speed) plus main memory. Memory is divided into blocks (cache lines) that are moved together. Because memory is moved in blocks and speed is the top priority, data that is used close together in the code is placed close together in memory; that way it ends up in the fastest memory together. That matters more than the type of the variable.
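
A common way to see that effect for yourself (just a sketch, not from the thread) is to walk a large 2D array row by row versus column by column. The row-wise walk touches memory in the order it is laid out, so it usually runs noticeably faster:

Code:
#include <stdio.h>
#include <time.h>

#define N 2048

static double a[N][N];   /* one big block, laid out row by row (row-major) */

int main(void) {
    double sum = 0.0;
    clock_t t0, t1, t2;

    t0 = clock();
    for (int i = 0; i < N; i++)      /* row-wise: walks memory sequentially */
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    t1 = clock();
    for (int j = 0; j < N; j++)      /* column-wise: jumps N*8 bytes per step */
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    t2 = clock();

    printf("sum = %g\n", sum);
    printf("row-wise:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("column-wise: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}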
 
  • #7
kidsasd987 said:
1. int variables are assigned location closer to each other or specific region of memory, and floats, double have different locations.

2. if 1 is wrong, the location is randomly assigned.
Neither is correct.

The C memory model (2011 version of the standard) recognizes four storage durations for data: static, thread, automatic, and allocated. I'm going to ignore thread local data. For one thing, where that goes is highly implementation dependent. For another, your question is very basic. Threading is anything but a basic concept.

Static data are the variables you declare at file scope, the static variables you declare at function scope, and string constants such as the "Hello, world!" in const char * homework = "Hello, world!";. Automatic data include the arguments to and return value from functions, and non-static variables declared at function (block) scope. Finally, allocated data are chunks of data created by malloc and released by free.

Where those three types of data "live" is implementation dependent, but most implementations follow a widely used practice: static data are built into your executable at compile/link time, as part of your program image; automatic data live on the "call stack"; and allocated data come from the memory heap.

Note that none of these organize your data by data type.
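
A small, implementation dependent way to see those three regions is to print the address of a static variable, an automatic variable, and a malloc'd block; on most desktop systems the three addresses fall in clearly separate ranges:

Code:
#include <stdio.h>
#include <stdlib.h>

static int static_var = 42;          /* static storage: part of the program image */

int main(void) {
    int automatic_var = 7;           /* automatic storage: on the call stack */
    int *allocated = malloc(sizeof *allocated);   /* allocated storage: on the heap */

    printf("static:    %p\n", (void *) &static_var);
    printf("automatic: %p\n", (void *) &automatic_var);
    printf("allocated: %p\n", (void *) allocated);

    free(allocated);
    return 0;
}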
 
  • #8
jtbell said:
Within each block, the variables and parameters might be arranged in the sequence in which the function declares them, possibly with "padding" as needed so each one starts on a 4-byte boundary (or whatever is most efficient for the processor).

Yeah, that's how I understand it as well.
 
  • #9
By the way, kidsasd987, this is an EXCELLENT kind of question to be asking. I worry sometimes that many young programmers today have no idea how computers work. They don't really program the computers at all; they program in a language that insulates them from the computer.

That works just fine as long as nothing goes wrong, but debugging can be pretty much impossible if you don't really know what's going on. You are working on learning what's really going on, so keep it up!

Here's one for you, in pseudo code:

Code:
declare A, B, and C as 32-bit floating point variables
declare D as a boolean variable
set A = 1.4
set B = 2.6
set C = A + B
set D = true if C = 4.0 and false if C is not = 4.0

Why in most computers does D turn out to be false? (Actually, it should always be false, but I've been told that there are implementations where it comes out true, although I've never experienced that myself.)
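
If you want to try it yourself, here is a direct C translation of that pseudocode; what it actually prints depends on your compiler and floating point hardware, as the rest of the thread discusses:

Code:
#include <stdio.h>
#include <stdbool.h>

int main(void) {
    float a = 1.4f;            /* neither 1.4 nor 2.6 is exactly representable in binary */
    float b = 2.6f;
    float c = a + b;
    bool  d = (c == 4.0f);     /* exact equality test on floating point values */

    printf("c = %.9g, d is %s\n", c, d ? "true" : "false");
    return 0;
}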
 
Last edited:
  • #10
phinds said:
Code:
set A = 1.4
set B = 2.6
set C = A + B
set D = true if C = 4.0 and false if C is not = 4.0

Why in most computers does D turn out to be false? (Actually, it should always be false but I've been told that there are implementations where it comes out true although I've never experienced that myself)
It used to be a real problem that D was false. I am surprised that you see that happen these days. I have not seen the problem you describe for a very long time. The introduction of hidden bits and the later adoption of the IEEE 754 Floating Point Arithmetic standard has largely eliminated any worries about that. It is still in the programming standards to not compare floats for equality, but that is because not all processors have completely adopted IEEE 754 (although it is 30 years old).
 
  • #11
FactChecker said:
It used to be a real problem that D was false. I am surprised that you see that happen these days. I have not seen the problem you describe for a very long time. The introduction of hidden bits and the later adoption of the IEEE 754 Floating Point Arithmetic standard has largely eliminated any worries about that. It is still in the programming standards to not compare floats for equality, but that is because not all processors have completely adopted IEEE 754 (although it is 30 years old).

If you don't understand the basic fact that almost all floating point values are approximate, you shouldn't be writing software that uses floating point for anything serious IMO.

Relying on "hidden bits" to keep you safe is about as useful as relying on magic, or prayer.

See http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html (which was written by somebody who doesn't believe in magic, but does understand IEEE-754).

As for relying on standards as an alternative to believing in magic:
Many programmers may not realize that even a program that uses only the numeric formats and operations prescribed by the IEEE standard can compute different results on different systems. In fact, the authors of the standard intended to allow different implementations to obtain different results. Their intent is evident in the definition of the term destination in the IEEE 754 standard: ...
(from the above reference).
 
Last edited:
  • #12
AlephZero said:
If you don't understand the basic fact that almost all floating point values are approximate, you shouldn't be writing software that uses floating point for anything serious IMO.
All modern military airplanes use floating point calculations in their flight controls.
Relying on "hidden bits" to keep you safe is about as useful as relying on magic, or prayer.

See http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html (which was written by somebody who doesn't believe in magic, but does understand IEEE-754).

As for relying on standards as an alternative to believing in magic:

(from the above reference).
After decades of telling new-hires that they should never compare floating point variables for equality, I began to realize that the comparisons that used to fail no longer do. Then I tried examples that used to be a problem due to the binary representation of their fractional part. I could not find any failures. That is all I can say. If you have an example that still fails, I would like to see it. @phinds' example above, run on my PC, gives the correct answer.

EDIT: Example was posted by @phinds
 
Last edited:
  • #13
FactChecker said:
It used to be a real problem that D was false. I am surprised that you see that happen these days. I have not seen the problem you describe for a very long time. The introduction of hidden bits and the later adoption of the IEEE 754 Floating Point Arithmetic standard has largely eliminated any worries about that. It is still in the programming standards to not compare floats for equality, but that is because not all processors have completely adopted IEEE 754 (although it is 30 years old).

Other than the implied leading one in normalized numbers, IEEE 754 has no "hidden bits". I suspect the "hidden bits" about which you are writing are the extra 16 bits in Intel's 80 bit floating point registers. Those 80 bit floating point numbers are not part of the floating point standard. Those "hidden bits" are lost as soon as you write those registers to memory.


FactChecker said:
All modern military airplanes use floating point calculations in their flight controls.
Yep, they do. And any code that assumes that 1.6+2.4==4 is roundly rejected during peer review.

After decades of telling new-hires that they should never compare floating point variables for equality, I began to realize that the comparisons that used to fail, no longer do. Then I tried examples that used to be a problem due to the binary representation of their fractional part. I could not find any failures. That is all I can say. If you have an example that still fails, I would like to see it.

Here's one. YMMV, depending on your machine and your compiler.
Code:
// C or C++
#include <stdio.h>

int main () {
   double x =  25.4 / 10.0  *  1.0 / 2.54 ;
   double y = (25.4 / 10.0) * (1.0 / 2.54);
   printf ("%g %s %g\n", x, ((x == y) ? "=" : "!="), y); 
}

Code:
# python
x =  25.4 / 10.0  *  1.0 / 2.54
y = (25.4 / 10.0) * (1.0 / 2.54)
if (x == y) :
   comp = '=' 
else :
   comp = '!='
print str(x) + ' ' + str(comp) + ' ' + str(y)

Code:
# perl
$x =  25.4 / 10.0  *  1.0 / 2.54 ;
$y = (25.4 / 10.0) * (1.0 / 2.54);
printf ("%g %s %g\n", $x, (($x == $y) ? "=" : "!="), $y);
 
Last edited:
  • #14
D H said:
Here's one. YMMV, depending on your machine and your compiler.
Code:
# perl
$x =  25.4 / 10.0  *  1.0 / 2.54 ;
$y = (25.4 / 10.0) * (1.0 / 2.54);
printf ("%g %s %g\n", $x, (($x == $y) ? "=" : "!="), $y);

Well, I stand corrected. Thanks! That is a very good example.

I don't want to hijack this thread, but is this because of the inaccuracy of storing the fractional part of intermediate calculations in binary numbers?

It's good to know that the standards are correct. As far as my misconception, I'll swallow my pride.
 
  • #15
The problem is always one of rounding error. If you had a decimal computer you would have the same problem, just with a somewhat different set of numbers and calculations, so the problem isn't binary or any other particular number system. It's that computers have a finite number of "decimal places" or "binary places" with which to represent numbers that cannot be represented PRECISELY that way. No decimal computer can precisely store the exact number 1/3, for example.
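
A standard illustration of that finite-precision point (not specific to this thread) is adding 0.1 to itself ten times; since 0.1 has no exact binary representation, the accumulated sum usually isn't exactly 1.0:

Code:
#include <stdio.h>

int main(void) {
    double sum = 0.0;
    for (int i = 0; i < 10; i++)
        sum += 0.1;            /* 0.1 cannot be represented exactly in binary */

    printf("sum = %.17g, sum == 1.0 is %s\n",
           sum, (sum == 1.0) ? "true" : "false");
    return 0;
}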
 
  • #16
FactChecker said:
All modern military airplanes use floating point calculations in their flight controls.
The issue of comparing floating point numbers has been well covered in the posts before.
I will just add this:
I have no doubt that all military airplanes use floating point calculations for navigation and in some cases for actual control. However, I would be surprised if there were many instances where comparing for equality existed - even for an approximate comparison. For example, checking a denominator for zero before dividing ignores the fact that if the denominator is merely close to zero, you will be saved from an arithmetic exception only to die from lack of precision. And so, what you need to do is determine the "other" method of making the calculation and decide what region around zero is best handled using that alternative.
It would be the same for every other comparison I can think of. After all, this isn't a Math exercise, it's an Engineering exercise - where limits are maximums, minimums, or tolerances. And, if you are doing a Math exercise involving specific values of real numbers, you probably shouldn't be using floating point.

There is one example that will work, but which I would be very uncomfortable with. And if it occurred in a transportation system - such as a military aircraft - I would be alarmed. That would be this:
Code:
#define VALUE_IS_LATE -20.0
#define VALUE_IS_MISSING -21.0

double fFuelLevel = VALUE_IS_MISSING;

...

if(fFuelLevel == VALUE_IS_MISSING) { ... }
Assuming fuel level can never be negative, it will work - but will it always work? Even if the development environment changes?
A better way to do this would be to create a class that included the floating point number and a status flag as member variables.
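
A minimal sketch of that idea in C, using a struct with an explicit status field (the names here are purely illustrative):

Code:
#include <stdbool.h>
#include <stdio.h>

/* Carry the status alongside the value instead of encoding "late" or
   "missing" as magic floating point constants. */
typedef enum { FUEL_OK, FUEL_LATE, FUEL_MISSING } fuel_status;

typedef struct {
    double      level;
    fuel_status status;
} fuel_level;

static bool fuel_is_missing(const fuel_level *f) {
    return f->status == FUEL_MISSING;   /* integer compare, no float equality needed */
}

int main(void) {
    fuel_level f = { 0.0, FUEL_MISSING };
    printf("missing? %s\n", fuel_is_missing(&f) ? "yes" : "no");
    return 0;
}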
 
Last edited:
  • #17
https://software.intel.com/en-us/articles/intel-decimal-floating-point-math-library
This provides decimal, not binary, support for IEEE 754-2008 and is meant to meet legal requirements for decimal floating point operations.

And C# supports a 128-bit decimal floating point data type called Decimal.
http://msdn.microsoft.com/en-us/library/364x0z75.aspx

Assuming these are libraries, they are likely not based on native decimal opcodes. The Fujitsu SPARC M10 does have native decimal floating point opcodes, and is therefore faster than a library implementation.
http://www.fujitsu.com/global/products/computing/servers/unix/sparc/concept/

So I think everyone posting here has a solid piece of a correct view of things, just not quite the same context...

I, too, would like to see a citation of some older (10+ years) decimal floating point CPUs in aviation, rather than library implementations, which is how I took the comments on aviation computer architecture.
 
  • #18
.Scott said:
I have no doubt that all military airplanes use floating point calculations for navigation and in some cases for actual control. However, I would be surprised if there were many instances where comparing for equality existed - even for an approximate comparison.
You are exactly right. There are a lot of floating point calculations in current safety-critical flight controls, but the standards (MISRA and others) forbid checking floats for equality. This is common sense in general, since the odds of two floats being exactly equal are so small. My misconception was that I thought IEEE 754 solved the problem when two simple, mathematically identical values were compared. @D H's example is a perfect one to show that I was wrong. I will want to study it more. Thanks all for clearing up my misconception. I am afraid I have hijacked this thread. Sorry.
 

Related to Memory Location of C Variables: Random or Type-Specific?

1. What is the memory location of a C variable?

Where a C variable is stored depends on its storage duration: static data is typically placed in the program image when it is compiled and linked, automatic (local) variables live on the call stack, and memory obtained with malloc comes from the heap. The exact addresses are chosen by the compiler and the runtime environment.

2. Is the memory location of a C variable random?

No, it is not random, but it is not fixed by the language either. The address is determined by the compiler and the runtime environment; for automatic and allocated variables it can differ from one function call, or one run of the program, to the next.

3. Does the type of variable affect its memory location in C?

Not in the sense of separate regions per type. A variable's type determines its size and alignment, which can introduce padding between variables, but int, float, and double variables are not sorted into different areas of memory.

4. Why is the memory location of a C variable important?

The memory location of a C variable determines how the variable can be accessed and manipulated in a program, for example through pointers. Placement also affects performance, since alignment and cache locality depend on where data sits in memory.

5. Can the memory location of a C variable be changed?

A named variable keeps the same address for its lifetime, but that lifetime depends on its storage duration: a static variable has one fixed address for the whole program, while an automatic variable gets a (possibly different) stack address each time its function is entered.
