How come we haven't advanced since binary code?

In summary, binary code has been with us since the 1940s, and switching to a ternary code would require fundamentally changing the way computers process information.
  • #1
Femme_physics
This may sound like a stupid question, but binary code seems so "simple", yet it's been with us since the 1940s. What would be wrong with, say, ternary code?
 
  • #2
Computers process information by using pulses of electricity. Binary essentially represents this electricity. A 1 in binary is a pulse, a 0 is no pulse. I'm sure it's a bit more complicated than that, but I think that would do as a rough idea. I'm not sure if any kind of code could replace binary without fundamentally changing the way computers process information.

I'm also not sure I understand you when you say simple. In my mind, the higher level the code, the simpler it is; since it is essentially getting closer and closer to spoken language. I know that to write some of the programs I write in C or C++ in Assembly, it would be a very difficult and complex task. To write the program in binary would be unthinkable.
 
  • #3
Oh, that actually makes a lot of sense. :) Thanks.
 
  • #4
KrisOhn said:
Computers process information by using pulses of electricity. Binary essentially represents this electricity. A 1 in binary is a pulse, a 0 is no pulse.
If we're talking about computer memory, such as RAM, it's not pulses of electricity - it's two different voltage levels, high or low. One voltage level corresponds to 1 and the other corresponds to 0. If we're talking about storage devices such as hard disks, each one of the trillions of magnetic domains can be read or changed (magnetized) to one of two orientations by the drive head.
KrisOhn said:
I'm sure it's a bit more complicated than that, but I think that would do as a rough idea. I'm not sure if any kind of code could replace binary without fundamentally changing the way computers process information.

I'm also not sure I understand you when you say simple. In my mind, the higher level the code, the simpler it is; since it is essentially getting closer and closer to spoken language. I know that to write some of the programs I write in C or C++ in Assembly, it would be a very difficult and complex task. To write the program in binary would be unthinkable.
In one way programs written in an assembly language are very simple. Most instructions in assembly cause one thing to happen, such as moving a value from memory into a particular register or adding two numbers.
 
  • #5
Mark44 said:
If we're talking about computer memory, such as RAM, it's not pulses of electricity - it's two different voltage levels, high or low. One voltage level corresponds to 1 and the other corresponds to 0. If we're talking about storage devices such as hard disks, each one of the trillions of magnetic domains can be read or changed (magnetized) to one of two orientations by the drive head.
Yes, I understand that, I meant what I said to be a rough approximation of what happens.

In one way programs written in an assembly language are very simple. Most instructions in assembly cause one thing to happen, such as moving a value from memory into a particular register or adding two numbers.

I had not considered this, I can see what is meant by simple now.
 
  • #6
My point was that simplicity is in the eye of the beholder. From one perspective, simplicity is being able to do complicated things with a minimum of code. For example, I remember using an implementation of BASIC that included matrix operations. (This type of BASIC ran on some kind of minicomputer back in the mid 70s.) You could add together two matrices and store the sum in another matrix using this syntax: C = A + B. Other high-level languages, such as C and Fortran, required considerably more lines of code to do the same thing.
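To make the contrast concrete, here is a minimal Python sketch (Python standing in for a language without built-in matrix operations; the `mat_add` helper is made up for this example). Where the matrix BASIC above needed only `C = A + B`, explicit loops over rows and columns are required:

```python
# Without built-in matrix operations, elementwise addition needs
# explicit iteration over rows and columns.
def mat_add(A, B):
    """Return the elementwise sum of two equally-sized matrices."""
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[10, 20], [30, 40]]
C = mat_add(A, B)  # [[11, 22], [33, 44]]
```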

From another perspective, simplicity could be from the point of view of what individual assembly instructions are doing, which I mentioned already.
 
  • #7
Femme_physics said:
This may sound like a stupid question, but binary code seems so "simple", yet it's been with us since the 1940s. What would be wrong with, say, ternary code?
Search Wiki for Octal and Hexadecimal.
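For illustration, a short Python snippet showing that octal and hexadecimal are not alternatives to binary but denser spellings of the same stored bits (each octal digit covers three bits, each hex digit four):

```python
# One stored value, three notations for the same bits.
n = 0b11010110          # 214 in decimal
print(bin(n))           # 0b11010110
print(oct(n))           # 0o326  (the same bits regrouped 3 at a time)
print(hex(n))           # 0xd6   (the same bits regrouped 4 at a time)
```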
 
  • #8
I vaguely remember, perhaps 10-20 years ago when manufacturers were trying to cram more bits into the same space, one company announcing a RAM or ROM that used more than just high or low voltage. I think they used four voltage levels, so they could store two bits in the space of one transistor or capacitor.
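That idea (today called a multi-level cell) can be sketched in Python; the four voltage levels and the `read_cell` decoder below are hypothetical values for illustration, not any real device's specification:

```python
# Hypothetical multi-level cell: four voltage levels, two bits per cell.
# Gray-coded so that adjacent levels differ by only one bit.
LEVELS = {0.0: (0, 0), 1.0: (0, 1), 2.0: (1, 1), 3.0: (1, 0)}

def read_cell(voltage):
    """Decode a (possibly noisy) voltage to the nearest level's two bits."""
    nearest = min(LEVELS, key=lambda level: abs(level - voltage))
    return LEVELS[nearest]

read_cell(1.9)  # noisy reading, decoded as the bits stored at level 2.0
```

Note that the levels are only 1.0 apart instead of the full swing a two-level cell would have, which is why noise tolerance shrinks as levels are added.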
 
  • #9
Isn't quantum computing trying to do that? From what I read (well, understood), the photon states are coupled and somehow can represent combinations of states, including both 1 and 0 simultaneously. I would like more of an explanation on how that works from someone who's more familiar with it.
 
  • #10
timthereaper said:
Isn't quantum computing trying to do that? From what I read (well, understood), the photon states are coupled and somehow can represent combinations of states, including both 1 and 0 simultaneously. I would like more of an explanation on how that works from someone who's more familiar with it.

This is a little different. Here you perform a series of quantum operations on each qubit such that, by the time you are done and measure the results, there is a probability distribution over each bit being 1 or 0. Designing a quantum algorithm is all about structuring your operations so that at the end there is a better-than-50% probability that the final answer is correct, and running the algorithm involves repeating it multiple times and taking the majority answer. When reasoning mathematically about a quantum algorithm, we represent this by saying that, at each moment before measurement, the qubit has a value which is a quantum superposition of the states |0> and |1>, and that superposition lets us compute the probability distribution. However, whether the qubit actually is in multiple states simultaneously, combining |0> and |1> until the measurement and then randomly "collapsing" to one, depends on which interpretation of quantum mechanics you prefer.
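The repeat-and-take-the-majority procedure described above can be mimicked classically. This Python sketch just assumes some algorithm that returns the correct bit with probability 0.75 per run (the 0.75 figure and the function names are made up for illustration):

```python
import random

def noisy_run(p_correct=0.75):
    """Stand-in for one run of an algorithm that is right 75% of the time."""
    return 1 if random.random() < p_correct else 0   # 1 = correct answer

def majority_answer(runs=101):
    """Repeat many times; trust the majority verdict."""
    return sum(noisy_run() for _ in range(runs)) > runs // 2
```

With 101 runs at 75% per-run accuracy, the chance that the majority verdict is wrong is vanishingly small, which is why "better than 50% per run" is all the algorithm designer needs.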

I suggest reading Scott Aaronson's "quantum computing since Democritus" series, on the right hand bar here (note: the alphabetical order is a little jumbled) http://www.scottaaronson.com/blog/
 
  • #11
Some forms of data transmission encode more than one bit per frequency cycle, using combinations of amplitude and phase shifting to encode the data.
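As a sketch of that amplitude/phase idea, here is a hypothetical QPSK-style mapping in Python (constellation values chosen for illustration): each pair of bits selects one of four phases, so a single symbol per cycle carries two bits.

```python
import cmath
import math

# Four unit-amplitude points, one per 2-bit pattern, Gray-coded so
# that adjacent phases differ by only a single bit.
CONSTELLATION = {
    (0, 0): cmath.exp(1j * math.pi / 4),
    (0, 1): cmath.exp(1j * 3 * math.pi / 4),
    (1, 1): cmath.exp(1j * 5 * math.pi / 4),
    (1, 0): cmath.exp(1j * 7 * math.pi / 4),
}

def modulate(bits):
    """Map a flat bit list to complex symbols, two bits per symbol."""
    pairs = zip(bits[::2], bits[1::2])
    return [CONSTELLATION[pair] for pair in pairs]
```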

In computers themselves, some binary math operations, such as addition or multiplication, are sped up by doing more of the work in parallel to reduce the number of gate propagation delays.
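One classic instance of that parallelism is carry-lookahead addition. The Python sketch below (function name illustrative) computes the generate/propagate signals that such an adder uses; the sequential loop here is, in real hardware, flattened into parallel logic so no carry has to "ripple" through every stage:

```python
def add_bits(a_bits, b_bits):
    """Add two equal-length bit lists (least significant bit first)."""
    g = [a & b for a, b in zip(a_bits, b_bits)]   # generate: carry created here
    p = [a ^ b for a, b in zip(a_bits, b_bits)]   # propagate: carry passed on
    c = [0]                                       # carry into each position
    for gi, pi in zip(g, p):                      # hardware unrolls this loop
        c.append(gi | (pi & c[-1]))
    # sum bits, plus the final carry-out as the top bit
    return [pi ^ ci for pi, ci in zip(p, c)] + [c[-1]]

add_bits([1, 1], [1, 0])  # 3 + 1 (LSB first) -> [0, 0, 1], i.e. 4
```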
 
  • #12
I don't see why people mix up number bases (I guess that's what you meant by binary code) with programming languages. A programming language is just a tool to control the machine; it doesn't have anything to do with the binary system. You could use C, assembly, Python, or Java even on a ternary or base-20 machine. The base is just a way to represent information.
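To illustrate that the base is only notation, here is a small Python helper (hypothetical, written for this example) that renders one value in any base:

```python
DIGITS = "0123456789abcdefghij"

def to_base(n, base):
    """Render a non-negative integer in the given base (2 to 20)."""
    digits = []
    while n:
        digits.append(DIGITS[n % base])
        n //= base
    return "".join(reversed(digits)) or "0"

# 42 is '101010' in binary, '1120' in ternary, '22' in base 20 --
# the quantity is the same; only the notation changes.
```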

The reason we use binary and not ternary is noise. Voltage levels are hard to keep stable, and if we used a ternary system we would get false values much more often with the technology we have nowadays.
 
  • #13
Pithikos said:
The reason we use binary and not ternary is noise. Voltage levels are hard to keep stable, and if we used a ternary system we would get false values much more often with the technology we have nowadays.

That's obviously true, but I can see things changing as quantum computing matures. However, in my opinion simple is best, and binary is very simple :)
 
  • #14
There have indeed been various computers based on non-base 2 architecture.

The IBM 650 used base 10: http://en.wikipedia.org/wiki/IBM_650
The UNIVAC 1107 used 36-bit words (still binary, but not a power-of-two word size): http://en.wikipedia.org/wiki/UNIVAC_1107

However base 2 is used almost universally now because you can build computer memory efficiently with one transistor representing on/off or zero/one.

Building transistor storage with 3 or more states just hasn't proven effective or efficient. DRAM rules! http://upload.wikimedia.org/wikipedia/commons/3/3d/Square_array_of_mosfet_cells_read.png
 

Related to How come we haven't advanced since binary code?

1. Why is binary code still used instead of more advanced systems?

Binary code is still used because it is the fundamental form of data representation in computers. Its two digits, 0 and 1, map directly onto two-state hardware (a switch on or off, a voltage high or low), which makes information easy to process and store reliably.

2. Has there been any progress in advancing beyond binary code?

There has been research into alternatives such as ternary (base-3) logic and multi-level memory cells. Note, however, that hexadecimal and octal are not advances beyond binary: they are compact notations for the same underlying bits, with each octal digit standing for three bits and each hexadecimal digit for four.

3. Why do we still rely on binary code for computing?

We still rely on binary code for computing because it has proven to be a reliable and efficient method for data processing. It is also deeply integrated into the design of computer hardware and software, making it difficult to completely move away from.

4. Are there any limitations to advancing beyond binary code?

Yes, there are limitations to advancing beyond binary code. While it is possible to use more advanced systems for data representation, it would require significant changes to existing computer hardware and software, which can be costly and time-consuming.

5. Will we ever completely move away from binary code in the future?

It is unlikely that we will completely move away from binary code in the near future. While advancements in technology may allow for more efficient and complex systems for data representation, binary code will likely continue to be the basis for computing due to its reliability and compatibility with existing systems.
