How Do You Get From 1's and 0's to Complex Software and Digital Information?

  • #1
shushi_boi
TL;DR Summary
How does the simple binary code signaling of basic computing architecture create a diversity of information and software properties from information signals of existence (=1's) and non-existence (=0's)?
After recently reading some theoretical and analytical science papers about the debate between emergence/supervenience and reductionism and the ontological status of properties/essence [complexity vs. simplicity, and bottom-up vs. top-down approaches to constructing models, as in cosmology] (although I've been thinking about this for a while now), as well as about the interrelationship between space-time and information entropy concerning Maxwell's demon (which was resolved through computer science by Rolf Landauer and Charles Bennett), I got to thinking about how we get so many different chemicals and particles, creating a diversity of matter and properties in the universe, from a simple singularity. I wanted to find an analogous concrete example to see whether this concept is possible in reality.

However, I don't understand much about coding or computer engineering, so I wanted to ask: at the most fundamental level of computer science, how does a computer go from simple 1's and 0's (electrical signals and electromagnetic patterns imprinted on nodes and modules that interpret data into useful information) to complex codes, matrices, alphabets, languages, and ontologies? The complexity increases, and everything works in a synchronized manner, bringing together all of the information from the firmware, producing different signals and information, into a complex system with many different properties and attributes arising from those simple digital signals.

Although this may be a big topic, I'm mainly interested in a summarized description of this process: creating complex signals and data from simpler ones, building information structures on top of the architecture, moving from the physical hardware system to software/firmware structures (signals and structures that correspond to the properties of the physical hardware), and finally to programs that give us Windows, vehicle PCMs, videos/movies, etc. I apologize if I made mistakes in how I outlined the different levels of computer engineering, but how do we get complex information from simple information in computer science? I'm interested in the possible analogy with cosmological models like the Big Bang theory, where the universe goes from simple to complex properties and information.
 
  • #2
shushi_boi said:
Summary:: How does the simple binary code signaling of basic computing architecture create a diversity of information and software properties from information signals of existence (=1's) and non-existence (=0's)?
It doesn't. As long as it is 'simple binary code signaling of basic computing architecture' (I suppose that should be 'hardware', in short), it'll always be about 1's and 0's (with that existence/non-existence stuff being completely irrelevant: both states do exist).

It's always the user/programmer who gives name, structure, use and meaning.
 
  • #3
Rive said:
It doesn't. As long as it is 'simple binary code signaling of basic computing architecture' (I suppose that should be 'hardware', in short), it'll always be about 1's and 0's (with that existence/non-existence stuff being completely irrelevant: both states do exist).

It's always the user/programmer who gives name, structure, use and meaning.

Thank you for your response! Given the basic binary code and a user/programmer (with meaning and blueprints in mind), how does basic data of this kind (1's and 0's) produce more complex information, up to the point of producing programming languages that can handle more complex pieces of data, like the alphabet and the special characters used in those languages, which in turn define the rules and structures of the programming languages themselves? Are the rules and alphabets of these languages just different combinations/arrangements of 1's and 0's? If so, what methods are used to build complicated information from these basic signals, such as matrices or other useful arrangements of data bits?
 
  • #4
I think you need an explanation at a very low level of the circuitry inside the CPU itself.

There is code even within the CPU itself. In my college days I did some basic CPU programming to make some LEDs flash consecutively, and that was programmed in hex code, so we are already above the 1's and 0's by the time we are programming anything.
 
  • #5
It might be worth looking up how IP subnets work in binary. It's not CPU code, but it does show how the computer works out, in binary, whether the IP address you are trying to reach is in your local subnet or in a different subnet that needs to be routed via the default gateway.

Similar stuff will be going on inside the CPU itself, so it will at least give you some idea. I fear, though, that the answers you are looking for will ultimately require an explanation at an extremely low level that may take detailed CPU architecture knowledge to appreciate.
 
  • #6
shushi_boi said:
Although this may be a big topic, I'm mainly interested in a summarized description of this process: creating complex signals and data from simpler ones...
We go to school to learn this stuff. It takes many semesters. One pithy answer is that it is done with layers upon layers of abstraction. Nobody tries to understand it all from bottom to top. We understand it layer by layer.

One might start at the electrical engineering side of things and talk about circuit elements such as vacuum tubes or transistors. With transistors, a DC power supply, and an interpretation that +5V represents a 1 and 0V represents a 0, one can construct circuit elements that implement logical ANDs, logical ORs and logical NOTs. [Amplification -- controlling a large output with a small input -- is a key enabling feature.]
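As a minimal illustration (a Python sketch of the truth tables, not of the transistor circuits themselves), the three basic gates are just tiny functions from bits to bits:

```python
# Toy model: 1 stands for +5V, 0 stands for 0V.
def AND(a, b):
    return 1 if (a == 1 and b == 1) else 0

def OR(a, b):
    return 1 if (a == 1 or b == 1) else 0

def NOT(a):
    return 1 - a

# Print the truth table for AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b))
```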

One can go further and loop the outputs of these ANDs, ORs and NOTs back as inputs and form a circuit with two stable states -- a "flip flop". [A positive feedback loop is an equally important enabling feature.]
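Here is a hedged sketch of that feedback loop in Python, simulating the settling of a cross-coupled NOR pair (one classic way to build an SR latch; the iteration count is just enough for this toy to settle):

```python
def NOR(a, b):
    return 1 if (a == 0 and b == 0) else 0

def sr_latch(s, r, q=0, q_bar=1):
    # Feed the outputs back as inputs a few times until the loop settles.
    for _ in range(4):
        q, q_bar = NOR(r, q_bar), NOR(s, q)
    return q, q_bar

print(sr_latch(1, 0))  # set:   (1, 0)
print(sr_latch(0, 1))  # reset: (0, 1)
```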

One normally also supplies a clock input (e.g. from a crystal oscillator) so that circuit transitions take place in a regular and predictable fashion.

With an array of flip flops, one can apply interconnections and a ripple carry to implement an adder.
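As a software stand-in for that gate wiring (a sketch; bit lists are written least-significant-bit first):

```python
def full_adder(a, b, carry_in):
    # Sum bit: a XOR b XOR carry_in; carry out when at least two inputs are 1.
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (a & carry_in) | (b & carry_in)
    return s, carry_out

def ripple_add(a_bits, b_bits):
    """Add two equal-length bit lists, least significant bit first."""
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry

# 3 + 5 = 8, with 4-bit numbers written LSB first: 3 -> [1,1,0,0], 5 -> [1,0,1,0]
print(ripple_add([1, 1, 0, 0], [1, 0, 1, 0]))  # ([0, 0, 0, 1], 0), i.e. 8
```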

With a suitable array of "and" gates, "or" gates and flip flops, one can implement directly addressable read/write memory.

With two's complement (https://en.wikipedia.org/wiki/Two%27s_complement), one can use an adder to do subtraction. We now have the ability to store numbers. With a little more effort, we can implement a central processing unit that uses stored data to select which operation to perform -- a machine to execute stored programs. [The ability to treat stored data as executable programs is a crucial enabling feature.]
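A small sketch of the trick, using Python integers as stand-ins for 8-bit registers (the width and mask are assumptions of this toy):

```python
BITS = 8
MASK = (1 << BITS) - 1  # 0b11111111

def twos_complement(x):
    # Invert all the bits and add 1, staying within the fixed width.
    return (~x + 1) & MASK

def subtract_via_add(a, b):
    # a - b is just a + (-b): the hardware only ever needs an adder.
    return (a + twos_complement(b)) & MASK

print(subtract_via_add(7, 5))  # 2
print(subtract_via_add(5, 7))  # 254, which is the 8-bit pattern for -2 (0b11111110)
```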

It is no particular problem to invent a coding scheme by which letters and digits are represented as sequences of ones and zeros. We can create programs to output carefully constructed sequences. We can create devices like printers and card readers to produce output and take input.
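For instance (a sketch; the three codes below happen to be the ASCII patterns, but any agreed-upon table would do):

```python
# A coding scheme is just an agreed table from characters to bit patterns.
code = {'A': '01000001', 'B': '01000010', 'C': '01000011'}
decode = {bits: ch for ch, bits in code.items()}

message = ''.join(code[ch] for ch in "CAB")
print(message)  # 010000110100000101000010
print(''.join(decode[message[i:i + 8]] for i in range(0, len(message), 8)))  # CAB
```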

One can then turn one's attention to the problem of communications. There is "framing" (mustering groups of bits for serial transmission on a communications medium), coding them for transmission, decoding them on receipt and undoing the framing. The art of bulk data storage and retrieval (e.g. tape and disk) has much in common with the art of data transmission and reception.

Since it is painful to program the computer with stored programs consisting of raw sequences of ones and zeroes, we can invent an "assembly language" which represents operations with mnemonics such as AND, TAD, DCA, ISZ and JMP (PDP-8 assembly language in this case). We can write programs (in ones and zeroes) which take input programs (written with AND, TAD, etc.) and produce output programs (in ones and zeroes). We can execute these programs.
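A deliberately tiny sketch of the idea (the mnemonics are borrowed from the PDP-8, but the 3-bit/5-bit encoding below is invented for illustration, not the real instruction format):

```python
# Hypothetical instruction format: 3 opcode bits + 5 operand-address bits.
OPCODES = {'TAD': '001', 'DCA': '011'}  # invented encodings, not real PDP-8

def assemble(lines):
    """Translate 'MNEMONIC address' source lines into 8-bit words of 0s and 1s."""
    words = []
    for line in lines:
        mnemonic, operand = line.split()
        words.append(OPCODES[mnemonic] + format(int(operand), '05b'))
    return words

print(assemble(["TAD 7", "DCA 12"]))  # ['00100111', '01101100']
```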

We can do the same trick for "high level" languages such as COBOL or BASIC: write programs (with ANDs and TADs) which take input programs (with statements like "add current_earnings to past_net_worth giving current_net_worth") and produce output programs (in ones and zeroes). We can execute those programs. [It has been remarked that a line of COBOL usually translates one-for-one to a line of IBM BAL.]

We can employ engineers to invent data structures, re-implement storage methodologies, CPU architectures and programming languages to the point where the above looks like a stick figure drawing compared to the reality of modern computers and networking.
 
  • #8
It's pretty simple really.

For every natural number, there is a binary (base 2) encoding, just like there is a decimal (base 10) encoding. In base 10, a number has digits dcba: a is the 1's place and says how many ##10^0##'s the number has, b is how many ##10^1##'s it has, c is the number of ##10^2##'s and d the number of ##10^3##'s. So ##1234 = 10^0 \times 4 + 10^1\times 3 + 10^2\times 2 + 10^3 \times 1 = 4 + 30 + 200 + 1000##. In binary, it is powers of 2: a binary number dcba is ##2^0\times a + 2^1 \times b + 2^2 \times c + 2^3 \times d##. For example, ##1011 = 2^0 \times 1 + 2^1 \times 1 + 2^2 \times 0 + 2^3 \times 1 = \text{ decimal } 1 + 2 + 0 + 8 = 11##.
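A quick Python check of those place values (sketch):

```python
def from_binary(bits):
    # Reading right to left: digit i contributes digit * 2**i.
    return sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))

print(from_binary("1011"))  # 11, matching the worked example
print(int("1011", 2))       # Python's built-in conversion agrees
print(bin(11))              # 0b1011, converting back the other way
```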

You can also represent real numbers, but only approximately. E.g., from the link below, 1.2345 can be encoded as ##12345 \times 10^{-4}##: 12345 is the significand, 10 is the base and -4 is the exponent. So you simply designate some number of bits ##n## to encode the number, and reserve some of those bits to store the exponent.

https://en.wikipedia.org/wiki/Floating-point_arithmetic
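A toy version of the significand/exponent idea, kept in base 10 to mirror the 1.2345 example (real IEEE 754 floats do the same thing in base 2 with fixed-width bit fields):

```python
def decode_float(significand, exponent, base=10):
    # The stored pair (significand, exponent) represents significand * base**exponent.
    return significand * base**exponent

print(decode_float(12345, -4))  # 1.2345 (up to the usual floating-point rounding)
```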

You want to encode characters? Then give each character a number. You want to store an NxM matrix of natural numbers? Then store the NxM numbers in some agreed order (e.g. each row one at a time, from left to right).

Then, to compute things: @jbriggs44 explained a lot above about how the machinery works, using pulses of high voltage to drive things forward and logic gates to perform operations.

In fact, it turns out that all you really need is NAND gates, which compute the function f(i, j) = not(i and j), i.e. 1 if either i or j is 0. With just NAND gates (actually, NOR gates work too), you can express any boolean expression (a logical expression that has either a true or false value). This property is called logical completeness, or functional completeness.

https://en.wikipedia.org/wiki/Functional_completeness
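A sketch of functional completeness in Python: starting from NAND alone, the other basic gates fall out.

```python
def NAND(i, j):
    return 0 if (i == 1 and j == 1) else 1

# Everything else can be wired up from NAND:
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> AND:", AND(a, b), "OR:", OR(a, b))
```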

Further, there is a concept of Turing completeness.

Turing was a famous mathematician/computer scientist who came up with the widely used theoretical model of computation called the Turing machine. We use that model to prove things about computation, such as what is computable and what is not. The Turing machine is modeled on how a person might compute, e.g. with paper and pencil. Together with Alonzo Church, Turing put forth the Church–Turing thesis.

It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine.
https://en.wikipedia.org/wiki/Church–Turing_thesis
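A minimal Turing machine simulator as a Python sketch (the transition table is a made-up example that flips every bit on the tape and halts at the first blank):

```python
def run(tape, transitions, state='start'):
    cells = dict(enumerate(tape))  # position -> symbol; '_' is the blank
    pos = 0
    while state != 'halt':
        symbol = cells.get(pos, '_')
        state, write, move = transitions[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells))

bit_flipper = {
    ('start', '0'): ('start', '1', 'R'),
    ('start', '1'): ('start', '0', 'R'),
    ('start', '_'): ('halt', '_', 'R'),
}
print(run("1011", bit_flipper))  # 0100_
```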

Notice the words natural numbers. We already showed how to represent natural numbers with 0's and 1's (base 2), and how to express any boolean combination of 0's and 1's using logic gates. The only thing really missing is memory (writing things down, and reading what you've written down).

A system is Turing complete if it can compute anything a Turing machine can (e.g. anything thought to be computable).

https://en.wikipedia.org/wiki/Turing_completeness

There are some surprising examples of Turing-complete systems. Magic: The Gathering (the card game) turns out to be Turing complete, and so does Conway's Game of Life. There are lots of simple examples.

The way that such complex things can come from such simple ones is more generally described as emergence.

https://en.wikipedia.org/wiki/Emergence


Some things are not computable: for example, most real numbers.

And maybe random numbers?

https://cstheory.stackexchange.com/questions/1263/truly-random-number-generator-turing-computable
 
  • #9
I am responding to the OP's original post; I don't understand most of it even though I was an EE who studied electromagnetics hard. Yes, binary is only 1 and 0, but if you have 2 bits, you can have the combinations 00, 01, 10 and 11. With 4 bits you have 16 combinations, with 8 bits you have 256 combinations, and so on.
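You can see the count grow with a couple of lines of Python (sketch):

```python
from itertools import product

print(list(product('01', repeat=2)))  # the 4 combinations of 2 bits
for n in (2, 4, 8):
    print(n, "bits give", 2**n, "combinations")  # 4, 16, 256
```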

So this simple 1 and 0 quickly becomes capable of representing very complex sequences. Plain ASCII uses only 7 bits (usually stored as 8), yet it can represent all the letters, upper and lower case, plus the other symbols on the keyboard. You can use more bits to represent even larger code sets. It all starts out as 1 and 0.
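For example (a sketch using Python's built-in character codes):

```python
# Each character is just a number with an agreed meaning, shown here as 8 bits.
for ch in "Aa?":
    print(ch, ord(ch), format(ord(ch), '08b'))
# A 65 01000001
# a 97 01100001
# ? 63 00111111
```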

It is hard to explain this until you actually study digital logic, which shows you how to take a bunch of 1's and 0's and create a very complex system. This is not as obvious in computer languages unless you look at assembly language or machine language. But if you learn basic digital logic, starting with AND, OR, NOT and XOR gates, and build on it, you can quickly see how these 1's and 0's grow into a very complex system with sophisticated decision-making ability.

From reading your post, you seem to be making it a lot more complicated by using very intimidating words and phrases. This is really quite simple, mostly common sense. It starts out very simple and turns complex as you add more conditions (bits).

Take an oversimplified example. You have a system with two inputs: bit1 is 1 when the temperature is high, bit2 is 1 when smoke is detected. The decision-making goes: if bit1 = 1 and bit2 = 0, it's likely just hot weather, so run the air conditioning harder. If both bits are high, there might be a fire, so turn on the alarm and set a timer; if both bits are still high after 5 minutes, call 911 and turn on the sprinkler.

All this decision-making started with only 2 bits of 1's and 0's and a timer to time 5 minutes! It's just that simple, just a lot of common sense. (See the sketch below.)
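The same decision logic as a Python sketch (the input names and the 5-minute flag are just this example's conventions):

```python
def decide(bit1, bit2, still_high_after_5_min=False):
    """bit1 = 1 when temperature is high; bit2 = 1 when smoke is detected."""
    if bit1 == 1 and bit2 == 0:
        return "likely hot weather: run the air conditioning harder"
    if bit1 == 1 and bit2 == 1:
        if still_high_after_5_min:
            return "fire confirmed: call 911 and turn on the sprinkler"
        return "possible fire: sound the alarm and start the 5-minute timer"
    return "all clear"

print(decide(1, 0))
print(decide(1, 1))
print(decide(1, 1, still_high_after_5_min=True))
```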

This is the logic design for the example:
[Attached image: chart.jpg -- the logic diagram, with truth tables for the AND gate and inverter]


You see, all the decisions can be generated by only 3 AND gates, one INVERTER and one timer. The attached image shows the truth tables for what an AND gate and an INVERTER do. It is just that simple, and you can imagine it doesn't take much more to make it a lot more complex.
 
  • #10
One typo: The State Table should have a "1" in the lower right corner.
 
  • #11
I earned my living as a programmer, sometimes using assembler, which is very close to "machine code" -- really just 0's and 1's (in chunks of 8 or 16 ...) -- and sometimes higher-level languages such as C++ and VB. All they do is shuffle 1's and 0's about, and from that you get addition, multiplication, word processors, graphics, artificial intelligence, this website and everything else that computers do. I never cease to be amazed.

What's this?
0100100100100000011001010110000101110010011011100110010101100100001000000110110101111001001000000110110001101001011101100110100101101110011001110010000001100110011100100110111101101101001000000110001001100101011010010110111001100111001000000110000100100000011100000111001001101111011001110111001001100001011011010110110101100101011100100010110000100000011100110110111101101101011001010111010001101001011011010110010101110011001000000111010101110011011010010110111001100111001000000110000101110011011100110110010101101101011000100110110001100101011100100010110000100000011101110110100001101001011000110110100000100000011010010111001100100000011101100110010101110010011110010010000001100011011011000110111101110011011001010010000001110100011011110010000000100111011011010110000101100011011010000110100101101110011001010010000001100011011011110110010001100101001001110010000001110111011010000110100101100011011010000010000001101001011100110010000001110010011001010110000101101100011011000111100100100000011010100111010101110011011101000010000000110000001001110111001100100000011000010110111001100100001000000011000100100111011100110010000000101000011010010110111000100000011000110110100001110101011011100110101101110011001000000110111101100110001000000011100000100000011011110111001000100000001100010011011000100000001011100010111000101110001010010010110000100000011000010110111001100100001000000110100001101001011001110110100001100101011100100010000001101100011001010111011001100101011011000010000001101100011000010110111001100111011101010110000101100111011001010111001100100000011100110111010101100011011010000010000001100001011100110010000001000011001010110010101100100000011000010110111001100100001000000101011001000010001011100010000001000001011011000110110000100000011101000110100001100101011110010010000001100100011011110010000001101001011100110010000001110011011010000111010101100110011001100110110001100101001000000011000100100111011100110010000001100001011011100110010000100000001100000010011101110011001000000110000101100010011011110111010101110100001000000110000101101110011001000010000001100110011100100110111101101101001000000111010001101000011000010111010000100000011110010110111101110101001000000110011101100101011101000010000001100001011001000110010001101001011101000110100101101111011011100010110000100000011011010111010101101100011101000110100101110000011011000110100101100011011000010111010001101001011011110110111000101100001000000111011101101111011100100110010000100000011100000111001001101111011000110110010101110011011100110110111101110010011100110010110000100000011001110111001001100001011100000110100001101001011000110111001100101100001000000100000101110010011101000110100101100110011010010110001101101001011000010110110000100000010010010110111001110100011001010110110001101100011010010110011101100101011011100110001101100101001011000010000001110100011010000110100101110011001000000111011101100101011000100111001101101001011101000110010100100000011000010110111001100100001000000110010101110110011001010111001001111001011101000110100001101001011011100110011100100000011001010110110001110011011001010010000001110100011010000110000101110100001000000110001101101111011011010111000001110101011101000110010101110010011100110010000001100100011011110010111000100000010010010010000001101110011001010111011001100101011100100010000001100011011001010110000101110011011001010010000001110100011011110010000001100010011001010010000001100001011011010110000101111010011001010110010000101110

Hint: the sequence 00100000 occurs 150 times in it.
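If you want to decode it yourself, a short Python sketch does the job (paste the full bit string from the post in place of the short sample below):

```python
def decode(bits):
    # Split into 8-bit chunks and read each chunk as an ASCII code.
    return ''.join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

print(decode("0100100100100000"))  # "I " -- note 00100000 is the space, as the hint says
```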
 

1. How is software created from 1's and 0's?

Software is created by writing code in programming languages, which is then translated into 1's and 0's by a compiler. These 1's and 0's are known as machine code: the instructions that a computer follows to execute the desired tasks.

2. What is the role of binary code in creating software?

Binary code, which consists of 1's and 0's, is the fundamental language used by computers to process and store information. It is the backbone of all software as it represents the instructions that a computer follows to perform tasks and manipulate data.

3. How do 1's and 0's represent complex information?

Through a process called binary encoding, sequences of 1's and 0's are used to represent different characters and symbols. By combining different sequences of 1's and 0's, a computer can represent and process complex information such as text, images, and videos.

4. What is the relationship between 1's and 0's and digital information?

Digital information is created and stored as a series of 1's and 0's in a computer's memory. These 1's and 0's are then interpreted by the computer's processor to perform tasks and display information on a screen.

5. How do 1's and 0's allow for the creation of complex software?

1's and 0's provide the foundation for all software, as they are the basic building blocks used to create code and instructions for a computer to follow. By combining and manipulating these 1's and 0's, developers can create complex software programs with various functions and capabilities.
