Data Compression and the Human Brain

In summary, STAii suggested teaching computers to understand text the way humans do, by storing it in a form similar to the way people remember information. Doing this would require understanding the human brain better.
  • #1
STAii
The other day I came across [Removed Broken Link], and while reading it I started thinking: "Why don't we humans try to compress the things we learn before learning them, and uncompress them when we need to remember them?"
Then I realized that the human brain finds it easier to memorize a meaningful text than a bunch of letters and numbers that don't mean anything until they are uncompressed.
This gave me a new idea: why not teach the computer to actually understand a text passage instead of saving it byte by byte? Maybe this would let the computer use its disk storage a little better.
So what stops us from teaching the computer to save text files the same way we memorize text?
(Forget about other files, like pictures, for a second :smile:)
 
  • #2
This gave me a new idea: why not teach the computer to actually understand a text passage instead of saving it byte by byte?

It's a cool idea, but wouldn't it require basically building a computer that works more like the human brain, remembering things by association rather than just taking a byte-by-byte snapshot?

Sounds like something the neural network people would try to do.
 
  • #3
It would probably be ok as long as you didn't mind the interpretation the computer gave when spitting back the info.

I had thought about something like this a while ago. The idea was to build a huge database of common passages, strings, or whatever, and have the stored file cross-reference that database. So rather than compressing with an algorithm based only on the actual file data, you find the common patterns first, then push the uncommon data into the database to use next time.

E.g. a dictionary database: you store a symbol in the file instead of a word or a common combination of words, and use the database to reconstruct the text with no loss.
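Something like this rough Python sketch is what I have in mind (the phrase list and the input text are just made-up examples for illustration, not any real system):

```python
# Toy shared-dictionary compressor: common phrases live once in a shared
# dictionary; files store small integer references instead of the text.
# (Illustrative only -- a real scheme would need escaping, versioning, etc.)

PHRASES = ["the human brain", "data compression", "byte by byte"]  # shared dictionary

def compress(text: str) -> list:
    """Replace known phrases with integer references; keep other text as literals."""
    out = [text]
    for i, phrase in enumerate(PHRASES):
        next_out = []
        for piece in out:
            if isinstance(piece, str):
                parts = piece.split(phrase)
                for j, part in enumerate(parts):
                    if j > 0:
                        next_out.append(i)      # reference into the dictionary
                    if part:
                        next_out.append(part)   # literal text between phrases
            else:
                next_out.append(piece)          # already a reference, pass through
        out = next_out
    return out

def decompress(tokens: list) -> str:
    """Rebuild the original text losslessly from literals and references."""
    return "".join(PHRASES[t] if isinstance(t, int) else t for t in tokens)

text = "data compression works because the human brain hates storing things byte by byte"
assert decompress(compress(text)) == text   # the round trip is lossless
```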

I was originally thinking about binary files: taking the repeated chunks, like those from commonly used libraries, and storing them in a database. After looking into it there wasn't really a huge saving, probably only a few KB on a small binary file.

Getting back to the first bit, the task of parsing and associating the text probably isn't impossible, but what you got at the end would only vaguely resemble the original text.

Raavin
 
  • #4
First of all, thanks for the reply.
I was thinking about the same thing: making a kind of dictionary (one that might become a standard some day) of the sentences, words and phrases that are used over and over. I didn't try to calculate how much disk space this could save, though. I assume that if the entries are quite long it will be quite useful, but it also depends on the number of entries in the database, since more entries mean more bytes are spent just storing an entry's index in the file.
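As a rough back-of-the-envelope check of that index-size trade-off (the numbers here are illustrative assumptions, not measurements):

```python
# Rough break-even estimate for a shared phrase dictionary (illustrative numbers only).
entries = 2 ** 24        # assume ~16 million dictionary entries
index_bytes = 3          # so each reference in a file needs 3 bytes (24 bits)
phrase_len = 20          # assume an average stored phrase is 20 bytes long

saving_per_hit = phrase_len - index_bytes   # bytes saved every time the phrase occurs
print(saving_per_hit)                       # 17 bytes here; a phrase shorter than
                                            # index_bytes would actually cost space,
                                            # and the dictionary itself still has to
                                            # be stored (or shared) somewhere.
```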
And of course it will not be easy to build this database, and it will be even harder to update it globally!

Raavin, you said you were able to save a few kilobytes, but how big was the engine that does the decompression? (Because if the engine is not very big, this could be EXTREMELY useful in boot sectors.)

spacemanspiff, yes, I think we will need to understand the human brain better in order to make such a thing a reality (without using the database idea Raavin proposed).
 
  • #5
I never actually implemented an algorithm. I just manually compared chunks of data: dragged out bits and compared them in a hex editor. In the other 'make your own OS' thread the idea might come in handy if you had strict rules on how binaries are compiled and structured.

Basically, though, you're talking about restricting the libraries people can program with, like Visual Basic. VB binaries are sort of like a binary script, and the VB DLL does the actual work. You can have a really small program with a large set of DLLs, if those DLLs can do everything you would ever want to do. I have 1 GB of files in my system32 folder at the moment. If your operating system started out with all the DLLs that were allowed in it, you could keep the binaries small.

Raavin
 
  • #6
Hello,

I'm not much into computer science, but I'd like to remind you of something. As you know, there are different versions of the MPEG compression scheme: MPEG-1 for data rates around 1.5 Mbps (T1 links, for example); MPEG-2, used in SVCD and DVD (higher quality at data rates around 5 Mbps, also used in medical imaging); MP3 (MPEG-1 Audio Layer III), which covers only the audio channel; and MPEG-4 for web video. There is also a version of MPEG in the works, namely MPEG-7, or MCDI (Multimedia Content Description Interface), that does something like the "understanding" of text you intend, but on images. It uses feature extraction and other machine vision methods to get the "real" things out of a motion picture's frames, e.g. basic shapes and perhaps more complex objects, and instead of recording pixels and motion vectors it records the shapes, whose description needs much less space.
 
  • #7
Raavin.
I see your point.
But I don't see what you are doing with 1 GB of DLLs!

Manuel_Silvio.
So, basically, what this kind of MPEG does is turn each frame into (so-called) paths, and then save the paths instead of saving the data pixel by pixel...
But you know, when you have a frame of a metal square, you always find odd pixels somewhere that don't fit the overall context.
So if this algorithm tries to actually save those odd pixels as paths, you will end up using more disk space.
Therefore I assume that this compression will not show a movie 100% like the original.

And after all, MPEG is not lossless, right? (We are talking about lossless compression here!)
 
  • #8
Hi there,

You're right, STAii; all the MPEG standards are lossy algorithms. I think the human brain also uses lossy ways to describe its input. Human beings never remember a one-to-one representation of what they've seen; they remember pragmatically emphasized details along with their judgement of the picture they've seen. From that I think it is obvious that MPEG is closer to what the human brain does than RLE is.

Besides, MPEG-7, I think, goes much further than describing shape vectors. Considering its official name, MCDI, it should be able to describe a frame as a collection of "objects". These objects can even be defined in a header and then re-used all over the motion picture, so they can be much more complex than basic paths. They might include, for example, a description unique to a specific computer case that happens to recur across many of the frames.

It is obvious that no compression algorithm is perfect. RLE and other lossless algorithms fail when there is little redundancy, as with EXE files. MPEG and other lossy algorithms fail when high detail is required. Every algorithm has its drawbacks, including the one our brain uses for pattern matching and object recognition. The error-prone nature of our brain is easy to observe in optical illusions, or when individuals mistake one object for another as long as they are kept from gaining extra information to feed into the brain.
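To make the redundancy point concrete, here is a toy run-length encoder in Python (my own illustrative sketch, not any production codec); it shrinks highly redundant input dramatically and actually doubles the size of low-redundancy input:

```python
# Toy run-length encoder: lossless, great on redundant input, useless on "random" input.
import os

def rle_encode(data: bytes) -> bytes:
    """Encode as (count, value) pairs; each run is capped at 255 bytes."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    """Expand (count, value) pairs back into the original bytes."""
    out = bytearray()
    for count, value in zip(data[0::2], data[1::2]):
        out += bytes([value]) * count
    return bytes(out)

redundant = b"A" * 1000                   # highly redundant, text-like data
incompressible = os.urandom(1000)         # low redundancy, like much of an EXE

assert rle_decode(rle_encode(redundant)) == redundant
print(len(rle_encode(redundant)))         # about 8 bytes: a huge saving
print(len(rle_encode(incompressible)))    # about 2000 bytes: the data roughly doubles
```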

Aside from the human brain, other living processors are known to "unfairly" emphasize the details that matter most to their owner's survival and to "selectively" ignore many features of the input signal that are considered "useless" to them.

A possibly efficient compression algorithm is one that recognizes this selective ignorance and takes advantage of it to cut down the size of what has to be presented to a human brain. The DCT used in JPEG and MPEG is one such example; it relies on the eye's relative insensitivity to fine detail in smooth color gradients, so that part of the signal can be discarded.
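A tiny numerical sketch of that DCT idea (this assumes NumPy and SciPy are available, and uses a made-up one-dimensional "gradient" block rather than a real image):

```python
import numpy as np
from scipy.fft import dct, idct

# A smooth 8-sample "gradient" block, like a gentle brightness ramp in an image.
block = np.linspace(100.0, 130.0, 8)

coeffs = dct(block, norm="ortho")      # transform: energy piles into low frequencies
coeffs[3:] = 0.0                       # throw away the small high-frequency terms
approx = idct(coeffs, norm="ortho")    # reconstruct from the few coefficients we kept

print(np.max(np.abs(approx - block)))  # the error stays small -- hard for the eye to see
```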

However, there's a fatal problem with using such schemes for information prepared for computer processing. Our current computers are all linear and clocked: they're finite-state machines that occupy one definite state in each time slot, while the human brain is a non-linear, parallel processor that isn't bound to occupying one of a finite number of states at any given time. This means you can feed a human brain ambiguous information and still expect something useful, while a computer will stop working at the sight of any ambiguity. Consequently, using lossy algorithms with computers leads nowhere. On the other hand, lossless algorithms will fail because information that is critical for processing always has very low redundancy, e.g. in an EXE file.
 
  • #9
Oh! I forgot something.

All vector image formats are examples of what you mean by "understanding", if you ignore the fact that the user herself/himself gives that understanding to the computer through CAD or other applications. DXF, for example, contains a set of pre-defined tokens that describe the shapes, just like our own brains describe a monitor as a "grey 17" rounded glowing box".

Such tokens are present in raster formats as well. The palette header in a 256-color BMP defines a set of tokens to avoid repeating the same 3-byte RGB values all over the file; a 1-byte palette index is stored instead. Saving a BMP is a totally automated process, and the computer, in a sense, understands the image's colors and constructs the palette header out of that understanding.
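A minimal sketch of that palette idea in Python (toy pixel data, not the actual BMP file layout):

```python
# Toy palette encoding: store each unique RGB triple once, then 1-byte indices per pixel.
# (Illustrative only; a real 256-color BMP also has headers, row padding, etc.)

pixels = [(200, 200, 200), (200, 200, 200), (0, 0, 0), (200, 200, 200)]  # made-up image

palette = []                 # the "token table": one entry per distinct color
indices = bytearray()        # 1 byte per pixel instead of 3
for rgb in pixels:
    if rgb not in palette:
        palette.append(rgb)  # at most 256 distinct colors fit 1-byte indices
    indices.append(palette.index(rgb))

decoded = [palette[i] for i in indices]      # lossless reconstruction
assert decoded == pixels

raw_size = 3 * len(pixels)                   # 3 bytes per pixel, no palette
pal_size = 3 * len(palette) + len(indices)   # palette table + 1 byte per pixel
print(raw_size, pal_size)                    # 12 vs 10, even for this tiny example
```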
 

1. What is data compression and how does it relate to the human brain?

Data compression is the process of reducing the size of data by encoding it in a more efficient way. This allows for easier storage, transmission, and processing of information. The human brain also uses data compression techniques to store and process large amounts of information efficiently.

2. How does the human brain compress information?

The human brain uses various techniques to compress information, including pattern recognition, association, and categorization. For example, instead of remembering every detail about a person, the brain compresses this information by categorizing them as a friend, family member, or acquaintance based on patterns and associations.

3. Can data compression affect memory and learning abilities?

Yes, data compression can have both positive and negative effects on memory and learning abilities. On one hand, compression allows the brain to store and process large amounts of information efficiently, which can enhance memory and learning. On the other hand, if information is compressed too aggressively, details can be lost, making it harder to recall or to learn new information accurately.

4. How does data compression impact decision making and problem solving?

Data compression can impact decision making and problem solving by allowing the brain to quickly retrieve and process relevant information. This can lead to more efficient decision making and problem solving. However, if information is compressed too much, it can limit the brain's ability to consider all relevant factors, leading to potential errors in decision making and problem solving.

5. Is data compression a natural or learned process in the human brain?

Data compression is a natural process in the human brain. Our brains are wired to efficiently process and store information, and data compression is a key component of this process. However, certain techniques of data compression can also be learned and improved through practice and experience.
