The world from a bat's perspective (sonic images)

  • #1
Killtech
TL;DR Summary
Drafting a concept for a framework to describe the world as perceived through an acoustic eye, and rendering it via a 3D engine.
Disclaimer: This will be a lengthy post and it will take me a little time to outline how this is related to this forum, so please hang in with me here – and for everyone that makes it to the end, thanks for taking the time.

So let’s start with the premise that we want to make a 3D game where the protagonist is blind and relies entirely on his sharp sense of hearing for orientation, like Marvel’s Daredevil – or more realistically a bat. So the objective is to visualize/render the world as it is perceived from the protagonist’s perspective. In terms of a game it would be a huge win to depict it even just 80% accurately (the number is symbolic), but enough to highlight the real uniqueness of that view.

Now sonic waves have enough similarities to light waves to reuse an existing 3D engine to visualize that world. In particular, one could theoretically employ acoustic lensing effects to build a sonic wave detector with some spatial resolution that works as an acoustic analog to an eye. Sure, it’s a stretch that a human ear would be capable of that, but for the context of a game let’s go with the premise anyway.

Having in mind how a 3D rendering engine works, this requires having all objects internally modeled as they appear acoustically and, more importantly, where they are positioned for the acoustic eye – because that’s what will be fed into the 3D engine’s pipeline to do the render. And here is the problem: since sonic waves still have quite different properties than light, the positioning will be off in some cases. This also matters if a technology like ray tracing is to be employed, since those rays are then sonic in nature. But doing all calculations with real positions and then transforming everything to perceived positions every single frame would be very costly in terms of performance.

Hence the idea to explore the option of skipping real positioning altogether and doing everything within perceived position space… which comes with a long list of interesting challenges and implications.

So let’s start by clarifying what the perceived position actually is. The rules of projective geometry apply to an acoustic eye in just the same way as they do to an optical one: distance from the eye makes objects appear smaller the further away they are – but here it’s not the actual distance that is relevant for this effect but rather the travel time of the acoustic wave. Hence, the idea of this approach is to use exactly that as the definition of the perceived distance. I will deliberately leave out a lot of details, like whether this should be defined as a one-way or two-way distance, because in practice this gets a lot more complicated when the medium is not very uniform… and in the context of a game we just want to focus on the cases where it remains simple enough to realize (in the other cases we might employ other tricks). Still, it’s quite important to build the entire framework well enough to even be able to understand which interesting effects there are to visualize.
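To make that definition concrete, here is one possible formalization (my own notation, assuming a one-way travel time and a fixed reference speed of sound ##c_0##):

$$\tilde d(A,B) = c_0\,\tau(A,B), \qquad \tau(A,B) = \min_{\gamma:\,A\to B} \int_\gamma \frac{ds}{c(x)},$$

where ##\tau(A,B)## is the travel time of sound along the fastest path ##\gamma## (Fermat’s principle) and ##c(x)## is the local speed of sound. In a homogeneous medium at rest with ##c = c_0## this reduces to the ordinary Euclidean distance.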

With that settled, it’s fun to explore how differently some things are perceived acoustically (and what should be covered by our game). For example, sound travels very slowly in comparison… so, as with light and very distant objects (like stars), what is perceived is actually an image from the past. But for a bat this already creates a lot of funny effects in everyday situations. Let’s assume for now the medium is perfectly at rest relative to our bat. Looking/listening at a distant incoming object, it appears as it was in the past, but as it closes in it will be perceived ever closer to its actual present state. What that means is that everything happening on that object will be perceived to happen at an accelerated pace, catching up with the present as it approaches our bat. In particular, an observed on-board clock will appear to run fast. The opposite will be true when the object passes by and moves away, again showing a past picture of itself. Due to the definition of perceived distances, this also means that distances on the object along the direction facing the observer will be squeezed for incoming objects and stretched for outgoing ones – while movement parallel to the observer will cause the object to appear sheared along the perceived depth axis (since the front, i.e. the side facing the bat, appears closer to its present state while its back is further in the past).
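To spell out the reasoning behind the accelerated pace (standard retarded-time bookkeeping, nothing exotic): if the object emits sound at its own time ##t_e## from distance ##d(t_e)## in a medium at rest, the observer receives it at

$$t_o = t_e + \frac{d(t_e)}{c_s}, \qquad \frac{dt_e}{dt_o} = \frac{1}{1 - v_r/c_s},$$

where ##v_r## is the radial approach speed (positive for incoming objects and assumed smaller than ##c_s##). For an approaching object ##dt_e/dt_o > 1##, so its on-board clock is perceived to run fast; for a receding one it is perceived to run slow.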

Taking a closer look at the role of the medium, things get a bit more exotic. For one, gravity acting on it creates a gradient in air pressure and thus in the speed of sound. This has the consequence that two wave fronts starting out parallel won’t necessarily stay parallel and can actually intersect after some time. Keeping in mind how perceived distances are defined, this means that the parallel postulate isn’t satisfied in the perceived world – i.e. its geometry is inherently not Euclidean. This of course is a no-go for any 3D engine, which builds exclusively on Euclidean geometry. Therefore some trickery will be required to visualize any kind of effect resulting from an inhomogeneous medium.

An interesting aspect of the above for visualization may be that sound leaving Earth’s surface at an angle will be deflected ever downwards due to the gradient in air pressure/speed of sound, until it eventually turns around and meets the ground again. This means, however, that by looking (hearing) upwards into the sky one should be able to perceive sounds from far away. Together this should create the perception that Earth’s surface is concave rather than convex, giving the bat the impression of living on the inside of a sphere – a bit like in a Dyson sphere.
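Not part of the original idea, but to get a feel for the effect, here is a minimal sketch that traces an acoustic ray through a horizontally stratified medium using the ray equations that follow from Snell’s invariant; the sound-speed profile is a made-up example in which the speed increases with height, which is the case that bends rays back down to the ground:

Code:
import numpy as np

def trace_ray(theta0_deg, c, dcdz, z0=0.0, ds=1.0, n_steps=100000):
    """Trace an acoustic ray through a stratified medium c(z).

    theta is measured from the vertical. With the Snell invariant
    p = sin(theta)/c(z) the ray equations are dtheta/ds = p * dc/dz,
    dx/ds = sin(theta), dz/ds = cos(theta)."""
    theta = np.radians(theta0_deg)
    p = np.sin(theta) / c(z0)              # Snell invariant along the ray
    x, z = 0.0, z0
    path = [(x, z)]
    for _ in range(n_steps):
        theta += ds * p * dcdz(z)          # ray bends toward lower sound speed
        x += ds * np.sin(theta)
        z += ds * np.cos(theta)
        path.append((x, z))
        if z <= 0.0:                       # ray has curved back to the ground
            break
    return np.array(path)

# Hypothetical profile: speed of sound increasing with height (made-up gradient)
c_profile = lambda z: 340.0 + 0.05 * z     # m/s
dc_dz = lambda z: 0.05

path = trace_ray(80.0, c_profile, dc_dz)
print(f"the ray returns to the ground roughly {path[-1, 0]:.0f} m away")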

I have talked with various mathematicians about this and the agreement was so far that differential geometry should be able provide the mathematical framework to describe the perceived world. This would be done by equipping the set of real world coordinates with this new perceived metric defining a new distance measure on the set to make it a proper manifold. Because the new metric should be still topologically equivalent (i.e. the identity function on the set is should be a non-isometric diffeomorphism) to the original real metric (in most scenarios relevant for the game) a transformation of physics into a formulation of how it’s perceived acoustically should be possible (meaning it should be potentially possible to calculate physics entirely in that new framework – by using said diffeomorphism to pullback physical laws). Problem is that no one so far could point me towards papers doing something quite like this. However, given that this would share a common mathematical calculus with general relativity and also aims to do physics in it, I thought it might be worthwhile to post this here for feedback and ideas.
 
  • #2
This does not directly address your thesis, but I have thought along similar lines and arrived at a different result.

I dream of setting up a real-world "obstacle course" (using shipping containers) in pitch dark, where your only navigational input is sound.

You create your own sounds using a microphone, and they are fed back to you from a network of speakers placed all over the space. Long, low vocalizations such as "boomerang" would be hard to use, but short, sharp sounds such as "pop" would work well.

Your vocalizations are individually enhanced before being fed back to you from each speaker in the network - they are slowed down and exaggerated so that our relatively poor hearing can detect them. Nearby objects are increased in amplitude and have virtually no delay, whereas distant objects are decreased in amplitude with a pronounced delay.

I am confident one could make out, for example, a long oblique wall by hearing the smooth progression from loud to quiet echoes, spread over time:

POPPOPPOPPOPPOPPOPPOP

You could spot a door like this:

POPPOPPOP . . . POPPOPPOP
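Not from the original description, just a minimal sketch of the enhancement it proposes, under the assumption that each speaker plays the vocalization back with a delay proportional to the round-trip distance (stretched so that our slow hearing can resolve it) and a gain falling off with distance:

Code:
SPEED_OF_SOUND = 343.0  # m/s

def speaker_playback(distance_m, stretch=10.0, ref_distance=1.0):
    """Delay and gain for a speaker standing in for an echoing obstacle.

    distance_m: distance from the listener to the obstacle the speaker represents
    stretch:    factor by which delays are exaggerated for our slow hearing
    Returns (delay in seconds, linear gain)."""
    round_trip = 2.0 * distance_m / SPEED_OF_SOUND     # out to the obstacle and back
    delay = stretch * round_trip                       # exaggerated so it is audible
    gain = min(1.0, (ref_distance / distance_m) ** 2)  # nearby obstacles are louder
    return delay, gain

for d in (2.0, 10.0, 50.0):
    delay, gain = speaker_playback(d)
    print(f"{d:5.1f} m  ->  delay {delay:6.3f} s, gain {gain:.4f}")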
 
  • #3
There are already several mature and well-developed technologies for sonic imaging: ultrasound, anti-submarine and fish-tracking sonar, oil/gas exploration. They take interestingly different approaches to “rendering” but all of them integrate the properties of the medium.
 
  • #4
(No pun intended) Sounds like “sonar tracking”... something the Navy might be interested in [to, say, track submarines], as well as those who study bats [to understand their senses, perception, and strategies].

From
https://www.google.com/search?q="active+sonar+tracking"
here is one of many possibly interesting links
https://macsphere.mcmaster.ca/handle/11375/24758

It seems to me...
Although some of the mathematical tools [like differential geometry, projective geometry, differential equations, numerical methods, etc.] used in general relativity could be useful, the speed of sound is not an invariant like the speed of light is. So, there is no principle of relativity with the speed of sound. So, the connection to relativity is limited.
 
  • #5
Nugatory said:
There are already several mature and well-developed technologies for sonic imaging: ultrasound, anti-submarine and fish-tracking sonar, oil/gas exploration. They take interestingly different approaches to “rendering” but all of them integrate the properties of the medium.
robphy said:
(No pun intended) Sounds like “sonar tracking”... something the Navy might be interested in [to, say, track submarines], as well as those who study bats [to understand their senses, perception, and strategies].
Sorry, but I fear you entirely misunderstand this. All of the mentioned imaging technologies are well known but follow the completely opposite objective: they try to visualize everything where it really is positioned and merely use sonic waves to achieve that. After all, you need to know the real position of your fish, oil deposits or whatnot in order to do something with them. But in my case I am explicitly not interested in that – rendered at its real positions, the world would look exactly the same as we normally perceive it, just viewed in a "different wave spectrum". Instead I have to visualize everything exactly where it is perceived to be. These perceived positions, however, have no practical use I could think of, hence there was so far very little motivation to build such a framework. Trying to do something like this in a game adds perhaps a first use case, but as this classifies as "art" it's not really a practical one.

Take it like this: glass has a refractive index, and for all practical purposes one would like to correct for it to obtain the proper positions of objects observed through that glass. But from an art perspective, if you are about to render glass on a screen, you absolutely want all the effects of the refractive index changing the path of light to be perfectly visible. You want all those complex reflections that can make the picture more stunning.

And also... unlike all those imaging technologies, don't forget we don't have a reality to work with. There is no real sensor detecting acoustic waves that we are trying to visualize. So the starting point of our problem is very different.

robphy said:
It seems to me...
Although some of the mathematical tools [like differential geometry, projective geometry, differential equations, numerical methods, etc.] used in general relativity could be useful, the speed of sound is not an invariant like the speed of light is. So, there is no principle of relativity with the speed of sound. So, the connection to relativity is limited.
I hope you are well enough versed in differential geometry to follow this: if you pull back equations via a non-isometric diffeomorphism onto a different geometry, such statements won't generally hold up anymore. And that's the huge issue here: since all the physics has to be pulled back, it won't look like it normally does. And it cannot work the same way, because space effectively acquires a lot of artificial/perceived curvature.

The reason for this approach is this: the way a 3D engine works, it's totally set up to handle light interaction, not acoustic interaction. So if we want to repurpose it (and it's insanely cost-intensive to write a custom engine for such an exotic purpose), we have to make sonic waves behave as much like light waves as possible. That's why we don't use real distances but perceived ones. Look at how those are defined: basically it means that a sound wave is perceived to always cover exactly the same "perceived" distance in a given amount of time, regardless of direction, medium or whatnot. This is by definition. So… that makes sonic waves perfectly isotropic within the perceived geometry – basically, through that mathematical trick we make them act almost like light waves.
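One way to make "moving the medium's properties into the geometry" precise (my own formalization, in line with the definition of perceived distance above) is a conformal rescaling of the real metric by the local speed of sound:

$$d\tilde s^2 = \frac{c_0^2}{c(x)^2}\, ds^2 \qquad\Longrightarrow\qquad \frac{1}{c_0}\int_\gamma d\tilde s = \int_\gamma \frac{ds}{c(x)} = \tau(\gamma).$$

By Fermat's principle sound rays minimize the travel time ##\tau##, so they are geodesics of ##d\tilde s^2## and always cover the perceived distance ##c_0 t## in a time ##t## – exactly the isotropy the engine expects of light in a vacuum.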

Well, going a bit more into detail, DirectX actually has technology to handle a refractive index – but only for the case when two different materials meet… like glass in air. But if we have inhomogeneous air with a non-constant refractive index for sound waves, we are lost. Hence the idea to define distance in a way such that we can handle air as vacuum: the properties of the air are then handled by the geometry rather than by a refractive index in the equations.
 
  • #6
Killtech said:
So let’s start with the premise that we want to make a 3D game where the protagonist is blind and relies entirely on his sharp sense of hearing for orientation, like Marvel’s Daredevil – or more realistically a bat.
A bat doesn't just use passive hearing but mainly an active sonar. That makes a difference in what kind of information you get. The information a bat uses could be visualized with a depth image, modulated by how much sound is reflected from different surfaces. You could also add passively received sounds and/or the Doppler effect using color.

Killtech said:
So the objective is to visualize/render the world as it is perceived from the protagonist’s perspective. In terms of a game it would be a huge win to depict it even just 80% accurately (the number is symbolic), but enough to highlight the real uniqueness of that view.
For a human it's impossible to know how other humans actually perceive the world, let alone how some hypothetical creature does. The only thing you can aim for is to present only the information that certain sensors could provide, but there are always multiple different ways to render the same information.
 
  • #7
A.T. said:
A bat doesn't just use passive hearing but mainly an active sonar. That makes a difference in what kind of information you get. The information a bat uses could be visualized with a depth image, modulated by how much sound is reflected from different surfaces. You could also add passively received sounds and/or the Doppler effect using color.
I admit that using a bat as an example was a bad idea in hindsight, since the way it uses acoustics for orientation, as ingenious and refined as it is, is still quite primitive compared to the capabilities of the acoustic eye we want to employ.

If I remember correctly, a bat uses the time difference of arriving sounds to determine distances, where the sounds it hears are mainly the reflections of the original sound it emitted – hence an active sonar. The problem with that is that it is nothing like how an acoustic eye works. After all, a bat has just two ears, making for a total spatial resolution of two pixels which can however catch a wide range of "base colors" (frequencies). So it has practically no spatial resolution, which is why it needs to employ other tricks to make the minimal amount of available data usable for orientation at all.

By contrast, our acoustic eye has by premise a 4K resolution (with each pixel sensitive to only 3 different frequency ranges, i.e. "colors" – that's a restriction of using a 3D engine; it doesn't have more base colors) and works just like the human eye does, but for audio waves. Yes, the materials needed to achieve the proper acoustic lensing and high resolution are maybe utopian… but we are doing a game here, so anything goes. So the way it perceives distances is actually unlike any of the known methods for sound-based positioning, because it in fact uses just projective geometry for that – i.e. distant objects simply appear smaller (so if you knew the actual size of an object you could derive its distance from the image you get; or if you had a pair of acoustic eyes…).

Anyhow, we are not interested in measuring distances at all. We only want to visualize them, and in order to do that we need to reverse engineer the proper positioning (for a 3D engine using projective geometry). So don't focus too much on any known sonic imaging methods, because we are going in the reverse direction.
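As a toy illustration of that reverse direction (my own sketch with made-up numbers, not any engine's API): keep each point's direction from the eye, but replace its radial depth by the perceived depth ##c_0\tau## before handing it to the usual perspective projection.

Code:
import numpy as np

C0 = 343.0  # reference speed of sound in m/s, assumed constant for this toy example

def perceived_position(p_world, eye, travel_time):
    """Same direction as seen from the eye, but the radial depth is replaced
    by the perceived depth c0 * travel_time; the result can go straight into
    the ordinary perspective projection of a 3D engine."""
    d = p_world - eye
    direction = d / np.linalg.norm(d)
    return eye + direction * (C0 * travel_time)

# Made-up example: a point 100 m away whose sound needed 0.35 s to arrive
# (a detour or a slower medium), so it is rendered farther away than it is.
eye = np.array([0.0, 0.0, 0.0])
point = np.array([100.0, 0.0, 0.0])
tau = 0.35  # seconds, hypothetical travel time
print(perceived_position(point, eye, tau))  # -> [120.05   0.     0.  ]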

A.T. said:
For a human it's impossible to know how other humans actually perceive the world, let alone how some hypothetical creature does. The only thing you can aim for is to present only the information that certain sensors could provide, but there are always multiple different ways to render the same information.
This may be an interesting discussion in terms of philosophy, but really, when making a game, and for art in general, this isn't such a big concern – especially given the hypothetical protagonist. That's why we defined exactly the premise of what we want to visualize, and that is written in the original post: the world seen through an acoustic eye. And as you said yourself, presenting the information that certain sensors provide is the best we can aim for… which is exactly what I meant with the premise of the acoustic eye. I don't understand why this isn't clear from my original post and why I have to explain it here again. Sorry, English is my third language so I may not have worded my original post well enough; it might help to point out which parts weren't clear or were misleading.
 
  • #8
Killtech said:
By contrast, our acoustic eye has by premise a 4K resolution (with each pixel sensitive to only 3 different frequency ranges, i.e. "colors"
Regarding the active sonar:
You can render a greyscale depth image with any resolution you want; you don't have to stick to what a bat achieves. You can use the colors to show extra info: Doppler shift from relative motion, or passively received sounds from other sound sources.

Killtech said:
That's why we defined exactly the premise of what we want to visualize, and that is written in the original post: the world seen through an acoustic eye.
Regarding the passive eye equivalent:
A passive acoustic eye requires constant "illumination" of the world by some other sound sources. To "see" a wall, something else has to continuously make sound, that gets reflected from the wall. Is this your idea of how it is supposed to work?
 
  • #9
A.T. said:
Regarding the passive eye equivalent:
A passive acoustic eye requires constant "illumination" of the world by some other sound sources. To "see" a wall, something else has to continuously make sound, that gets reflected from the wall. Is this your idea of how it is supposed to work?
Yeah, you could take a constant sine generator for that – since the sound is depicted visually, you are not likely to develop tinnitus from it ;).

Then again, our normal world isn't that silent to begin with. Sounds are generated by various sources all the time and get scattered by everything. Maybe not as much as happens with light, but if you happen to have unrealistically sensitive detectors and could magically tell sound pressure changes apart from thermal fluctuations (we could assume the game world to be much colder in terms of thermal physics, maybe), you might not even need a "flashlight" (sine-sound generator). Instead, if there are enough natural sound sources, the entirety of scattered sound waves should create something equivalent to what DirectX calls "ambient light". I figure this should be the case when you are in the middle of a city at rush hour, or when it's simply raining (yes, rain will "light up" all the surfaces it hits) – the sheer number of different sources and scatterings should make for a bright enough picture without the need to add additional sound.

Anyhow, depicting all of this is one thing. Simulating some of the curious effects that come along with it is another. And this is kind of my starting point: I need a framework that can help me describe it properly, in particular since sound waves travel noticeably slowly… the information they carry about the object they last scattered from is always delayed/from the past – the longer the travel time, the longer the time gap. To express what that means for an acoustic observer, it might be easiest to just think of what happens when he's watching clocks at different locations and in moving frames. To make the discussion easier we can isolate this particular effect from how we want to visualize it – so let's go with the much simpler case of "a radio where someone is constantly shouting the current time of day" instead of a clock.

Simply put, if that audio clock is 20 km away, it will be observed to be a minute delayed, no? (I rounded the speed of sound a bit for convenience.) And if it starts moving towards me such that in 10 minutes it arrives at my position, at which point its time delay will be zero relative to my clock (I will hear its sound the instant it is emitted)… then that clock cannot have been observed to run at the same rate as mine – in terms of my acoustic perception/observation, that is.
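A quick numerical check of this example (my numbers, taking the speed of sound as 333.3 m/s so that 20 km corresponds to exactly one minute, and assuming the clock moves at constant speed straight toward the observer). The resulting rate factor ##1/(1 - v/c_s) \approx 1.11## is just the classical Doppler factor, applied to the announced time rather than to the pitch of a tone:

Code:
C_S = 1000.0 / 3.0     # speed of sound rounded so that 20 km of travel takes 60 s
D0 = 20_000.0          # initial distance of the "audio clock" in metres
V = D0 / 600.0         # approach speed chosen so that it arrives after 10 minutes

def heard_clock_time(t_observer):
    """Announced time heard at observer time t: the emission time t_e that
    solves t = t_e + (D0 - V * t_e) / C_S, i.e. whose sound arrives right now."""
    return (t_observer - D0 / C_S) / (1.0 - V / C_S)

for t in (0.0, 300.0, 600.0):
    print(f"my clock {t:5.0f} s  ->  heard clock {heard_clock_time(t):7.1f} s")

print("rate factor:", 1.0 / (1.0 - V / C_S))  # ~1.11: the heard clock runs ~11% fast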

It's a trivial effect maybe, but I know of no framework that really bothers to describe it properly. And it is kind of relevant, because it means that visually everything happening on moving objects happens either in time lapse or in slow motion – depending on the direction of movement. But as this kind of stuff is my focus, I am at a loss as to how any of the existing frameworks for sound imaging can help me with it – at least so far I haven't seen anything helpful in the answers in this thread.
 
  • #10
Killtech said:
Simply put, if that audio clock is 20 km away, it will be observed to be a minute delayed, no?
...
It's a trivial effect maybe, but I know of no framework that really bothers to describe it properly.
To simulate signal delay you need "4D raytracing" or "spacetime ray tracing".
 
  • #11
A.T. said:
To simulate signal delay you need "4D raytracing" or "spacetime ray tracing".
Nope, that is insanely performance-intensive and out of the question.

Do you really think that full 4D raytracing is required to calculate something as trivial as what time the observer will perceive the moving audio clock to show in that simple example? Sometimes it's useful to consider what you really need to know and find the simplest way to get it, rather than starting with the most general approach.

Instead of time-dependent raytracing, it's far more efficient to employ a trick: don't calculate things when they really happen but only when the information reaches the observer. This means that when calculating physics for distant objects, you don't advance their current state but rather their past one. Their past state is also exactly what you need to render to the screen. So you really just need the information about every object at a single point in time – though it's a different one for every object.
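A minimal sketch of that bookkeeping (my own toy code in one spatial dimension, not any engine's actual API): each object's physics is only ever advanced up to its own retarded time, i.e. the emission time whose sound reaches the observer in the current frame.

Code:
SPEED_OF_SOUND = 343.0  # m/s

class SimObject:
    """Toy object in one spatial dimension; the observer sits at x = 0."""
    def __init__(self, position, velocity):
        self.position = position
        self.velocity = velocity
        self.local_time = 0.0   # how far this object's own state has been advanced

    def advance(self, dt):
        # integrate this object's physics by dt of its *own* (retarded) time
        self.position += self.velocity * dt
        self.local_time += dt

def update_world(objects, observer_time):
    """Advance each object only up to the emission time whose sound reaches
    the observer now: t_ret = t_obs - |x| / c_s. Using the object's current
    (already retarded) position makes this a one-step approximation of the
    implicit retarded-time equation."""
    for obj in objects:
        target = observer_time - abs(obj.position) / SPEED_OF_SOUND
        if target > obj.local_time:
            obj.advance(target - obj.local_time)
        # the object's state is now exactly what should be rendered this frame

# Usage: a distant and a nearby object; the distant one lags further behind.
world = [SimObject(3430.0, -10.0), SimObject(34.3, -10.0)]
for frame_time in (0.5, 1.0, 1.5):
    update_world(world, frame_time)
    print(frame_time, [round(o.local_time, 2) for o in world])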

Anyhow, this approach works well when there is just a single observer for whom all calculations are done and never a second one to synchronize with (no multiplayer) – otherwise it gets insanely hard to keep things in sync, given that for different observers an object is at different points in time and the physics calculations have to be kept consistent. There is also a problem when there is significant interaction that happens faster than the speed of sound, but for now we can assume that's not the case. Finally there is the environment, including most importantly the sound medium (air). If it's assumed not to depend on time, the 4D raytracing can be simplified by cutting out the time dimension, giving you the framework I am looking for.

But ultimately, while this trick saves massive performance costs, it also makes it necessary to take into account that clocks run differently for different objects (frames); hence any physics simulation must be fully aware of the altered clocks. That's why it's imperative to have a framework that correctly describes the perceived flow of time in every frame.
 
  • #12
Killtech said:
Nope, that is insanely performance-intensive and out of the question.
Maybe, but computational optimization is not really a physics question.

Killtech said:
Sometimes it's useful to consider what you really need to know...
Which depends on the gameplay design and has nothing to do with physics.
 
  • #13
A.T. said:
Maybe, but computational optimization is not really a physics question.
Really? Have you ever seen what is done with numerical approaches to quantum field theory, like lattice QCD? It feels as if most of the resources spent on research in such directions go into ingenious mathematical trickery and computational optimizations to calculate anything at all from the underlying physical equations, which are otherwise impossible to approach. And when it comes to experimental physics, finding good approximations to get the math down to what is needed to extract the quantities of interest from the measured data… is kind of the bread and butter of the business. And what do you call identifying the terms with little impact in a particular context and eliminating them from the equations, if not "computational optimization"?

Unlike pure mathematics, physics is usually interested in getting actual results, i.e. numbers, and any method goes that proves reliable enough to yield the necessary accuracy. So building a simplified framework to describe physics in this particular practical context sounds exactly in line with what physics does.

Therefore, I beg to differ. I need practical solutions for a physical problem, not a mathematician's statement of "a general solution exists for the problem within this theory, but whether it's realistic to calculate any numbers with that approach in finite time is not my problem".
 
  • #14
Killtech said:
I need practical solutions ...
If you are not getting them, you might be asking in the wrong place. There are sub-forums on this site dedicated to computation.
 
  • #15
A.T. said:
If you are not getting them, you might be asking in the wrong place. There are sub-forums on this site dedicated to computation.
The thread was moved here by a mod. So far the best mathematical approach I had was to model everything in the differential geometry of a Riemannian manifold with a special custom metric – and then find a Euclidean approximation that looks mostly the same when projected onto the view sphere (i.e. the projective geometry a 3D engine uses). That's why I originally posted it in the relativity section, since I thought most people familiar with this kind of calculation would be there.

Anyhow, I used an actual metric and left time as a separate dimension, rather than a pseudo-metric incorporating time as relativity does. I wasn't sure whether that makes things easier or more difficult. My biggest issue so far was calculating the pullback of physical laws, like Newton's equations of motion: the pulled-back laws must undo the distorted representation of time in order to remain equivalent to their natural formulation in the actual real metric. That is part of the modelling before any calculation can be done, and is perhaps also not well placed in the computation-related section either. Instead I made a separate thread on that in the math/differential geometry section, but that subforum isn't very active and I didn't get much response apart from that it should more or less work. Again, I suppose there is no place this topic fits perfectly, as it intertwines quite different areas into one subject.
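For what it's worth, the purely spatial part of that pullback is the standard statement that Newton's second law keeps its form in arbitrary coordinates only if one uses the connection of the pulled-back real (flat) metric ##\phi^*\delta## rather than that of the perceived metric:

$$m\left(\frac{d^2 x^k}{dt^2} + \Gamma^k_{\,ij}\,\frac{dx^i}{dt}\frac{dx^j}{dt}\right) = F^k, \qquad \Gamma^k_{\,ij} \text{ taken from } \phi^*\delta,$$

and the extra difficulty in my setup is that the reparametrized (perceived) time adds further terms on top of these Christoffel symbols.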
 
  • #16
Killtech said:
Anyhow, I used an actual metric and left time as a separate dimension, rather than a pseudo-metric incorporating time as relativity does. I wasn't sure whether that makes things easier or more difficult. My biggest issue so far was calculating the pullback of physical laws, like Newton's equations of motion: the pulled-back laws must undo the distorted representation of time in order to remain equivalent to their natural formulation in the actual real metric.
The first question I would ask myself before going through all that trouble: are you sure that the result will create an interesting gameplay experience, and not just an annoying and confusing distortion for the player?
 
  • #17
A.T. said:
The first question I would ask myself before going through all that trouble: are you sure that the result will create an interesting gameplay experience, and not just an annoying and confusing distortion for the player?
Nope, I am not. But that's the point of scouting out the avenues of approach: to figure out what tools (technical, artistic and so on) would be at our disposal for the final product, so that a broader pool of people can gain an understanding of what that means and can judge what this could be used for.

One aspect of this is therefore to understand (or at least get a glimpse of) what the physical reality of the whole thing actually is.

And of course there is personal curiosity, both in terms of the mathematics and the physics. After all, this is a very unusual perspective to take, and I have learned that viewing things from an entirely different angle can often reveal a lot of curiosities you would otherwise have missed. This is true even in a much more general context, when you try to understand the views/opinions of people you normally totally disagree with. It often comes with those little moments of enlightenment when you expand your understanding, and you know… that's fun all on its own. Finally, walking along well-travelled roads gets a little boring, so the chance to explore new paths is always an exciting prospect – even if they turn out to be dead ends. At least that's something you learn from the journey.
 
  • #18
Killtech said:
One aspect of this is therefore to understand (or at least get a glimpse of) what the physical reality of the whole thing actually is.
Are you familiar with this game?

http://gamelab.mit.edu/games/a-slower-speed-of-light/

In the video around 0:20 the distorted Gate looks like the signal delay effects, similar to what you get from the 4D raytracing used for the animations on this site:

https://www.spacetimetravel.org/aur/aur.html

The engine for the MIT-game is open source:

http://gamelab.mit.edu/research/openrelativity/

Ideally, you would just have to get rid of the relativistic effects, while using the signal delay part.
 
  • #19
A.T. said:
Are you familiar with this game?

http://gamelab.mit.edu/games/a-slower-speed-of-light/

In the video around 0:20 the distorted Gate looks like the signal delay effects, similar to what you get from the 4D raytracing used for the animations on this site:

https://www.spacetimetravel.org/aur/aur.html

The engine for the MIT-game is open source:

http://gamelab.mit.edu/research/openrelativity/

Ideally, you would just have to get rid of the relativistic effects, while using the signal delay part.
Wow, thank you! That's great! No, I didn't know of it, though I have seen some pictures in a documentary that visualized some of those effects. I must absolutely try it and look at how they wrote their engine. So 4D raytracing might be a viable alternative after all.

I'm just not entirely sure how much of this directly corresponds to sonic waves and which parts are different. It's somewhat confusing that Lorentz invariance as a mathematical concept also technically applies to the acoustic wave equation – just for a different Lorentz group associated with the corresponding ##c## of that equation. Sure, having a medium makes for a big difference, though, because once it is in motion the equation gets a lot more complicated, rendering the one-way speed of sound different from the two-way one… then again, this is a weird problem for light too, for different reasons. And I must admit that the physics of sound waves wasn't a topic I learned much about so far, so I am not entirely clear about the full extent of the role of the medium.
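As far as I understand it: in a uniform medium at rest the linearized acoustic wave equation

$$\frac{1}{c_s^2}\frac{\partial^2 p}{\partial t^2} - \nabla^2 p = 0$$

is form-invariant under Lorentz-type boosts built with ##c_s## in place of ##c##. But such a boost physically amounts to setting the medium into uniform motion ##\mathbf u##, which turns the equation into the convected form ##(\partial_t + \mathbf u\cdot\nabla)^2 p = c_s^2\nabla^2 p##, so the symmetry is only formal and the medium's rest frame remains preferred.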

That said, I may still come back to my original idea later, because I am not yet fully convinced that 4D raytracing is the best or most convenient way to go about this. Well, I guess one has to investigate both options to some degree to figure that out.
 
  • #20
Killtech said:
Wow, thank you! That's great! No, I didn't know of it, though I have seen some pictures in a documentary that visualized some of those effects. I must absolutely try it and look at how they wrote their engine. So 4D raytracing might be a viable alternative after all.
To be honest, I'm not sure if the MIT-Game gets the signal delay right in real time. The distortions could also be from simulating aberration, which affects only where you see the objects. Whether they actually show an old state of the object is hard to say with static objects that don't change. You would have to look into the code of the engine or the documentation / papers associated with it.
 
  • #21
A.T. said:
To be honest, I'm not sure if the MIT-Game gets the signal delay right in real time. The distortions could also be from simulating aberration, which affects only where you see the objects. Whether they actually show an old state of the object is hard to say with static objects that don't change. You would have to look into the code of the engine or the documentation / papers associated with it.
That's what I intend to do, since it's open source. But such things aren't a simple "read", so it'll take a little time to get through it.
 
  • #22
Killtech said:
That's what I intend to do, since it's open source. But such things aren't a simple "read", so it'll take a little time to get through it.

Potentially also relevant:

https://github.com/HackerPoet/NonEuclidean
 

