RagingPineapple
Hi all,
I'd appreciate a little help here. I'm making a video game, and one of the features is the ability to slow down time - a kind of Matrix-esque Bullet Time effect. I know there's a computing section, and a physics section, but I felt the issue is primarily mathematical, which is why I've put it here.
The problem is with acceleration by a fixed amount over time.
The system works this way:
The player's character contains a record of his position (we'll look at only one axis for simplicity, the Y axis). He contains a separate record for his current speed.
Internally, the flow of time is stored as a floating-point value which defaults to 1.0. A value of 0.5 means that time is going twice as slow (or half as fast, if you're a 'cup-half-full' kind of person) and 2.0 means it's going twice the speed of normal time.
The player's speed is always stored relative to normal time, but when the player is moved on the screen, he is moved by his speed in pixels per frame * the time ratio.
So if he's falling at a rate of 2px/f (pixels per frame), and the time ratio is 0.25, he will appear to move 2*0.25=0.5 pixels on each frame.
When the game is slowed down, obviously there are more frames being processed per game second, which means that timed events have their delays divided by the time ratio so that they still line up with everything else.
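In rough sketch form (Python here just to illustrate; the variable names are mine, not the actual engine's), the movement and timer scaling work like this:
Code:
time_ratio = 0.25   # 1.0 = normal time, 0.5 = half speed, 2.0 = double speed

# Speed is stored in pixels per frame at NORMAL time; on-screen movement
# each frame is that speed scaled by the time ratio.
speed_y = 2.0                        # falling at 2 px/f at normal time
position_y = 1000.0
position_y += speed_y * time_ratio   # appears to move 0.5 px this frame

# Timed events: a delay of 60 normal-time frames takes 60 / time_ratio
# real frames, so it still lines up when time is slowed down.
delay_frames = 60 / time_ratio       # 240 real frames at 25% time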
The problem is with acceleration. Acceleration must be applied on every frame so that it appears smooth, but each time I apply it to the player's speed, the scaling by the time ratio compounds from frame to frame, gradually producing inaccurate results.
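Per frame, the update is effectively this (again just a Python sketch with made-up names, not the real code):
Code:
GRAVITY = 1.0  # px per frame per frame, at normal time

def step(position, speed, time_ratio):
    # Speed is kept in normal-time units, so gravity gets scaled by the
    # time ratio here...
    speed += GRAVITY * time_ratio
    # ...and the already-scaled speed is scaled by the time ratio again
    # when it is turned into movement. This is where the compounding
    # creeps in.
    position += speed * time_ratio
    return position, speed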
It's probably clearer if I show a worked example. Below is a comparison of the same scenario at 100% of normal time and at 50%, showing how the two data sets begin to separate.
In this case, the character starts with a negative Y inertia (he's jumping upwards). Gravity is applied at 1 screen pixel per frame at normal time (0.5 pixels at 50% time).
Can anyone clear this up for me? I understand that it's basically to do with the fact that 50% applied twice on a reducing-balance basis won't come out the same as a single application of 100%, so I need to find some way of countering this so that the figures stay consistent.
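(To put numbers on it, taking the first step of the table below: at 100%, the inertia of -5 becomes -4 and the position drops by 4 pixels, from 1000 to 996. At 50%, the same amount of game time takes two frames: the inertia goes -5 → -4.5 → -4, and the position drops by 2.25 + 2 = 4.25 pixels, from 1000 to 995.75. That 0.25-pixel gap then keeps growing every step.)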
Please note that I'm not trying to simulate physics with any real-world accuracy here - I just want my little character to move consistently at different frame rates:
Code:
Gravity: 1 px per frame at normal time (0.5 px per frame at 50%)

      At 50%                    At 100%
  Inertia   Position        Inertia   Position
   -5       1000             -5       1000
   -4.5      997.75
   -4        995.75          -4        996
   -3.5      994
   -3        992.5           -3        993
   -2.5      991.25
   -2        990.25          -2        991
   -1.5      989.5
   -1        989             -1        990
   -0.5      988.75
    0        988.75           0        990
    0.5      989
    1        989.5            1        991
    1.5      990.25
    2        991.25           2        993
    2.5      992.5
    3        994              3        996
    3.5      995.75
    4        997.75           4       1000
    4.5     1000
    5       1002.5            5       1005
    5.5     1005.25
    6       1008.25           6       1011
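If it helps, this little Python sketch of the scheme above (not my actual game code) reproduces both columns and shows the gap growing:
Code:
GRAVITY = 1.0  # px per frame per frame, at normal time

def simulate(time_ratio, frames, position=1000.0, speed=-5.0):
    rows = [(speed, position)]
    for _ in range(frames):
        speed += GRAVITY * time_ratio     # gravity scaled by the time ratio
        position += speed * time_ratio    # movement scaled by the time ratio
        rows.append((speed, position))
    return rows

full = simulate(1.0, 11)    # 11 frames at 100% time
half = simulate(0.5, 22)    # 22 frames at 50% time = the same game time

# Compare positions at matching inertia values (every other 50% row)
for (v, p100), (_, p50) in zip(full, half[::2]):
    print(f"inertia {v:5.1f}:  100% -> {p100:8.2f}   50% -> {p50:8.2f}")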