
So basically I'm given a scenario where I'm provided data points of actual height vs. time for a vertical ball drop, its bounce up, its fall back down, and so on.

The question starts off by asking me to model the height of the bounce against time by fitting the given points to the equation f(x) = |a*sin(b(x-c))|.

I worked out a = 0.5, b = 3 and c = 0.6, and drew the graph.

I then commented on the fit of the model.

Next I am asked to create a new function s(x) by multiplying the original f(x) by an exponential function e(x), i.e. s(x) = e(x)*f(x).

This makes the bounce heights decay over time, which is more realistic.

Finally, I am asked to draw a new function h(x), formed by simply removing the value a from f(x), so h(x) = e(x)*|sin(b(x-c))|.

I am then asked: having removed a = 0.5 from f(x), why does this give a more accurate model than s(x)?

My thoughts?

I believe it is because a is less than 1, so keeping it scales every bounce height down by the constant factor a = 0.5 on top of the decay already provided by e(x). Since s(x) = a*h(x) at every x, including a shrinks the heights unnecessarily, making s(x) underestimate the data that h(x) fits.
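To illustrate the relationship numerically, here is a minimal sketch. The values a = 0.5, b = 3, c = 0.6 are from my working above; the decay rate k in e(x) = exp(-k*x) is an assumed placeholder, since the actual e(x) depends on the data:

```python
import math

# a, b, c come from the fitted model; k is a hypothetical decay rate.
a, b, c, k = 0.5, 3.0, 0.6, 0.4

def e(x):
    # Assumed exponential decay envelope: e(x) = exp(-k*x).
    return math.exp(-k * x)

def s(x):
    # s(x) = e(x) * f(x), where f(x) = |a*sin(b(x-c))|.
    return e(x) * abs(a * math.sin(b * (x - c)))

def h(x):
    # h(x) = e(x) * |sin(b(x-c))|, i.e. f(x) with the factor a removed.
    return e(x) * abs(math.sin(b * (x - c)))

# At every x, s(x) is exactly a * h(x): the factor a just scales all
# bounce heights down by half, on top of the decay from e(x).
for x in [0.6, 1.1, 1.6, 2.1]:
    print(x, s(x), a * h(x))
```

Running this shows s(x) and a*h(x) agree at every point, which is why the extra factor of a only compresses the heights rather than improving the fit.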

Thoughts guys?

Thanks in advance.