How can scalars like voltage, impedance, etc. be added like vectors?

  • #1
Aurelius120
TL;DR Summary
Confusion regarding phasors and a proof on why scalars can be added like vectors and still give correct answers. Especially in ##L-C-R## circuits connected to Alternating Current.
So we learnt about the different types of circuits and their behaviour when connected to an alternating current source.
(DC was treated as AC with zero frequency and/or infinite time period.)
For purely inductive and purely capacitive circuits we were shown the derivation and why things are the way they are.
Then came Series L-C-R circuit.

In this they said that:
For an AC source,
##V=V_0\sin(\omega t)##, which is the equation of a wave.
So to make calculations easier, "We treat them as vectors".
$$\vec{V}=\vec{V_{R}}+\vec{V_{L}}+\vec{V_{C}}$$
Then two diagrams were shown (attached images).

And finally these:
$$V=\sqrt{V_R^2+(V_L-V_C)^2}$$
$$Z=\sqrt{R^2+(X_L-X_C)^2}$$
$$\tan\phi=\frac{|V_L-V_C|}{V_R}=\frac{|X_L-X_C|}{R}$$
And then we moved to question solving.
But all of this doesn't make a scalar quantity a vector quantity (that was just an assumption to make things easier).
The best explanation I got was that it simply is the way it is, and that adding them as ##V=V_R+V_L-V_C## gives wrong answers.


So I tried to add them like the scalars they are and subsequently obtain/prove the vector equations from that. But I couldn't do it.

Can you provide an explanation and/or proof of why scalar addition (what really happens) gives the same answers as vector addition (an assumption made for ease of calculation)?

A similar thing is observed with the amplitudes of waves in superposition, and I got no explanation there either.
 
  • #3
I do get the importance of phasors (to an extent, especially in SHM).
I don't get why they work the way they do.
I don't understand why, if the voltage across the inductor is ##V_L##, across the capacitor is ##V_C##, and across the resistor is ##V_R##,
$$|\vec V|=V_0\sin(\omega t)\,|\hat n|=|\vec V_R+\vec V_L+\vec V_C|$$
$$\implies V=V_0\sin(\omega t) =\sqrt{V_R^2+(V_L-V_C)^2}$$

Doesn't this violate Kirchhoff's rule?
According to which ##V=V_0\sin(\omega t)=V_L+V_C+V_R##
 
  • #4
Aurelius120 said:
Doesn't this violate Kirchhoff's rule?
According to which ##V=V_0\sin(\omega t)=V_L+V_C+V_R##
Yes, but you then have to do the next step of describing ##V_L##, et al., using the impedance definitions:
##V_L(t) = L \frac{di}{dt}## etc.

This is where phasor notation is really useful. Because ##\frac{d(\sin(\omega t))}{dt} = \omega \cos(\omega t)= \omega \sin(\omega t + \pi/2)##, when you express this with complex numbers (i.e. vectors), it appears as a ##90^\circ## phase shift, or multiplication by ##i##.

It's not *just* vectors, it's 2-D vectors. You can think of this as just a mathematical representation of an ordered pair of real numbers. But there are a couple of magical things that make everyone want to do this. First, a 2-D vector can also be described as a complex number (not true for 3-D etc.), which makes calculations easy. Second, if you represent everything as sinusoids (think Fourier & Laplace...) operators like integration and differentiation are simple changes to the gain and phase of the sinusoids. i.e. sinusoids in the domain are mapped to sinusoids in the range with pretty simple rules. In particular, the frequency isn't changed. This allows a "phasor" notation where everything is expressed as gain and phase changes to an underlying oscillation.
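As a minimal sketch of how this plays out for the series circuit (using ##j## for the imaginary unit, as is usual in circuit analysis, to avoid clashing with the current ##i##): suppose the current is ##i(t)=\operatorname{Re}\{I e^{j\omega t}\}##, where ##I## is a complex phasor. Then the component equations map to
$$v_R = Ri \;\leftrightarrow\; RI,\qquad v_L = L\frac{di}{dt} \;\leftrightarrow\; j\omega L\,I,\qquad v_C = \frac{1}{C}\int i\,dt \;\leftrightarrow\; \frac{I}{j\omega C},$$
so Kirchhoff's voltage law applied to the phasors gives
$$V = \left[R + j\left(\omega L - \frac{1}{\omega C}\right)\right]I.$$
The magnitude of the bracket is ##\sqrt{R^2+(X_L-X_C)^2}## with ##X_L=\omega L## and ##X_C=1/(\omega C)##, and its argument is ##\phi## with ##\tan\phi=(X_L-X_C)/R##, which are exactly the 'vector-like' formulas from the original post.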

Your underlying question is good. Yes, they are just voltages (currents, impedance, etc.). But it sure is convenient to use a mathematical model that allows for easy comprehension and calculation. There is nothing in phasor notation that can't be done with rigorous, explicit, calculus, but no one wants to actually do that very often. We prefer simple algebra to memorized trig identities. This is particularly useful when you move on to Laplace transform solutions to the resulting DEs, which also work really well for transient problems and non-sinusoidal conditions.

PS: I think you'll really understand it if you go through the complete rigorous solution of a simple circuit DE keeping all of the ##\sin(\omega t)## terms, then use trig identities to express the answer in the simplest form ##A\sin(\omega t + \phi)##. Frankly, it's a pain compared to phasors.
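For reference, the trig identity that does most of the work in that rigorous approach is
$$a\sin(\omega t)+b\cos(\omega t)=\sqrt{a^2+b^2}\,\sin(\omega t+\phi),\qquad \tan\phi=\frac{b}{a},$$
and the ##\sqrt{a^2+b^2}## here is where the 'vector-like' addition of amplitudes ultimately comes from.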
 
  • #5
DaveE said:
Yes, but you then have to do the next step of describing ##V_L##, et al., using the impedance definitions:
##V_L(t) = L \frac{di}{dt}## etc.
So are ##V_L## and ##V_C## the voltages across ##L## and ##C##, or are their sine components the voltages across ##L## and ##C##?
 
  • #6
DaveE said:
This is where phasor notation is really useful. Because ##\frac{d(\sin(\omega t))}{dt} = \omega \cos(\omega t)= \omega \sin(\omega t + \pi/2)##, when you express this with complex numbers (i.e. vectors), it appears as a ##90^\circ## phase shift, or multiplication by ##i##.
How do we know when to use phasors?

Do they work for all sinusoidal waves?

How did they know that if the source is a sinusoidal wave, then ##V_L## and ##V_C## will also be sinusoids, just with different phases (and not logarithmic or exponential or some other weird function)?
 
  • #7
Aurelius120 said:
How do we know when to use phasors?
Linear circuits with sinusoidal excitation, unless you prefer a different analysis approach. You don't HAVE to use them.

Aurelius120 said:
Do they work for all sinusoidal waves?
Yes. Sinusoids can always be expressed in the form ##f(t)=A\sin(\omega t + \phi)##, for example.

Aurelius120 said:
How did they know that if the source is a sinusoidal wave, then ##V_L## and ##V_C## will also be sinusoids, just with different phases (and not logarithmic or exponential or some other weird function)?
This is always true for linear circuits, where the components' V-I characteristics are defined by simple linear equations, like ##v(t) = Ri(t)##, ##v(t)=L\frac{di}{dt}##, etc. It absolutely doesn't work for non-linear circuits, like ##v(t) = a_0+a_1i(t)+a_2i^2(t)+\dots##. In those cases you will generate additional harmonic frequencies and/or sum and difference frequencies that aren't necessarily present in the excitation waveform(s).
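As a quick illustration of that harmonic generation (a made-up quadratic device, just to show the effect): if ##i(t)=I_0\sin(\omega t)##, then the ##a_2## term gives
$$a_2 i^2(t)=a_2 I_0^2\sin^2(\omega t)=\frac{a_2 I_0^2}{2}\bigl(1-\cos(2\omega t)\bigr),$$
i.e. a DC offset plus a component at ##2\omega##, neither of which was present in the excitation. The linear terms can never do that.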

The other great thing about linear circuits is superposition. By definition we know ##f(ax(t)+by(t))=af(x(t))+bf(y(t))##, where ##a, b## are constants.
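For example, the inductor relation is linear in exactly this sense: ##L\frac{d}{dt}\bigl[a\,i_1(t)+b\,i_2(t)\bigr]=a\,L\frac{di_1}{dt}+b\,L\frac{di_2}{dt}##, so the response to a weighted sum of inputs is the same weighted sum of the individual responses.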
 
  • #8
Aurelius120 said:
Do they work for all sinusoidal waves?
Also, if your excitation is composed of multiple frequencies, like ##f(t)=a_0+a_1\sin(\omega_1 t + \phi_1)+a_2\sin(\omega_2 t + \phi_2)+\dots##, you would want to use superposition to solve separately for each term (each frequency) and add the solutions together at the end. Phasors work for the solution of each term. But you have to go back to the original form at the end, with a solution like ##g(f(t))=b_0+b_1\sin(\omega_1 t + \theta_1)+b_2\sin(\omega_2 t + \theta_2)+\dots##.
In this case ##g(a_n\sin(\omega_n t + \phi_n)) = b_n\sin(\omega_n t + \theta_n)##.

But, again, only for linear circuits!
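As a concrete sketch of that procedure for the series RLC circuit discussed above: if the source voltage is ##f(t)=a_1\sin(\omega_1 t)+a_2\sin(\omega_2 t)##, you solve each term with its own phasor and the current comes out as
$$i(t)=\frac{a_1}{|Z(\omega_1)|}\sin(\omega_1 t-\varphi_1)+\frac{a_2}{|Z(\omega_2)|}\sin(\omega_2 t-\varphi_2),$$
with ##|Z(\omega_n)|=\sqrt{R^2+(\omega_n L-1/(\omega_n C))^2}## and ##\tan\varphi_n=(\omega_n L-1/(\omega_n C))/R##: a different gain and phase shift at each frequency, but no new frequencies.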
 
  • #9
Aurelius120 said:
TL;DR Summary: Confusion regarding phasors and a proof on why scalars can be added like vectors and still give correct answers. Especially in ##L-C-R## circuits connected to Alternating Current.

For an AC source,
##V=V_0\sin(\omega t)##, which is the equation of a wave.
Not strictly a wave because there is no distance involved.
The equation of a wave has the form
##V = V_0\sin(\omega t - kx)##
Without the ##kx##, it's just an oscillation in time. Perhaps the term 'waveform' would make things OK.
 
  • #10
Aurelius120 said:
Doesn't this violate Kirchhoff's rule?
According to which ##V=V_0\sin(\omega t)=V_L+V_C+V_R##
No. Each of the three terms is the value of a function of ##t##, and each value varies with ##t##. There is no single value of ##t## at which all three are at their maximum, for example.
 
  • #11
Mister T said:
No. Each of the three terms is the value of a function of ##t##. Each value varies with ##t##. There is no value of ##t## for which those three values are a maximum, for example.
Yeah, but what I meant was:
How can ##V=V_L+V_R+V_C=\sqrt{V_R^2+(V_L-V_C)^2}##?
##\implies (V_L+V_R+V_C)^2=V_R^2+(V_L-V_C)^2## for all values of ##t##

For three numbers ##a, b, c##, how is it possible that ##(a+b+c)^2=a^2+(b-c)^2##?
 
  • #12
sophiecentaur said:
Not strictly a wave because there is no distance involved.
The equation of a wave has the form
##V = V_0\sin(\omega t - kx)##
Without the ##kx##, it's just an oscillation in time. Perhaps the term 'waveform' would make things OK.
But it's represented by a wavy graph and called an AC wave?
 
  • #13
Aurelius120 said:
But it's represented by a wavy graph and called an AC wave?
It's a terminology thing.

In physics, 'wave' has a special meaning - it's a pattern moving through space and caused by oscillations. E.g. a sound wave, a water wave or a light wave.

The 'wavy' shape of a graph of (for example) ##V = V_0 \cos (\omega t + \phi)## is called 'sinusoidal'. A sinusoidal voltage-time graph is often called a sinusoidal waveform.

Also, we don't refer to an 'AC wave'. The term 'AC signal' would be better.
 
  • #14
Aurelius120 said:
Yeah, but what I meant was:
How can ##V=V_L+V_R+V_C=\sqrt{V_R^2+(V_L-V_C)^2}##?
##\implies (V_L+V_R+V_C)^2=V_R^2+(V_L-V_C)^2## for all values of ##t##
The problem is caused by using the symbols ##V_R, V_L## and ##V_C## with two different meanings. Hope the following isn't too long...

We need to distinguish between instantaneous values and peak values (amplitudes). (If preferred, we could consider RMS values rather than amplitudes but let's use amplitudes here.)

For R, L and C in series with an applied voltage ##V(t) = V_0 \sin(\omega t)##, Kirchhoff's (2nd) law tells us that at any time ##t##:
##V(t) = V_R (t) + V_L (t) + V_C (t)##
No problem.

But each component has its own sinusoidal voltage across it. Each voltage has its own amplitude and phase:
##V_R(t) = V_{0,R} \sin (\omega t + \phi_R)##
##V_L(t) = V_{0,L} \sin (\omega t + \phi_L)##
##V_C(t) = V_{0,C} \sin (\omega t + \phi_C)##

Note that the three amplitudes don’t add to equal the amplitude of the supply. It is not true that ##V_0 = V_{0,R} + V_{0,L} +V_{0,C}## because of the different phases.

When you do the maths, the correct relationship between the amplitudes is:
##V_0 = \sqrt{V_{0,R}^2 + (V_{0,L}-V_{0,C})^2}##

The confusion arises when the above equation is abbreviated to
##V = \sqrt {V_R^2 + (V_L- V_C)^2}## without explanation.
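To sketch 'the maths' referred to above: take the common current as the phase reference, so that for ideal components ##\phi_R = 0##, ##\phi_L=+\pi/2## and ##\phi_C=-\pi/2##, and the source is ##V(t)=V_0\sin(\omega t+\phi)##. Then
$$V_R(t)+V_L(t)+V_C(t)=V_{0,R}\sin(\omega t)+(V_{0,L}-V_{0,C})\cos(\omega t),$$
using ##\sin(\omega t\pm\pi/2)=\pm\cos(\omega t)##. The identity ##a\sin(\omega t)+b\cos(\omega t)=\sqrt{a^2+b^2}\,\sin(\omega t+\phi)## with ##\tan\phi=b/a## then gives
$$V_0=\sqrt{V_{0,R}^2+(V_{0,L}-V_{0,C})^2},\qquad \tan\phi=\frac{V_{0,L}-V_{0,C}}{V_{0,R}},$$
which is the 'vector-like' addition: the in-phase and quadrature parts of the sinusoids add independently, and their amplitudes combine like the two components of a 2-D vector.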
 
  • #15
And when we say current in a circuit, we refer to:
$$I_0=\frac{V_0}{Z}=\frac{V_{L_0}}{X_L}=\frac{V_{C_0}}{X_C}=\frac{|V_{L_0}-V_{C_0}|}{|X_L-X_C|}=\frac{V_{R_0}}{R}$$
 
  • #16
Aurelius120 said:
And when we say current in a circuit, we refer to:
$$I_0=\frac{V_0}{Z}=\frac{V_{L_0}}{X_L}=\frac{V_{C_0}}{X_C}=\frac{|V_{L_0}-V_{C_0}|}{|X_L-X_C|}=\frac{V_{R_0}}{R}$$
Yes - providing we're talking about a series RLC circuit of course.

And don't forget the handy result ##Z = \sqrt{R^2+ (X_L-X_C)^2}##
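A quick numerical sanity check with made-up values: take ##R=30\ \Omega##, ##X_L=100\ \Omega##, ##X_C=60\ \Omega##, so ##Z=\sqrt{30^2+(100-60)^2}=50\ \Omega##. With ##V_0=100## V we get ##I_0=V_0/Z=2## A, and the component amplitudes are ##V_{R_0}=60## V, ##V_{L_0}=200## V and ##V_{C_0}=120## V. They obviously don't add up to 100 V, but ##\sqrt{60^2+(200-120)^2}=\sqrt{3600+6400}=100## V, as required.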
 
  • #17
Just one more question:
How do we know that the current is in phase with ##V_{R_0}##? Is this derived from the differential equation or from phasors?
 
  • #18
Aurelius120 said:
How do we know that the current is in phase with ##V_{R_0}##? Is this derived from the differential equation or from phasors?
It is not correct to say that the current is in phase with ##V_{R_0}## because the current, ##I(t)##, is a time-varying quantity but ##V_{R_0}## is a constant - it is the amplitude of ##V_R(t)##.

But we can say that for an ideal resistor, the instantaneous current through it is in phase with the instantaneous voltage across it. The phase-difference between the current and voltage for a resistor is zero: ##V_R(t) = I_R(t) \times R## - which is just saying that (for an ideal resistor) Ohm's law works even for oscillating voltages and currents. (But see note* below.)

I'd say the fundamental physical reason is this. Inductors and capacitors, operating in an AC circuit, cyclically store/return energy from/to the circuit. This is what gives rise to the phase-difference between their voltage and current.

This is not true of a resistor. As a consequence there is no phase-difference between a resistor's voltage and current.

This is not 'derived from' a differential equation or phasors. It's the other way around. When you set up differential equations or phasors, you make sure they represent the actual behaviour of the resistors, inductors and capacitors.

*Note: Warning! Under some conditions (e.g. very high frequencies) real resistors do not behave as ideal ones. The same goes for other components. It's worth being aware of this.
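For reference, once the ideal component equations are written down, the ##90^\circ## phase relationships drop straight out of them. With ##i(t)=I_0\sin(\omega t)## in steady state,
$$v_L(t)=L\frac{di}{dt}=\omega L\,I_0\cos(\omega t)=\omega L\,I_0\sin\!\left(\omega t+\frac{\pi}{2}\right),$$
so the inductor voltage leads the current by ##90^\circ##, and
$$v_C(t)=\frac{1}{C}\int i\,dt=-\frac{I_0}{\omega C}\cos(\omega t)=\frac{I_0}{\omega C}\sin\!\left(\omega t-\frac{\pi}{2}\right),$$
so the capacitor voltage lags by ##90^\circ##, while ##v_R(t)=R\,i(t)## stays exactly in phase with the current.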
 
