==Arithmetic== {{see also|Complex number#Relations and operations}} ===Multiplication by a constant (scalar)=== Multiplication of the phasor <math>A e^{i\theta} e^{i\omega t}</math> by a complex constant, <math>B e^{i\phi}</math>, produces another phasor. That means its only effect is to change the amplitude and phase of the underlying sinusoid: <math display="block">\begin{align} &\operatorname{Re}\left( \left(A e^{i\theta} \cdot B e^{i\phi}\right) \cdot e^{i\omega t} \right) \\ ={} &\operatorname{Re}\left( \left(AB e^{i(\theta + \phi)}\right) \cdot e^{i\omega t} \right) \\ ={} &AB \cos(\omega t + (\theta + \phi)). \end{align}</math> In electronics, <math>B e^{i\phi}</math> would represent an [[electrical impedance|impedance]], which is independent of time. In particular it is ''not'' the shorthand notation for another phasor. Multiplying a phasor current by an impedance produces a phasor voltage. But the product of two phasors (or squaring a phasor) would represent the product of two sinusoids, which is a non-linear operation that produces new frequency components. Phasor notation can only represent systems with one frequency, such as a linear system stimulated by a sinusoid. === Addition === [[File:sumafasores.gif|thumb|right|The sum of phasors as addition of rotating vectors]] The sum of multiple phasors produces another phasor. 
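The multiplication rule above is easy to check numerically. The following Python sketch uses arbitrary illustrative values of ''A'', ''θ'', ''B'', ''φ'', ''ω'' and ''t'' (none taken from the article) and confirms that multiplying the phasor by a complex constant only rescales the amplitude and shifts the phase of the underlying sinusoid:

```python
import cmath
import math

# Illustrative values (not from the article).
A, theta = 2.0, 0.5      # phasor amplitude and phase
B, phi = 1.5, -0.3       # complex constant (e.g. an impedance)
omega, t = 100.0, 0.007  # arbitrary frequency and instant

phasor = A * cmath.exp(1j * theta)
constant = B * cmath.exp(1j * phi)

# Left-hand side: Re((A e^{i theta} * B e^{i phi}) * e^{i omega t})
lhs = (phasor * constant * cmath.exp(1j * omega * t)).real

# Right-hand side: A B cos(omega t + theta + phi)
rhs = A * B * math.cos(omega * t + theta + phi)

assert abs(lhs - rhs) < 1e-12
```

The same sketch makes the caveat concrete: squaring the time-domain signal produces terms at 2''ω'', which no single phasor can represent.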
That is because the sum of sinusoids with the same frequency is also a sinusoid with that frequency: <math display="block">\begin{align} &A_1\cos(\omega t + \theta_1) + A_2\cos(\omega t + \theta_2) \\[3pt] ={} &\operatorname{Re}\left( A_1 e^{i\theta_1}e^{i\omega t} \right) + \operatorname{Re}\left( A_2 e^{i\theta_2}e^{i\omega t} \right) \\[3pt] ={} &\operatorname{Re}\left( A_1 e^{i\theta_1}e^{i\omega t} + A_2 e^{i\theta_2} e^{i\omega t} \right) \\[3pt] ={} &\operatorname{Re}\left( \left(A_1 e^{i\theta_1} + A_2 e^{i\theta_2}\right) e^{i\omega t} \right) \\[3pt] ={} &\operatorname{Re}\left( \left(A_3 e^{i\theta_3}\right) e^{i\omega t} \right) \\[3pt] ={} &A_3 \cos(\omega t + \theta_3), \end{align}</math> where: <math display="block">A_3^2 = (A_1 \cos\theta_1 + A_2 \cos \theta_2)^2 + (A_1 \sin\theta_1 + A_2 \sin\theta_2)^2,</math> and, if we take <math display="inline"> \theta_3 \in \left[-\frac{\pi}{2}, \frac{3\pi}{2}\right]</math>, then <math>\theta_3</math> is: * <math display="inline">\sgn(A_1 \sin(\theta_1) + A_2 \sin(\theta_2)) \cdot \frac{\pi}{2},</math> if <math>A_1 \cos\theta_1 + A_2 \cos\theta_2 = 0,</math> with <math>\sgn</math> the [[signum function]]; * <math>\arctan\left(\frac{A_1 \sin\theta_1 + A_2 \sin\theta_2}{A_1 \cos\theta_1 + A_2 \cos\theta_2}\right),</math> if <math>A_1 \cos\theta_1 + A_2 \cos\theta_2 > 0</math>; * <math>\pi + \arctan\left(\frac{A_1 \sin\theta_1 + A_2 \sin\theta_2}{A_1 \cos\theta_1 + A_2 \cos\theta_2}\right),</math> if <math>A_1 \cos\theta_1 + A_2 \cos\theta_2 < 0</math>. 
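The addition identity can be verified numerically as well. In this Python sketch (illustrative amplitudes and phases, chosen arbitrarily), complex addition of the phasors yields ''A''<sub>3</sub> and ''θ''<sub>3</sub> directly, and they agree with the component formulas above; `math.atan2` reproduces the case analysis for ''θ''<sub>3</sub> up to a multiple of 2{{pi}}:

```python
import cmath
import math

# Illustrative amplitudes and phases (chosen arbitrarily).
A1, th1 = 3.0, 0.4
A2, th2 = 1.2, 2.0

# Complex (phasor) addition gives A3 and theta3 directly.
s = A1 * cmath.exp(1j * th1) + A2 * cmath.exp(1j * th2)
A3, th3 = abs(s), cmath.phase(s)

# Component formulas from the text.
x = A1 * math.cos(th1) + A2 * math.cos(th2)
y = A1 * math.sin(th1) + A2 * math.sin(th2)
assert abs(A3**2 - (x**2 + y**2)) < 1e-9
# atan2 agrees with the text's case analysis up to a multiple of 2*pi.
assert abs(th3 - math.atan2(y, x)) < 1e-9

# The resulting sinusoid matches the sum of the originals at any instant.
omega, t = 50.0, 0.013
total = A1 * math.cos(omega * t + th1) + A2 * math.cos(omega * t + th2)
assert abs(total - A3 * math.cos(omega * t + th3)) < 1e-9
```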
or, via the [[law of cosines]] on the [[complex plane]] (or the [[trigonometric identity#Angle sum and difference identities|trigonometric identity for angle differences]]): <math display="block"> A_3^2 = A_1^2 + A_2^2 - 2 A_1 A_2 \cos(180^\circ - \Delta\theta) = A_1^2 + A_2^2 + 2 A_1 A_2 \cos(\Delta\theta), </math> where <math>\Delta\theta = \theta_1 - \theta_2.</math> A key point is that ''A''<sub>3</sub> and ''θ''<sub>3</sub> do not depend on ''ω'' or ''t'', which is what makes phasor notation possible. The time and frequency dependence can be suppressed and re-inserted into the outcome as long as the only operations used in between are ones that produce another phasor. In [[angle notation]], the operation shown above is written: <math display="block">A_1 \angle \theta_1 + A_2 \angle \theta_2 = A_3 \angle \theta_3.</math> Another way to view addition is that two '''vectors''' with coordinates {{math|[''A''<sub>1</sub> cos(''ωt'' + ''θ''<sub>1</sub>), ''A''<sub>1</sub> sin(''ωt'' + ''θ''<sub>1</sub>)]}} and {{math|[''A''<sub>2</sub> cos(''ωt'' + ''θ''<sub>2</sub>), ''A''<sub>2</sub> sin(''ωt'' + ''θ''<sub>2</sub>)]}} are [[vector (geometric)#Addition and subtraction|added vectorially]] to produce a resultant vector with coordinates {{math|[''A''<sub>3</sub> cos(''ωt'' + ''θ''<sub>3</sub>), ''A''<sub>3</sub> sin(''ωt'' + ''θ''<sub>3</sub>)]}} (see animation). [[Image:destructive interference.png|thumb|right|Phasor diagram of three waves in perfect destructive interference]] In physics, this sort of addition occurs when sinusoids [[interference (wave propagation)|interfere]] with each other, constructively or destructively. The static vector concept provides useful insight into questions like this: "What phase difference would be required between three identical sinusoids for perfect cancellation?" In this case, simply imagine taking three vectors of equal length and placing them head to tail such that the last head matches up with the first tail. 
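Both the law-of-cosines form of ''A''<sub>3</sub> and the three-sinusoid cancellation question posed above can be checked numerically. A minimal Python sketch (illustrative values, not from the article):

```python
import math

# Illustrative amplitudes and phases (chosen arbitrarily).
A1, th1 = 3.0, 0.4
A2, th2 = 1.2, 2.0
dth = th1 - th2

# A3^2 from the component sums ...
x = A1 * math.cos(th1) + A2 * math.cos(th2)
y = A1 * math.sin(th1) + A2 * math.sin(th2)
# ... equals the law-of-cosines form A1^2 + A2^2 + 2 A1 A2 cos(dth).
assert abs((x**2 + y**2)
           - (A1**2 + A2**2 + 2 * A1 * A2 * math.cos(dth))) < 1e-12

# Three identical sinusoids spaced 120 degrees apart cancel at every instant.
omega = 2 * math.pi  # arbitrary frequency
for k in range(50):
    t = 0.02 * k
    s = (math.cos(omega * t)
         + math.cos(omega * t + 2 * math.pi / 3)
         + math.cos(omega * t - 2 * math.pi / 3))
    assert abs(s) < 1e-12
```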
The shape that satisfies these conditions is an equilateral [[triangle]], so the angle between each phasor and the next is 120° ({{frac|2{{pi}}|3}} radians), corresponding to one third of a wavelength, {{frac|{{var|λ}}|3}}. The phase difference between successive waves must therefore also be 120°, as is the case in [[three-phase power]]. In other words, this shows that: <math display="block">\cos(\omega t) + \cos\left(\omega t + \frac{2\pi}{3}\right) + \cos\left(\omega t - \frac{2\pi}{3}\right) = 0.</math> In the example of three waves, the phase difference between the first and the last wave is 240°, while for two waves destructive interference happens at 180°. In the limit of many waves, the phasors must form a circle for destructive interference, so that the first phasor is nearly parallel with the last. This means that for many sources, destructive interference happens when the first and last wave differ by 360 degrees, a full wavelength <math>\lambda</math>. This is why in single-slit [[diffraction]], the minima occur when [[light]] from the far edge travels a full wavelength further than the light from the near edge. As a single vector rotates anti-clockwise, its tip traces one complete revolution of 360° or 2{{pi}} radians, representing one complete cycle. If the length of the moving tip is plotted against time at successive angular intervals, a sinusoidal waveform is traced out, starting at the left at zero time. Each position along the horizontal axis indicates the time that has elapsed since zero time, {{math|1=''t'' = 0}}. When the vector is horizontal, its tip corresponds to the angles 0°, 180°, and 360°. Likewise, when the tip of the vector is vertical, it corresponds to the positive peak value ({{math|+''A''<sub>max</sub>}}) at 90° or {{frac|{{pi}}|2}} and to the negative peak value ({{math|−''A''<sub>max</sub>}}) at 270° or {{frac|3{{pi}}|2}}.
The time axis of the waveform thus represents the angle, in degrees or radians, through which the phasor has moved. A phasor can therefore be thought of as a scaled voltage or current value of a rotating vector that is "frozen" at some instant in time ({{mvar|t}}). When analysing alternating waveforms, it is sometimes necessary to know the position of the phasor representing the alternating quantity at a particular instant, especially when comparing two waveforms, such as a voltage and a current, on the same axis. If one waveform is taken to start at time {{math|1=''t'' = 0}} with a corresponding phase angle, then a second waveform starting to the left or right of this zero point can be related to the first in phasor notation only by taking this phase difference {{var|Φ}} into account. === Differentiation and integration === The time [[derivative]] or [[integral]] of a phasor produces another phasor.{{efn|This results from <math display="inline"> \frac{d}{dt} e^{i \omega t} = i \omega e^{i \omega t},</math> which means that the [[complex exponential]] is an [[eigenfunction]] of the derivative operator.}} For example: <math display="block">\begin{align} &\operatorname{Re}\left( \frac{\mathrm{d}}{\mathrm{d}t} \mathord\left(A e^{i\theta} \cdot e^{i\omega t}\right) \right) \\ ={} &\operatorname{Re}\left( A e^{i\theta} \cdot i\omega e^{i\omega t} \right) \\ ={} &\operatorname{Re}\left( A e^{i\theta} \cdot e^{i\pi/2} \omega e^{i\omega t} \right) \\ ={} &\operatorname{Re}\left( \omega A e^{i(\theta + \pi/2)} \cdot e^{i\omega t} \right) \\ ={} &\omega A \cdot \cos\left(\omega t + \theta + \frac{\pi}{2}\right). 
\end{align}</math> Therefore, in phasor representation, the time derivative of a sinusoid becomes just multiplication by the constant <math display="inline">i \omega = e^{i\pi/2} \cdot \omega</math>. Similarly, integrating a phasor corresponds to multiplication by <math display="inline">\frac{1}{i\omega} = \frac{e^{-i\pi/2}}{\omega}.</math> The time-dependent factor, <math>e^{i\omega t},</math> is unaffected. When we solve a [[linear differential equation]] with phasor arithmetic, we are merely factoring <math>e^{i\omega t}</math> out of all terms of the equation, and reinserting it into the answer. For example, consider the following differential equation for the voltage across the [[capacitor]] in an [[RC circuit]]: <math display="block">\frac{\mathrm{d}\, v_\text{C}(t)}{\mathrm{d}t} + \frac{1}{RC}v_\text{C}(t) = \frac{1}{RC} v_\text{S}(t).</math> When the voltage source in this circuit is sinusoidal: <math display="block">v_\text{S}(t) = V_\text{P} \cdot \cos(\omega t + \theta),</math> we may substitute <math>v_\text{S}(t) = \operatorname{Re}\left( V_\text{s} \cdot e^{i \omega t} \right).</math> <math display="block">v_\text{C}(t) = \operatorname{Re}\left(V_\text{c} \cdot e^{i \omega t} \right),</math> where phasor <math>V_\text{s} = V_\text{P} e^{i\theta},</math> and phasor <math>V_\text{c}</math> is the unknown quantity to be determined. 
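The differentiation rule can be sketched numerically in Python: a central-difference derivative of the time-domain sinusoid agrees with multiplying the phasor by ''iω'' (illustrative values throughout):

```python
import cmath
import math

# Illustrative phasor and frequency (arbitrary values).
A, theta, omega = 2.0, 0.3, 40.0
phasor = A * cmath.exp(1j * theta)

def x(t):
    """Time-domain sinusoid represented by the phasor."""
    return (phasor * cmath.exp(1j * omega * t)).real

t, h = 0.05, 1e-6
# Numerical derivative of the sinusoid ...
num = (x(t + h) - x(t - h)) / (2 * h)
# ... against the phasor rule: multiply the phasor by i*omega.
rule = (1j * omega * phasor * cmath.exp(1j * omega * t)).real
assert abs(num - rule) < 1e-4

# The same result in the closed form omega*A*cos(omega*t + theta + pi/2).
closed = omega * A * math.cos(omega * t + theta + math.pi / 2)
assert abs(rule - closed) < 1e-9
```

Integration corresponds in the same way to dividing the phasor by ''iω''.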
In the phasor shorthand notation, the differential equation reduces to: <math display="block">i \omega V_\text{c} + \frac{1}{RC} V_\text{c} = \frac{1}{RC}V_\text{s}.</math> {{math proof|title=Derivation|proof= {{NumBlk2|:|<math> \frac{\mathrm{d}}{\mathrm{d}t} \operatorname{Re}\left(V_\text{c} \cdot e^{i\omega t}\right) + \frac{1}{RC}\operatorname{Re}(V_\text{c} \cdot e^{i\omega t}) = \frac{1}{RC}\operatorname{Re}\left(V_\text{s} \cdot e^{i\omega t}\right)</math> |Eq.1}} Since this must hold for all <math>t</math>, and in particular with <math>t</math> replaced by <math display="inline">t - \frac{\pi}{2\omega},</math> it follows that: {{NumBlk2|:|<math> \frac{\mathrm{d}}{\mathrm{d}t} \operatorname{Im}\left(V_\text{c} \cdot e^{i\omega t}\right) + \frac{1}{RC} \operatorname{Im}\left(V_\text{c} \cdot e^{i\omega t}\right) = \frac{1}{RC}\operatorname{Im}\left(V_\text{s} \cdot e^{i\omega t}\right). </math> |Eq.2}} It is also readily seen that: <math display="block">\begin{align} \frac{\mathrm{d}}{\mathrm{d}t} \operatorname{Re}\left( V_\text{c} \cdot e^{i\omega t} \right) &= \operatorname{Re}\left(\frac{\mathrm{d}}{\mathrm{d}t} \mathord\left( V_\text{c} \cdot e^{i\omega t} \right)\right) = \operatorname{Re}\left(i\omega V_\text{c} \cdot e^{i\omega t} \right) \\ \frac{\mathrm{d}}{\mathrm{d}t} \operatorname{Im}\left( V_\text{c} \cdot e^{i\omega t} \right) &= \operatorname{Im}\left(\frac{\mathrm{d}}{\mathrm{d}t} \mathord\left( V_\text{c} \cdot e^{i\omega t} \right) \right) = \operatorname{Im}\left(i\omega V_\text{c} \cdot e^{i\omega t} \right). 
\end{align}</math> Substituting these into {{EquationNote|Eq.1}} and {{EquationNote|Eq.2}}, multiplying {{EquationNote|Eq.2}} by <math>i,</math> and adding both equations gives: <math display="block">\begin{align} i\omega V_\text{c} \cdot e^{i\omega t} + \frac{1}{RC}V_\text{c} \cdot e^{i\omega t} &= \frac{1}{RC}V_\text{s} \cdot e^{i\omega t} \\ \left(i\omega V_\text{c} + \frac{1}{RC}V_\text{c} \right) \!\cdot e^{i\omega t} &= \left(\frac{1}{RC}V_\text{s}\right) \cdot e^{i \omega t} \\ i\omega V_\text{c} + \frac{1}{RC}V_\text{c} &= \frac{1}{RC}V_\text{s}. \end{align}</math> }} Solving for the phasor capacitor voltage gives: <math display="block">V_\text{c} = \frac{1}{1 + i \omega RC} \cdot V_\text{s} = \frac{1 - i\omega R C}{1 + (\omega RC)^2} \cdot V_\text{P} e^{i\theta}.</math> As we have seen, the factor multiplying <math>V_\text{s}</math> represents the difference in amplitude and phase of <math>v_\text{C}(t)</math> relative to <math>V_\text{P}</math> and <math>\theta.</math> In polar coordinate form, the first factor of the last expression is: <math display="block">\frac{1 - i\omega R C}{1 + (\omega RC)^2}=\frac{1}{\sqrt{1 + (\omega RC)^2}}\cdot e^{-i \phi(\omega)},</math> where <math>\phi(\omega) = \arctan(\omega RC)</math>. Therefore: <math display="block">v_\text{C}(t) =\operatorname{Re}\left(V_\text{c} \cdot e^{i \omega t} \right)= \frac{1}{\sqrt{1 + (\omega RC)^2}}\cdot V_\text{P} \cos(\omega t + \theta - \phi(\omega)).</math> === Ratio of phasors === The ratio of two phasors, such as the complex [[electrical impedance|impedance]] relating a phasor voltage to a phasor current, is not itself a phasor, because it does not correspond to a sinusoidally varying function.
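The RC-circuit solution can be sketched numerically in Python. The component values, drive frequency, and source amplitude below are illustrative (not from the article); the sketch checks the polar form of <math>V_\text{c}</math> and verifies that the recovered time-domain voltage satisfies the original differential equation:

```python
import cmath
import math

# Illustrative component values and source (not from the article).
R, C = 1.0e3, 1.0e-6           # 1 kOhm, 1 uF
omega = 2 * math.pi * 500.0    # 500 Hz drive
V_P, theta = 5.0, 0.2

Vs = V_P * cmath.exp(1j * theta)
Vc = Vs / (1 + 1j * omega * R * C)   # phasor solution

# Polar form from the text: gain 1/sqrt(1+(wRC)^2), phase lag arctan(wRC).
gain = 1 / math.sqrt(1 + (omega * R * C) ** 2)
phi = math.atan(omega * R * C)
assert abs(abs(Vc) - gain * V_P) < 1e-12
assert abs(cmath.phase(Vc) - (theta - phi)) < 1e-12

# The recovered time-domain voltage satisfies the differential equation
# dvC/dt + vC/(RC) = vS/(RC), checked with a central-difference derivative.
def vC(t):
    return (Vc * cmath.exp(1j * omega * t)).real

def vS(t):
    return (Vs * cmath.exp(1j * omega * t)).real

t, h = 1e-3, 1e-7
lhs = (vC(t + h) - vC(t - h)) / (2 * h) + vC(t) / (R * C)
rhs = vS(t) / (R * C)
assert abs(lhs - rhs) / abs(rhs) < 1e-3
```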