{{short description|Complex number representing a particular sine wave}} {{other uses}} {{distinguish|phaser (disambiguation){{!}}phaser}} {{redirect|Complex amplitude|the quantum-mechanical concept|Complex probability amplitude}} [[Image:Wykres wektorowy by Zureks.svg|thumb|An example of series [[RLC circuit]] and respective '''phasor diagram''' for a specific {{mvar|ω}}. The arrows in the upper diagram are phasors, drawn in a phasor diagram ([[complex plane]] without axis shown), which must not be confused with the arrows in the lower diagram, which are the reference polarity for the [[voltage]]s and the reference direction for the [[electric current|current]].]] In [[physics]] and [[engineering]], a '''phasor''' (a [[portmanteau]] of '''phase vector'''<ref name="FoxBolton2002">{{cite book|author1=Huw Fox|author2=William Bolton|title=Mathematics for Engineers and Technologists|url=https://archive.org/details/mathematicsforen00foxh_204|url-access=limited|year=2002|publisher=Butterworth-Heinemann|isbn=978-0-08-051119-1|page=[https://archive.org/details/mathematicsforen00foxh_204/page/n36 30]}}</ref><ref name="Rawlins2000">{{cite book|author=Clay Rawlins|title=Basic AC Circuits|url=https://archive.org/details/basicaccircuits00mscl|url-access=limited|year=2000 |publisher=Newnes|isbn=978-0-08-049398-5|page=[https://archive.org/details/basicaccircuits00mscl/page/n134 124]|edition=2nd}}</ref>) is a [[complex number]] representing a [[sine wave|sinusoidal function]] whose [[amplitude]] {{mvar|A}} and [[Phase (waves)|initial phase]] {{mvar|θ}} are [[time-invariant system|time-invariant]] and whose [[angular frequency]] {{mvar|ω}} is fixed. It is related to a more general concept called [[analytic signal|analytic representation]],<ref name=Bracewell>Bracewell, Ron. ''The Fourier Transform and Its Applications''. McGraw-Hill, 1965. p269</ref> which decomposes a sinusoid into the product of a complex constant and a factor depending on time and frequency. The complex constant, which depends on amplitude and phase, is known as a '''phasor''', or '''complex amplitude''',<ref name="Kumar2008">{{cite book|author=K. S. Suresh Kumar|title=Electric Circuits and Networks|year=2008|publisher=Pearson Education India|isbn=978-81-317-1390-7|page=272}}</ref><ref name="ZhangLi2007">{{cite book|author1=Kequian Zhang|author2=Dejie Li|title=Electromagnetic Theory for Microwaves and Optoelectronics|year=2007|publisher=Springer Science & Business Media|isbn=978-3-540-74296-8|page=13|edition=2nd}}</ref> and (in older texts) '''sinor'''<ref name="Hindmarsh2014"/> or even '''complexor'''.<ref name="Hindmarsh2014">{{cite book|author=J. Hindmarsh|title=Electrical Machines & their Applications|year=1984|edition=4th|publisher=Elsevier|isbn=978-1-4832-9492-6|page=58}}</ref> A common application is in the steady-state analysis of an [[electrical network]] powered by [[Alternating current|time varying current]] where all signals are assumed to be sinusoidal with a common frequency. Phasor representation allows the analyst to represent the amplitude and phase of the signal using a single complex number. The only difference in their analytic representations is the complex amplitude (phasor). A linear combination of such functions can be represented as a linear combination of phasors (known as '''phasor arithmetic''' or '''phasor algebra<ref name=":02">{{Cite book |last=Gross |first=Charles A. 
|title=Fundamentals of electrical engineering |date=2012 |publisher=CRC Press |others=Thaddeus Adam Roppel |isbn=978-1-4398-9807-9 |location=Boca Raton, FL |oclc=863646311}}</ref>{{Rp|page=53}}''') and the time/frequency dependent factor that they all have in common. The origin of the term phasor rightly suggests that a (diagrammatic) calculus somewhat similar to that possible for [[Euclidean vector|vectors]] is possible for phasors as well.<ref name="Hindmarsh2014"/> An important additional feature of the phasor transform is that [[derivative|differentiation]] and [[integral|integration]] of sinusoidal signals (having constant amplitude, period and phase) corresponds to simple [[algebraic operation]]s on the phasors; the phasor transform thus allows the [[network analysis (electrical circuits)|analysis]] (calculation) of the [[alternating current|AC]] [[steady state (electronics)|steady state]] of [[RLC circuit]]s by solving simple [[algebraic equation]]s (albeit with complex coefficients) in the phasor domain instead of solving [[differential equation]]s (with [[real number|real]] coefficients) in the time domain.<ref name="Eccles2011">{{cite book|author=William J. Eccles|title=Pragmatic Electrical Engineering: Fundamentals|year=2011| publisher=Morgan & Claypool Publishers|isbn=978-1-60845-668-0|page=51}}</ref><ref name="DorfSvoboda2010">{{cite book| author1=Richard C. Dorf|author2=James A. Svoboda|title=Introduction to Electric Circuits|url=https://archive.org/details/introductiontoel00dorf_304|url-access=limited|year=2010|publisher=John Wiley & Sons|isbn=978-0-470-52157-1|page=[https://archive.org/details/introductiontoel00dorf_304/page/n680 661]|edition=8th}}</ref>{{Efn|name="ac-circuits"|Including analysis of the AC circuits.{{r|:02|pp=53}}}} The originator of the phasor transform was [[Charles Proteus Steinmetz]], working at [[General Electric]] in the late 19th century.<ref name="RobbinsMiller2012">{{cite book|author1=Allan H. Robbins|author2=Wilhelm Miller|title=Circuit Analysis: Theory and Practice|year=2012| edition=5th| publisher=Cengage Learning|isbn=978-1-285-40192-8|page=536}}</ref><ref name="YangLee2008"/> He was inspired by [[Oliver Heaviside]]; Heaviside's operational calculus was modified so that the variable p becomes jω, with the complex number j acquiring a simple meaning: a phase shift.<ref name="BasilMahon2017">{{cite book|author1=Basil Mahon|title=The Forgotten Genius of Oliver Heaviside |year=2017| edition=1st| publisher=Prometheus Books Learning|isbn=978-1-63388-331-4|page=230}}</ref> Glossing over some mathematical details, the phasor transform can also be seen as a particular case of the [[Laplace transform]] (limited to a single frequency), which, in contrast to phasor representation, can be used to (simultaneously) derive the [[transient response]] of an RLC circuit.<ref name="DorfSvoboda2010"/><ref name="YangLee2008">{{cite book|author1=Won Y. Yang|author2=Seung C. Lee|title=Circuit Systems with MATLAB and PSpice|year=2008|publisher=John Wiley & Sons|isbn=978-0-470-82240-1|pages=256–261}}</ref> However, the Laplace transform is mathematically more difficult to apply and the effort may be unjustified if only steady-state analysis is required.<ref name="YangLee2008"/> [[File:unfasor.gif|thumb|right|Fig 2. When function <math>A \cdot e^{i(\omega t + \theta)}</math> is depicted in the complex plane, the vector formed by its [[complex number|imaginary and real parts]] rotates around the origin. Its magnitude is ''A'', and it completes one cycle every 2{{pi}}/ω. 
''θ'' is the angle it forms with the positive real axis at {{math|1=''t'' = 0}} (and at {{math|1=''t'' = ''n'' 2''π''/''ω''}} for all [[integer]] values of {{mvar|n}}).]] ==Notation== {{see also|Vector notation}} '''Phasor notation''' (also known as '''angle notation''') is a [[mathematical notation]] used in [[electronics engineering]] and [[electrical engineering]]. A vector whose [[polar coordinates#Complex numbers|polar coordinates]] are magnitude <math>A</math> and [[angle]] <math>\theta</math> is written <math>A \angle \theta.</math><ref>{{cite book |last1=Nilsson |first1=James William |url=https://books.google.com/books?id=sxmM8RFL99wC |title=Electric circuits |last2=Riedel |first2=Susan A. |publisher=Prentice Hall |year=2008 |isbn=978-0-13-198925-2 |edition=8th |page=338}}, [https://books.google.com/books?id=sxmM8RFL99wC&pg=PA338 Chapter 9, page 338]</ref> <math>1 \angle \theta</math> can represent either the [[Euclidean vector|vector]] <math>(\cos \theta,\, \sin \theta)</math> or the [[complex number]] <math>\cos \theta + i \sin \theta = e^{i\theta}</math>, according to [[Euler's formula]] with <math>i^2 = -1</math>, both of which have [[magnitude (mathematics)|magnitudes]] of 1. The angle may be stated in [[degree (angle)|degrees]] with an implied conversion from degrees to [[radian]]s. For example, <math>1 \angle 90</math> would be assumed to be <math>1 \angle 90^\circ,</math> which is the vector <math>(0,\, 1)</math> or the number <math>e^{i\pi/2} = i.</math> Multiplication and division of complex numbers become straightforward in phasor notation. Given the vectors <math>v_1 = A_1 \angle \theta_1</math> and <math> v_2 = A_2 \angle \theta_2 </math>, the following is true:<ref>{{cite book |last1=Rawlins |first1=John C. |title=Basic AC Circuits |date=2000 |publisher=Newnes |isbn=9780750671736 |pages=427–452 |edition=Second |url=https://www.sciencedirect.com/science/article/pii/B9780750671736500146}}</ref> :<math> v_1 \cdot v_2 = A_1 \cdot A_2 \angle (\theta_1 + \theta_2) </math>, :<math>\frac{v_1}{v_2} = \frac{A_1}{A_2} \angle (\theta_1 - \theta_2)</math>. ==Definition== A real-valued sinusoid with constant amplitude, frequency, and phase has the form: :<math>A\cos(\omega t + \theta),</math> where only the parameter <math>t</math> is time-variant. The inclusion of an [[imaginary part|imaginary component]]: :<math>i \cdot A\sin(\omega t + \theta)</math> gives it, in accordance with [[Euler's formula]], the factoring property described in the lead paragraph: :<math>A\cos(\omega t + \theta) + i\cdot A\sin(\omega t + \theta) = A e^{i(\omega t + \theta)} = A e^{i \theta} \cdot e^{i\omega t},</math> whose real part is the original sinusoid. The benefit of the complex representation is that linear operations with other complex representations produce a complex result whose real part reflects the same linear operations with the real parts of the other complex sinusoids. Furthermore, all the mathematics can be done with just the phasors <math>A e^{i \theta},</math> and the common factor <math>e^{i\omega t}</math> is reinserted before taking the real part of the result. The function <math>Ae^{i(\omega t + \theta)}</math> is an ''[[analytic representation]]'' of <math>A\cos(\omega t + \theta).</math> Figure 2 depicts it as a rotating vector in the complex plane. 
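These rules can be checked numerically. The short sketch below uses only Python's standard <code>cmath</code> and <code>math</code> modules; the <code>phasor</code> helper is an illustrative convenience, not part of any standard API. It multiplies and divides two quantities written in angle notation, and confirms that the real part of <math>A e^{i\theta} \cdot e^{i\omega t}</math> reproduces <math>A\cos(\omega t + \theta)</math>:

<syntaxhighlight lang="python">
import cmath
import math

def phasor(A, theta_deg):
    """Return the complex number A∠θ, with θ given in degrees (illustrative helper)."""
    return A * cmath.exp(1j * math.radians(theta_deg))

v1 = phasor(2.0, 30.0)                  # 2∠30°
v2 = phasor(4.0, 45.0)                  # 4∠45°

prod = v1 * v2                          # magnitudes multiply, angles add: 8∠75°
quot = v1 / v2                          # magnitudes divide, angles subtract: 0.5∠−15°
print(abs(prod), math.degrees(cmath.phase(prod)))   # -> 8.0  75.0
print(abs(quot), math.degrees(cmath.phase(quot)))   # -> 0.5  -15.0

# The real part of A e^{iθ} · e^{iωt} reproduces A cos(ωt + θ) at any instant t.
A, theta = 3.0, math.radians(60.0)
omega, t = 2 * math.pi * 50.0, 1.23e-3
lhs = (A * cmath.exp(1j * theta) * cmath.exp(1j * omega * t)).real
rhs = A * math.cos(omega * t + theta)
assert math.isclose(lhs, rhs)
</syntaxhighlight>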
It is sometimes convenient to refer to the entire function as a ''phasor'',<ref>{{cite book |last1=Singh |first1=Ravish R |title=Electrical Networks |date=2009 |publisher=Mcgraw Hill Higher Education |isbn=978-0070260962 |page=4.13 |chapter=Section 4.5: Phasor Representation of Alternating Quantities}}</ref> as we do in the next section. ==Arithmetic== {{see also|Complex number#Relations and operations}} ===Multiplication by a constant (scalar)=== Multiplication of the phasor <math>A e^{i\theta} e^{i\omega t}</math> by a complex constant, <math>B e^{i\phi}</math>, produces another phasor. That means its only effect is to change the amplitude and phase of the underlying sinusoid: <math display="block">\begin{align} &\operatorname{Re}\left( \left(A e^{i\theta} \cdot B e^{i\phi}\right) \cdot e^{i\omega t} \right) \\ ={} &\operatorname{Re}\left( \left(AB e^{i(\theta + \phi)}\right) \cdot e^{i\omega t} \right) \\ ={} &AB \cos(\omega t + (\theta + \phi)). \end{align}</math> In electronics, <math>B e^{i\phi}</math> would represent an [[electrical impedance|impedance]], which is independent of time. In particular it is ''not'' the shorthand notation for another phasor. Multiplying a phasor current by an impedance produces a phasor voltage. But the product of two phasors (or squaring a phasor) would represent the product of two sinusoids, which is a non-linear operation that produces new frequency components. Phasor notation can only represent systems with one frequency, such as a linear system stimulated by a sinusoid. === Addition === [[File:sumafasores.gif|thumb|right|The sum of phasors as addition of rotating vectors]] The sum of multiple phasors produces another phasor. That is because the sum of sinusoids with the same frequency is also a sinusoid with that frequency: <math display="block">\begin{align} &A_1\cos(\omega t + \theta_1) + A_2\cos(\omega t + \theta_2) \\[3pt] ={} &\operatorname{Re}\left( A_1 e^{i\theta_1}e^{i\omega t} \right) + \operatorname{Re}\left( A_2 e^{i\theta_2}e^{i\omega t} \right) \\[3pt] ={} &\operatorname{Re}\left( A_1 e^{i\theta_1}e^{i\omega t} + A_2 e^{i\theta_2} e^{i\omega t} \right) \\[3pt] ={} &\operatorname{Re}\left( \left(A_1 e^{i\theta_1} + A_2 e^{i\theta_2}\right) e^{i\omega t} \right) \\[3pt] ={} &\operatorname{Re}\left( \left(A_3 e^{i\theta_3}\right) e^{i\omega t} \right) \\[3pt] ={} &A_3 \cos(\omega t + \theta_3), \end{align}</math> where: <math display="block">A_3^2 = (A_1 \cos\theta_1 + A_2 \cos \theta_2)^2 + (A_1 \sin\theta_1 + A_2 \sin\theta_2)^2,</math> and, if we take <math display="inline"> \theta_3 \in \left[-\frac{\pi}{2}, \frac{3\pi}{2}\right]</math>, then <math>\theta_3</math> is: * <math display="inline">\sgn(A_1 \sin(\theta_1) + A_2 \sin(\theta_2)) \cdot \frac{\pi}{2},</math> if <math>A_1 \cos\theta_1 + A_2 \cos\theta_2 = 0,</math> with <math>\sgn</math> the [[signum function]]; * <math>\arctan\left(\frac{A_1 \sin\theta_1 + A_2 \sin\theta_2}{A_1 \cos\theta_1 + A_2 \cos\theta_2}\right),</math> if <math>A_1 \cos\theta_1 + A_2 \cos\theta_2 > 0</math>; * <math>\pi + \arctan\left(\frac{A_1 \sin\theta_1 + A_2 \sin\theta_2}{A_1 \cos\theta_1 + A_2 \cos\theta_2}\right),</math> if <math>A_1 \cos\theta_1 + A_2 \cos\theta_2 < 0</math>. 
or, via the [[law of cosines]] on the [[complex plane]] (or the [[trigonometric identity#Angle sum and difference identities|trigonometric identity for angle differences]]): <math display="block"> A_3^2 = A_1^2 + A_2^2 - 2 A_1 A_2 \cos(180^\circ - \Delta\theta) = A_1^2 + A_2^2 + 2 A_1 A_2 \cos(\Delta\theta), </math> where <math>\Delta\theta = \theta_1 - \theta_2.</math> A key point is that ''A''<sub>3</sub> and ''θ''<sub>3</sub> do not depend on ''ω'' or ''t'', which is what makes phasor notation possible. The time and frequency dependence can be suppressed and re-inserted into the outcome as long as the only operations used in between are ones that produce another phasor. In [[angle notation]], the operation shown above is written: <math display="block">A_1 \angle \theta_1 + A_2 \angle \theta_2 = A_3 \angle \theta_3.</math> Another way to view addition is that two '''vectors''' with coordinates {{math|[''A''<sub>1</sub> cos(''ωt'' + ''θ''<sub>1</sub>), ''A''<sub>1</sub> sin(''ωt'' + ''θ''<sub>1</sub>)]}} and {{math|[''A''<sub>2</sub> cos(''ωt'' + ''θ''<sub>2</sub>), ''A''<sub>2</sub> sin(''ωt'' + ''θ''<sub>2</sub>)]}} are [[vector (geometric)#Addition and subtraction|added vectorially]] to produce a resultant vector with coordinates {{math|[''A''<sub>3</sub> cos(''ωt'' + ''θ''<sub>3</sub>), ''A''<sub>3</sub> sin(''ωt'' + ''θ''<sub>3</sub>)]}} (see animation). [[Image:destructive interference.png|thumb|right|Phasor diagram of three waves in perfect destructive interference]] In physics, this sort of addition occurs when sinusoids [[interference (wave propagation)|interfere]] with each other, constructively or destructively. The static vector concept provides useful insight into questions like this: "What phase difference would be required between three identical sinusoids for perfect cancellation?" In this case, simply imagine taking three vectors of equal length and placing them head to tail such that the last head matches up with the first tail. Clearly, the shape which satisfies these conditions is an equilateral [[triangle]], so the angle from each phasor to the next is 120° ({{frac|2{{pi}}|3}} radians), or one third of a wavelength {{frac|{{var|λ}}|3}}. So the phase difference between each wave must also be 120°, as is the case in [[three-phase power]]. In other words, this shows that: <math display="block">\cos(\omega t) + \cos\left(\omega t + \frac{2\pi}{3}\right) + \cos\left(\omega t - \frac{2\pi}{3}\right) = 0.</math> In the example of three waves, the phase difference between the first and the last wave was 240°, while for two waves destructive interference happens at 180°. In the limit of many waves, the phasors must form a circle for destructive interference, so that the first phasor is nearly parallel with the last. This means that for many sources, destructive interference happens when the first and last wave differ by 360 degrees, a full wavelength <math>\lambda</math>. This is why in single slit [[diffraction]], the minima occur when [[light]] from the far edge travels a full wavelength further than the light from the near edge. 
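The addition rule and the three-phase cancellation above can likewise be verified numerically; a minimal Python sketch (standard library only, with the same illustrative <code>phasor</code> helper) follows:

<syntaxhighlight lang="python">
import cmath
import math

def phasor(A, theta_deg):
    # A∠θ as a complex number, θ in degrees (illustrative helper)
    return A * cmath.exp(1j * math.radians(theta_deg))

# A1∠θ1 + A2∠θ2 = A3∠θ3: the sum of two phasors is another phasor.
A1, th1, A2, th2 = 3.0, 20.0, 2.0, 110.0
v3 = phasor(A1, th1) + phasor(A2, th2)
A3 = abs(v3)

# The same A3 from the law-of-cosines form, with Δθ = θ1 − θ2.
A3_check = math.sqrt(A1**2 + A2**2 + 2 * A1 * A2 * math.cos(math.radians(th1 - th2)))
assert math.isclose(A3, A3_check)

# Three identical sinusoids 120° apart cancel: their phasor sum is (numerically) zero.
total = sum(phasor(1.0, k * 120.0) for k in range(3))
assert abs(total) < 1e-12
</syntaxhighlight>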
As a phasor rotates in an anti-clockwise direction, its tip traces out one complete revolution of 360° or 2{{pi}} radians, representing one complete cycle. If the projection of the moving tip is plotted against time, a sinusoidal waveform is traced out, beginning at the value the phasor takes at zero time. Each position along the horizontal axis of such a plot indicates the time that has elapsed since zero time, {{math|1=''t'' = 0}}. When the phasor is horizontal, its tip corresponds to the angles 0°, 180°, and 360°. Likewise, when the tip of the phasor is vertical it corresponds to the positive peak value ({{math|+''A''<sub>max</sub>}}) at 90° or {{frac|{{pi}}|2}} and to the negative peak value ({{math|−''A''<sub>max</sub>}}) at 270° or {{frac|3{{pi}}|2}}. The time axis of the waveform thus represents the angle, either in degrees or radians, through which the phasor has moved. A phasor can therefore be regarded as a scaled voltage or current value of a rotating vector that is "frozen" at some instant in time {{mvar|t}}. When analysing alternating waveforms, the position of the phasor representing the alternating quantity at some particular instant is often needed, especially when two different waveforms, for example a voltage and a current, are to be compared on the same axis. If one waveform is taken to start at time {{math|1=''t'' = 0}} with a corresponding phase angle (in degrees or radians), but a second waveform starts to the left or to the right of this zero point, then this phase difference, {{var|Φ}}, between the waveforms must be taken into account when representing their relationship in phasor notation. === Differentiation and integration === The time [[derivative]] or [[integral]] of a phasor produces another phasor.{{efn|This results from <math display="inline"> \frac{d}{dt} e^{i \omega t} = i \omega e^{i \omega t},</math> which means that the [[complex exponential]] is the [[eigenfunction]] of the derivative operator.}} For example: <math display="block">\begin{align} &\operatorname{Re}\left( \frac{\mathrm{d}}{\mathrm{d}t} \mathord\left(A e^{i\theta} \cdot e^{i\omega t}\right) \right) \\ ={} &\operatorname{Re}\left( A e^{i\theta} \cdot i\omega e^{i\omega t} \right) \\ ={} &\operatorname{Re}\left( A e^{i\theta} \cdot e^{i\pi/2} \omega e^{i\omega t} \right) \\ ={} &\operatorname{Re}\left( \omega A e^{i(\theta + \pi/2)} \cdot e^{i\omega t} \right) \\ ={} &\omega A \cdot \cos\left(\omega t + \theta + \frac{\pi}{2}\right). \end{align}</math> Therefore, in phasor representation, the time derivative of a sinusoid becomes just multiplication by the constant <math display="inline">i \omega = e^{i\pi/2} \cdot \omega</math>. Similarly, integrating a phasor corresponds to multiplication by <math display="inline">\frac{1}{i\omega} = \frac{e^{-i\pi/2}}{\omega}.</math> The time-dependent factor, <math>e^{i\omega t},</math> is unaffected. When we solve a [[linear differential equation]] with phasor arithmetic, we are merely factoring <math>e^{i\omega t}</math> out of all terms of the equation, and reinserting it into the answer. 
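As a quick numerical illustration of the differentiation rule (again a Python sketch using only the standard library, with arbitrarily chosen example values), the exact derivative of <math>A\cos(\omega t + \theta)</math> is compared with the real part of <math>i\omega</math> times the rotating phasor:

<syntaxhighlight lang="python">
import cmath
import math

A, theta = 1.5, math.radians(40.0)
omega = 2 * math.pi * 60.0            # example angular frequency (60 Hz)
V = A * cmath.exp(1j * theta)         # the phasor A e^{iθ}

for t in (0.0, 1.7e-3, 4.2e-3):
    # Exact derivative of A cos(ωt + θ):
    exact = -omega * A * math.sin(omega * t + theta)
    # Phasor rule: multiply the phasor by iω, reattach e^{iωt}, then take the real part.
    via_phasor = (1j * omega * V * cmath.exp(1j * omega * t)).real
    assert math.isclose(exact, via_phasor)
</syntaxhighlight>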
For example, consider the following differential equation for the voltage across the [[capacitor]] in an [[RC circuit]]: <math display="block">\frac{\mathrm{d}\, v_\text{C}(t)}{\mathrm{d}t} + \frac{1}{RC}v_\text{C}(t) = \frac{1}{RC} v_\text{S}(t).</math> When the voltage source in this circuit is sinusoidal: <math display="block">v_\text{S}(t) = V_\text{P} \cdot \cos(\omega t + \theta),</math> we may substitute <math>v_\text{S}(t) = \operatorname{Re}\left( V_\text{s} \cdot e^{i \omega t} \right).</math> <math display="block">v_\text{C}(t) = \operatorname{Re}\left(V_\text{c} \cdot e^{i \omega t} \right),</math> where phasor <math>V_\text{s} = V_\text{P} e^{i\theta},</math> and phasor <math>V_\text{c}</math> is the unknown quantity to be determined. In the phasor shorthand notation, the differential equation reduces to: <math display="block">i \omega V_\text{c} + \frac{1}{RC} V_\text{c} = \frac{1}{RC}V_\text{s}.</math> {{math proof|title=Derivation|proof= {{NumBlk2|:|<math> \frac{\mathrm{d}}{\mathrm{d}t} \operatorname{Re}\left(V_\text{c} \cdot e^{i\omega t}\right) + \frac{1}{RC}\operatorname{Re}(V_\text{c} \cdot e^{i\omega t}) = \frac{1}{RC}\operatorname{Re}\left(V_\text{s} \cdot e^{i\omega t}\right)</math> |Eq.1}} Since this must hold for all <math>t</math>, specifically: <math display="inline">t - \frac{\pi}{2\omega},</math> it follows that: {{NumBlk2|:|<math> \frac{\mathrm{d}}{\mathrm{d}t} \operatorname{Im}\left(V_\text{c} \cdot e^{i\omega t}\right) + \frac{1}{RC} \operatorname{Im}\left(V_\text{c} \cdot e^{i\omega t}\right) = \frac{1}{RC}\operatorname{Im}\left(V_\text{s} \cdot e^{i\omega t}\right). </math> |Eq.2}} It is also readily seen that: <math display="block">\begin{align} \frac{\mathrm{d}}{\mathrm{d}t} \operatorname{Re}\left( V_\text{c} \cdot e^{i\omega t} \right) &= \operatorname{Re}\left(\frac{\mathrm{d}}{\mathrm{d}t} \mathord\left( V_\text{c} \cdot e^{i\omega t} \right)\right) = \operatorname{Re}\left(i\omega V_\text{c} \cdot e^{i\omega t} \right) \\ \frac{\mathrm{d}}{\mathrm{d}t} \operatorname{Im}\left( V_\text{c} \cdot e^{i\omega t} \right) &= \operatorname{Im}\left(\frac{\mathrm{d}}{\mathrm{d}t} \mathord\left( V_\text{c} \cdot e^{i\omega t} \right) \right) = \operatorname{Im}\left(i\omega V_\text{c} \cdot e^{i\omega t} \right). \end{align}</math> Substituting these into {{EquationNote|Eq.1}} and {{EquationNote|Eq.2}}, multiplying {{EquationNote|Eq.2}} by <math>i,</math> and adding both equations gives: <math display="block">\begin{align} i\omega V_\text{c} \cdot e^{i\omega t} + \frac{1}{RC}V_\text{c} \cdot e^{i\omega t} &= \frac{1}{RC}V_\text{s} \cdot e^{i\omega t} \\ \left(i\omega V_\text{c} + \frac{1}{RC}V_\text{c} \right) \!\cdot e^{i\omega t} &= \left(\frac{1}{RC}V_\text{s}\right) \cdot e^{i \omega t} \\ i\omega V_\text{c} + \frac{1}{RC}V_\text{c} &= \frac{1}{RC}V_\text{s}. \end{align}</math> }} Solving for the phasor capacitor voltage gives: <math display="block">V_\text{c} = \frac{1}{1 + i \omega RC} \cdot V_\text{s} = \frac{1 - i\omega R C}{1 + (\omega RC)^2} \cdot V_\text{P} e^{i\theta}.</math> As we have seen, the factor multiplying <math>V_\text{s}</math> represents differences of the amplitude and phase of <math>v_\text{C}(t)</math> relative to <math>V_\text{P}</math> and <math>\theta.</math> In polar coordinate form, the first term of the last expression is: <math display="block">\frac{1 - i\omega R C}{1 + (\omega RC)^2}=\frac{1}{\sqrt{1 + (\omega RC)^2}}\cdot e^{-i \phi(\omega)},</math> where <math>\phi(\omega) = \arctan(\omega RC)</math>. 
Therefore: <math display="block">v_\text{C}(t) =\operatorname{Re}\left(V_\text{c} \cdot e^{i \omega t} \right)= \frac{1}{\sqrt{1 + (\omega RC)^2}}\cdot V_\text{P} \cos(\omega t + \theta - \phi(\omega)).</math> === Ratio of phasors === The ratio of two phasors, a quantity called complex [[electrical impedance|impedance]], is not itself a phasor, because it does not correspond to a sinusoidally varying function. == Applications == === Circuit laws === With phasors, the techniques for solving [[direct current|DC]] circuits can be applied to solve linear AC circuits.{{Efn|name="ac-circuits"}} ; Ohm's law for resistors: A [[resistor]] introduces no time delay and therefore does not change the phase of a signal, so {{math|1=''V'' = ''IR''}} remains valid. ; Ohm's law for resistors, inductors, and capacitors: {{math|1=''V'' = ''IZ''}}, where {{mvar|Z}} is the complex [[electrical impedance|impedance]].<!-- we probably want a justification of this somewhere--> ; [[Kirchhoff's circuit laws]]: Work with voltages and currents as complex phasors. In an AC circuit we have real power ({{mvar|P}}), which represents the average power delivered into the circuit, and reactive power ({{mvar|Q}}), which indicates power flowing back and forth. We can also define the [[complex power]] {{math|1=''S'' = ''P'' + ''jQ''}} and the apparent power, which is the magnitude of {{mvar|S}}. The power law for an AC circuit expressed in phasors is then {{math|1=''S'' = ''VI''<sup>*</sup>}} (where {{math|1=''I''<sup>*</sup>}} is the [[complex conjugate]] of {{math|1=''I''}}, and the magnitudes of the voltage and current phasors {{math|1=''V''}} and of {{math|1=''I''}} are the [[Root mean square#Definition|RMS]] values of the voltage and current, respectively). Given this, we can apply the techniques of [[analysis of resistive circuits]] with phasors to analyze single-frequency linear AC circuits containing resistors, capacitors, and [[inductor]]s. Multiple-frequency linear AC circuits and AC circuits with different waveforms can be analyzed to find voltages and currents by transforming all waveforms to sine wave components (using [[Fourier series]]) with magnitude and phase, then analyzing each frequency separately, as allowed by the [[superposition theorem]]. This solution method applies only to inputs that are sinusoidal and for solutions that are in steady state, i.e., after all transients have died out.<ref>{{Cite book|title=Introduction to electromagnetic compatibility| last=Clayton|first=Paul| publisher=Wiley|year=2008|isbn=978-81-265-2875-2|pages=861}}</ref> The concept is frequently involved in representing an [[electrical impedance]]. In this case, the phase angle is the [[phase difference]] between the voltage applied to the impedance and the current driven through it. ===Power engineering=== In analysis of [[three phase]] AC power systems, a set of phasors is usually defined as the three complex [[cube roots of unity]], graphically represented as unit magnitudes at angles of 0, 120 and 240 degrees. By treating polyphase AC circuit quantities as phasors, balanced circuits can be simplified and unbalanced circuits can be treated as an algebraic combination of [[symmetrical components]]. This approach greatly simplifies the work required in electrical calculations of voltage drop, power flow, and short-circuit currents. In the context of power systems analysis, the phase angle is often given in [[Degree (angle)|degree]]s, and the magnitude in [[Root Mean Square|RMS]] value rather than the peak amplitude of the sinusoid. 
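A minimal numerical sketch of the symmetrical-components decomposition mentioned above (Python, standard library only; the phase voltages are arbitrary example values and the <code>phasor</code> helper is illustrative):

<syntaxhighlight lang="python">
import cmath
import math

def phasor(A, theta_deg):
    # A∠θ as a complex number, θ in degrees; magnitudes taken here as RMS values (illustrative helper)
    return A * cmath.exp(1j * math.radians(theta_deg))

a = phasor(1.0, 120.0)   # the rotation operator, one of the complex cube roots of unity

# An arbitrary, slightly unbalanced set of three-phase voltages (example values only).
Va, Vb, Vc = phasor(230.0, 0.0), phasor(210.0, -125.0), phasor(240.0, 118.0)

# Symmetrical components: zero-, positive- and negative-sequence phasors.
V0 = (Va + Vb + Vc) / 3
V1 = (Va + a * Vb + a**2 * Vc) / 3
V2 = (Va + a**2 * Vb + a * Vc) / 3

# The three sequence components reconstruct phase a.
assert cmath.isclose(Va, V0 + V1 + V2)
</syntaxhighlight>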
The technique of [[synchrophasor]]s uses digital instruments to measure the phasors representing transmission system voltages at widespread points in a transmission network. Differences among the phasors indicate power flow and system stability. === Telecommunications: analog modulations === [[File:Modulation phasors.svg|thumb|A: phasor representation of amplitude modulation, B: alternate representation of amplitude modulation, C: phasor representation of frequency modulation, D: alternate representation of frequency modulation]] The rotating frame picture using phasor can be a powerful tool to understand analog modulations such as [[amplitude modulation]] (and its variants<ref name=IJRES>de Oliveira, H.M. and Nunes, F.D. ''About the Phasor Pathways in Analogical Amplitude Modulations''. International Journal of Research in Engineering and Science (IJRES) Vol.2, N.1, Jan., pp.11-18, 2014. ISSN 2320-9364</ref>) and [[frequency modulation]]. <math display="block">x(t) = \operatorname{Re}\left( A e^{i \theta} \cdot e^{i 2\pi f_0 t} \right),</math> where the term in brackets is viewed as a rotating vector in the complex plane. The phasor has length <math>A</math>, rotates anti-clockwise at a rate of <math>f_0</math> revolutions per second, and at time <math>t = 0</math> makes an angle of <math>\theta</math> with respect to the positive real axis. The waveform <math>x(t)</math> can then be viewed as a projection of this vector onto the real axis. A modulated waveform is represented by this phasor (the carrier) and two additional phasors (the modulation phasors). If the modulating signal is a single tone of the form <math>Am \cos{2\pi f_m t} </math>, where <math>m</math> is the modulation depth and <math>f_m</math> is the frequency of the modulating signal, then for amplitude modulation the two modulation phasors are given by, <math display="block">{1 \over 2} Am e^{i \theta} \cdot e^{i 2\pi (f_0+f_m) t},</math> <math display="block">{1 \over 2} Am e^{i \theta} \cdot e^{i 2\pi (f_0-f_m) t}.</math> The two modulation phasors are phased such that their vector sum is always in phase with the carrier phasor. An alternative representation is two phasors counter rotating around the end of the carrier phasor at a rate <math>f_m</math> relative to the carrier phasor. That is, <math display="block">{1 \over 2} Am e^{i \theta} \cdot e^{i 2\pi f_m t},</math> <math display="block">{1 \over 2} Am e^{i \theta} \cdot e^{-i 2\pi f_m t}.</math> Frequency modulation is a similar representation except that the modulating phasors are not in phase with the carrier. In this case the vector sum of the modulating phasors is shifted 90° from the carrier phase. Strictly, frequency modulation representation requires additional small modulation phasors at <math>2f_m, 3f_m</math> etc, but for most practical purposes these are ignored because their effect is very small. == See also == * [[In-phase and quadrature components]] ** [[Constellation diagram]] * [[Analytic signal]], a generalization of phasors for time-variant amplitude, phase, and frequency. ** [[Complex envelope]] * [[Phase factor]], a phasor of unit magnitude == Footnotes== {{notelist}} == References == {{reflist}} == Further reading == * {{cite book | author=Douglas C. Giancoli | title=Physics for Scientists and Engineers | publisher=Prentice Hall | year=1989 | isbn=0-13-666322-2}} * {{cite book | last1 = Dorf| first1 = Richard C. | first2 = Ronald J. 
| last2 = Tallarida | title =Pocket Book of Electrical Engineering Formulas | publisher =CRC Press | edition =1 | date =1993-07-15 | location =Boca Raton, FL | pages =152–155 | isbn =0849344735 }} == External links == {{commons category|Phasors}} {{Sister project|project = wikiversity|text = [[Wikiversity]] has a lesson on '''''[[v:Phasor algebra|Phasor algebra]]'''''}} * [http://www.jhu.edu/~signals/phasorapplet2/phasorappletindex.htm Phasor Phactory] * [http://resonanceswavesandfields.blogspot.com/2007/08/phasors.html Visual Representation of Phasors] * [http://www.allaboutcircuits.com/vol_2/chpt_2/5.html Polar and Rectangular Notation] * [http://www.de.ufpe.br/~hmo/AM_phasor_diagrams.html Phasor in Telecommunication] [[Category:Electrical circuits]] [[Category:AC power]] [[Category:Interference]] [[Category:Trigonometry]]