Switching Power Supplies A to Z
Sanjaya Maniktala
Newnes (an imprint of Elsevier), 30 Corporate Drive, Suite 400, Burlington, MA 01803, USA; Linacre House, Jordan Hill, Oxford OX2 8DP, UK
Copyright © 2006, Elsevier Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

… that is, within the same package as the mosfet. This greatly reduces the parasitic inductances between the low-side mosfet and the diode, and allows the current to quickly steer away from the low-side mosfet and into the parallel diode during the deadtime preceding the high-side turn-on.

Question 44: What limits our ability to switch a mosfet fast?

Answer: When talking about a switching device (transistor), as opposed to a converter, the time it spends in transit between states is referred to as its "switching speed." The ability to switch fast has several implications, including the obvious minimization of the V-I crossover losses. Modern mosfets, though considered very "fast" in comparison to bjts, nevertheless do not respond instantly when their drivers change state. That is because, first, the driver itself has a certain non-zero "pull-up" or "pull-down" resistance, through which the drive current must flow to charge or discharge the internal parasitic capacitances of the mosfet, so as to cause it to change state. In the process, a certain delay is involved.
Second, even if our external resistances were zero, there still remain parasitic inductances associated with the PCB traces leading from the gate drivers up to the gates, and these also limit our ability to force a large gate current to turn the device ON or OFF quickly. Further, hypothetically, even if we do achieve zero external impedance in the gate section, there remain internal impedances within the package of the mosfet itself, before we can get to its parasitic capacitances to charge or discharge them as desired. Part of this internal impedance is inductive, consisting of the bond wires leading from the pin to the die, and part of it is resistive. The latter can in fact be of the order of several ohms. All these factors come into play in determining the switching speed of the device.

Question 45: What is 'cross-conduction' in a synchronous stage?

Answer: Since a mosfet has a slight delay before it responds to its driver stage, even though the square-wave driving signals to the high- and low-side mosfets might have no intended "overlap," in reality the mosfets might actually be conducting simultaneously for a short duration. That is called 'cross-conduction' or 'shoot-through.' Even if minimized, it is enough to impair overall efficiency by several percentage points, since it creates a momentary short across the input rails, limited only by various intervening parasitics. This situation is aggravated if the two mosfets have a significant "mismatch" in their switching speeds. In fact, usually, the low-side mosfet is far more "sluggish" than the high-side mosfet. That is because the low-side mosfet is chosen primarily for its low forward resistance, 'RDS.' But to achieve a low RDS, a larger die size is required, and this usually leads to higher internal parasitic capacitances, which end up limiting the switching speed.
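The effect of the gate-drive impedances discussed in Question 44 on switching speed can be illustrated with a rough gate-charge calculation. The constant-current (Miller-plateau) approximation used here is a standard back-of-the-envelope method, but every component value below is an assumed, illustrative figure, not data for any particular mosfet or driver:

```python
# Rough estimate of mosfet switching time from gate charge.
# All values are illustrative assumptions, not datasheet figures.
V_DRIVE = 12.0     # gate-driver supply rail (V), assumed
V_PLATEAU = 4.0    # Miller-plateau gate voltage (V), assumed
R_PULLUP = 2.0     # driver pull-up resistance (ohms), assumed
R_GATE_INT = 1.5   # internal (bond-wire/poly) gate resistance (ohms), assumed
Q_G = 30e-9        # gate charge delivered during the transition (C), assumed

# During the Miller plateau the gate current is roughly constant:
i_gate = (V_DRIVE - V_PLATEAU) / (R_PULLUP + R_GATE_INT)
t_switch = Q_G / i_gate

print(f"gate current ~ {i_gate:.2f} A, switching time ~ {t_switch*1e9:.1f} ns")
```

Halving the total series resistance roughly halves the transition time. Note also that the internal gate resistance (up to several ohms, as mentioned above) sets a floor that no external driver improvement can remove.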
Question 46: How can we try and avoid cross-conduction in a synchronous stage?

Answer: To avoid cross-conduction, a deliberate delay needs to be introduced between one mosfet turning OFF and the other turning ON. This is called the converter's (or controller's) 'deadtime.' Note that during this time, the freewheeling current is maintained via the diode present across the low-side mosfet.

Question 47: What is 'adaptive dead-time'?

Answer: Techniques for implementing dead-time have evolved quite rapidly, as outlined below.

■ First Generation: Fixed Delay. The first synchronous IC controllers had a fixed delay between the two gate drivers. This had the advantage of simplicity, but the set delay time had to be made long enough to cover the many possible applications of the part, and also to accommodate a wide range of possible mosfet choices by customers. The set delay often had to be padded (made bigger) further, because of the rather wide manufacturing variations in its own value. However, whenever current is made to flow through the diode rather than the low-side mosfet, we incur higher conduction losses. These are clearly proportional to the amount of dead-time, so we don't want to set too large a fixed dead-time for all applications.

■ Second Generation: Adaptive Delay. Usually this is implemented as follows. The gate voltage of the low-side mosfet is monitored, to decide when to turn the high-side mosfet ON. When this voltage goes below a certain threshold, it is assumed that the low-side mosfet is OFF (a few nanoseconds of additional fixed delay may be included at this point), and then the high-side gate is driven high. To decide when to turn the low-side mosfet ON, we usually monitor the switching node in "real time" and adapt to it. The reason is that after the high-side mosfet turns OFF, the switching node starts falling, in an effort to allow the low-side to take over the inductor current.
Unfortunately, the rate at which it falls is not very predictable, as it depends on various undefined parasitics, and also on the application conditions. Further, we also want to implement something close to zero-voltage switching, to minimize crossover losses in the low-side mosfet. Therefore, we need to wait a varying amount of time, until we have ascertained that the switching node has fallen below the threshold, before turning the low-side mosfet ON. So the adaptive technique allows "on-the-fly" delay adjustment for different mosfets and applications.

■ Third Generation: Predictive Gate Drive™ Technique. The whole purpose of adaptive switching is to intelligently switch with a delay just large enough to avoid significant cross-conduction, yet small enough that the body-diode conduction time is minimized, and to be able to do that consistently, with a wide variety of mosfets. However, the "predictive" technique, introduced by Texas Instruments, is often seen by their competitors as "overkill." But for the sake of completeness it is mentioned here. Predictive Gate Drive™ technology samples and holds information from the previous switching cycle to "predict" the minimum delay time for the next cycle. It works on the premise that the delay time required for the next switching cycle will be close to the requirements of the previous cycle. By using a digital control feedback system to detect body-diode conduction, this technology produces the precise timing signals necessary to operate very near the threshold of cross-conduction.

Question 48: What is low-side current sensing?

Answer: Historically, current sensing was most often done during the on-time of the switch. But nowadays, especially for synchronous buck regulators in low-output-voltage applications, the current is increasingly being sensed during the off-time.
One reason for that is that in certain mobile computing applications, for example, a rather extreme down-conversion ratio is being demanded nowadays: say 28 V to 1 V, at a minimum switching frequency of 300 kHz. We can calculate that this requires a duty cycle of 1/28, i.e. about 3.6%. At 300 kHz, the time period is 3.3 µs, and so the required high-side switch on-time is about 3.6 × 3.3/100 ≈ 0.12 µs, i.e. 120 ns. At 600 kHz, this on-time falls to 60 ns, and at 1.2 MHz it is only 30 ns. Ultimately, that just may not give enough time to turn ON the high-side mosfet fully, "de-glitch" the noise associated with its turn-on transition ('leading-edge blanking'), and get the current limit circuit to sense the current fast enough. Further, at very light loads we may want to be able to skip pulses altogether, so as to maximize efficiency (since switching losses go down whenever we skip pulses). But with high-side current sensing we are almost forced into turning the high-side mosfet ON every cycle, just to sense the current! For such reasons, low-side current sensing is becoming increasingly popular. Sometimes, a current sense resistor may be placed in the freewheeling path for the purpose. However, since low-resistance resistors are expensive, the forward drop across the low-side mosfet is often used instead.

Question 49: Why do some non-synchronous regulators go into an almost chaotic switching mode at very light loads?

Answer: As we decrease the load, conventional regulators operating in CCM (continuous conduction mode; see Chapter 1) enter discontinuous conduction mode (DCM). The onset of this is indicated by the fact that the duty cycle suddenly becomes a function of load, unlike a regulator operating in CCM, in which the duty cycle depends, to a first order, only on the input and output voltages. As the load current is decreased further, the DCM duty cycle keeps decreasing, and eventually, many regulators will automatically enter a random pulse-skipping mode.
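Incidentally, the on-time arithmetic worked out in Question 48 above, and the way a controller's minimum on-time limit is eventually hit, can be checked numerically. The 100 ns minimum on-time assumed below is purely an illustrative figure, not the limit of any particular controller:

```python
# Required high-side on-time for an extreme step-down (28 V to 1 V),
# compared against an assumed minimum achievable on-time.
V_IN, V_OUT = 28.0, 1.0
T_ON_MIN = 100e-9            # assumed controller limit (s), illustrative only

duty = V_OUT / V_IN          # ~3.6% to a first order
for f_sw in (300e3, 600e3, 1.2e6):
    t_on = duty / f_sw       # on-time = D * T = D / f
    status = "ok" if t_on > T_ON_MIN else "below the minimum on-time!"
    print(f"{f_sw/1e3:6.0f} kHz: t_on = {t_on*1e9:5.1f} ns  ({status})")
```

At 300 kHz the result (about 119 ns) matches the roughly 120 ns quoted above; at the higher frequencies the demanded on-time drops below the assumed floor, which is exactly where skipping behavior sets in.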
That happens simply because, at some point, the regulator just cannot decrease its on-time any further, as is being demanded. So the energy it thereby puts out into the inductor every on-pulse starts exceeding the average energy per pulse required by the load. So its control section literally "gets confused," but nevertheless tries valiantly to regulate by stating something like: "oops ... that pulse was too wide, sorry, just couldn't help it; let me cut back on delivering any pulses altogether for some time, in the hope of compensating for my actions." But this chaotic control can pose a practical problem, especially when dealing with current-mode control (CMC). In CMC, the switch current is usually monitored constantly, and that information is used to produce the internal ramp for the pulse-width modulator (PWM) stage to work with. So if the switch does not even turn ON for several cycles, there is no ramp either for the PWM to work off. This chaotic mode is also a variable-frequency mode, of virtually unpredictable frequency spectrum, and therefore of unpredictable EMI and noise characteristics too. That is why fixed-frequency operation is usually preferred in commercial applications. And fixed frequency basically means no pulse-skipping! The popular way to avoid this chaotic mode is to "pre-load" the converter, that is, to place some resistors across its output terminals on the PCB itself, so that the converter "thinks" there is some minimum load always present. In other words, we demand a little more energy than the minimum energy that the converter can deliver before going chaotic.

Question 50: Why do we sometimes want to skip pulses at light loads?

Answer: In some applications, especially battery-powered applications, the 'light-load efficiency' of a converter is of great concern. Conduction losses can always be decreased by using switches with low forward drops. Unfortunately, switching losses occur every time we actually switch.
So the only way to reduce them is by not switching, if that is possible. A pulse-skipping mode, if properly implemented, will clearly improve the light-load efficiency.

Question 51: How can we implement controlled pulse-skipping in a synchronous buck topology, to further improve the efficiency at light loads?

Answer: In DCM, the duty cycle is a function of the load current. So, on decreasing the load sufficiently, the duty cycle starts to "pinch off" from its CCM value. And this eventually leads to pulse-skipping, when the control runs into its minimum on-time limit. But as mentioned, this skip mode can be fairly chaotic, and also occurs only at extremely light loads. So one of the ways this is being handled nowadays is to not "allow" the DCM duty cycle to pinch off below 85% of the CCM pulse width. Therefore, more energy is now pushed out in a single on-pulse than under normal DCM, and without waiting to run into the minimum on-time limits of the controller. However, because of this much-bigger-than-required on-pulse, the control will now skip even more cycles for every on-pulse. Thereafter, at some point, the control will detect that the output voltage has fallen too much, and will command another big on-pulse. So this forces pulse-skipping in DCM, and thereby enhances the light-load efficiency by reducing the switching losses.

Question 52: How can we quickly damage a boost regulator?

Answer: The problem with a boost regulator is that as soon as we apply power, a huge inrush current flows to charge up the output capacitor. Since the switch is not in series with it, we have no control over it either. So ideally, we should delay turning ON our switch until the output capacitor has charged up to the level of the input voltage (i.e. until the inrush stops). For this, a soft-start function is highly desirable in a boost. However, if we turn the switch ON while the inrush is still in progress, it will start diverting this inrush into the switch.
The problem with that is that in most controllers, the current limit may not even be working for the first 100 to 200 ns after turn-on, that being deliberately done to avoid false triggering on the noise generated during the switch transition ('leading-edge blanking'). So now the huge inrush current gets fully diverted into the switch, with virtually no control, possibly causing failure. One way out of that is to use a diode connected directly between the supply rail and the output capacitor (the cathode of this diode being at the positive terminal of the output capacitor). The inrush current then bypasses the inductor and boost diode altogether. However, we have to be careful about the surge current rating of this extra diode. It need not be a fast diode, since it "goes out of the picture" as soon as we start switching (it gets reverse-biased permanently). Note also that a proper ON/OFF function cannot be implemented on a boost topology as is. For that, an additional series transistor is required, to completely and effectively disconnect the output from the input.

CHAPTER 5: Conduction and Switching Losses

As switching frequencies increase, it becomes of paramount importance to reduce the switching losses in the converter. These are the losses associated with the transition of the switch from its on-state to off-state, and back. The higher the switching frequency, the greater the number of time