Posted on 01 December 2019

Processor-Controlled Power Conversion

Control loop is implemented in an algorithm

Power converters have traditionally been designed around an analogue pulse width modulation (PWM) controller. Two types of control schemes for this architecture have emerged: voltage-mode control, in which the duty cycle is directly adjusted by the error on the output voltage, and current-mode control, in which the duty cycle is adjusted by limiting the current in the power switch or inductor to a value determined by the voltage error.

By Mario Aerden, TSM, Future Electronics, Belgium

Current-mode control has become the control method of choice in high-performance converter designs. The main reasons for designers to choose current-mode control over voltage-mode control include better loop response, line voltage feed-forward, inherent current limiting and simplified control loop compensation. Some years ago, power converter vendors began bolting a microcontroller (MCU) onto the analogue part of the design. At first, the MCU was simply tasked with functions such as monitoring, data logging and interfacing to the outside world.

Next, the MCU began to be used for more intrusive tasks such as generating reference voltages, soft-start algorithms and power sequencing. These designs are commonly referred to as processor-assisted converter designs. Now, the latest designs are moving towards fully processor-controlled converters, in which the control loop is implemented in an algorithm, executed by the processor.

The advantages of such implementations over analogue PWM-based converters include independence from thermal drift, ageing and component tolerances. These are the familiar advantages of moving from an analogue to a digital system. Processor-controlled systems also offer the ability to tune each converter individually in software routines instead of designing for production tolerances, and the designer can build system knowledge into the control algorithm to improve performance.

On the other hand, the well-known drawbacks of digital versus analogue design, such as quantisation errors and processing delay times, also apply here, and must be included in the error budget and the stability analysis respectively.

A basic block diagram of a processor-controlled converter is shown in Figure 1. The PWM block used is a microcontroller peripheral, which operates completely differently from an analogue PWM controller, as illustrated in Figure 2. In an analogue PWM controller, the duty cycle is generated by comparing an error voltage (generated by an internal error amplifier) to a ramp voltage, making a comparator change state on the match; this gives effectively infinite resolution on the duty cycle. In a digital system, however, the duty cycle is calculated and the processor times the on and off periods, so the duty-cycle resolution is limited by the number of timer steps (quantisation errors).

Block diagram of a processor-controlled converter

The analogue-to-digital converter (ADC) used to measure output voltage and possibly other system parameters such as inductor current in current-mode converters is also of limited resolution, introducing a second source of quantisation errors. As an example, take a converter design with a 1% output voltage accuracy specification. To be able to measure a 1% output voltage deviation, the minimal ADC resolution needed would be n bits, where 2^n = Vout/ΔVout, or n = log2(Vout/ΔVout) = log2(1/0.01) ≈ 6.64, rounded up to 7 bits.
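
As a quick check of that calculation, the required ideal resolution can be computed directly from the accuracy specification. The snippet below is a minimal sketch; the function name and the 1% figure are illustrative, not taken from the article.

#include <math.h>
#include <stdio.h>

/* Minimum ideal ADC resolution (in bits) needed to resolve a given
   relative output-voltage deviation, e.g. 0.01 for a 1% specification. */
static unsigned adc_bits_required(double relative_deviation)
{
    return (unsigned)ceil(log2(1.0 / relative_deviation));
}

int main(void)
{
    printf("1%% accuracy -> %u bits\n", adc_bits_required(0.01)); /* prints 7 */
    return 0;
}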

However, this resolution should be considered as the effective number of bits (ENOB) needed in this converter design, which characterises the entire data acquisition system from input to resulting data values and includes the analogue front end. Analogue components in the measurement path, such as filters and channel-selecting multiplexers, introduce noise and distortion into the signal chain, making the least significant bits of the ADC useless and pushing the ADC resolution specification higher. In a noisy environment such as a power converter, this can have a serious impact on the ADC specification.

Analogue PWM versus microcontroller PWM

Another aspect to consider is that the least significant bit represents the minimum change in output needed before the feedback control loop sees an output error and starts to react. Since the control loop algorithm has a limited bandwidth, the output voltage will continue to change before the correction takes effect, so the output voltage error will be larger than one least significant bit. In a real-world design, a 12- or 14-bit ADC will be needed to achieve the 1% output voltage accuracy.

The theoretical number of PWM states needed to achieve the desired resolution on the output is 2^(n+1), for which an (n+1)-bit counter is needed to generate the PWM signal. The rate at which this counter is clocked (usually the system clock) sets the maximum number of counter steps that can be generated in a fixed time interval (the PWM period): clock rate = switching frequency × 2^(n+1), or maximum switching frequency = clock rate / 2^(n+1).
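
The trade-off between PWM resolution and switching frequency can be illustrated with a short calculation; the clock rate and bit count below are assumptions chosen only for illustration.

#include <stdio.h>

/* Maximum switching frequency achievable with an (n+1)-bit PWM counter
   clocked at pwm_clock_hz. All numbers here are illustrative. */
static double max_switching_freq(double pwm_clock_hz, unsigned counter_bits)
{
    return pwm_clock_hz / (double)(1UL << counter_bits);
}

int main(void)
{
    /* e.g. a 60 MHz PWM clock and an 8-bit counter give about 234 kHz */
    printf("%.0f Hz\n", max_switching_freq(60e6, 8));
    return 0;
}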

If the PWM resolution is not sufficient, the control loop dithers between the available values to achieve the desired result, causing ripple currents and unpredictable control loop behaviour. The PWM resolution should therefore be chosen to be at least one bit greater than the ENOB of the measurement, so that enough output states are available. A low PWM clocking rate leads to low switching frequencies, demanding larger converter magnetics. Hence the need for high-resolution PWM and high clock frequencies in microcontrollers used for power converter designs.

The microcontroller peripherals needed for power conversion designs are very similar to the ones found in MCUs targeted at motor control applications. However, care must be taken when selecting such an MCU. Motor control applications typically use switching frequencies in the range of 20kHz to 50kHz. Switching frequency requirements in power converter designs go up much higher; 100kHz to 500kHz are very common, and some converters are even designed with switching frequencies in the MHz range.

This increases the requirements on peripheral performance, such as ADC conversion times, as well as on processing performance. DSP cores with extended peripheral sets, such as the Freescale 56F8300 series, have proved to be the most suitable. Peripheral features such as hardware shutdown of the PWM for overcurrent protection, and complementary PWM outputs with deadtime insertion for half-bridge control, are also very useful for both motor control and power conversion applications. Another feature of interest is the synchronisation of ADC and PWM, so that the measurement can be timed precisely within the PWM period.

The control loop is usually a standard PI or PID algorithm, which can be used for both voltage and current-mode control loops. The design of the Proportional, Integral and Differential constants determines the system frequency response, which is equivalent to tuning the loop gain and phase shift in analogue converter designs. To ensure stability, a minimal phase margin of 45° and a minimal gain margin of 3dB are taken as a general guideline.
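
A minimal sketch of such a discrete PID update is given below; the structure, gain values and output limits are illustrative assumptions rather than values from a specific design.

/* Discrete PID controller sketch for a processor-controlled converter.
   The gains and duty-cycle limits are placeholders to be tuned per design. */
typedef struct {
    float kp, ki, kd;        /* proportional, integral and differential gains */
    float integral;          /* accumulated integral term */
    float prev_error;        /* error from the previous sample */
    float out_min, out_max;  /* duty-cycle limits, e.g. 0.0 to 0.95 */
} pid_ctrl_t;

float pid_update(pid_ctrl_t *pid, float setpoint, float measurement)
{
    float error = setpoint - measurement;

    pid->integral += pid->ki * error;
    if (pid->integral > pid->out_max) pid->integral = pid->out_max;  /* anti-windup clamp */
    if (pid->integral < pid->out_min) pid->integral = pid->out_min;

    float derivative = error - pid->prev_error;
    pid->prev_error = error;

    float out = pid->kp * error + pid->integral + pid->kd * derivative;
    if (out > pid->out_max) out = pid->out_max;
    if (out < pid->out_min) out = pid->out_min;
    return out;   /* new duty cycle */
}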

56F8345 – 56F8145 Block Diagram – 128 LQFP

In digital designs, however, an extra phase lag caused by delays (calculation delay, sampling and conversion delays) must be taken into account. A fixed time delay is equivalent to a phase lag that increases linearly with frequency in the Bode plot: phase lag = ω × tdelay = 2π × f × tdelay in radians, or 360° × f × tdelay in degrees.
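
For example, assuming a total loop delay of 10 µs and a loop crossover frequency of 5 kHz (figures chosen purely for illustration), the additional phase lag is 360° × 5 kHz × 10 µs = 18°, which must be subtracted from the available phase margin.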

In a digital control system, the Proportional, Integral and Differential constants in the algorithm can be altered depending on the operating conditions to improve system performance. In a universal input-voltage-rated off-line switcher for instance, the input voltage can be measured to be either 110Vac or 230Vac and the control loop parameters can be chosen accordingly, whereas in an analogue design, the designer must cope with the complete input-voltage range specification when tuning the control loop.
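
A gain-scheduling step of this kind might look roughly like the sketch below, reusing the pid_ctrl_t structure from the PID sketch above; the 160 V threshold and the gain sets are illustrative assumptions only.

/* Select control loop gains according to the measured mains input voltage.
   The threshold and gain values are placeholders, not recommended settings. */
void select_loop_gains(pid_ctrl_t *pid, float vin_rms)
{
    if (vin_rms < 160.0f) {      /* 110 Vac mains range */
        pid->kp = 0.40f;
        pid->ki = 0.020f;
        pid->kd = 0.10f;
    } else {                     /* 230 Vac mains range */
        pid->kp = 0.25f;
        pid->ki = 0.015f;
        pid->kd = 0.08f;
    }
}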

System knowledge can also be incorporated in the design by adding a feed-forward block in the control loop, improving overall performance in areas such as transient responses to line or load changes. Ideally, the feed-forward block is modelled to have the inverse of the system’s transfer function. In practice, an observer is often used in the feed-forward block to calculate the parameters of the used model from measurements.

Current-mode control can be interpreted as a basic form of feed-forward of the input voltage, because the rate of change of current through an inductor is proportional to the voltage across the inductor. In a processor-controlled converter, however, a more sophisticated feed-forward block can be implemented, since the designer has the potential to model the system in software.
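
As a simple illustration, in a buck converter the ideal duty cycle is Vout/Vin, so the measured input voltage can be fed forward directly and the control loop only has to correct the residual error. The sketch below assumes a buck topology; the limits and variable names are illustrative.

/* Input-voltage feed-forward for a buck converter: the nominal duty cycle
   Vref/Vin is applied directly, and the control loop adds a small correction. */
float buck_duty_with_feedforward(float vref, float vin, float pid_correction)
{
    float d = vref / vin + pid_correction;   /* ideal duty cycle plus loop correction */

    if (d > 0.95f) d = 0.95f;                /* illustrative duty-cycle limits */
    if (d < 0.05f) d = 0.05f;
    return d;
}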

As for the software implementation, the control loop is usually executed in an interrupt routine, where the interrupt is fired from a timer to ensure a fixed timing of the control loop algorithm. Most MCU development environments include such a standard PI and/or PID algorithm in their software libraries.
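
In outline, the interrupt routine then looks something like the sketch below; the ISR name, the ADC and PWM access functions and the setpoint are platform-specific placeholders, and pid_update() is the routine from the PID sketch above.

/* Control loop interrupt, fired at a fixed rate by a hardware timer.
   read_adc_vout() and write_pwm_duty() stand in for the platform's
   ADC and PWM driver calls. */
extern float read_adc_vout(void);
extern void  write_pwm_duty(float duty);

static pid_ctrl_t vloop;                     /* voltage loop state, gains set at start-up */
#define VOUT_SETPOINT 3.3f                   /* illustrative output setpoint, volts */

void control_loop_isr(void)
{
    float vout = read_adc_vout();            /* sample the output voltage */
    float duty = pid_update(&vloop, VOUT_SETPOINT, vout);
    write_pwm_duty(duty);                    /* load the new PWM compare value */
}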

Additional processing power can be used to implement extra features in the main loop, where timing is not critical. These features are not limited to adding functionality to the converter. By implementing monitoring functions, such as ambient temperature sensing, and appropriately adapting the control loop, the overall reliability of the converter can be greatly improved. This can be done by simply limiting the output power of the converter at elevated temperature, but more sophisticated techniques, such as implementing models for system components such as power MOSFET switches over temperature, are also possible.
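
A simple form of the output-power limiting mentioned above might look like the sketch below; the 50 °C knee and the derating slope are illustrative assumptions, not figures from the article.

/* Derate the allowed output power at elevated ambient temperature.
   The knee temperature and derating slope are placeholders. */
float derated_power_limit(float p_rated_w, float t_ambient_c)
{
    const float t_knee_c       = 50.0f;               /* full power up to 50 °C */
    const float derate_w_per_c = 0.02f * p_rated_w;   /* 2% of rated power per °C above the knee */

    if (t_ambient_c <= t_knee_c)
        return p_rated_w;

    float p = p_rated_w - (t_ambient_c - t_knee_c) * derate_w_per_c;
    return (p > 0.0f) ? p : 0.0f;
}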

With this approach, the switching frequency that is optimal with regard to overall losses, a combination of conduction and switching losses, can be calculated at the measured operating temperature, and the control loop can be adapted accordingly.

Such features are especially beneficial in the higher-power converters, where the cost of the converter is mainly determined by the size of the heatsinks and by the measures that need to be taken to remove the dissipated heat.
