I am writing a fuzzy PID controller for a PWM motor driver to control motor speed. The feedback is a square wave from a Hall-effect encoder fixed to the motor shaft.
My code counts the clock rising edges between two consecutive rising edges of the encoder's square wave to measure the time for one rotation. A function converts any given RPM into the time required for one rotation, and that time is the setpoint for the controller.
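For reference, this is roughly what that conversion looks like (a sketch only; the timer frequency and the function name are placeholders, not my actual code):

```c
#include <stdint.h>

#define TIMER_HZ 1000000UL   /* placeholder: ticks per second of the clock counted between encoder edges */

/* Convert a target speed in RPM to the expected timer ticks per rotation.
   One rotation takes 60/RPM seconds, i.e. (60 * TIMER_HZ) / RPM ticks. */
static uint32_t rpm_to_ticks(uint32_t rpm)
{
    if (rpm == 0)
        return UINT32_MAX;   /* avoid divide-by-zero; treat 0 RPM as an "infinite" period */
    return (uint32_t)((60UL * TIMER_HZ) / rpm);
}
```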
The error is simply the difference between the setpoint and the current value (the time one rotation should take minus the time it currently takes).
This error goes into a backward-difference PID algorithm, which gives me a number as output (basically P*error + I*(sum of previous errors) + D*(previous value - current value)).
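For reference, the update I have in mind looks roughly like this (a sketch only; the struct and names are placeholders):

```c
typedef struct {
    float kp, ki, kd;     /* tuning gains; their units are exactly what my question below is about */
    float integral;       /* running sum of past errors */
    float prev_error;     /* error from the previous sample */
} pid_state_t;

/* One backward-difference PID step. 'error' is (setpoint - measured) rotation
   time and 'dt' is the fixed sampling interval. The derivative uses the
   backward difference (current error - previous error); flipping that sign is
   equivalent to negating kd. */
static float pid_update(pid_state_t *s, float error, float dt)
{
    s->integral += error * dt;
    float derivative = (error - s->prev_error) / dt;
    s->prev_error = error;
    return s->kp * error + s->ki * s->integral + s->kd * derivative;
}
```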
This output has to be mapped to a specific PWM duty-cycle percentage so the PWM driver process can increase or decrease its power output.
I am having conceptual issues with this mapping. How should I convert the output number to a percentage? I was thinking along the lines of calculating this value for the maximum possible error and for zero error, and then mapping those to 100% and 1% duty cycle respectively.
I am looking for the concept and not the code. Thanks.
The output of the PID can be in the units required for the process.
Conceptually, think of the P (and the I and D) constants as the conversion factors between the measured error and the output/controlled variable. If you want the output in percent from 0 to 100, and the input error is in milliseconds, your P term should have units of percent per millisecond. The units of the I and D terms additionally depend on your sampling interval.
If your process variable is "time for one rotation", and you increase the speed (reduce the time) by increasing the duty cycle, then you would need negative coefficients.
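As a rough sketch of that idea (the gain value, the clamp limits, and the function names here are illustrative placeholders, not taken from your code):

```c
/* Placeholder names for your own PWM driver and encoder-capture routines. */
extern void  pwm_set_duty_percent(float duty);
extern float measured_rotation_ms(void);

/* Illustrative gain only. kp is in percent of duty per millisecond of period
   error, and it is NEGATIVE: when the measured period is longer than the
   setpoint (motor too slow), error = setpoint - measured is negative, and a
   negative gain pushes the duty cycle UP. */
static const float kp = -0.5f;   /* % duty per ms of error */

void control_step(float setpoint_ms)
{
    float error = setpoint_ms - measured_rotation_ms();

    float duty = kp * error;     /* I and D terms omitted; they carry analogous units */

    if (duty > 100.0f) duty = 100.0f;   /* clamp to the physically meaningful range */
    if (duty < 1.0f)   duty = 1.0f;     /* your stated 1% floor */

    pwm_set_duty_percent(duty);
}
```

In practice the integral term is what supplies the steady-state duty needed to hold the setpoint (a pure P term would fall back to the floor at zero error), and once you clamp the output you will also want some form of anti-windup so the integral does not keep accumulating while the output is saturated.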