It's a shame that the PID values aren't exposed directly in text boxes, but you can affect how the ball behaves (make it react faster or slower, or cause under- or overshoot) by adjusting the kp, ki and kd values.
I'm curious, is there any configuration of the PID parameters that would achieve optimal control for this simple physical system? I suppose the optimum looks like 100% thrust and switching to -100% thrust at just the right time to stop at the target position.
Yes. Reformulate the system's transfer function into a state-space representation, and then solve the algebraic Riccati equation to find an optimal gain matrix. This is known as the LQR problem.
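For this particular demo the plant is roughly a double integrator (thrust in, position out), so a minimal sketch with scipy might look like the following. The Q/R weights are arbitrary assumptions, and note that LQR is optimal in a quadratic-cost sense rather than the minimum-time (bang-bang) sense you describe:

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Ball as a double integrator: state x = [position, velocity], input u = thrust.
    # (Assumes unit mass and no thrust limits; the demo's real dynamics may differ.)
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])

    # Cost weights are made-up assumptions -- tune to taste.
    Q = np.diag([10.0, 1.0])   # penalise position error and velocity
    R = np.array([[0.1]])      # penalise thrust effort

    # Solve the continuous-time algebraic Riccati equation, then form the gain.
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.inv(R) @ B.T @ P

    # Control law: u = -K @ (x - x_target), with x_target = [click position, 0].
    print("LQR gain K =", K)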
Slightly relevant: I've just built a lighting desk that uses motorised faders for some of the control input. To get the PID values for the faders I built a genetic algorithm tool that would evolve the right numbers. Worked a treat, but the first half hour of running had the fader slamming about like nobody's business: I burnt one out during testing... But the really interesting thing I found was that you really need a genetic algorithm to evolve the correct set of parameters for the goodness function itself: different goodness functions produced different results in different timeframes.
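Not my actual code, but the general shape of the GA was roughly this. The simulated fader, the mutation scale and the "goodness" measure (integrated absolute error) are all illustrative assumptions:

    import random

    def goodness(kp, ki, kd, steps=500, dt=0.01, target=1.0):
        """Crude stand-in fader: a unit mass pushed around by the PID output."""
        pos, vel, integral, prev_err = 0.0, 0.0, 0.0, target
        cost = 0.0
        for _ in range(steps):
            err = target - pos
            integral += err * dt
            deriv = (err - prev_err) / dt
            prev_err = err
            force = kp * err + ki * integral + kd * deriv
            vel += force * dt
            pos += vel * dt
            if abs(pos) > 1e6:           # diverged -- penalise heavily and stop early
                return 1e9
            cost += abs(err) * dt        # lower is better
        return cost

    def evolve(pop_size=30, generations=40):
        pop = [[random.uniform(0.0, 20.0) for _ in range(3)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda g: goodness(*g))
            survivors = pop[: pop_size // 2]
            children = [[max(0.0, gain + random.gauss(0, 0.5)) for gain in parent]
                        for parent in survivors]
            pop = survivors + children
        return min(pop, key=lambda g: goodness(*g))

    print("evolved kp, ki, kd:", evolve())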
I've used desks with motorized faders! (And spent a couple of days a week in high school running, maintaining, and programming a small half-million-dollar lighting setup for a big theater.) Recently I've done some firmware programming for theatrical/stage LEDs.
All this to say, did you ever consider writing a test sequence of various position and rate changes, and then analyze the actual vs commanded data offline to get an equation for the response? I've had some success doing that for a similar control problem.
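For what it's worth, the offline analysis I did boiled down to assuming a simple discrete-time model and least-squares fitting it to the logged commanded/actual data. Something like this sketch, where the first-order model is an assumption and the "logged" data is synthesised here just so the snippet runs on its own (in practice u and x come from the capture files):

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for a logged test sequence: synthesise the "actual" response from a
    # hidden first-order system so the fit has something to recover.
    u = rng.uniform(-1.0, 1.0, 500)            # commanded input
    x = np.zeros(501)                          # measured response
    for k in range(500):
        x[k + 1] = 0.95 * x[k] + 0.40 * u[k] + rng.normal(0, 0.01)

    # Assume a first-order discrete model, x[k+1] = a*x[k] + b*u[k],
    # and least-squares fit a and b to the logged data.
    X = np.column_stack([x[:-1], u])
    y = x[1:]
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

    print(f"fitted response: x[k+1] = {a:.3f}*x[k] + {b:.3f}*u[k]")
    # From a and b you can derive an approximate transfer function and choose
    # gains analytically instead of hand-tuning.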
This would be a really cool illustration if you could solve the closed-loop transfer function and display the locations of the closed-loop zeros and poles.
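If the ball is treated as an ideal double integrator driven by an ideal PID controller (a big assumption; the demo presumably clamps thrust), the closed-loop transfer function works out to (kd*s^2 + kp*s + ki) / (s^3 + kd*s^2 + kp*s + ki), so the pole/zero locations fall out of two root computations:

    import numpy as np

    kp, ki, kd = 8.0, 2.0, 4.0   # example gains, not the demo's actual values

    # Plant 1/s^2 (unit-mass ball), controller kp + ki/s + kd*s.
    zeros = np.roots([kd, kp, ki])        # closed-loop zeros
    poles = np.roots([1.0, kd, kp, ki])   # closed-loop poles

    print("zeros:", zeros)
    print("poles:", poles)   # all in the left half-plane for these gains => stable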
A proportional-integral-derivative (PID) controller is a feedback-loop algorithm for control. You give it some desired goal (in this case, reaching a position) and it moves toward that goal by iteratively attempting to minimise the error (in this case, the distance).
A real-life example is the stabilisation of quadcopters ("drones", ugh, I hate that word), where instead of attempting to reach a location the target is to maintain a specific angle relative to the ground.
In short, you're pushing the ball around, and the force you use depends on three things:
- how far the ball is from the target (P, proportional)
- how far the ball has been from the target over time (I, integral), so if you are still far away even after a long time you push harder
- how quickly the ball is approaching the target (D, derivative), i.e. if you're approaching quickly you decrease the force, and if you're approaching too slowly you increase it
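A rough sketch of what that loop looks like in code, using a toy unit-mass ball and arbitrary gains (not the post's actual implementation):

    dt = 0.01
    kp, ki, kd = 10.0, 5.0, 6.0          # arbitrary example gains

    pos, vel = 0.0, 0.0                  # ball starts at rest at the origin
    target = 1.0                         # where you "clicked"
    integral, prev_error = 0.0, target - pos

    for _ in range(1000):                # 10 simulated seconds
        error = target - pos                     # P: how far from the target
        integral += error * dt                   # I: how far, accumulated over time
        derivative = (error - prev_error) / dt   # D: how fast the gap is closing
        prev_error = error

        force = kp * error + ki * integral + kd * derivative
        vel += force * dt                # unit mass, so acceleration = force
        pos += vel * dt

    print(f"final position: {pos:.3f}")  # settles close to the target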
PID is a way to minimize error in a system. It's usually used in industrial settings to control mechanical devices. An example would be wanting an electric motor to rotate at a constant N degrees/second. If you just apply a voltage to the motor, you probably won't get accurate movement, because of the variable characteristics of the motor, the load on the motor, and other factors. To fix this, you can add a sensor (a rotary encoder) that can determine how fast the motor is currently moving or how much it has moved. You then compute the error and use PID control to correct it.
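A hedged sketch of that motor scenario, with a toy first-order motor model standing in for the real hardware (in practice the speed would come from the encoder and the voltage would go to a motor driver; all the numbers here are made up):

    # Toy motor model: d(speed)/dt = (motor_gain*voltage - speed) / tau.
    dt, tau, motor_gain = 0.01, 0.5, 2.0
    kp, ki, kd = 2.0, 1.0, 0.02                 # example gains, not from a real rig

    target = 90.0                               # desired speed, degrees/second
    speed, integral, prev_error = 0.0, 0.0, target

    for _ in range(2000):                       # 20 simulated seconds
        error = target - speed                  # this is what the encoder feedback gives you
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error

        voltage = kp * error + ki * integral + kd * derivative
        speed += (motor_gain * voltage - speed) / tau * dt

    print(f"steady-state speed: {speed:.1f} deg/s")   # ends up near the target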
There are limitations to PID and to what it can and can't be used for. There are also other ways of controlling systems (like state feedback).
I might have made some errors in the above statements because I haven't done control theory in a long time, but any control theory course at a university would cover PID control.
In relation to what the post is about, it seems like they've tuned a PID control loop that computes the error between where you click and where the circle currently is, then moves the circle toward that position with a force (or acceleration) determined by the controller.
Is "state feedback" synonymous with P-only control?
I've got an intake valve at work that we're controlling with P-only control. It basically iterates, with the incremental voltage proportional to the current error in valve position.
I think it's a reference to state-space control. Traditionally, PID controllers fell under 'classical control', which is basically the entirety of control theory before the rise of cheap computing. It involved devising all sorts of "tricks" to make designing controllers tractable by hand (more or less): stuff like frequency-domain representation, pole-zero plots, and the root locus method.
State-space representation is a strictly time-domain approach, where you break your model and controller into a bunch of first-order differential equations. This method is really only tractable with computing support.
In general, state-space and frequency-domain models can in theory accomplish the same goals. For many situations you could design a PID controller using classical methods, then take the same situation and use a state-space approach, and end up with an equivalent controller/behavior.
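As a concrete (and hedged) illustration of the state-space side, here's a ball-like double integrator broken into two first-order equations, with a state-feedback gain found by pole placement. The pole locations are arbitrary choices, not anything from the article:

    import numpy as np
    from scipy.signal import place_poles

    # Double integrator split into first-order equations:
    #   x1' = x2   (position)
    #   x2' = u    (velocity; unit mass assumed)
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])

    # Choose where the closed-loop poles should sit (arbitrary here) and solve
    # for the state-feedback gain K so that eig(A - B*K) lands there.
    K = place_poles(A, B, [-2.0, -3.0]).gain_matrix

    print("state-feedback gain K =", K)
    print("closed-loop poles:", np.linalg.eigvals(A - B @ K))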
PID is the 'hello world' of control theory, which can be used to solve very complex control requirements by iteratively processing feedback. For example, those juggling quad-copters use control theory, even though you might think that's only possible with some kind of machine learning algo.
Turns out to be important sometimes - like when your sensors don't quite give you what you want. But when things get to that level, I call in a specialist to make sure it's done right - got one on speed dial, he does controls full time all the time.
Your comment regarding gravity was really insightful for me (I didn't learn PID formally). When I put in large values for ki, the ball just oscillated and I couldn't see how the integral is useful. After I added gravity (in my local build, not published), I began to see how the integral affects the control. That's awesome!
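That matches the textbook picture: with gravity as a constant disturbance, P-only control settles with a permanent offset (it needs a nonzero error just to hold the ball up), while the integral term winds up until it cancels gravity exactly. A quick toy comparison (made-up numbers, not the demo's actual code):

    def run(kp, ki, steps=5000, dt=0.01, target=1.0, gravity=-9.8):
        pos, vel, integral = 0.0, 0.0, 0.0
        for _ in range(steps):
            error = target - pos
            integral += error * dt
            force = kp * error + ki * integral
            vel += (force + gravity) * dt   # gravity constantly pulls the ball down
            pos += vel * dt
            vel *= 0.9                      # crude damping so the ball settles instead of oscillating
        return pos

    print("P only:", run(kp=50.0, ki=0.0))   # sticks below the target, error about |gravity|/kp
    print("P + I :", run(kp=50.0, ki=20.0))  # the integral builds up and removes the offset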