Show HN: 2D PID controller simulation (nikital.github.io)
73 points by nikital on May 2, 2015 | 38 comments



It's a shame that the PID values aren't exposed directly in text boxes, but you can affect how the ball behaves (react faster/slower, or cause under- or overshoot) by adjusting the kp, ki and kd values.


That's a good idea, I'll add them.

Edit: Done.


You should consider resetting your integral error when you accept new set points.


I agree, I pushed a new update.

Also, I adjusted the defaults for a snappier response with a bit of satisfying overshoot.


This is a special case. If there were gravity acting on the ball, the integrator would be required to maintain altitude.


Ya, but then you would also probably use a separate controller for each dimension.


I'm curious, is there any configuration of the PID parameters that would achieve optimal control for this simple physical system? I suppose the optimum looks like 100% thrust and switching to -100% thrust at just the right time to stop at the target position.


Yes. Reformulate the system's transfer function into a state-space representation, and then solve the algebraic Riccati equation to find an optimal gain matrix. This is known as the LQR problem.

http://en.m.wikipedia.org/wiki/Linear-quadratic_regulator

Switching the output from 100% to -100% is another viable approach, known as bang-bang control, but it is not necessarily optimal.
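
For one axis of this particular demo the plant is essentially a unit-mass double integrator (thrust in, position out), so a minimal LQR sketch in Python could look like the following. The A, B, Q and R matrices are my own illustrative choices, not anything from the demo's source:

    # Hypothetical LQR design for one axis of the ball: a unit-mass double
    # integrator with state [position, velocity] and thrust as the input.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])      # x' = v, v' = u
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])        # penalise position error more than velocity
    R = np.array([[0.1]])           # penalty on thrust effort

    P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
    K = np.linalg.inv(R) @ B.T @ P         # optimal state-feedback gain

    # Control law: u = -K @ (state - setpoint).  The result is a PD-like law
    # whose gains come out of the optimisation instead of hand tuning.
    print(K)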


Slightly relevant: I've just built a lighting desk that uses motorised faders for some of the control input. To get the PID values for the faders I built a genetic algorithm tool that would evolve the right numbers. It worked a treat, but for the first half hour of running the faders were slamming about like nobody's business: I burnt one out during testing... The really interesting thing I found was that you really need a genetic algorithm to evolve the correct set of parameters for the goodness function as well: different goodness functions produced different results in different timeframes.


I've used desks with motorized faders! (And spent a couple of days a week in high school running, maintaining, and programming a small half-million-dollar lighting setup for a big theater.) Recently I've done some firmware programming for theatrical/stage LEDs.

All this to say: did you ever consider writing a test sequence of various position and rate changes, then analyzing the actual vs. commanded data offline to get an equation for the response? I've had some success doing that for a similar control problem.


For some definition of "optimal". I expect you can reach the "5% deviation from target" region much faster if you allow a bit of overshoot.


This would be a really cool illustration if you could solve the closed-loop transfer function and display the locations of the closed-loop zeros and poles.


Can anyone explain what this is?


A proportional-integral-derivative (PID) controller is a feedback-loop control algorithm. You give it a desired goal (in this case, reaching a position) and it moves toward that goal by iteratively attempting to minimise the error (in this case, the distance).

A real-life example is the stabilisation of quadcopters ("drones", ugh, I hate that word), where instead of attempting to reach a location the target is to maintain a specific angle relative to the ground.

As always, Wikipedia is your friend: http://en.wikipedia.org/wiki/PID_controller


In short, you're pushing the ball around, and the force you use depends on three things:

- how far the ball is from the target (P, proportional)

- how far the ball has been from the target over time (I, integrator), so if you are very far even after a long time you try to push harder

- how quickly the ball is approaching the target (D, derivative), i.e. if you're approaching quickly you decrease the force, and if you're approaching too slowly you increase it (a rough code sketch of this loop follows below)
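
A rough sketch of that loop in Python (toy gains and timestep of my own, not the demo's actual code), for one axis of a unit-mass ball:

    # Toy PID loop for one axis of the ball (unit mass, fixed timestep).
    kp, ki, kd = 4.0, 0.5, 3.0
    dt = 1.0 / 60.0

    pos, vel = 0.0, 0.0
    target = 10.0
    integral = 0.0
    prev_error = target - pos

    for _ in range(2000):
        error = target - pos
        integral += error * dt                  # I: error accumulated over time
        derivative = (error - prev_error) / dt  # D: how fast the error is changing
        prev_error = error

        force = kp * error + ki * integral + kd * derivative  # P + I + D

        vel += force * dt   # integrate the physics (a = F/m, m = 1)
        pos += vel * dt

    print(pos)  # settles close to the target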


The article links the Wikipedia page: https://en.wikipedia.org/wiki/PID_controller

If the goal is to teach about PID controllers, then the demo could use a graph showing the parameters through time.


- Brown ball is physically simulated (has inertia)

- A program controls the ball only via two thrusters (x/y), with the thrust represented as orange bars

- Program's goal is to deliver the ball to a given point (last click position)

There is a link in the footer, https://en.wikipedia.org/wiki/PID_controller, which explains what a PID controller is in this context.


PID is a way to minimize error in a system. It's usually used in industrial settings to control mechanical devices. An example: say you want an electric motor to rotate at a constant N degrees/second. If you just apply a voltage to the motor, you probably won't get an accurate movement because of the variable characteristics of the motor, the load on the motor, and other factors. To fix this, you can add a sensor that can determine how fast the motor is currently moving or how much it has moved (a rotary encoder). You then compute the error and use PID control to correct it.
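
As a hedged sketch of that motor example (a simple PI speed loop; read_encoder_speed and set_motor_voltage are hypothetical stand-ins for real hardware I/O, and all numbers are made up):

    # Toy PI speed loop for the motor example above.
    kp, ki = 0.8, 2.0
    dt = 0.01                     # 100 Hz control loop
    target_speed = 90.0           # degrees per second
    integral = 0.0

    def control_step(measured_speed):
        global integral
        error = target_speed - measured_speed
        integral += error * dt
        voltage = kp * error + ki * integral
        # (a real loop would also limit the integral to avoid windup when clamped)
        return max(min(voltage, 12.0), -12.0)   # clamp to the supply rails

    # while True:
    #     set_motor_voltage(control_step(read_encoder_speed()))
    #     time.sleep(dt)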

There are limitations to PID and what it can and can't be used for. There are also other ways of controlling systems (like state feedback).

I might have made some errors in the above statements because I haven't done control theory in a long time, but any control theory course at a university would cover PID control.

In relation to what the post is about, it seems they've tuned a PID loop that computes the error between where you click and where the circle currently is, then moves the circle to that position with a force (/acceleration) determined by the controller.


Is "state feedback" synonymous with P-only control?

I've got an intake valve at work that we're controlling with P-only control. It basically iterates, with the incremental voltage being proportional to the current error in valve position.

Just trying to brush up on my lingo.


I think it's a reference to state-space control. Traditionally, PID controllers fell under 'classical control', which is basically the entirety of control theory before the rise of cheap computing. It involved devising all sorts of "tricks" to make designing controllers tractable by hand (more or less) - stuff like frequency-domain representation, pole-zero plots, and the root locus method.

State-space representation is a strictly time-domain approach, where you break your model and controller into a bunch of first-order differential equations. This method is really only tractable with computing support.

In general, state-space and frequency-domain models can in theory accomplish the same goals. For many situations you could design a PID controller using classical methods, then take the same situation and use a state-space approach, and end up with an equivalent controller/behavior.
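
For the ball in this demo that state-space form is tiny. Here is an illustrative version (one axis, unit mass, my own numbers), where the state-feedback gain K = [kp, kd] is just a PD controller in state-space clothing:

    # Illustrative state-space model of one axis of the ball:
    # state x = [position, velocity], input u = thrust.
    import numpy as np

    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])

    K = np.array([[4.0, 3.0]])          # state feedback u = -K (x - x_ref)

    x = np.array([[0.0], [0.0]])
    x_ref = np.array([[10.0], [0.0]])
    dt = 1.0 / 60.0

    for _ in range(600):
        u = -K @ (x - x_ref)
        x = x + (A @ x + B @ u) * dt    # forward-Euler step of x' = Ax + Bu

    print(x.T)   # position settles near 10, exactly like the equivalent PD loop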


PID is the 'hello world' of control theory, which can be used to solve very complex control requirements by iteratively processing feedback. For example, these juggling quadcopters use control theory, even though you might think this is only possible with some kind of machine-learning algo.

https://www.youtube.com/watch?v=3CR5y8qZf0Y


>> PID is the hello world of Control Theory...

While that's true, it's surprisingly effective and as a result is very widely used.


Is there something more complicated that's actually used?


>> Is there something more complicated that's actually used?

Yes, absolutely. They're less common, but so are the people who can implement them.


This was my indirect way of asking for examples :).


And that was my indirect way of deferring to someone farther down the path than me. But Wikipedia has a detailed overview:

http://en.wikipedia.org/wiki/Control_theory

I was surprised not to see Kalman listed in the people section. His contribution:

http://en.wikipedia.org/wiki/Kalman_filter

It turns out to be important sometimes - like when your sensors don't quite give you what you want. But when things get to that level, I call in a specialist to make sure it's done right - got one on speed dial, he does controls full time, all the time.
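
For a flavour of the "sensors don't quite give you what you want" part, here's a minimal one-dimensional Kalman filter (a toy of my own: estimating a constant value from a noisy sensor, all numbers illustrative):

    # Minimal scalar Kalman filter: estimate a constant from noisy readings.
    import random

    x_est, p_est = 0.0, 1.0      # state estimate and its variance
    q, r = 1e-5, 0.5 ** 2        # process noise and measurement noise variances
    true_value = 7.0

    for _ in range(200):
        z = true_value + random.gauss(0.0, 0.5)   # noisy sensor reading

        p_pred = p_est + q                # predict (state modelled as constant)
        k = p_pred / (p_pred + r)         # Kalman gain
        x_est = x_est + k * (z - x_est)   # correct with the measurement
        p_est = (1.0 - k) * p_pred

    print(x_est)   # close to 7.0 despite the noisy sensor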


This is how cruise control in your car works, in almost all cases.


Also how temperature control in most ovens works!


Would be pretty nice to have some external force (like gravity); otherwise the I term is pretty much useless.

Also, some friction would be nice.


Your comment regarding gravity was really insightful for me (I didn't learn PID formally). When I put in large values for I, the ball just oscillated and I couldn't see how I is useful. After I added gravity (in my local build, not published), I began to see how the integral affects the control. That's awesome!
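
To make that concrete, here's a toy sketch (my own numbers, not the demo's code): with a constant disturbance like gravity, PD alone settles short of the target because it needs a non-zero error to generate the force that cancels gravity, while the integral term slowly accumulates until the offset disappears.

    # Toy vertical axis with gravity: PD alone leaves a steady-state droop,
    # the I term removes it.
    kp, ki, kd = 4.0, 0.5, 3.0
    g = 9.8
    dt = 1.0 / 60.0

    pos, vel, integral, prev_error = 0.0, 0.0, 0.0, 10.0
    target = 10.0

    for _ in range(6000):
        error = target - pos
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error

        force = kp * error + kd * derivative + ki * integral  # set ki = 0 to see the droop
        vel += (force - g) * dt     # gravity acts as a constant disturbance
        pos += vel * dt

    print(pos)  # with ki > 0 this reaches the target; with ki = 0 it settles
                # about g / kp = 2.45 units short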


Looking at the source, this is actually a PD controller, which makes sense for a simple simulation like this.


One way to make the integral term useful would be to add some static friction to the simulation; that would be interesting.


For a complete Python PID loop implementation and (console) demo:

    git clone https://github.com/pjkundert/ownercredit.git
    PYTHONPATH=$PWD python ./ownercredit/pid.py


Nice demo. Thanks. FYI http://nikital.github.io/ is 404.


Yeah, I didn't put anything there yet, but thanks for the heads-up.


Challenge HN: Make it travel in a perfect circle. Comment with your strats.


Make it neutrally stable, e.g. Kp = 1, Kd = 0, Ki = 0.

Click a distance x to the right of its standstill position. When it is halfway there, click a distance x below its current position (a circle is just two perpendicular oscillations a quarter period out of phase).


Very nice simulation!



