OK, so on the weekend we got our third line follower wired up, and got the robot following lines. And it sort of worked. But every time the line curved, the robot would overshoot, then have to seek the line again. Not great. The line following code was essentially a loop that went something like this:
    while true:
        sleep a bit
        if centre over line:
            go forward
        else if left over line:
            turn left
        else if right over line:
            turn right
        else:
            seek line
The problem here is that it’s very sensitive to the length of the “sleep a bit,” and quite frankly, I don’t want to be messing around trying to tune that parameter. Furthermore, if we want the robot to do other things as well, we don’t want the “sleep a bit” to be too small relative to the time spent doing stuff in the loop, or the robot might not get a chance to do everything it needs to do. So I decided it was time to change my approach, to event-driven programming.
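To make the tuning problem concrete, here is a runnable Python sketch of that polling loop. The interval name, sensor readings and action names are my own placeholders, not the robot's actual code:

```python
import time

POLL_INTERVAL = 0.02  # seconds -- the "sleep a bit" that needs tuning

def line_follow_step(centre, left, right):
    """One pass of the polling loop: map sensor readings to an action."""
    if centre:
        return "forward"
    elif left:
        return "left"
    elif right:
        return "right"
    else:
        return "seek"

def run(read_sensors, act):
    """Poll forever: sleep, read the three sensors, act on the result."""
    while True:
        time.sleep(POLL_INTERVAL)
        act(line_follow_step(*read_sensors()))
```

Everything hinges on `POLL_INTERVAL`: too long and the robot overshoots curves before it reacts; too short and the loop hogs time that other tasks need.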
What is event-driven programming? Well, the easiest way to think about it is the way you interact with your computer. The processor isn’t actively watching for keyboard input, mouse movements or mouse clicks; instead, when one of these things happens, it generates an event, and the programme has in place handlers for different types of events. The job of the programmer becomes that of writing the handlers (and registering them to handle particular events).
Now, although I’ve done plenty of event-driven programming before, in all sorts of languages, I’m new to Python for this project, so it was with a little trepidation that I approached this task. To my relief though, I discovered that event handling is already part of the RPi.GPIO library, so I didn’t have to go back to basics. So now, the programme can be written in terms of events: in this case, the events are changes in the state of the centre line follower – what to do when it finds the line, and what to do when it loses it.
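As a sketch of what this looks like, RPi.GPIO lets you register a callback with `add_event_detect` that fires on a pin's edge transitions. The pin number and motor helpers below are assumptions standing in for the real wiring and drive code (the import is guarded so the decision logic can also run off the Pi):

```python
try:
    import RPi.GPIO as GPIO  # only available on the Pi itself
except ImportError:
    GPIO = None  # lets the decision logic run (and be tested) elsewhere

CENTRE_PIN = 18  # assumption: centre line sensor on BCM pin 18

def decide(centre_on_line):
    """Map a centre-sensor state change to an action name."""
    return "forward" if centre_on_line else "seek"

def on_centre_change(channel):
    """Callback fired by RPi.GPIO whenever the sensor pin changes state."""
    action = decide(GPIO.input(channel))
    if action == "forward":
        drive_forward()  # placeholder motor helper
    else:
        seek_line()      # placeholder motor helper

if GPIO is not None:
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(CENTRE_PIN, GPIO.IN)
    # Fire on both rising and falling edges; debounce for 20 ms.
    GPIO.add_event_detect(CENTRE_PIN, GPIO.BOTH,
                          callback=on_centre_change, bouncetime=20)
```

The nice part is that there is no sleep interval to tune at all: the library watches the pin in a background thread and the handler runs only when the sensor actually changes state.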
When the robot is performing a simple task like line following, event-driven programming is fairly straightforward (just a simple reactive approach, where an event triggers an action). The next stage though is autonomous navigation, and we want to use multiple different sensors for that, combining the information to make a decision. For that, I think we’ll need to take a more layered approach, where certain events cause direct action (e.g. stop to avoid a collision), but others require higher-level reasoning. For those who know about these things, yes, I’m thinking in terms of Brooks’s subsumption architecture, though I won’t necessarily be following that model exactly.
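A minimal sketch of that layered idea: behaviours are ordered by priority, and the highest-priority behaviour that wants control wins, with the reflex layer (collision avoidance) subsuming the rest. The behaviour names, sensor fields and thresholds here are all illustrative assumptions, not the robot's actual design:

```python
def avoid_collision(sensors):
    """Reflex layer: stop immediately if something is too close."""
    if sensors.get("distance_cm", 100) < 10:  # 10 cm threshold is arbitrary
        return "stop"
    return None  # defer to lower-priority layers

def follow_line(sensors):
    """Middle layer: keep following the line if we can see it."""
    if sensors.get("centre_on_line"):
        return "forward"
    return None

def wander(sensors):
    """Bottom layer: default behaviour when nothing else claims control."""
    return "seek"

BEHAVIOURS = [avoid_collision, follow_line, wander]  # highest priority first

def arbitrate(sensors):
    """Return the action of the first behaviour that claims control."""
    for behaviour in BEHAVIOURS:
        action = behaviour(sensors)
        if action is not None:
            return action
```

Each sensor event would update the `sensors` dictionary and re-run `arbitrate`, so a close obstacle always overrides line following, which in turn overrides seeking.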
Now, I would love to be able to give you a little video of our performance, but when I went to do this, I discovered that we had a more serious problem: the motors aren’t running! And they’re not running even under remote control, for which the code is unchanged, so I think we have a hardware problem. Somewhere in the course of adding the third line following sensor, and the camera that was attached this week, we seem to have messed things up. Ho hum. Hopefully we’ll sort that out over the coming weekend.