Rewriting code to another language

Jamie
Rewriting code to another language

Hi Everyone,

I was wondering if anyone has thought about or tried rewriting the code in a compiled language like C#. I know Thomas rewrote the code in Java. I'm wondering if there might be a performance gain to be found if the code were in a compiled language like C# instead of an interpreted language like Python.

I think it might be possible to do this in C#. I was able to write a very basic program (the obligatory hello world!), compile it on the Raspberry Pi using Mono, and get the resulting .exe to run in the Pi terminal.

This would be a huge challenge to actually do, and not one I'm really sure I could, but I'm curious about what others thought or if they had any experience in trying this.

Arron Churchill
Compiled programs for Formula Pi

Generally speaking a compiled program should work just as well as an interpreted one, so I do not see that as being a problem :)

From memory all of the bits needed to use OpenCV from C / C++ should already be installed. Other languages like C# will either need to use the C++ based library directly or have their own wrapper library added.

We are happy to help out where we can with any team trying to use a different language. If you need us to check if programs run or set up an SD card we should be able to find the time ;)

One more idea to think about: if you are intending to improve the speed of the code by having it compiled, it might be worth looking at Cython as another possible option.


On a slightly different but related path, have you tried PyPy, Arron? I'm planning to give it a try in the future. My thoughts are that the only sticking point may be the ThunderBorg library.

Jamie, with something like PyPy the code is JIT compiled: it compiles to native code as it executes, and per-thread performance is generally significantly higher than the standard Python interpreter's. It doesn't resolve some of Python's deeper limits like the GIL, which has implications for multithreading.
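To make the JIT point concrete, here is a minimal sketch (not from the Formula Pi code) of the sort of pure-Python hot loop where PyPy tends to shine; the `blend` function and the sizes are invented for illustration:

```python
import timeit

def blend(a, b, alpha):
    """Pixel-by-pixel blend written in pure Python.

    CPython interprets every loop iteration; PyPy's JIT compiles this
    loop to native code once it warms up, so the same file typically
    runs much faster under `pypy blend.py` than `python blend.py`.
    """
    return [alpha * x + (1.0 - alpha) * y for x, y in zip(a, b)]

if __name__ == "__main__":
    a = list(range(10000))
    b = list(range(10000, 0, -1))
    t = timeit.timeit(lambda: blend(a, b, 0.5), number=100)
    print("100 blends of 10k values: %.3fs" % t)
```

Running the same file under both interpreters and comparing the printed times is a quick way to gauge whether your own hot loops would benefit.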

Arron Churchill

I have not tried PyPy, it would be interesting to see how well it would work.

The ThunderBorg library is essentially pure Python code using file I/O to communicate with the board. I do not suspect it would cause any issues, but it would need to be tested with an actual board to prove it is working correctly.

One thing to bear in mind with either Cython or PyPy is that the underlying implementation of OpenCV is already compiled from C++ code. This means that the difficult image processing tasks already run fairly fast.
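As a rough illustration of why compiling the Python layer gains little here: the heavy work already happens in native code, so only the thin interpreted glue is left to speed up. This sketch uses the C-implemented built-in `sum` as a stand-in for OpenCV's native routines; it is an analogy, not a measurement of OpenCV itself.

```python
import time

def python_sum(values):
    # Every iteration of this loop goes through the interpreter.
    total = 0
    for v in values:
        total += v
    return total

values = list(range(1_000_000))

t0 = time.perf_counter()
slow = python_sum(values)
t1 = time.perf_counter()
fast = sum(values)  # implemented in C, like OpenCV's image routines
t2 = time.perf_counter()

assert slow == fast
print("interpreted loop: %.4fs, C built-in: %.4fs" % (t1 - t0, t2 - t1))
```

Cython or PyPy can only close the gap on the left-hand number; the cv2 calls are already on the right-hand side.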

Jamie
Good point

That's a good point Arron. That would explain why the tests I ran showed speed improvements on things like basic maths functions and opening and closing a log file, but when I tried to open, crop and save an image it actually ran slightly slower under Cython (only by fractions of a second).

I suspect the Java rewrite was more about preferred language than any speed advantage. Though with two season wins, Thomas must be doing something right!

I think I'll leave this idea for now and focus on other areas for improvement. I think I've got the image detection bugs worked out in my current code, and in my last race I actually managed to complete a reasonable number of laps.

Arron, if you were to improve the base code, what would be your top 3 areas to focus on?

Arron Churchill
Top three areas

There are two things that immediately stand out as being good areas to focus on:

  1. Detecting other robots in the way
  2. The actual overtaking of other robots

Picking the third option is harder; there are a few good contenders:

  • Recovery code when stuck
  • Detecting you are driving the wrong way at the edges of the track
  • Improve the driving when flipped over
  • Try to drive across the S-curve in a straighter line

Generally speaking the standard code works well when things are fine, but it struggles with obstacles, crashes, and mistakes.

Jamie
Any suggestions

Do you have any suggestions on how to go about these? Is there something obvious that we're all missing? I hope it's OK to ask, and that you're not giving big hints to just one team, since everyone can see this.

Here is what I've done so far, or think could be done. I'll share some of my ideas and strategies in the hope that others might do the same.

    1. Detecting other robots. I've had some pretty good success using OpenCV's Haar cascades, though training one on what a MonsterBorg looks like took some time.
    2. Overtaking. At the moment I'm just using the base overtake, but I would probably want something like another mode to switch to that allowed for quicker lane changes.
    3. Recovery code when stuck. That's tough, since there are many ways to be stuck. You'd have to look at the common ways people get stuck and at what the camera sees, or doesn't see.
    4. Detecting you are driving the wrong way at the edges of the track. Perhaps count the number of left and right turns you make. Since there should be more right turns than left, you could tell you were going the wrong direction if the count of left turns suddenly became higher.
    5. Improve the driving when flipped over. I guess you could fine-tune the amount of cropping the image processing does depending on the orientation of the MonsterBorg. How far off vertical centre is the camera? Is there a big difference in height off the track between upside-down and right way up? I've had it said to me that I seem to run better upside-down :)
    6. Try to drive across the S-curve in a straighter line. The only way that comes to mind would be to use the WaitForDistance function and, once a certain distance is reached, disable the line following for a few seconds and drive straight. This could have interesting results.
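The left/right tally from point 4 could be sketched something like this; the `TurnCounter` class, window size and threshold are all invented and would need tuning on a real track:

```python
from collections import deque

class TurnCounter:
    """Track recent steering commands to spot wrong-way driving.

    On a track with more right turns than left, right turns should
    dominate over any reasonable window; if left turns dominate for a
    while, we are probably facing the wrong way. The window size and
    threshold here are guesses to tune against real laps.
    """

    def __init__(self, window=200, threshold=20):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def update(self, steering):
        """steering < 0 means a left turn, > 0 a right turn."""
        if steering < 0:
            self.history.append(-1)
        elif steering > 0:
            self.history.append(1)

    def wrong_way(self):
        # More lefts than rights by `threshold` suggests we are reversed.
        return sum(self.history) <= -self.threshold
```

Feeding it each frame's steering output and checking `wrong_way()` once per frame would be the obvious way to wire it in.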

Does anyone else have any ideas or strategies they'd be willing to share?

jtobinart

Jamie, thanks for sharing some of your ideas and strategies and I also hope others will share as well.

  1. Detecting other robots. I veer away from any type of specific object detection because it can use up a considerable amount of the Pi’s hardware resources; instead I have tried to detect all black objects, i.e. the walls, the robots’ tires and dark shadows. It works fine until overtake takes over.
  2. Overtaking. I too am using the base overtake. I am planning on disabling it to see how the bot performs with just my obstacle avoidance code.
  3. Recovery code when stuck. I believe it was during the first challenge or race of last season that my bot’s tire snagged a dead or troll bot. Because my bot’s camera was pointed in a clear direction and there was enough motion to keep it from going into stuck mode, it just sat there spinning its tires. I now compare multiple frames over 10, 20 and 30 second intervals; if all three raise a stuck flag then my bot is programmed to try to reverse. I am going to update it to rock back and forth if the initial reverse fails to get it unstuck.
  4. Detecting you are driving the wrong way at the edges of the track. I didn’t know that this was an issue. I have noticed several times that the bot thinks it has flipped when it hasn’t. I was planning on tweaking the existing code.
  5. Improve the driving when flipped over. Cropping is definitely the way to go. I noticed a big improvement when I cropped the image differently for flipped versus not flipped.
  6. Try to drive across the S-curve in a straighter line. The best way to do this is not to use the lanes at all. I was thinking of using a localization method like a particle filter plus a path-planning algorithm to drive around the track. We can take distance measurements from the camera: we know the outer wall is 150mm all the way around, so we can measure distance against it, and we can also predict the distance of an object from how many pixels away it appears. Doing some testing with the camera, I can accurately predict the distance of an object to within 2-3cm of its actual location for distances between 30 and 100cm. Beyond 100cm the error rate goes up considerably; below 30cm, without sufficient lighting, the error rate also goes up.
  7. Other suggestions. Change your image size to 160x128. This has cleaned up a lot of the mess 160x120 created and it does not affect the base code.
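The pixel-based distance estimate from point 6 is essentially the pinhole-camera relation. Here is a hedged sketch where `focal_px` is a placeholder to calibrate against the real camera; the 2-3cm accuracy quoted above comes from jtobinart's own testing, not from this code.

```python
def estimate_distance_mm(real_height_mm, pixel_height, focal_px=500.0):
    """Pinhole-camera distance estimate.

    distance = focal_length_in_pixels * real_size / size_in_pixels

    `focal_px` is a made-up default: calibrate it by photographing an
    object of known size at a known distance, then
    focal_px = pixel_height * distance / real_height.
    """
    if pixel_height <= 0:
        raise ValueError("object not visible")
    return focal_px * real_height_mm / pixel_height

# Example: a 150mm wall appearing 75 px tall with focal_px = 500
# would be estimated at 1000mm away.
```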
My two cents

Thanks Arron for the suggestions and Jamie for asking the question, and thank you Jamie and James T. for sharing your ideas. Here are my thoughts.

Disclaimer... PicoHB has never used the standard code (although there is a lot of its DNA in mine, since that was my starting point) and it has taken two seasons to get close to being competitive. You might not have noticed, because it went horribly wrong soon after, but the last full lap that PicoHB did in the B-Final before breaking down was our personal best lap, at something like 16.99 seconds, and that felt like a win in itself, even though we came last. :S

1. Detecting other robots.
Walls are black, tyres are black. Our strategy is to try to avoid objects that are black and aim for track that is less black. This doesn't work when you get very close to other robots or bright lights.

2. Overtaking.
PicoHB will rarely overtake a robot, which is an issue for us, and here is why. We tend to slow down to improve the avoidance of obstacles, so the closer we get to the back of a leading robot the slower we will go. All we can do is hope that the robot ahead makes a mistake which isn't ideal.

3. Recovery code when stuck.
We have a similar approach to JT, averaging the difference between a series of images over time (30 seconds does seem excessive?). But rather than going into reverse we send random 'wriggle' signals to try to jolt the robot to a position where it can make a fresh decision about how to proceed.
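A sketch of the interval-based stuck check described by JT and above, with the recovery action (reverse, or a PicoHB-style wriggle) left to the caller. The frame format (a flat list of pixel values), window lengths and threshold are all placeholders:

```python
class StuckDetector:
    """Raise a stuck flag when frames barely change over several windows.

    Keeps snapshots from roughly 10, 20 and 30 seconds ago (measured
    in frame counts here) and compares each against the current frame.
    Only if all three comparisons show little movement do we report
    stuck, mirroring the three-flag approach described above.
    """

    def __init__(self, frame_rate=10, threshold=2.0):
        self.threshold = threshold
        self.offsets = [10 * frame_rate, 20 * frame_rate, 30 * frame_rate]
        self.frames = []

    @staticmethod
    def mean_abs_diff(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    def update(self, frame):
        self.frames.append(frame)
        # Keep only as much history as the longest window needs.
        self.frames = self.frames[-(self.offsets[-1] + 1):]
        if len(self.frames) <= self.offsets[-1]:
            return False  # not enough history yet
        return all(
            self.mean_abs_diff(frame, self.frames[-1 - off]) < self.threshold
            for off in self.offsets
        )
```

On the robot you would call `update()` once per captured frame and trigger the reverse or wriggle routine whenever it returns True.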

4. Driving the wrong way at the edges.
Until the tail end of last season PicoHB was using a method which included the track edges for wrong-way detection, e.g. if you see black-green to the left of green-blue then turn around. In the end, though, I was getting too much false black detection on the track, triggering too many false spins. So recently I have gone back to only using the red-green line for spin detection and am trying to avoid following lines at the edges. I feel that the robot's field of view is about three lanes wide at about halfway up the image, so a small amount of oscillation in the steering should make sure we see the centre line at least every few seconds.

5. Driving whilst flipped.
As per James T's answer, finding the best place to crop the image while flipped matters a lot. I suspect an alternative set of PID values for running upside down would also help, but I only use the natural PID of the track so I can't suggest changes to specific values. In the standard code (from memory) there is a value which multiplies the distance between lines based on how high up the image the scan is; this value would definitely need to vary between flipped and non-flipped, as the 'vanishing point' of a flipped image is much lower down than in an upright image.
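The orientation-dependent cropping could be as simple as choosing a different band of rows; the fractions below are invented placeholders, not values from anyone's code:

```python
def crop_for_orientation(image, flipped):
    """Pick a different band of rows depending on orientation.

    `image` is any row-indexable frame (a list of rows, or a numpy
    array). The fractions are placeholders to tune: when flipped the
    camera sits at a different height and the vanishing point drops,
    so the useful band of the image moves.
    """
    height = len(image)
    if flipped:
        top, bottom = int(height * 0.30), int(height * 0.70)
    else:
        top, bottom = int(height * 0.45), int(height * 0.85)
    return image[top:bottom]
```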

6. Cutting the chicane.
Shush, this is where PicoHB works best; I don't want to give away our secrets! But yes, as JT says, less reliance on line-following seems to be the key.

7. Other suggestions.
Absolutely critical seems to be restarting quickly after a battery disconnect/reboot. I feel like the standard code tends to exit the program whenever an error is detected; ours runs in an infinite loop of retries. Wrapping (Python) code in try/except blocks helps to keep the robot at least doing something even if a minor exception is raised elsewhere. More logging also helps, especially where you have future improvements in mind, and capture as many images as you can for future testing. On image capturing, it seems sensible to move it to a separate thread rather than saving images in-line with other processing. PicoHB will skip saving images if other processing is taking place, to prioritise better driving over image capturing. By using picamera rather than OpenCV's VideoCapture you can get much better control over the camera, but that is a whole other discussion in itself.
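The infinite-retry pattern described above can be sketched like this; `run_race_loop` is a stand-in for whatever the real main loop is called, and the `max_retries` argument exists only so the sketch can terminate:

```python
import time
import traceback

def run_race_loop():
    """Stand-in for the real frame-grab / image-processing loop."""
    raise RuntimeError("camera glitch")  # simulate a transient failure

def main(max_retries=None):
    """Restart the race loop forever instead of exiting on error.

    Wrapping the loop like this keeps the robot doing *something*
    after a brownout or camera hiccup. On the robot you would leave
    max_retries as None so it never gives up.
    """
    attempts = 0
    while max_retries is None or attempts < max_retries:
        try:
            run_race_loop()
        except KeyboardInterrupt:
            raise  # still allow a clean manual stop
        except Exception:
            traceback.print_exc()
            time.sleep(0.1)  # brief pause before retrying
        attempts += 1
    return attempts
```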

8. Hold on, wasn't there only supposed to be 3!?

