Race Simulation

andythegrouch
Lap counter not working

Hi, I have successfully got the simulation running, but the lap counter function always seems to return 0. Is anyone else having this problem? I am running both sets of code on the same computer under Ubuntu 16.04; could that be the issue?

piborg
Simulation version 1.0.1

Are you running the latest version of the simulation?

We updated the simulation on SourceForge yesterday to include the start marker, which the code uses to determine that it has completed a lap.
There is an image attached below of what the simulation should look like when first started.

Comment Images: 
virtualuk
Put a link or highlight it on the main website page

Folks - maybe I'm blind and can't see it, but here are a couple of recommendations, as I've been caught off guard:

* Put a link, highlighted on the front page of the website, to the simulation, with instructions on how to obtain and build it
* Put key dates for code testing / race days / etc. on the front page of the website. Maybe I wasn't paying attention, but the first I heard of code testing for races was today, and the submission is due tomorrow. It kinda feels like some of these things are the planning permission notices for the demolition of Arthur Dent's house to make way for a bypass.

*edit* - (please don't take this the wrong way - all comments made here are meant to help the not-so-enlightened, such as myself, participate in this brilliant endeavour).

I'm going to hazard a guess that there are a fair few folks who have entered the races without a physical bot and are looking to use the great work that computernerd486 has put together with the simulation in order to get things going. To figure out where to get the simulation you have to dig around in the forum to find the GitHub location. Once you've downloaded the bundle it's not entirely clear what's supposed to go where or how to launch the simulation. A quick look around and I see the jar and batch file, and hey presto, the simulation is up. I press the start button and... zilch. After some more digging around on the forums it seems you have to have an R-Pi to actually run some code (still not entirely sure what exactly) and the simulation is supposed to hook up to it.

All in all, that's a fair amount of digging around, with no real clear guidance for first-timers working with the bots.

I'm happy to pitch in and put together documentation, and contribute to the cause that way to make it easier on newbies later on, but I do feel like documentation in general (not just for the simulation, but everything - what it takes to participate, what the minimum requirements are, how to get everything set up, etc.) is a little haphazard and could do with some improvement.

Again, this is really not meant with any malice; I'm just trying to make it a better experience for anyone who might have got involved with Formula Pi because it sounded like a fun activity, one with possible educational aspects for the family, and who might be coming in cold.

piborg
Formula Pi getting started guide

I think you are right, we probably should have some kind of getting started guide for Formula Pi.
It would be great if you could help us with this; ideally it wants to be written from a beginner's point of view.

The dates are a bit tricky; based on what we have seen this morning, they have caught a few people out.
At the moment they are on the "Dates" page on the top menu bar of the site (see here).
We did this because there are a couple of pages of dates and we did not want to make the front page too confusing.
Do you think there is some way we can make this easier for people to find?

You are right that most of our entrants do not have a physical YetiBorg to test with.
What we have done is added the simulation made by computernerd486 into our standard race code repository.
This is an already compiled version: https://sourceforge.net/p/formulapi/code/ci/master/tree/Simulation/
There is also an explanation of how to use the SimulationFull.py script with the simulation in the Guides directory: https://sourceforge.net/p/formulapi/code/ci/master/tree/Guides/Simulatio...

Most of our getting started information is also in the Guides directory: https://sourceforge.net/p/formulapi/code/ci/master/tree/Guides/
It sounds like this was either not obvious or a bit too technical, maybe even both.

We would like people of all levels of ability to feel they can compete in Formula Pi.
As this is our first attempt at running such a competition it is still a little rough around the edges.
The more help we can get to improve the end experience the better it will be for everyone :)

piborg
Getting started guide

We have added a getting started guide as a link on the front page now.
It is a work in progress, we still have a fair amount to add like getting the simulation running.

You can view it here: Getting started guide.

virtualuk
I'm more than happy to help

I'm more than happy to help wherever I can. Am I right in thinking that the race code can only be run on an R-Pi, or can it be executed on, say, a Windows machine? The reason for asking is that it would help simplify the getting started steps if everything (meaning race code and simulation) could run from a single machine, and then increase the complexity by loading code onto the R-Pi.

piborg
Running the race code for the simulation

At the moment the SimulationFull.py script does not run correctly under Windows unfortunately.

I have a theory about how we might get it working under Windows as well, but as we have been busy preparing for the first test round I have not had enough time to check whether it will work.
When I can find some time I will see if it can be fixed; if so, it would make things easier for people.

virtualuk
Will it work ok if the code

Will it work OK if the code is running in a VM, or is there something specific about the R-Pi that's needed?

piborg
Running SimulationFull.py

The script should run on any Linux system with Python, OpenCV, and Numpy all installed.
A VM should work fine as long as the networking setup allows TCP communication to the system running the simulation.

On Windows this is the line in the script that fails:

    Globals.capture.open()
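
A minimal sketch of the kind of workaround involved, assuming imageUrl points at the simulation stream (illustrative only, not the official fix):

import cv2

imageUrl = 'http://127.0.0.1:10000/view.png'  # example stream address
capture = cv2.VideoCapture(imageUrl)
if not capture.isOpened():
    # On some platforms open() must be given the source explicitly;
    # calling it with no arguments raises a TypeError.
    capture.open(imageUrl)
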
Geoff Riley
Running SimulationFull.py on Linux

I've had my RaspiZ running plugged into the USB as an Ethernet gadget, which works beautifully for me... once I found the right settings.

However, getting SimulationFull.py to run on the Linux box at the same time as the simulator has proven unrealistically fast in response. That's probably because I've got 8 cores on my processor (and hyper-threading, so it thinks there are 16!), so all the processes get their own threads.

As for the Globals.capture.open() problem: the only time I ever see that is when I forget to start the simulation server or I've changed IP address because I'm on my lappy. Oh, and I used to get it originally because I had to run the server on port 10001; I have a service already running on 10000.

Hope that helps.
Geoff

awood85
Obstacle Detection

Is there a way to place an object (maybe YetiBorg-sized!) at various locations on the track in the simulation? It'd be great for testing how the obstacle detection and avoidance is working.

Adam

piborg
Obstacle Detection

The current simulation does not have any options for that unfortunately.

If you have some knowledge of Java and OpenGL, or are feeling up for a challenge, you could try adding some dark cubes to the rendering code.

The full source for the simulation can be found on computernerd486's GitHub page: https://github.com/computernerd486/FormulaPiSim

The relevant class is TrackView; its draw function is responsible for going through the steps to render both the overhead and camera views each frame.

vitor.ramalho
Obstacle Detection

My knowledge is pretty limited on that subject, but I support your request and I will eventually spend some time trying to implement it. It would be awesome if @computernerd486 could help us out or even guide our trials.

computernerd486
Hello Again! I'm back to developing.

I've been pretty busy over here (4 kart races, and replacing the suspension/brake bits on the winter vehicle in the past couple of weeks), so I haven't been able to make updates as I would have liked.
To answer the question, yes, obstacle detection is on the plate!

I haven't even been able to get my competition code vetted and sent across yet... (hence my absence from the first two races). But I have started work on this again!

The first thing I've tackled was some inconsistency in high-speed image reading; I've managed to bring that way down, from 20-30 ms per call previously to 4 ms now. This was an issue for the simulation <-> Pi link across the network; I was noticing a lot of dropped frames due to delay. I've included the screenshot to show the new and improved image serving timing.

Watching the first few races, I agree the biggest thing that needs adding is the obstacle detection.
Second is a "flip" option, to simulate the thing running upside down. Both of these were quite entertaining to watch.
I know there is some desire for the simulation to run on the Pi itself; it may be limited to the Pi 3, but I can look at trying that here.

Are there any other items you guys would like wish-listed?

Comment Images: 
Geoff Riley
Wish list?

Welcome back! I thought you'd gone quiet...

I had a look at your code myself, but personal problems (as well as trying to keep my OU studies going) prevented me from attempting to dive in to make any changes... many of your code comments remind me of me. :^)

Collision detection and prevention of exiting the track would be great additions to what is already a fantastic piece of code.

If you want to stretch things further, then allowing for two (or more) bots simultaneously would give some serious testing possibilities. A static car or two to simulate different blockage conditions is far more important though.

Kind regards,
Geoff

awood85
Wish List

I also had no luck with the Java side of it - not my background at all.

Apart from the obstacle detection already discussed, the next major thing (and maybe not possible) would be solid walls; it's difficult to test more aggressive code that may cause the occasional wall grind or crash when the bot currently just disappears into the void.

Fingers crossed we get off the start line tonight! See you all there - good luck to the other competitors.

Adam

* Edit - I see Geoff already mentioned that - great minds obviously. I second his suggestion then!

computernerd486
Lap Tracking

Hello Guys,

I'm not gonna knock you guys at all for being less than comfortable in the Java; it's not the Java part that is painful, it's the fact that you have to reference C++ OpenGL by version number and HOPE that the documentation is close to the API.

Anyway, not that this was asked for, but I felt it would be useful to me in my ongoing code development (I can't let you guys hog all the fun of the racing).

Lap Tracking!

Protip: consistency is everything in racing.
This little addition lets you check exactly that. It shows the current lap in orange and the previous lap in light blue. I'll add a checkbox to the side to turn it on/off.
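
For anyone wanting the same comparison from their own telemetry, here is a rough Python/OpenCV sketch of the idea (names and colours are illustrative; this is not the sim's Java code):

import cv2
import numpy as np

# Keep (x, y) track positions per lap and draw the current lap over
# the previous one for comparison.
current_lap = []   # points for the lap in progress
previous_lap = []  # points from the last completed lap

def draw_traces(overhead):
    # Previous lap in light blue, current lap in orange (BGR order).
    for points, colour in ((previous_lap, (230, 216, 173)),
                           (current_lap, (0, 165, 255))):
        if len(points) > 1:
            pts = np.array(points, dtype=np.int32)
            cv2.polylines(overhead, [pts], False, colour, 2)
    return overhead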

Don't worry, I've started figuring out how to put other bots on the track for "avoidance" purposes.
Watching the races, being upside down seems to be more common than previously thought, so I'll be adding a checkbox to "flip" the bot too.

Comment Images: 
Geoff Riley
Lap Tracking

That looks like a great addition: it's always difficult to visualise how one lap compares to the next, so that will be a wonderful help.

On the matter of 'flipped' vehicles, don't forget that the driving profile changes quite considerably in that orientation. :)

As for the OpenGL api docs... they can be summed up by the phrase "Here be dragons." It's not the first time I've done battle with them, but my currently reduced agility makes them all the less friendly. (I never realised quite how often I used things like ctrl-click and shift-click to open tabs and windows! It's hard enough typing and using the mouse left handedly.)

computernerd486
Speaking of dragons....

That's a very apt description of the documentation. On the C++ side it's not too bad, but once it's gone through the wrapper in Java, well, "Noble knight, take this willow branch to fight the dragon with!". I've spent the night playing with my race code; hopefully that'll be sorted well for the next race. I've been playing with float buffers and trig all morning. (Maya would possibly have been faster, but last time I tried to use VBOs for drawing, I lost 4 days of sanity.)

Anyway, to get you guys excited about the dragon coming...

Comment Images: 
computernerd486
3D Magic

Some magic I've put together: there is a full(ish) 3D model now. Progress on allowing there to be "ghost" bots on the track is on its way. The image is an animated GIF if you click on it.

Quick poll: would the opponent bots following their respective lanes at reduced speed be acceptable for testing?

Comment Images: 
Geoff Riley
3D Magic

Absolutely! Just a static object would be enough, but any kind of moving object is a major bonus.

piborg
Impressive :)

I am really impressed with how well the 3D model has come out, it really looks the part :D

I feel your pain with the OpenGL / Java troubles, every time I have used OpenGL myself in the past few years it seems to be with a different language and therefore library.
All of them seem to rely on the standard C++ documentation but give you no idea what version they are based on...
Just to further confuse things, some of the libraries remove the type suffixes on the functions (i/f/d) in favour of overloading syntax; PyOpenGL and SharpGL spring to mind...

I agree with Geoff, most testing would only need static obstacles, so any moving targets are icing on the cake :)

computernerd486
GitHub update

Just as an FYI, the 3D model code is on GitHub. It should be as easy to add in as creating an instance and passing in each bot and the texture for the lid. Hopefully I've made it easy to work into your graphics.

Since I'm close enough to having the extra bots run around, I'm just holding the update to the distribution until then.

Also, as a note, I've noticed that int[] creation on the Pi is slow; I managed to shave 3-4 ms off my processing logic by just reusing the same array (allocate once).
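
The same allocate-once idea applies to Python race code too; a hypothetical numpy/OpenCV sketch (the frame size and thresholds are made up):

import cv2
import numpy as np

# Allocate the output mask once (160x120 frames assumed here) and
# reuse it every frame instead of creating a new array per call.
mask = np.empty((120, 160), dtype=np.uint8)

def red_lane_mask(frame):
    # cv2.inRange writes its result into the supplied dst buffer.
    return cv2.inRange(frame, (0, 0, 100), (60, 60, 255), dst=mask)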

piborg
Animation for the next race

We have the 3D model in our animation now ready for the next race :D
It is impressive just how much better it looks now with the 3D renderings of the robots instead of the basic flat versions.

The amount of smiles around the office here when we first tried it was great, I think it made everyone's day here.
Thank-you so much for all the effort, it really shows how much time you have put in to all of this.

Comment Images: 
computernerd486
Camera jitter

I'm not sure if anyone else has noticed this, but since I've got completely custom code and I'm doing straight processing on the raw image data, I've found noise in the image which played havoc with my lane detection.

I'm going to add in a PNG/JPG switch; the latter adds a bunch of noise to the image edges.
Another detection issue I had was with the lights: having the background of the light bar slightly brighter caused that same issue, so I've adjusted the image to suit.

Besides that, I've been a touch more focused this week on getting the race code ready than on the sim. There are going to be a couple of extra settings put into the sim, brightness and saturation adjustments for sure, since I've learned more about color spaces and mathematical image adjustment this week than I ever thought necessary. That may be of most use to anyone crazy enough to write their own detection like me. I could see a good case for having blur too, and the OpenGL orange book is sitting in my car.

Don't worry, I don't have an edge up on anyone with the opponent detection; I'm putting the paths in right now to get the extra bots running around. I have a feeling mine is just gonna plow right into them.

There's also a thought of allowing a full-screen / bigger size for the first-person view; I'll post a picture to show why.

Comment Images: 
piborg
A few observations from actual camera footage

We had exactly the same kinds of problems ourselves; it can be quite difficult to get everything right.
Below are some thoughts based on problems we have seen when developing our version.

You can make more "noise" in the image by increasing the JPEG compression level.
We tried to save some disk space early on by compressing images heavily; it turns out that this messes with the line edges quite a lot :)
It can also add some colour distortion which can be useful for testing.
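
If you want to reproduce that effect deliberately, here is a small Python/OpenCV sketch (the quality value is just an example):

import cv2

def add_jpeg_noise(image, quality=20):
    # Re-encode the frame as a heavily compressed JPEG and decode it
    # again; low quality values add block artefacts and edge noise.
    ok, encoded = cv2.imencode('.jpg', image,
                               [cv2.IMWRITE_JPEG_QUALITY, quality])
    if not ok:
        return image
    return cv2.imdecode(encoded, cv2.IMREAD_COLOR)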

The Pi camera loves to auto-tune while driving around.
In itself this is probably a good thing, but it has some quirks:

  1. The auto brightness does not adjust at quite the same rate when increasing as when decreasing
  2. The lights can cause a bit of a colour shift in the camera.
    See the start light images below for an example of what I mean.
  3. Where the lights "dazzle" the camera it tends to see a slightly erroneous colour when they are turned off
  4. It can hold on to colour tuning for a few frames too long

There are bright and dark spots on the track.
In particular there is a shadow cast on the main straight, although it is fairly light now.
Be aware of this for any thresholds or auto-tuning, it has tripped us up plenty of times :)

Motion blur :)
This reduces the accuracy of the image in general.
Slower frame rates suffer more in this case.

Unexpected points of colour.
When we do our thresholding to find each lane there are often some stray pixels which fall into the wrong camp.
This can be for quite a number of reasons, noise is only one of them.
We are using the erode function to reduce the area of each lane / wall; in doing so, small blobs get shrunk to nothing and are removed.
The consequence of this is that the positions are out by half the erode width.
We fix this by averaging between the two edges which should meet.
Alternatively you can use a dilate of the same size after the erode to put the edges back where they were; this is usually referred to as "opening".
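
As a minimal OpenCV sketch of that opening approach (the kernel size is illustrative):

import cv2
import numpy as np

def clean_lane_mask(mask, size=3):
    # Erode to shrink small stray blobs to nothing, then dilate by the
    # same amount to put the surviving edges back where they were.
    kernel = np.ones((size, size), np.uint8)
    # Equivalent single call: cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.dilate(cv2.erode(mask, kernel), kernel)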

Comment Images: 
piborg
Control oddities

On a related note there are two distinct control differences between the simulation and the real YetiBorg that may also trip you up since you are doing a complete re-write:

1. 0% is free-wheeling, not braking

As long as there is a slight amount of power applied to the wheels (say 1%) they should try and turn at the requested speed.
If you actually ask for 0% though the board essentially disconnects from the motor altogether, allowing it to free-wheel.
This has the effect that a power of say 100% left, 1% right turns hard to the right, but 100% left, 0% right turns lazily to the right.

In our standard example we have a setting called steeringClip which is set to 0.99 (99%).
This prevents a full 100% steering, which would result in setting 0% output on one side.

In your case when you send a COMMAND_SET_*_REV message to the ZeroBorg it is probably best to ensure the value sent is at least 1 unless you are actually intending to be stationary.
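
As a rough illustration of that clamp (a hypothetical helper working in power fractions, not part of the standard code):

def clamp_motor_power(power, minimum=0.01, stationary=False):
    # Keep at least 1% drive so the board never free-wheels the motor,
    # unless the robot is actually meant to be stationary.
    if stationary:
        return 0.0
    if power >= 0:
        return max(power, minimum)
    return min(power, -minimum)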

2. The turning circle in the simulation seems to be better than the actual YetiBorg

Put simply the turning circle seems to be much larger in reality than the simulation predicts.
The best turning circle without tank steering in practice takes something similar to a whole lane to do.
In the simulation it can turn without tank steering on the exact point where the wheels are.

We have cheated a little here to make the simulation match our real behaviour better.
In our code we have a final gain value applied to the steering after the control code has made its decision: steeringGain.
When the simulation is run there is an override for this value which is used instead: simulationSteeringGain.

In our code we found a 4:1 ratio seemed to match reality fairly well:

steeringGain = 2.0
simulationSteeringGain = 0.50

I am expecting you will have a similar mismatch between the real control and the simulation behaviour.
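
Applied in code the idea looks something like this (the gain names follow our settings; the simulation flag is illustrative):

steeringGain = 2.00            # gain used on the real YetiBorg
simulationSteeringGain = 0.50  # override used when simulating (4:1)

def apply_steering_gain(steering, simulation_mode):
    # Scale the control decision by the gain for the current platform.
    gain = simulationSteeringGain if simulation_mode else steeringGain
    return steering * gain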

computernerd486
Control oddities

I've noticed that a touch. We're going to see how my code fares Monday or Wednesday, whenever it goes. Hopefully it'll do a bit better and not just run into the wall this time; I think it was a failure of lane detection that caused that. We'll see if the PID is set up well enough to handle it.

My code has never gone below 50% motor that I've noticed, so hopefully the freewheel problem won't be in play.

This will play out nicely; the more I see of the runs on the track, the more I figure out how to make the sim "more helpful".

A major benefit of the ground-up coding is that I'm getting a better understanding of the inner workings, and should have a better product to help everyone involved.

One day I'm going to get bored and take some sidewalk chalk to a basketball court once I hook up the accelerometer chip. A question on that related to the Pi 3: how much difference is there running with that? Anything movement-wise? I'm assuming acceleration and turning should be unaffected.

computernerd486
A bit of UI Cleanup, and function to go...

I have the bots added in, and the updaters set. They follow their respective paths to a T at half speed. I still need to add the reset function, plus global and individual start/stop.

Little bits of progress.

Thanks again to the PiBorg team for the help making the transparency work nicely on the wheels!

Comment Images: 
piborg
Realistic textures

The rate at which you are making improvements is staggering, I cannot keep up :)

We had a bit of a think and decided that YetiBorg detection testing would probably benefit from some textures which are closer to the real robots.
To that end we have cut up some photos to cover the tyre tread, wheels, and chassis plate.

The first two should be drop-in replacements for the current copies if you want to have a quick look :)

Comment Images: 
computernerd486
Textures

Those are fantastic! I'll put them in on my side too. Mine were just thrown together with Inkscape and GIMP. I'll get that brushed aluminum texture on the baseplate later tonight.

There's so much to do still, I may have to start using a change tracker to keep track of what I'm working on. Things have been prioritized on "what do I need, or what would be most useful to develop the race code", minus the "that 3D model is gonna make that look so sick!" priority.

The code for making the bots follow the lanes in the sim is pretty slick; theoretically, it's just a switch on the updater class whether it's user- or AI-controlled.

Feel free to suggest anything you see which would help.
The plan with the AI is to have checkboxes for each to turn them on/off, and a separate start/stop for each, so that you can test something like one sitting at the apex in the way. And of course the automatic start on "go" too.

computernerd486
Camera view

Here's something interesting: as I was sitting trying to remember Mode 7 math to calculate distances to objects, and all the fun trig associated with that, some research turned up that the camera has a 65 degree FOV. I'm curious how that plays out; I'll be changing it in the sim. Currently it's at 45, so I wonder how much of a change it'll make. We may want to run through with the default code again to verify it acts the same.

For anyone who uses the extremes of the video, this would definitely affect them.
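
For anyone attempting the same distance math, here is a flat-ground trig sketch in Python (camera height, pitch, and FOV are placeholders you would measure on your own setup):

import math

def ground_distance(row, image_height=240, vfov_deg=48.8,
                    camera_height=0.05, pitch_deg=10.0):
    # Focal length in pixels from the vertical field of view.
    f = (image_height / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)
    # Angle below the horizon for this pixel row (rows count down).
    angle = math.radians(pitch_deg) + math.atan((row - image_height / 2.0) / f)
    if angle <= 0:
        return float('inf')  # at or above the horizon
    # Distance along the ground to the point seen at this row.
    return camera_height / math.tan(angle)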

Geoff Riley
Superspeed!!

Interesting... I pulled your latest code down from GitHub to see how your AI cars work and discovered that if you click on "Start AI" multiple times, the cars get faster and faster... I guess you got Barry Allen in to do the car moving? :)

Besides that, I get an error whenever I attempt to use the sim:

[png @ 0x1ca36a0] chunk too big
Traceback (most recent call last):
  File "./SimulationFull.py", line 325, in 
    Globals.capture.open()
TypeError: Required argument 'device' (pos 1) not found

The 'device' should already be set when the video capture interface is created using the imageUrl, so the error doesn't seem to make sense.

The code around that line is:

# Dummy startup
Globals.capture = cv2.VideoCapture(imageUrl)
if not Globals.capture.isOpened():
    Globals.capture.open()
    if not Globals.capture.isOpened():
        print 'Failed to open the simulation stream'
        sys.exit()

Checking variable values, imageUrl="http://127.0.0.1:10001/view.png" and the listening port on the sim has been set to 10001.

Attempting to load the page from Chrome correctly displays the image. I'm a little baffled, so before I spend much time on it, I thought I'd check if you had seen this situation before?

Kind regards,
Geoff

computernerd486
superspeed!

That's awesome, reminds me of the old slot cars. That was meant to be a stop-gap before playing with getting the proper UI on screen for their control. So what's happening there is that when the button is clicked, it starts a new instance of the updater, so essentially there are #clicked * updates per tick.
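
The usual guard is to make the start action idempotent; a tiny sketch of the pattern (in Python for brevity, the sim itself being Java):

class AIUpdater(object):
    def __init__(self):
        self.running = False

    def start(self):
        # Ignore repeated clicks: only ever run one updater instance.
        if self.running:
            return
        self.running = True
        # ... schedule the per-tick updates here ...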

Like a good programmer I try to commit when new functions are added and it's "stable". I've lost code too many times, and with some of the changes I'm playing with, if they don't work it's nicer to roll back to a previous state. In other words, some of the bleeding-edge stuff is "not polished". I'm glad you've had luck taking and compiling it; it helps out a bit if you find bugs before release checkpoints.

That error is interesting:
[png @ 0x1ca36a0] chunk too big

I was messing with the output a little bit, which is how I found that the crappy image actually helped my code.

OH! Check the resolution you're sending in the UI. I may have committed it as 320x240 instead of the 240x160, or whatever the default was. I test at a much higher res. You'll find it saved in the file under resources/settings/server.conf.

Geoff Riley
Tracking down the bug..

Thanks for the feedback: no, it's not the resolution... so I started up Wireshark to see what was happening in the data stream... and all became clear.

I used wget to make the request and found the following request packet:

GET /view.png HTTP/1.1
User-Agent: Wget/1.17.1 (linux-gnu)
Accept: */*
Accept-Encoding: identity
Host: 127.0.0.1:10001
Connection: Keep-Alive

Absolutely fine request, no problems there, however, the header of the response came back as:

HTTP/1.1 200 OK
pragma: no-cache
content-type: image/png
content-length: [B@50b5b26e

...and that content-length is definitely not correct: [B@50b5b26e is Java's default toString() for a byte array, so the buffer itself is being written where its length should be. Clearly Chrome ignores the error, but the cv2 stream reader doesn't, and interprets it as a bad number, giving a misleading error response.

And the error was committed on git commit #4001cc7. I've added a note on the appropriate line, but basically, you lost .length off the end of buffer in the file src/sim/util/io.stream/RTSPStreamer.java around line 295.

Hope that helps.

Geoff

Geoff Riley
See.. it's working!

It's not behaving exactly like the previous version... but this shows a run of the code I've sent in for this week's races. If I achieve that waltz in the middle it'll really be spectacular! :)

Comment Images: 
computernerd486
Nice job with finding that

Nice job finding that oops in my commit; I've merged the patch in. There's a new option for output type: PNG (high quality, lossless) or JPG (blurry/lossy). It should work fine; the key point with the lossy JPG is that it gives the blur the camera may encounter on race day.

I just need to put in the auto-start for the AI bots (the backend code is there, maybe not fully committed), then we'll be ready for the next milestone release.

That waltz would be impressive. I'm a touch nervous about how my code is going to fare in this round. I've made a couple of changes to hopefully get the image even more exact: I've adjusted the FOV on the cam, pushed the camera forward from the center rotation point (hopefully making it more exact), and pointed the camera down a touch. This threw off my code a bit, so it'll be interesting to see if it has a massive negative effect compared to what I had submitted for Friday.

Can you guys check a current image from the starting line? I pulled the last one I saw from this thread for comparison.

Comment Images: 
piborg
Camera FOV

I have grabbed a new image from one of our robots; it is very close, but not perfect.
I think the resolution used may have an effect on it; we are using 160x120 in our example.

I have attached the raw image, the camera calibration test, and a quick animated comparison of the image with yours.

On a side note the walls seem to be taller in the simulation than reality.
Does that affect your code at all?

Comment Images: 
computernerd486
Walls

I've noticed the walls are taller in the sim too; do you have the height handy? I'll just double check it's scaled right. It's much closer than it was overall, though.

The wall height won't necessarily affect my code; I have a feeling that detection will be aimed too low when it runs on Wednesday for it to matter. I have to put some auto-calibration code in place (similar to the start lights detection, which I hope works this time).

piborg
Walls

Most of the walls are 150 mm high; there are some parts on the inside of the track where they are only 100 mm high, to improve the possible camera angles :)

Good luck with the auto-calibration; it is a dragon I have had to fight a good number of times in recent years.
Our standard code actually has an auto-calibration between the three colour channels, but I have it clipped to only allow a small range of correction.
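
As a sketch of that clipped-correction idea (the reference colour and limit are illustrative, not our actual values):

import numpy as np

def channel_gains(region, reference=(180.0, 180.0, 180.0), limit=0.1):
    # Balance the three colour channels against a known reference
    # colour, clipping each gain to a small range so one bad sample
    # cannot swing the correction wildly.
    means = region.reshape(-1, 3).mean(axis=0)
    gains = np.asarray(reference) / np.maximum(means, 1.0)
    return np.clip(gains, 1.0 - limit, 1.0 + limit)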

computernerd486
Wall height

Thanks for the info. I previously had it at 20 cm, then, just playing around, set it to 15 cm, which looked right. I'm glad it's close enough to scale that eyeballing it yielded the correct value, which you have confirmed.

Just double checking, for tomorrow's run, you are loading the code I've submitted as of Friday morning, correct?

Auto-calibration is indeed a tricky thing; in my mind it's more about finding bounds and processing handles than actual color processing. There was a bunch of time spent last week on color math, RGB <-> YUV conversion.
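
For reference, one common form of that conversion is the BT.601 matrix; a quick numpy sketch (the exact coefficients used in my code may differ):

import numpy as np

def rgb_to_yuv(rgb):
    # BT.601 RGB -> YUV: Y is luma, U and V are colour differences.
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return rgb.dot(m.T)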

As to what my code actually sees, I have an example here:
https://youtu.be/lKc2xaTPtII

I went on the extremes running a 5 state processing, white, black, red, green, or blue.
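
A rough idea of what such a 5-state classification might look like in numpy (all thresholds here are placeholders, not my actual values):

import numpy as np

WHITE, BLACK, RED, GREEN, BLUE = range(5)

def classify(frame):
    # Label every pixel by brightness and dominant channel (BGR input).
    brightness = frame.astype(int).sum(axis=2)
    states = np.full(frame.shape[:2], BLACK, dtype=np.uint8)
    states[brightness > 600] = WHITE
    dominant = np.argmax(frame, axis=2)  # 0=B, 1=G, 2=R
    coloured = (brightness > 150) & (brightness <= 600)
    states[coloured & (dominant == 2)] = RED
    states[coloured & (dominant == 1)] = GREEN
    states[coloured & (dominant == 0)] = BLUE
    return states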

piborg
Submitted code

Yes, it is the code you submitted on the 2nd of December which will be run tomorrow.

The robot view looks good, it is great to see someone else's processing at work :)

computernerd486
Jump Start

I think I found why it jump-started: that bit of red off to the left between the blue lines is, I'm pretty sure, what caused it. I'm going to reactivate the logging code so I can get a better idea instead of guessing.

Comment Images: 
Geoff Riley
Robot vision

That's a great view illustrating your processing; I considered stitching some of my captures together to stream as a video for myself to watch through, but I've not got a workflow to put that into at the moment and no time to develop one.

The broken shoulder means that I'm having to spend far more time on my OU Masters work than I would normally: I've had to ask for an extra week to complete my current assignment as it is. I never realised how much I rely upon my right hand! LOL

We're both racing tomorrow... so good luck: I'm sure I'll get past the first bend this time, and not break the bot. :)

computernerd486
First bend

That bend was a doozy; at least mine didn't immediately jump-start, it waited for green. I'm gonna get harassed about breaking the bot this time...

Geoff, Good job on getting around this time!

Geoff Riley
That first bend is...

...mental. :)

I think there should be a special award for people who actually manage to break the bots: it's a very exclusive club. ;)

I took out the self-righting code that I had tried because I didn't want to risk ending up rolling over and over again; sadly that resulted in me being upside down for almost the whole race, and that really put the kibosh on my manœuvrability.

After such a good start, I think I lost my way... perhaps I should just use the default code like the house robots!!

computernerd486
Turn 1 fun

Not saying someone should do this, but just doing a little circle in turn one instead of going around the track would be hilarious. It would make everyone go nuts.

House robots seem adept at hitting things there.

piborg
Turn 1

It does seem to be a bit of a blind spot for detecting traffic :)

We are fairly convinced that if someone can write some code which drives slowly but avoids all the obstacles that it would do well.

piborg
Camera calibration

If it helps to check the simulation "camera", I have attached the framing we are using to check the real camera alignment on each robot.

The yellow frame is where the standard code looks for track lanes.
The cyan frame is where the standard code checks for the start marker.
The magenta frame is where the standard code checks for the starting lights.
The orange line is where it believes the robot is in the image.

The width of the magenta frame is such that it should be approximately the same as the light separation between lanes when sat at the start line.

The top of the yellow box is about where the bottom of the walls are when sat in the center of the track facing parallel to the walls.

The cyan box should sit towards the bottom of the start marker when sat at the start line.
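
To draw equivalent guides over a simulated frame, here is a Python/OpenCV sketch (the rectangle coordinates are placeholders, not our calibration values):

import cv2

def draw_calibration(image):
    h, w = image.shape[:2]
    # Placeholder regions as fractions of the image size; substitute
    # the real calibration rectangles for your resolution.
    cv2.rectangle(image, (int(w * 0.05), int(h * 0.45)),
                  (int(w * 0.95), int(h * 0.95)), (0, 255, 255), 1)  # yellow: lanes
    cv2.rectangle(image, (int(w * 0.35), int(h * 0.30)),
                  (int(w * 0.65), int(h * 0.45)), (255, 255, 0), 1)  # cyan: start marker
    cv2.rectangle(image, (int(w * 0.40), int(h * 0.05)),
                  (int(w * 0.60), int(h * 0.30)), (255, 0, 255), 1)  # magenta: lights
    cv2.line(image, (0, int(h * 0.90)), (w - 1, int(h * 0.90)),
             (0, 165, 255), 1)  # orange: estimated robot position
    return image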

Comment Images: 
computernerd486
Calibration

That'll be helpful; it'll let me line up the image. I'm a touch busy with work this week, so I haven't been able to put the auto-start and controls for the AI into the UI.

I'll have it over for next week for sure, then it can go out to everyone.
