How do we teach self-driving cars to avoid hitting people on bicycles?

(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: lawrence@krubner.com, or follow me on Twitter.

This is a good conversation:

ChuckMcM:

“Now Johnny, when you ride your bike you must wear your I-am-a-bike vest and follow these patterns or the cars are likely to kill you.” :-)
Summary is that “people riding bikes” (PRB) is a much denser image set than “people in cars” (PIC) and “people walking or jogging” (PWJ), and the PRB objects have a much higher dynamic angular capability (they can change direction extremely quickly); combined with a wider dynamic velocity range than PWJ, they strain the ability of the predictive filters to reliably assess the collision threat. As a result you need either faster/better hardware or better algorithms to deal with that particular group.

I’m not particularly surprised by that. I expect the cars to also end up with small-animal issues, since small animals can appear suddenly, and aggressive evasive maneuvers to avoid hitting them may injure passengers just to save a squirrel’s life.

And all of that adds up to some of the many things that one has to think about before claiming victory here. It is going to be a long hard engineering slog to get to full autonomy. My question is: if you can build a computer that can drive a car fully autonomously, what other missions could you create for it? Some of those are kind of scary.
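ChuckMcM’s point about predictive filters is worth unpacking. Here is a minimal sketch, in Python, of why quick direction changes are hard: I assume a bare constant-velocity predictor (a stand-in for whatever filter a real car actually uses, which I don’t know), and the speeds, turn angles, and update interval are illustrative numbers I picked, not real sensor rates. A cyclist who swerves between updates lands much farther from the predicted position than a slower jogger does.

```python
# A minimal sketch (my illustration, not from the thread) of why quick direction
# changes strain a predictor. A constant-velocity model extrapolates the last
# observed velocity, so an object that swerves between sensor updates ends up
# far from where the model expected it to be.
import math

def predict_constant_velocity(pos, vel, dt):
    """Where a constant-velocity model says the object will be after dt seconds."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def prediction_error(speed, heading_deg, new_heading_deg, dt):
    """Gap between the predicted position and the actual position when the
    object changes heading right after the last observation."""
    old_vel = (speed * math.cos(math.radians(heading_deg)),
               speed * math.sin(math.radians(heading_deg)))
    new_vel = (speed * math.cos(math.radians(new_heading_deg)),
               speed * math.sin(math.radians(new_heading_deg)))
    predicted = predict_constant_velocity((0.0, 0.0), old_vel, dt)
    actual = predict_constant_velocity((0.0, 0.0), new_vel, dt)
    return math.dist(predicted, actual)

# Illustrative numbers over a 0.5 second update interval (not real sensor rates):
# a jogger at 3 m/s turning 30 degrees vs. a cyclist at 8 m/s turning 60 degrees.
print(round(prediction_error(3.0, 0, 30, 0.5), 2))  # ~0.78 m off
print(round(prediction_error(8.0, 0, 60, 0.5), 2))  # ~4.0 m off
```

Real trackers are far more sophisticated than this, but the underlying tension is the same: the faster and more agile the object, the more ground a wrong prediction covers before the next update.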

noobermin:

This is when the humans strike back against the machines.

Every example of an NN I’ve seen is good at one thing: the thing it was trained for. I haven’t yet seen an NN that can beat people at chess, recognize faces, drive, and make coffee, or, more importantly, decide when to do each. The closest thing I can think of is Watson from IBM.

People can drive (poorly), but they also recognize the value of the life of a biker or a squirrel, and in some cases they override their usual learned behavior: instead of continuing to drive, they swerve, basically improvising.

Computers and programs, again, are good at what they have been given as input and, for now, at what they have already learned. As for synthesis, I’m not sure even NNs have reached that point yet. I may be wrong; the answer is probably “learn more” or “learn faster” (which is what you suggest), but it’s easier to synthesize like this when you have a mind with general knowledge, at which point the mind is less like an NN and more and more like a person.

hahajk:

There’s a saying that neural networks are the second-best way to do everything. Don’t be disappointed when their trajectory flattens out.

And, as for their role in autonomous vehicles, I don’t think they play a primary one (although I don’t design autonomous cars, so I wouldn’t know for sure). You can see in this Tesla video that a lot of the computer vision isn’t even reliant on neural networks: https://www.youtube.com/watch?v=BLlwm5Dq7Is
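For readers who haven’t seen what non-neural-network computer vision looks like, here is a rough sketch of the classical approach: edge detection plus a Hough transform to pick out lane lines, using OpenCV. This is only an illustration of the general technique, not how Tesla’s (or anyone’s) stack actually works; the file name and tuning values are placeholders I made up.

```python
# A rough sketch of "classical" (non-neural-network) computer vision:
# lane-line detection with Canny edge detection plus a probabilistic Hough
# transform, using OpenCV. File name and tuning values are placeholders.
import cv2
import numpy as np

frame = cv2.imread("road_frame.jpg")            # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)             # binary edge map

# Fit straight line segments to the edge map.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)  # draw detections

cv2.imwrite("road_frame_lanes.png", frame)
```

None of this involves learned weights; it is hand-tuned geometry and thresholds, which is the kind of pipeline hahajk is pointing at.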

Post external references

  1. https://news.ycombinator.com/item?id=13734446