We’re back from the North Carolina CED Tech conference, and it’s always a great mix of catching up with local entrepreneurs/investors and showing off the best that NC has to offer.  We’re delighted to support the event – our CFO Brad is on the advisory board, and I’ve been on stage a few times representing Epic Games and now Diveplane.  I even co-chaired the event once, which was, thank goodness, a mostly ceremonial position.

I always find it inspiring to connect with the growing tech economy here in Raleigh. This year felt different for me, though. Not because I met so many new and interesting people, but because the discussions and the buzz surrounding the event had changed. AI is clearly a thread running through all things technology, but this year it was everywhere at CED. And I think that's part of the reason the panel discussion I had with Robbie Allen, the CEO of Infinia ML, was so well received.

Of course, we talked about all the amazing benefits of AI, but we balanced them against the implications of AI's rapid rise. As Uncle Ben said, "With great power comes great responsibility," and that's really the little-told story behind the AI narrative today. What can we do to really understand AI? How can we explain how it makes decisions, or uncover its biases, to ensure this technology isn't misinformed or misinforming? We're only scratching the surface, and there's so much we must do to ensure technology transforms our lives for the better.

Robbie isn't at all a proponent of demanding that AI explain its reasoning, which, as one might imagine, led to some healthy debate. His point, a reasonable one, is that we don't ask human drivers or loan officers to perfectly explain their decisions, because they can't. We humans honestly don't know why we take a certain action; instead, we've simply learned to be very good at ex post facto rationalization. And guess what? Explanation frameworks for neural networks have the same limitation. They can't truly explain a decision, but they can offer a reasonable rationalization (i.e., a guess) after the fact.
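To make that point concrete, here's a minimal sketch of what a post-hoc explanation actually does, using a made-up loan scorer in Python. The model, its weights, and the feature names are all illustrative assumptions, not any real system or framework. The thing to notice is that the "explanation" is produced by probing the model after it has already decided; it never reads the model's reasoning.

```python
# A made-up, opaque "loan scorer" and an occlusion-style attribution:
# the decision comes first, the rationalization comes after.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained neural network: a fixed random two-layer scorer.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8,))

def score(x: np.ndarray) -> float:
    """Opaque model: returns an approval score for a 4-feature applicant."""
    return float(np.tanh(x @ W1) @ W2)

def occlusion_attribution(x: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    """Post-hoc rationalization: replace each feature with a baseline value
    and record how much the score moves. This guesses at importance after
    the fact; it does not inspect how the model actually decided."""
    base = score(x)
    deltas = np.empty_like(x)
    for i in range(x.size):
        x_perturbed = x.copy()
        x_perturbed[i] = baseline
        deltas[i] = base - score(x_perturbed)
    return deltas

# Hypothetical applicant features: income, debt, history, age (all made up).
applicant = np.array([0.9, -0.3, 1.2, 0.1])
print("score:", score(applicant))
print("attribution:", occlusion_attribution(applicant))
```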

I agreed with Robbie that we shouldn't wait to ship autonomous vehicles until they can provide a better-than-human explanation. Let's ship today's technology, which already delivers better-than-human results, because it'll mean saving tens of thousands of lives in the US every year. But as soon as we have truly explainable vehicle AI, holy smokes, shouldn't we use that instead?