If you’re in or around Toronto, please Save The Date – Thursday, May 2 – for a new kind of Event Experience!
I’ve written before about my very first gig…developing software for an Insurance company in Waterloo, Ontario.
When I started, I was merely indifferent to the role…with enough time, I came to truly despise it. Having very little interest in, and only slightly greater aptitude for, that type of work was a nasty combination.
Still, it wasn’t all bad. Waterloo was (and still is) a lovely place, and I met some incredible people, many of whom I’m still friends with to this day. Even the job itself yielded one massive long-term benefit: it taught me various coding-related concepts, knowledge I would end up using in almost every job to come, including the past two decades of Project Management consulting.
Looking back at it from 2019, the ‘latest and greatest’ technology of the 1990s seems so incredibly antiquated. With advances in computing, and Artificial Intelligence in particular, coding decisions aren’t merely logical in nature…they increasingly deal with moral questions and issues.
For example, if an autonomous vehicle were to encounter a situation where all outcomes resulted in pain and suffering, ethical considerations immediately jump to the forefront.
Say a pedestrian steps in front of a self-driving car. The vehicle’s programming will immediately have to kick in and decide whether to: a) swerve left, into another car; b) swerve right, into a cyclist; c) continue on, and hit the pedestrian; or d) slam on the brakes, potentially injuring the passenger, and possibly still hitting the pedestrian. Of course, these are the same choices a human driver would face; the difference is that the car (or, to be more accurate, the programmer behind the scenes) would be making that decision on behalf of the individual. Some of you might be familiar with the ‘Trolley Problem’, which captures a few of these scenarios.
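To make the point concrete, here is a purely illustrative sketch of how a programmer might be forced to encode such a choice. Everything here is hypothetical: the option names, the harm scores, and the idea of reducing the dilemma to a single number are my own inventions for illustration, not how any real autonomous vehicle works.

```python
from dataclasses import dataclass

@dataclass
class Option:
    action: str
    estimated_harm: float  # invented scale: 0 (no harm) to 1 (severe harm)

def choose_action(options):
    """Pick the action with the lowest estimated harm.

    The hard part isn't this min() call; it's deciding who assigns
    the harm scores, and on what moral basis -- exactly the question
    the Trolley Problem raises.
    """
    return min(options, key=lambda o: o.estimated_harm)

options = [
    Option("swerve left into another car", 0.7),
    Option("swerve right into a cyclist", 0.8),
    Option("continue and hit the pedestrian", 0.9),
    Option("brake hard, risking the passenger", 0.4),
]

print(choose_action(options).action)
```

The code itself is trivial; the moral weight lives entirely in the numbers someone had to choose before it ran.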
Many of the ethical dilemmas we’ll face aren’t new. The difference is that some group – the manufacturers, the government, the public – will have to decide collectively on a common set of rules…a massive departure from the way things have worked throughout the history of technology.
One intriguing twist is that it may force us to confront questions we’ve been able to ignore. In that sense, building smarter machines may end up making us better humans.
What Were They Thinking?
What may be even more concerning is that there are already reports that coders and engineers are unable to understand or validate all the logic and conclusions of even rudimentary Artificial Intelligence. Some are suggesting that doing so isn’t even necessary, and that we should simply ‘trust the machines’.
Given the many cases of AI-related bias (which again, is essentially human-related bias), ‘trusting the machines’ sounds like a recipe for trouble, if not outright disaster. If we were to follow that path, surrendering our humanity to automation would not be a necessary evil…rather, it would be an evil of our own making.
Please join me each week for experiences, observations, and thoughts related to our upcoming project launch (March 2019). Your likes, comments, and shares are very much appreciated…and thanks for taking the time to stop by!

Nigel Oliveira
Nigel’s LinkedIn Profile