Just Because You Can, Does Not Mean You Should

Versions of this saying have appeared in books and movies, and even the apostle Paul expressed the sentiment in the Bible.

CRISPR technology allows scientists to alter DNA sequences and modify gene function. Think of it as the ability to edit genes. Practical applications might include eradicating diseases, viruses, and human defects such as Down syndrome.

Wait, what?

You read that right. That means those who have Down syndrome are “defective,” according to pro-CRISPR scientists. There is also evidence that CRISPR could eliminate Asperger’s, a condition on the autism spectrum. If so, we might not have people like Elon Musk, Bill Gates, and Dan Aykroyd, to name a few. This CRISPR thing doesn’t sound that great.

Well, maybe there are things that we can agree we should do and others that we shouldn’t. Maybe cancer is a good candidate to eradicate. How about …?

Closer to the Topic

I have never doubted, nor do I doubt now, that we will reach a place where AI is commonplace. Technology continues to grow by leaps and bounds, and the viability of AI seems closer each day. Yet it will not arrive without some deep growing pains that are not being addressed.

I was recently made aware of the AI Explainability Statement from HireVue. It’s the latest trend in corporate transparency. Maybe I’m an old, skeptical curmudgeon, but it always catches my attention when a company volunteers to be transparent.

I took a deeper look into explainable AI. In summary, it is an answer to a market that is worried about the decision matrix under the covers. The article I sourced defines explainable AI as a “…set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms.” Given this definition, I can think of 20 other technologies that could have carried explainability statements as well.
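To make that definition concrete, here is a minimal, hypothetical sketch of the idea: the system returns not just a score but a per-feature breakdown a human can inspect. The feature names and weights are invented for illustration and are not HireVue’s model or anyone’s actual algorithm.

```python
# Hypothetical illustration of "explainable" output: a score plus the
# contribution of each feature, so a human can see why the score is what it is.
# Weights and feature names are invented for this sketch.

WEIGHTS = {
    "years_experience": 0.5,
    "skills_match": 1.2,
    "interview_score": 0.8,
}

def score_candidate(features: dict) -> tuple[float, dict]:
    """Return a hiring score plus the contribution of each feature."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    return sum(contributions.values()), contributions

total, why = score_candidate(
    {"years_experience": 4, "skills_match": 0.7, "interview_score": 3.5}
)
print(f"score = {total:.2f}")
for feature, contribution in why.items():
    print(f"  {feature}: {contribution:+.2f}")
```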

Amazon tried its hand at this back in 2018, and it failed. Its recruiting AI was biased against women, even more so than the manual recruiting practices it was trying to improve. Okay, that’s how we progress; we test and adjust; I get it.

The code was written by biased humans and used past data to predict future performance (where have I heard that before?).
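Here is a minimal, hypothetical sketch of how “past data predicts future performance” can bake in bias: the toy model below simply learns historical hire rates per group and reuses them as predictions, so the old bias is reproduced rather than corrected. The data is invented for illustration; it is not Amazon’s dataset or model.

```python
# Toy example: a "model" that learns hire rates from biased historical
# decisions and then applies those rates to new candidates.
from collections import defaultdict

# Historical decisions made by biased human recruiters: (group, hired?).
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

# "Training": estimate the observed hire rate for each group.
counts, hires = defaultdict(int), defaultdict(int)
for group, hired in history:
    counts[group] += 1
    hires[group] += hired

learned_rate = {g: hires[g] / counts[g] for g in counts}

# "Prediction": new candidates inherit their group's historical rate,
# carrying the original bias forward into future decisions.
for group in ("A", "B"):
    print(f"group {group}: predicted hire probability = {learned_rate[group]:.2f}")
```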

Addressing AI in the technical realm is just a small part (an insignificant part, in my opinion) of the new world we are entering. I will leave the technical issues for the masses to argue over.

The Argument for the Use of AM (Artificial Morality)

In philosophy, evil needs good in order to exist. Take the rust out of the car and you have a clean car; take the car out of the rust and you have nothing. Take the rot out of the tree and you have a healthier tree; take the tree out of the rot and you have nothing. Take the cancer out of the person and you have a healthy person; take the person out of the cancer and you have nothing. The horse is dead.

Here Comes the Problem

Here is the challenge: those were real examples, but nothing controversial. We have all decided that rust, rot, and cancer are bad; I can’t think of a context in which they are good things. However, not every case is so clear-cut.

The social unrest currently in our society illustrates this dilemma. Should we homogenize skin color and sexual orientation and remove all disabilities? What would be the “correct” skin color and sexual orientation? What counts as a disability?

How to Fix This, Maybe

Einstein is credited with saying that we cannot fix a problem at the same level of thinking that created it. The corollary is that, to solve these issues, we need a standard outside of ourselves to indicate what is right and wrong. Who or what would be that standard? A collective view? Majority rule? The Council of Goodness? (Okay, maybe not.)

The Intersection of the Problems

SAP has hired people with Asperger’s to find bugs in its code. Those with Asperger’s often bring sharper focus and attention to detail than those not on the spectrum. SAP reported preliminary results suggesting that its neurodiverse testing teams are 30% more productive than the others, and one neurodiverse team helped develop a technical fix worth $40 million in savings. One percent of SAP’s workforce is neurodiverse, roughly in line with the world’s population.

Where is the Line?

Technology constantly blurs the lines between good and bad. At one point, your parents told you never to get in a car with a stranger; now we hail a stranger’s car and willingly get in (Uber). CRISPR looks forward to eradicating autism, while autism gives us highly skilled, detail-oriented workers. Which is it?

Seeking Accountability

One cannot know a crooked line without knowing what a straight one looks like. We need a standard outside of ourselves to hold us accountable. Currently, that standard is the law. Will it be sufficient for our AI initiatives? Not without some severe growing pains.
