What's AI-Right and AI-Wrong?

What’s AI-right and AI-wrong? And can people even know the answer?

Having data? Seeing data? Knowing data? Using data? Not using data? Not knowing, seeing, having?

This blog is a DIY Data and AI Ethics Exam. The funny thing about ethics is that there may be no single right answer. Ethics is a framework for considering right and wrong, asking questions, and changing with changing times.

Many of us have never actually taken a course or read a book on ethics. If you had to stop and think about whether you have, you are probably one of them. No judgment, just a data point.

There is no certificate of accomplishment for ethics literacy, nor an update for when things change (or, more emphatically, need to change). Some would point to continuing professional education credits, but without an enforcement mechanism for accountability, that looks more like ethics washing to some.

The truth is stranger than fiction as markets around the globe and in every industry and situational context continue to sort out “yours, mine, ours, and theirs” issues on data ownership, privacy, protection, permission, uses, and what “fair” means. Rules, regs, and laws are coming (or not) depending on the politics, economics, and ethics in seemingly every context and jurisdiction.

It’s not just what people do to people with data, but the scary obviousness of what machines could do (or not do) to people with data. The scale of impact of platforms is in the billions of people. One autocratic leader can impact only the people below them, but a machine can network ceaselessly.

This sits in modern-day mindsets -- corporate, government, municipal, local, household, individual. One part fear, one part greed. And the rest of the parts -- ethical and moral debate.

LET’S START THE QUIZ – formulate your opinion on the topics below:

1. Privacy – Who owns the permanent record data in your car navigation system if the car is sold or salvaged? Who can view that data? Same thing for your smartphone, smart home, smart watch, social media account/content, etc.

2. “Extra” data like facial recognition is very useful for some things (like my iPhone unlocking). But is someone databasing biometrics, my race, ethnicity, gender, gender identity, orientation, religion, age, complexion, etc.? Is storing this sort of data anywhere ever a good idea? What if it's just to suggest my color palette in clothing?

3. Anonymity versus Identifiability -- You must show your face to access your phone, your apps, even the internet (already covered if you answered question one, except in some countries). If you don't use your face or fingerprint, your GPS coordinates and movement patterns and/or device-handling characteristics are strong identifiers. The same goes for belonging to a small network, a small area, or a niche interest group. The three combined can be as unique as a rooftop-level 9-digit zip code or a 10-digit phone number. Is it not PII (Personally Identifiable Information) just because it is not easy to see, or link, a unique identity? [The legal definition is not part of this exam -- that's written down, but it changes over time.]
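The quasi-identifier point above can be sketched in a few lines. This is a hypothetical toy example (the fields and the five-person population are invented for illustration, not from the post): no single attribute is a name, yet the combination can shrink the "anonymity set" to one person.

```python
from collections import Counter

# Toy "anonymized" population: each record is (zip prefix, area, interest group).
# None of these fields is a name, but together they can single a person out.
people = [
    ("441", "downtown", "chess"),
    ("441", "downtown", "chess"),
    ("441", "suburb",   "chess"),
    ("441", "downtown", "sailing"),
    ("902", "downtown", "chess"),
]

def anonymity_set_size(record, population):
    """Count how many people share this exact combination of attributes."""
    return Counter(population)[record]

# Each attribute alone is common: 4 of 5 share the "441" zip prefix...
print(sum(1 for p in people if p[0] == "441"))                 # 4
# ...but one full combination is unique -- effectively identifying.
print(anonymity_set_size(("441", "suburb", "chess"), people))  # 1
```

The same arithmetic is why "not easy to link" is a weak defense: each additional attribute multiplies the number of possible combinations, so the crowd you hide in shrinks fast.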

4. Real or Robot -- Should I be notified if I'm speaking with a synthetic AI? Should a robot voice be "gender-fied"? What is the motivation behind gender-fying a robot, or a spokesperson for that matter, even a cartoon spokes-animation?

  • I want to be included -- You built a great model on people not like me, so now what? Can I fit?
  • I want to be excluded -- You thought you built a great model on people like me, but I am not like them.
  • Forget you ever saw me – Can your model forget me, or once seen, forever seen – like a jury? Did you warn me of that? (And is all that EULA fine print even fair? Okay, that's an advanced exam question.)
  • I did not give you permission to see me – Are there any bounds on unintended disclosure, capture, or use of data about me (including public gatherings)?

5. Speaking of crowds -- Tell me what my friends and neighbors are doing. I heard all my friends and connections acted on this issue (like voting); should I take the same action, too? Wait, is that "fake news"? Am I being manipulated? Houses on my street used how much less water than me?

6. Means to an end -- If a “Pied Piper” existed in an algorithm, would it be okay if it helped me make "good choices," or should I always have informed consent and know I am the target of an influence campaign, nudge, or advertisement? Are techniques of psychological warfare fair to use on peacetime populations?

7. Past versus Present data -- All your model's training data comes from past outcomes in a society where segregation was the norm, education was spotty, literacy even spottier, and legal protections had not yet come into play. Why is it even relevant? Predicting from events in a biased past carries that market conduct forward.

8. Moral line – No AI "live ammo" authorized (at least, not by us, so far -- but it’s a line not to cross, right? Or could I just use it for automated vermin hunting; where’s the harm in that?)

9. Might makes right -- Is it giving too much away to the "invisible hand of power" for someone to infer my wealth, cashflow, purchase preferences, and affinities – and then decide how to target, un-target, or re-target me? Is it different if it is my neighbor down the street versus an algorithm running in a foreign country?

10. Because I said so -- People, especially bullying and self-serving autocrats, create cultures of situational ethics where speaking "truth to power" equals termination with prejudice. Training an AI model on human decisions can create a "monster in the machine." You give away your ethical objectivity by using subjective or biased data in the first place.

11. Companies are people, too -- Speaking of giving stuff away, hypothetical question: If I web scrape your picture, calculate your biometrics, and database them, do you have any rights to that data, or even to know I exist? Even if I only intend to use it for "good," like to verify your identity when you ask me to? What if I were bad?

12. No harm, no foul (can no foul be the harm?) -- How many of the offers I never see would have made me happy? Is there a virtual tarnish to reputation in the job market? Is “too expensive” or “over-qualified” even a thing, or a code word for age? What other hidden factors are being used in hidden decisions [see “Extra Data” above]?

13. I know it when I see it -- People have hidden biases, and also hide their biases. Would I know if an algorithm “thought” like that, or used data sources that had hidden influence? How are outcomes catalogued to show any differences by any type of “Extra Data” features?

14. The rule of law -- The problem with laws is that it’s not illegal if there is no law, and it is forever illegal until a law is changed. Selective enforcement can be an issue, too. Ethics can be like laws, especially when big money is at stake; but with nothing written down, it is far easier to rationalize an ethics violation, with less penalty potential.

15. Gold is the rule -- Despite observing that you don't drive, I am still charging you a flat rate based on 10,000 miles a year (substitute any subscription service here).

  • The rule is the gold - If I don't offer a mileage risk curve, then it does not matter if you drive less, but I will audit anyone I think is driving way too much.
  • The cost of gold - Even if I verify you are a safe driver who drives less, you must keep paying for trip tracing and submit to tracking beyond its rating relevancy -- even when a dramatically cheaper, more privacy-protective option, like photographing just the miles on your odometer, would do.
  • The value of gold - As long as I hope to find new ways to create new products and services to sell to you sometime in the future, then you should enjoy paying more now and having your private data permanently collected.

16. Does having knowledge carry any obligation? If an AI audit-bot could assess bias violations in models and people, what happens to the people, the models, and people using models?

END OF EXAM

BONUS QUESTION: Underlying all this, are you concerned that for some jobs, “people need not apply”?

Algorithm Want Ad: Seeking an algo-work-bot that never gets tired, does what I tell it, draws no salary, does not ask for a raise, never takes vacation, takes up no office space, has unlimited productivity potential, can create valuable data assets from its own work processes, and has the capacity to learn with bolt-on knowledge modules instantly. Plus, it creates a permanent record of everything it does, and won’t complain if I just turn it off and never call it again.

And did I mention, AI never asks questions, but if it did, there's no law or compelling ethical reason to make you answer.

 

