AI World Executive Summit: Important to Ask the Right Questions 

By John P. Desmond, AI Trends Editor 

Asking the right questions about AI activities matters, especially given the acceleration of AI adoption driven by the pandemic. Specifically, deciding which questions AI should answer is a focus of the experts and practitioners managing the adoption of AI in the enterprise, a recent survey from McKinsey shows. 

Among respondents at AI high-performing companies, 75% report that AI spending across business functions has increased because of the pandemic, according to McKinsey’s 2020 Global Survey on AI. These organizations are using AI to generate value, which is increasingly coming in the form of new revenue. 

Three experts discussed the implications of this growth with AI Trends in interviews in anticipation of the AI World Executive Summit: The Future of AI, to be held virtually on July 14, 2021.  

David Bray, PhD, is Inaugural Director of the nonprofit Atlantic Council GeoTech Center and a contributor to the event program; 

Anthony Scriffignano, PhD, is senior VP & Chief Data Scientist with Dun & Bradstreet; 

and Joanne Lo, PhD, is the CEO of Elysian Labs. 

What do you want to emphasize at the AI World Executive Summit? 

David Bray, PhD, Inaugural Director of the Atlantic Council GeoTech Center

David: “AI is at its best when it helps us identify what questions we should be asking it to answer. We live in a world transforming at a rapid rate; in some ways we are not aware of the full extent of these changes yet, especially during the COVID-19 pandemic. Knowing the right questions to ask will help us work toward a better world. AI can help hold up a digital mirror to how we operate as companies, governments, and societies — and strive to be better versions of ourselves.” 

He notes that if an AI system produces a biased result, “it reflects the data we feed into it, which is a reflection of us. Part of the solution is to change the data it’s getting exposed to.” 

Joanne: “When you have an approximate idea of what you want to look for, the AI helps you refine your question and get there. Think of it like a smart version of autocomplete. But instead of completing the sentence, it is completing the whole idea.” 

As an example, you might tell your digital assistant that you want to go on a drive tomorrow. Knowing what you like, your history, and your age group, it comes back with a suggestion that you go to the beach. “You need to ask yourself what that means. Is your decision-making process a collaboration with the machine? How much are you willing to work with a machine on that? How much are you willing to give up? The answer is very personal and situation-dependent.” 

She adds, “I might want the machine to tell me my optimal vacation location, but I might not want the machine to pick the name of my child. Or maybe I do. It’s up to you. The decision is personal, which means the question you should be asking is how much are you willing to give up? What is your boundary?”  

And the questions you ask AI to answer should not be questions simple enough to Google. “You are pretty sure Google can’t help you with the question of where you should send your child to school, to the language immersion program, the math immersion program, or the STEM research program. That’s up to you.” 

 

Lessons Learned in Pursuit of Ethical AI 

What lessons have we learned so far from the experiences of Timnit Gebru and her co-lead Margaret Mitchell, the AI ethicists who are no longer with Google? 

Anthony Scriffignano, PhD, senior VP & Chief Data Scientist with Dun & Bradstreet

Anthony: “Well, if industry doesn’t take the lead in trying to do something, the regulators will. The way for industries to work well with regulators is to self-regulate. Ethics is an enormous area to take on and requires a lot of definition. 

“The OECD [Organisation for Economic Co-operation and Development, for which Anthony serves as an AI expert] is working on principles of AI and ethics. Experts all over the world are really leaning into this. It’s not as simple as everyone wants to make it. We better lean into it, because it’s never going to be easier than it is today.” 

Echoing the thoughts of Lo, he said, “We already take some direction from our digital agents. When Outlook tells me to go to a meeting, I go. The question is, how much are we willing to give up? If I think the AI can make a better decision for me, or free me up to do something else, or protect me from my own bad decision, I’m inclined to say yes.” However, if he has to think about ethics and marginalization, it gets more complicated. 

He added, “In the future, we will not be able to just have the computer tell us what to do. We’ll have to work with it. AI will converge on advice we are more likely to take.” 

David: Recognizing that often the real concerns and nuances of the issues aren’t covered in depth, he notes, “we are hearing what both sides want to tell.” Going forward, he would like to see some degree of participation or oversight by experts outside the company. “If the public does not feel like they have some participation in data and AI, people will fill the space with their own bias and there will be disinformation around it. This points to a need for companies to think proactively from the start about how to involve different members of the public, like ombudsmen. We need to find ways to do AI with people so that when a hiccup happens, it’s not, ‘I don’t know what’s happening behind the curtain.’” 

He advises, “Assume everyone is striving to do the best they can. The incentives to motivate them might be in different places. If everybody thinks they are doing the right thing, how do you build a structural solution for data and AI that gives people confidence the system will come out less biased? Data trust is a nice thing to work toward. The first step is, you need to feel like you have agency of choice and control over your data.” 

“If an organization’s business is built around the exclusivity of the data it holds, that may make it harder to navigate the future of doing AI “with” people vs. “to” people. If a company says, pay no attention to the wizard behind the curtain, that makes it hard to engender trust.” 

He noted that European countries are considering stricter standards for data privacy and other digital topics, including AI. “European efforts are well-intended and have to be balanced.” He has been advised that European efforts to define privacy standards around healthcare data will be worked out over 10 to 15 years of court cases, raising questions about whether that might stifle or discourage innovation in healthcare. At the same time, “China’s model is that your data belongs to the government, which is not a future either the United States or Europe wants to pursue.” 

He added, “We need to find some general principles of operating that engender trust, and one way might be through human juries to review AI activities.” 

 

A Way to Review AI Malpractice Is Needed 

On the idea of an ‘AI Jury’ to review AI malpractice:  

Joanne Lo, PhD, CEO of Elysian Labs

Joanne: “The most important lesson for me [from what we can learn from the recent Google ethics experience] is that government and policymaking have been lagging behind technology development for years, if not decades. I’m not talking about passing regulations, but about one step before that, which is to understand how technology is going to impact society, and specifically, the democracy of America, and what the government has to say about that. If we get to that point, we can talk about policy.” 

Elaborating, she said, “The government is lagging in making up its mind about what technology is in our society. This delay in the government’s understanding has evolved into a national security issue. When Facebook and all the social media platforms develop the way they did, without government intervention, they eventually become platforms that allow adversarial countries to take advantage and attack the very foundation of democracy.” 

“What is the government going to do about it? Is the government going to stand with the engineers who say this is not okay, that we want the government to step in, we want better laws to protect whistleblowers, and better organizations to support ethics? Is the government actually going to do something?” 

Anthony: “That’s interesting. You could agree on certain principles and your AI would have to be auditable to prove it has not violated those principles. If I accuse the AI of being biased, I should be able to prove or disprove it, whether it’s racial bias, or confirmation bias, or favoring one group over another economically. You might also conclude that the AI was not biased, but there was bias in the data.” 

“This is a very nuanced thing. If it were a jury of 12 peers, ‘peer’ is important. They would have to be similarly instructed and similarly experienced. Real juries come from all walks of life.” 
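
Anthony’s notion of auditable AI can be made concrete with a small sketch. The Python snippet below is purely illustrative, not a method described by any of the participants: the loan-approval data, the function name, and the 0.8 “four-fifths” review threshold are all assumptions. It runs one simple check a hypothetical AI jury might start with, comparing a model’s positive-outcome rates across groups.

```python
# A minimal sketch of one auditability check: given a model's decisions and
# a group label per decision, compare positive-outcome rates across groups.
# Data, names, and the 0.8 threshold are illustrative assumptions only.

from collections import defaultdict

def disparate_impact_ratio(decisions, groups, positive=1):
    """Return (ratio, per-group rates), where ratio is the lowest group's
    positive-outcome rate divided by the highest group's.

    A ratio near 1.0 suggests outcomes are distributed evenly; the commonly
    cited "four-fifths rule" flags ratios below 0.8 for closer review.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for decision, group in zip(decisions, groups):
        counts[group][0] += decision == positive
        counts[group][1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical loan-approval decisions for two groups.
decisions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

ratio, rates = disparate_impact_ratio(decisions, groups)
print(rates)           # per-group approval rates: A ~0.67, B ~0.33
print(f"{ratio:.2f}")  # 0.50 here; below 0.8 would warrant review
```

Notably, a check like this says nothing about where a disparity comes from, which echoes Anthony’s point that the model may pass scrutiny while the bias sits in the data it was fed.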

Learn more at the AI World Executive Summit: The Future of AI, where these discussions and others will continue. 
