
Following the announcement of Google Cloud's AI Hub and Kubeflow Pipelines tools, Rajen Sheth, director of product management for Cloud AI, has outlined how the technology giant is working to ensure that its AI work is ethical and fair.
In a blog post titled 'Steering the right course for AI', he outlined what he sees as the main industry challenges to be overcome in order to make AI not just a reality, but one that is a net good for society.
These were grouped under four main headings:
Unfair bias: "How can we be sure our machine learning models treat every user fairly and justly?"
Interpretability: "How can we make AI more transparent, so we can better understand its recommendations?"
Changing workforce: "How can we responsibly harness the power of automation while ensuring today’s workforce is prepared for tomorrow?"
Doing good: "Finally, how can we be sure we’re using AI for good?"
Unfair bias
Engaging with each of these in turn, he first suggests that unfair bias must be tackled "on multiple fronts," starting with awareness.
"To foster a wider understanding of the need for fairness in technologies like machine learning, we’ve created educational resources like ml-fairness.com and the recently-announced fairness module in our ML education crash course," he writes.
Google is also encouraging thorough documentation "as a means to better understand what goes on inside a machine learning solution". Within Google this takes the form of 'model cards': "a standardised format for describing the goals, assumptions, performance metrics, and even ethical considerations of a machine learning model."
Google Cloud's own documentation and analysis tools support this, including the Inclusive ML Guide integrated throughout AutoML, TensorFlow Model Analysis (TFMA), and the What-If Tool.
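To make the 'model card' idea concrete, the short Python sketch below shows the kind of fields Sheth describes: goals, assumptions, performance metrics and ethical considerations. The structure and field names are illustrative only, and are not Google's actual model card schema or the API of any official tooling.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ModelCard:
    """Illustrative model card: a structured summary of a trained model.

    Field names are hypothetical; they mirror the categories Sheth mentions
    (goals, assumptions, performance metrics, ethical considerations),
    not any official Google schema.
    """
    name: str
    intended_use: str                  # what the model is meant to do
    assumptions: List[str]             # data and deployment assumptions
    performance: Dict[str, float]      # e.g. accuracy broken out per user group
    ethical_considerations: List[str]  # known risks and limitations

card = ModelCard(
    name="loan-approval-v2",
    intended_use="Rank loan applications for human review, not automatic decline.",
    assumptions=["Training data covers applicants from 2015-2018 only."],
    performance={
        "accuracy_overall": 0.91,
        "accuracy_group_a": 0.93,
        "accuracy_group_b": 0.88,      # per-group metrics help surface unfair bias
    },
    ethical_considerations=["Accuracy gap between groups A and B; monitor for drift."],
)
print(card)
```

Documenting per-group performance alongside stated assumptions is precisely what makes such a card useful for spotting unfair bias before a model is deployed.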
"I’m proud of the steps we’re taking, and I believe the knowledge and tools we’re developing will go a long way towards making AI more fair," he said, before reiterating that this is an industry-wide problem to be tackled.
"No single company can solve such a complex problem alone. The fight against unfair bias will be a collective effort, shaped by input from a range of stakeholders, and we’re committed to listen. As our world continues to change, we’ll continue to learn," he adds.
Interpretability
Next is the issue of interpretability, better known as the 'black box' algorithm problem.
"Since their inception, many deep learning algorithms have been treated like black boxes, as even their creators struggle to articulate precisely what happens between input and output," he writes, while admitting that neural networks by their nature are almost impossible to truly examine.
"We cannot expect to gain peoples’ trust if we continue to treat AI like a black box, as trust comes from understanding."
While Sheth believes that the industry has progressed to establish best practices around gaining better interpretability, he really digs into Google's own efforts, starting with image classification.
"For instance, recent work from Google AI demonstrates a method to represent human-friendly concepts, such as striped fur or curly hair, then quantify the prevalence of those concepts within a given image," he writes.
"The result is a classifier that articulates its reasoning in terms of features most meaningful to a human user. An image might be classified “zebra”, for instance, due in part to high levels of “striped” features and comparatively low levels of “polka dots”.
"In fact, researchers are experimenting with the application of this technique to diabetic retinopathy diagnosis, making output more transparent—and even allowing the model to be adjusted when a specialist disagrees with its reasoning."
This aligns with what Lord Clement-Jones, the Chairman of The House of Lords Select Committee on Artificial Intelligence, told our sister title Techworld.
"Those sorts of very sensitive areas we think should either have clear ex-ante explainability in the highest case of sensitivity, or at the very least explainability after the event, and we think it's only fair and right that those should be explainable and transparent," he said.
"But there are others where that level of transparency may not be quite so necessary."
Changing workforce
The next issue on Sheth's checklist is a big one: the threat of AI displacing huge swathes of human jobs. His response is pretty rote for the industry at this point and closely reflects that of other important industry figures like Satya Nadella and Bill Gates, namely that AI will augment, not replace, human roles.
"I don’t see the future of automation as a zero-sum game," Sheth writes. "It’s also important to remember that jobs are rarely monolithic.
"Most consist of countless distinct tasks, ranging from high-level creativity to repetitive tasks, each of which will be impacted by automation to a unique degree.
"In radiology, for instance, algorithms are playing a supporting role; by automating the evaluation of simple, well-known symptoms, a human specialist can focus on more challenging tasks, while working faster and more consistently," he writes.
In terms of practical solutions, Google.org has established a $50 million fund to support nonprofits preparing for the future of work, focused on: providing lifelong training and education to keep workers in demand; connecting potential employees with ideal job opportunities based on skills and experience; and supporting workers in low-wage employment.
Doing good
Lastly, on a more broadly ethical point, Sheth writes about how companies can ensure their use of AI technology is "for good."
He starts out by pointing to some of Google's most PR-friendly AutoML case studies, one of which we have covered here at Computerworld UK: the Zoological Society of London is using the Cloud AutoML platform to track wildlife, automatically analysing millions of images captured by cameras in the wild in an effort to cut down on poaching.
"But there’s an enormous grey area, especially with controversial areas like AI for weaponry, which represents one application of this technology we have decided not to pursue as stated in our AI principles," he admits.
"Our customers find themselves in a variety of places along the spectrum of possibility on controversial use cases, and are looking to us to help them think through what AI means for their business."
Google Cloud has hired independent technology ethicist Shannon Vallor as a consultant and is focusing on "internal educational programs on best practices in AI ethics."
However, he doesn't go much beyond the idea that ethics is a priority for Google, rather than detailing how the company ensures ethical principles are built into its model design.
"For example, ethical design principles can be used to help us build fairer machine learning models," he writes.
"Careful ethical analysis can help us understand which potential uses of vision technology are inappropriate, harmful, or intrusive.
"And ethical decision-making practices can help us reason better about challenging dilemmas and complex value tradeoffs—such as whether to prioritise transparency or privacy in an AI application where providing more of one may mean less of the other."
Industry support
In conclusion, Sheth writes: "For all the uncertainties that lie ahead, one thing is clear: the future of AI will be built on much more than technology. This will be a collective effort, equally reliant on tools, information, and a shared desire to make a positive impact on the world.
"That’s why this isn’t a declaration—it’s a dialogue. Although we’re eager to share what we’ve learned after years at the forefront of this technology, no one knows the needs of your customers better than you, and both perspectives will play a vital role in building AI that’s fair, responsible and trustworthy.
"After all, every industry is facing its own AI revolution, which is why every industry deserves a role in guiding it. We look forward to an ongoing conversation with you on how to make that promise a reality."
Hetan Shah, executive director at the Royal Statistical Society, says he is heartened to see Google engage with issues of ethics in AI, but he still sees a couple of blind spots in Sheth's post.
"One of the things that continues to worry me is that technology companies are not adequately involving civil society to help them respond to these issues. The blog promises a dialogue, but doesn’t tell us how anyone can participate in it," he told Computerworld UK in an email.
"The other blind spot is the issue of data monopolies: what are the implications of a very small number of tech companies owning so much of the world’s data? It doesn’t surprise me that Google is not keen to raise this."