
Trust and AI: How organizations are fostering digital responsibility in the AI era

November 4, 2024
Matt A.V. Chaban

Senior Editor, Transform

Ana Vidal

Contributing Writer

Nonprofits are building AI models and agents to break down barriers and boost trust — and uncovering business lessons for organizations everywhere.

Got AI?

Pretty much every corner of our lives has gone digital — online shopping, banking, education, social media, global gaming, supply chains — you name it.

With this shift has come a growing need for digital responsibility.

Now that we’re in the AI era, we’re seeing an even greater focus on online ethics and e-civics. Like many of the technologies before it, what makes AI, and especially generative AI, so interesting and challenging is the way it presents both problems and solutions to digital responsibility. In a way, we’re hoping we can “right” AI with AI.

As organizations around the world adopt AI and adapt to the world it’s shaping, digital responsibility has taken on a sense of urgency for many of them. Brand trust is under pressure not least because of the proliferation of fake accounts and spam that some AI has accelerated. Chatbots are popping up everywhere, easing the route to self-service, but this can also create roadblocks, especially for the less digitally savvy or those without reliable technology access.

As more aspects of society move online, and more of our interactions are informed and directed through AI, organizations are already looking for ways to ensure existing technologies are equitable, as well as creating new ones to improve access and trust.

This is the third post in our series on Google.org’s inaugural Accelerator: Generative AI program, highlighting organizations using generative AI and other machine learning tools to address global challenges. Earlier, we explored how generative AI is expanding economic opportunities and how it’s helping us respond to crises and supporting sustainable futures. Now, we turn our focus to responsibility.

As millions of people around the world become familiar with AI, and as it continues to advance at breakneck speed, there are understandable concerns over whether civil society can keep pace. To embrace responsible change is to foreground the need to create safe, respectful, reliable environments for users, where their information remains authentic and private, where they get the help they need and trust those on the other side of the screen.

With so many people in need of good information and necessary public services — whether that’s immigration support, food assistance, or thousands of other routine things — there really is immense potential for AI to help. Our governments and nonprofits have almost always been short-staffed and underfunded, so bringing them the amplifying power of this new technology could truly help the lives of many.

Tackling misinformation with AI

The proliferation of deepfakes and sophisticated content creation tools is making it increasingly hard to distinguish truth from falsehood. The challenge is exacerbated by the rapid spread of misinformation through social media, which has become the primary source of news and information for many people.

For more than a decade, Full Fact, a London-based nonprofit organization, has been dedicated to tackling misinformation — it calls itself “the UK’s independent fact-checker.” Full Fact judiciously corrects the record on major topics in the nation, with a focus on issues that have the most potential for negative impact or wide reach. Wading through information to build a set of facts that can convincingly correct the record is generally the most time-consuming part of Full Fact’s work.

That’s why the organization, through its time with the Google.org gen AI accelerator, developed a tool that can analyze and summarize large volumes of health content. The aim was to improve long-term public health outcomes and save lives, with the belief that the approach could be broadened to other areas once this discrete use case was working well. With gen AI, Full Fact has been able to detect health misinformation in online videos 10 times faster than its workers could on their own, freeing them up to cover more stories, a growing need as AI also potentially accelerates the spread of misinformation. If you're curious, check out Full Fact’s demo to see it in action.

We are leveraging the power of Google's Gemini models to do the heavy lifting for fact checkers. It takes what would be 10,000 hours of monitoring and surfaces the most harmful content in a fraction of the time.

Kate Wilkinson, Senior Product Manager, Full Fact

In business: Jobs across every industry involve combing through large amounts of information, whether for investment analysts, academics, insurance adjusters, attorneys, or HR teams. AI's ability to sift through vast amounts of data, surface relevant information, and even synthesize it through natural-language queries has proven remarkable. Many organizations are already seeing time savings of 10x or more, of the kind Full Fact is experiencing with its health research tool.
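To make the pattern concrete, here is a minimal sketch of that kind of triage workflow, not Full Fact's actual system. It assumes the google-generativeai Python package and a GEMINI_API_KEY environment variable, and the prompt and sample transcript are purely illustrative: the model is simply asked to surface health claims that may be worth a fact checker's time.

```python
# A minimal sketch of a misinformation-triage workflow, not Full Fact's
# actual implementation. Assumes the google-generativeai package and a
# GEMINI_API_KEY environment variable; prompt and labels are illustrative.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

def triage_transcript(transcript: str) -> str:
    """Ask the model to surface health claims that may need fact-checking."""
    prompt = (
        "You are helping fact checkers prioritize their work.\n"
        "List any health-related claims in the transcript below, and for each "
        "one say whether it looks low, medium, or high priority to verify, "
        "with a one-sentence reason.\n\n"
        f"Transcript:\n{transcript}"
    )
    response = model.generate_content(prompt)
    return response.text

if __name__ == "__main__":
    sample = "In this video I explain why drinking lemon water cures the flu."
    print(triage_transcript(sample))
```

In a real deployment, the model's output would feed a review queue for human fact checkers rather than being published directly; the time savings come from narrowing thousands of hours of content down to the claims most worth checking.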

Enhancing social service and social justice

Immigration is one of the most pressing issues of our day, with an estimated 281 million people on the move, according to the United Nations, an increase of nearly 20% since 2000. Justicia Lab is developing technology tools to help immigrants in the U.S. navigate the legal hurdles they commonly face, ranging from obtaining citizenship to fighting deportation.

During the Google.org Accelerator: Generative AI, the Justicia Lab team built iMMpath, a tool that uses vision AI to extract information from Notice to Appear paperwork, the document that tells individuals they are due in immigration court for a potential removal proceeding. The iMMpath tool pulls the relevant information from the document, then provides instructions on the next steps to take, in a number of languages. It can also identify legal advocates in the area to help individuals with their case. You can learn more about iMMpath or try it out here.

Through the use of AI, we're able to bring information more directly to people in their own language and in a way that they can easily digest and understand. We're able to bring some of the most advanced technology to the least resourced people.

Rodrigo Camarena, Director, Justicia Lab

In business: By some estimates, unstructured data represents between 80% and 90% of all data in the enterprise. Much of it is buried in PDFs and other documents that, until now, haven't been easily scannable. With vision AI tools like the ones Justicia Lab is deploying to read scans or photos of legal documents from its users, any organization can quickly unlock and begin indexing information that was previously inaccessible without extensive manual work, or even create new products like the one Justicia Lab built.
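As a rough illustration of the general pattern, and not Justicia Lab's actual pipeline, the sketch below runs Cloud Vision's document OCR over a scanned page and returns its text, which can then be indexed or handed to a language model. It assumes the google-cloud-vision package, application default credentials, and a hypothetical file name.

```python
# A rough sketch of OCR-ing a scanned document so its text can be searched,
# indexed, or summarized downstream; not Justicia Lab's actual pipeline.
# Assumes google-cloud-vision and application default credentials.
from google.cloud import vision

def extract_document_text(path: str) -> str:
    """Run document OCR on a scanned page and return its full text."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.document_text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)
    return response.full_text_annotation.text

if __name__ == "__main__":
    # Hypothetical file name for illustration only.
    text = extract_document_text("notice_to_appear_scan.jpg")
    # Once the text is digital, it can be searched, indexed, or passed to a
    # language model for summarization and next-step guidance.
    print(text[:500])
```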


Improving access to benefits around the globe

Social service organizations often struggle to provide timely support to clients in need. Due to overwhelming demand, clients may wait several days for an appointment with a caseworker, or several hours for a response to a simple question. That lack of timely support may cause clients to give up on seeking assistance, moving on to other solutions or resigning themselves to their needs not being met.

To take just one example, access to food remains a pressing concern in the U.S., where 13.5% of the population struggles with food insecurity. A major obstacle to obtaining government assistance, such as the Supplemental Nutrition Assistance Program, or SNAP, is the complex application process. Through the Google.org accelerator, mRelief created a generative AI chatbot assistant that can provide immediate answers to questions clients may have while they are filling out a SNAP application on their own. This has led to a 10% increase in completed applications — which may result in around 50,000 additional self-service applications being completed annually. You can see it in action now.

Those aren’t just impressive results but important ones benefiting a population in need.

It’s possible to develop AI solutions that can handle people with care, that can show up in a consistently helpful way, and that it’s possible to mitigate some of the risks we were concerned about. It really depends on implementation, though. I had this impression that generative AI technology could make things happen kind of like magic, but it's actually very hands-on and requires you to be really on top of the state of the technology.

Belinda Rodriguez, Product Manager, mRelief

In Europe, there is also a considerable need to manage large social programs. Bayes Impact, a nonprofit organization based in France, developed CaseAI during its time at the Google.org accelerator. CaseAI was designed as a customizable platform that integrates with case management systems across social services and nonprofits. It uses generative AI to provide real-time, actionable recommendations to beneficiaries seeking support, and Bayes Impact has been able to increase the time human work coaches spend with their beneficiaries by 25%. Feel free to watch their demo to learn more about how the employment coaching works.

One key aspect of using generative AI is really understanding how much of a treasure trove of data and information you may have that's not being leveraged currently. Social organizations can have a ton of arcane institutional knowledge. With generative AI, the quality of the output is really what you put into it, so you have to capture as much knowledge first to build the best AI.

Paul Duan, President, Bayes Impact

In business: The work of mRelief and Bayes Impact is another reminder of the impressive ways generative AI can amplify the abilities of overstretched workforces, helping deliver services more deeply or more broadly than would otherwise be possible. At the same time, both underscore the level of work and engagement still required to achieve such gains: tuning and tracking models, and training workers and users to make the best use of them and to trust them.
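For readers wondering what "capturing knowledge first" can look like in practice, here is a minimal sketch in the spirit of these tools rather than a copy of either. It grounds a Gemini chat assistant in a short block of hypothetical program documentation via a system instruction, assuming the google-generativeai package and a GEMINI_API_KEY environment variable; the FAQ text is a stand-in for real institutional knowledge.

```python
# A minimal sketch of grounding an assistant in an organization's own
# reference material; illustrative only, not mRelief's or Bayes Impact's code.
# Assumes google-generativeai and a GEMINI_API_KEY environment variable.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Hypothetical stand-in for curated program documentation.
FAQ_NOTES = """
Applicants need proof of identity and household income.
Interviews can be completed by phone.
"""

model = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction=(
        "You help people complete a benefits application. Answer only from "
        "the reference notes below, and say so when you don't know.\n"
        + FAQ_NOTES
    ),
)

chat = model.start_chat()
print(chat.send_message("Do I need to go to an office for my interview?").text)
```

The design choice worth noting is the system instruction: the assistant answers from the organization's own material and is told to admit when it doesn't know, which is part of the hands-on tuning the practitioners above describe.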

Curious to learn more? Check out this ebook about organizations leveraging generative AI to create a better world.
