Artificial Intelligence

A Manifesto for Responsible AI

The Collaborative Mindset, Forethought, and Tools We Need to Create Trustworthy AI
Published
April 10, 2023

As ChatGPT, DALL-E, and Stable Diffusion made their way into the hands of consumers, the world’s experience of AI-enabled life put us on an irreversible course. The die is cast. 

Hacker culture is back, research publications are spawning by the second, and consumer experimentalism is reaching new heights. In the aftermath of ChatGPT’s release, Twitter captured our collective perception best with the resounding refrain: “Things are about to get really weird.”

As many have noted, this is an “iPhone moment.” We know everything after this point in time will be different, but aren’t yet sure exactly how. While this moment is important in and of itself, it’s perhaps even more significant in regard to what it means for the future: one of pervasive, public AI.

Although this feels like AI’s great debut, AI isn’t new. The term was coined nearly 70 years ago at the 1956 Dartmouth Summer Research Project on Artificial Intelligence, and public attention to the technology has waxed and waned ever since. There have been breakthroughs—the expert systems era and the chatbot era, to name a few—but after periods of initial froth, each has left in its wake a trough of disillusionment until a new breakthrough reinvigorated the field once more. 

The latest breakthrough—the transformer—was revealed to the world by Google in 2017 in a seminal paper titled “Attention Is All You Need”. The transformer architecture outlined in this paper quietly permeated AI research over the following five years before emerging for the public to experience in the form of “LLMs,” or large language models, via platforms like OpenAI’s ChatGPT. 

Throughout the history of AI, there has always been at least one short leg on the stool of data, compute, talent, and algorithms. But we believe this era—powered by the transformer architecture—is different. It presents us with the perfect storm:

  • Data. We have more data than ever before, and we’re better at wrangling it than we ever have been.
  • Compute. Thanks to the cloud and to NVIDIA, there are more units of compute, those units are more powerful, and they are now broadly dedicated to AI.
  • Talent. AI is in vogue. University programs and professional education have (somewhat) caught up to the cutting edge, unleashing fresh talent into the AI ecosystem. 
  • Algorithms. Transformers, along with other novel architectures such as diffusion models, allow us to take advantage of all of the above in ways previously inconceivable.  

Equally important to changes in the underlying technology is society’s experience of it. In prior AI eras, we were spectators. Deep Blue beat Garry Kasparov at chess, and Watson contended with Ken Jennings on “Jeopardy!” while we watched from the comfort of home. Now, for the first time in the 70+ year history of AI, we are all participants.  

As a result, AI is now a big deal. With AI front and center on some geopolitical, corporate, and social agendas, we are confronted with realities that, in the past, had largely existed as hypotheticals. We’re on consequential terrain, faced with questions of responsibility about what we can do with AI, what we should do, and how we should do it.

Charting a path toward Responsible AI

While it’s difficult to forecast the technical evolution of AI very far into the future, we think envisioning how it will shape society is a feasible—and worthwhile—task. A body of work from ethicists, social scientists, hackers, and politicians has shaped much of the dialogue to date, ranging from a White House proclamation all the way to repeated conjecture that AI will destroy humanity.  

The term “Responsible AI” is an outgrowth of this dialogue. It’s a big idea to which many organizations and individuals are contributing. While adapted across numerous frameworks over the years, it generally entails AI that is: fair and unbiased, transparent and explainable, secure and safe, private, accountable, and beneficial to mankind. 

At General Catalyst, we choose to be optimistic participants in this new AI era, and we’re excited. Informed by both our operating and investing experience in this realm, we hold closely a few principles of “constructive concern” to guide our Responsible AI investing practice:

  • Cultivating trust in AI is a shared imperative. 
  • Enduring success will be enjoyed by those who adopt Responsible AI proactively rather than reactively.
  • The policies that form in this space will only be as successful as the tools that support them.

It is easy to grow anxious at the prospect of a future informed by AI—the unknowns of the technology plus hyperbolic headlines justifiably stoke concern. (See: “AI Experts Are Increasingly Afraid of What They’re Creating.”)

But the genie is out of the bottle. So we have a choice: We can swear off the technology and relegate its issues to others to solve, or we can embrace it and seize the opportunity to be active participants in its evolution.

We’re all in this together 

Trustworthiness is a precondition for the economic and societal success of AI. In the same way we wouldn’t trust a doctor who misdiagnosed us or a bank that lost our money, we wouldn’t trust AI that is unhelpful or—worse—harmful. 

But cultivating trust isn’t just a builder’s burden to carry; it would be shortsighted to assign responsibility for the future of AI safety to any one actor within the AI landscape. Rather, as we see it, cultivating trust necessitates collaboration among every player involved: 

  • Technologists. The scientists and engineers pushing AI’s capabilities forward (both with data and models) must set guardrails for responsible access and use.
  • Practitioners. Those building products, businesses, and services on top of or with AI are often closest to the end-users. Because they have clearer insight into end-user intentions, they have a responsibility to amplify existing guardrails and tailor new mitigants specific to their users’ needs.
  • Consumers. Users of AI must self-moderate and peer-moderate usage of AI; as we see with social media, groupthink can become a powerful contagion.
  • Policymakers and community leaders. Officials must thread the needle of preserving safety and civil liberties while not inhibiting innovation.
  • Investors. Those deploying capital into AI have a responsibility to use their checkbooks and governance rights prudently by reinforcing Responsible AI with their portfolio companies.

Responsibilities across these roles are chained (pun intended). This web of stakeholders is tightly coupled, and the feedback loops between them are rapid. Missteps have the potential to reverberate quickly and powerfully across the ecosystem, impacting collective trust in the process. 

Take, for example, deepfakes (video or voice clones of individuals) in the misinformation era. Without clear markers of authenticity, their ability to perpetuate falsehoods is significant. But who is responsible for their proliferation?

  • Technologists. They created the underlying models that enable deepfakes. Should they have abandoned their research in order to avoid this potential negative outcome?
  • Practitioners. They created the software on top of technologists’ models that enabled the deepfake’s creation. Should they have avoided building their product?
  • Consumers. They believed the deepfake and reposted it. Should they have fact-checked before sharing?
  • Policymakers & community leaders. Their policy, or lack thereof, created an environment that allowed this content to proliferate. Should they have restricted creative activity or internet access? 
  • Investors. They funded the models and the software that rendered the deepfake. Should they have withheld their funds in anticipation of such an outcome?

We’re all in this together—Responsible AI demands collaboration. If you haven’t been confronted with AI quandaries along these lines already, it’s likely you will be; it is never too soon to conceptualize your boundaries and responsibilities within the broader context of this new domain.

Think about responsibility now—not later

It is generally easier to build a dam before filling the reservoir. In this vein, we believe people who prioritize Responsible AI from day one will have a compounding advantage. That’s why we encourage our teams, the founders we support, and our broader community to think about Responsible AI now—not later. 

Our message to founders building in the AI space: move fast without breaking things. The era of “move fast and break things” is over. It’s time to move intentionally. There is simply too much at stake to do otherwise. 

If you need a refresher on AI’s potential for harmful outcomes, think back to 2020. Ahead of the US presidential election, the non-profit RepresentUS commissioned deepfakes of Kim Jong Un and Vladimir Putin. US TV networks refused to run the videos, as they were “too realistic,” but they went viral anyway. Or recall DAN (aka “Do Anything Now”): a “jailbreak” exploited in the early days of ChatGPT (January 2023) that allowed users to creatively circumvent OpenAI’s safety controls.

These anecdotes are not the first, nor will they be the last, examples of unintended consequences in the AI era. It’s not possible to anticipate every use and misuse of AI, but it is possible to cultivate a mindset that prepares you and your organization to thoughtfully triage issues if and when they arise. We believe prepared minds can swiftly and effectively anticipate and manage difficulty—which will go a long way in an artificially intelligent world.

Our Responsible Innovation Values guide every diligence discussion we have in the AI space. We don’t write an investment memo without them. Prior to writing any check, we spend time asking and answering questions along the following lines:

  • What difference or change does the company wish to make in the world?
  • Are the company’s business model and strategy aligned with our RI values of inclusive prosperity, environmental sustainability, and good citizenship?
  • Who are all of the stakeholders of the company, both direct and indirect, and how might they be positively or negatively impacted by the company’s decisions? 
  • Is the team diverse in a fashion that enables it to empathize with and consider all stakeholders?
  • What might go wrong as the company grows, and how does leadership—both at the company and at GC—think about mitigating those risks? 

To be clear: No single set of principles or questions suits every organization. At the end of the day, the most important starting point for Responsible AI is having any set of principles to serve as true north. Developing these now may allay future heartburn.

Marry the right mindset and mechanisms

Over the past decade, a constellation of governments, industry consortia, and NGOs has ventured to create frameworks for operationalizing and enforcing Responsible AI.

That’s a lot of paperwork. If such policies were truly comprehensive, we likely would not have seen the March 2023 Future of Life Institute petition advocating a six-month blanket pause on giant AI experiments to give experts time to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen.” Nor would we see 81% of tech leaders expressing a desire for increased government regulation of AI bias. That’s okay, though, because policy is only one part of the equation. We believe it needs to be complemented by mechanisms that make it a tangible reality for many organizations. 

So, what else do we need for Responsible AI to flourish? We need a suite of technologies that make it easy to take action—accessible and open tools to build the optimistic AI future we believe in. 

If guidelines are meant to be universal, compliance with them must be universally accessible. At this stage of the AI era, what is feasible for Google and Facebook to implement is not necessarily feasible for an early-stage startup. Nor is compliance just an AI startup or technologist problem: for Responsible AI to evolve from a framework into a true system of governance, every stakeholder must know what to look for—and they will. Consumers, policymakers, practitioners, and investors are already looking for validators of trust and will hold one another to account. 

“Tooling” is a vague word, and hundreds, if not thousands, of tools are available for the construction of AI. They take many forms—some live within larger platforms, while others are point solutions; some are open source, others proprietary. Many existing tools focus on core model construction, but we see a commendable group of “responsibility-native” tools emerging as well.

As we survey the landscape of Responsible AI tooling, we categorize tools based on the philosophical pillars of Responsible AI: 

  • Fairness & inclusion
  • Transparency & agency
  • Security & privacy
  • Robustness & reliability 
  • Alignment & safety

Below are a few (non-exhaustive) examples of the types of tools we hope to see more of in the future—as independent businesses, as parts of larger AI platforms, and as open-source offerings. We’ve noted commendable starts in select areas, but if you’re building in this space, we want to hear from you:

Fairness & inclusion

We want AI that works for everyone—not just select genders or races. Fairness and inclusion address the desire for representative AI—the degree to which AI systems and their outputs do not exhibit biases against particular individuals or groups and promote diversity, equity, and inclusion. 

Meeting this standard requires a holistic approach to gathering data and to constructing and testing models. Google’s What-If Tool, Anthropic’s Evals, and IBM’s AI Fairness 360 are just a few options that can aid technicians in this part of the development process. Many of these tools are open and encourage community contributions, because the more users who converge on a tool, the more effective it can become.
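
To make that concrete, below is a minimal sketch (plain Python, with invented column and group names) of the kind of check such toolkits automate: disparate impact, the ratio of favorable-outcome rates between an unprivileged and a privileged group. Libraries like AI Fairness 360 offer audited, far richer versions of metrics along these lines.

```python
# A toy disparate-impact check. Column and group names are illustrative;
# toolkits such as IBM's AI Fairness 360 provide audited, production-grade
# versions of this and many other fairness metrics.

def disparate_impact(records, group_key, favorable_key, privileged_value):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group.
    A common rule of thumb flags values below ~0.8 for review."""
    priv = [r for r in records if r[group_key] == privileged_value]
    unpriv = [r for r in records if r[group_key] != privileged_value]
    rate = lambda rows: sum(r[favorable_key] for r in rows) / max(len(rows), 1)
    return rate(unpriv) / max(rate(priv), 1e-9)

# Hypothetical loan-approval outcomes (1 = approved).
decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
print(disparate_impact(decisions, "group", "approved", privileged_value="A"))
```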

Transparency & agency

Trustworthy AI inherently cannot be a black box, and we still think it’s a good idea for humans to be able to control these systems. Transparency and agency strike at these concepts by fostering AI systems whose decision-making processes are open, explainable, and accountable, enabling users and stakeholders to understand how decisions are made, identify biases and errors, and take appropriate action to remediate them.

Some tools help us understand the recipes for AI, like Hugging Face’s Model Cards or Stanford’s HELM (Holistic Evaluation of Language Models). Others dive into the models themselves to aid “interpretability” or “explainability” (such as Tensorleap, Seldon, and LIT), allowing users to understand and debug the logic behind AI decision-making. Still other tools systematize the idea of “human-in-the-loop” AI, keeping people in a position to review and adjust AI outputs as part of the process. We think we’re just scratching the surface of interpretability and look forward to tools that give us further insight.
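
As one hedged illustration of what sits underneath interpretability tooling, the sketch below implements permutation importance: shuffle a single input feature and measure how much a model’s accuracy drops. The model interface, feature names, and data format are placeholders; purpose-built tools add statistical rigor, visualization, and scale on top of ideas like this.

```python
import random

# Toy permutation importance: how much does accuracy drop when one feature is
# scrambled? "model" is assumed to expose a predict(rows) -> labels method;
# the data interface and feature names are placeholders for illustration only.

def accuracy(model, rows, labels):
    predictions = model.predict(rows)
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def permutation_importance(model, rows, labels, feature):
    baseline = accuracy(model, rows, labels)

    # Shuffle the chosen feature across rows while leaving everything else intact.
    shuffled_values = [row[feature] for row in rows]
    random.shuffle(shuffled_values)
    shuffled_rows = []
    for row, value in zip(rows, shuffled_values):
        copy = dict(row)
        copy[feature] = value
        shuffled_rows.append(copy)

    # A large accuracy drop suggests the model leans heavily on this feature.
    return baseline - accuracy(model, shuffled_rows, labels)
```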

Security & privacy

It would be harmful if AI used our personal data without consent or were released with vulnerabilities that allowed others to use it for nefarious purposes. Luckily, we see many builders focused on ensuring that AI systems and their data are secure from unauthorized access, use, or disclosure. Synthetic data tools such as Gretel and Statice help preserve privacy by generating synthetic or masked stand-ins for the sensitive data used in machine learning models. Open-source platforms such as OpenMined and CleverHans give AI builders the means to construct secure AI by focusing on private data science and testing for adversarial vulnerabilities, respectively. We’re still in early innings, but as AI penetrates enterprises seeking to leverage their proprietary datasets, tools of this nature are going to become critical for commercial success.
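
As a trivial sketch of one step such platforms generalize, consider pseudonymizing direct identifiers with a salted hash before data ever reaches a training pipeline. Real tools go much further (synthetic data generation, differential privacy, federated learning), but the principle of never exposing the raw identifier is the same; the field names and salt handling below are purely illustrative.

```python
import hashlib

# Pseudonymize direct identifiers with a salted hash before training.
# Field names and salt handling are illustrative; production systems use
# managed secrets, tokenization services, or fully synthetic data instead.

SALT = "replace-with-a-secret-salt"
SENSITIVE_FIELDS = {"email", "phone", "name"}

def pseudonymize(record: dict) -> dict:
    cleaned = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256((SALT + str(value)).encode()).hexdigest()
            cleaned[key] = digest[:16]  # stable pseudonym, not reversible without the salt
        else:
            cleaned[key] = value
    return cleaned

print(pseudonymize({"name": "Ada Lovelace", "email": "ada@example.com", "age": 36}))
```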

Robustness & reliability

We generally want to be able to depend on AI to do the things we design it to do. Robustness and reliability tools aid in constructing AI systems that are resilient and can consistently produce trustworthy outputs. 

Monte Carlo’s data observability platform helps companies increase trust in data by detecting quality issues, such as missing or incomplete data, incorrect values, or anomalies in data patterns. Robust Intelligence develops algorithms and architectures for AI systems that can operate reliably and efficiently in complex and dynamic environments. As practitioners inject AI into enterprise and consumer applications, unexpected behaviors might emerge; AI systems hardened with such tools will prevail. We anticipate these tools becoming more important not only in the “core” but also as AI penetrates personal devices “at the edge”.
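
To ground the idea, here is a minimal sketch of the kinds of checks a data observability platform runs continuously: completeness, validity, and volume. The column names and thresholds are invented for illustration; production systems track hundreds of such signals and alert on drift over time.

```python
# Toy data-quality checks of the kind observability platforms run continuously.
# Column names and thresholds are illustrative.

def check_batch(rows, previous_count):
    issues = []

    # Completeness: flag columns with too many missing values.
    for column in ("user_id", "amount", "timestamp"):
        missing = sum(1 for r in rows if r.get(column) is None)
        if missing / max(len(rows), 1) > 0.05:
            issues.append(f"{column}: {missing} missing values")

    # Validity: amounts should be non-negative.
    if any(r.get("amount") is not None and r["amount"] < 0 for r in rows):
        issues.append("amount: negative values detected")

    # Freshness/volume: a sharp drop in row count often signals an upstream failure.
    if previous_count and len(rows) < 0.5 * previous_count:
        issues.append(f"volume dropped from {previous_count} to {len(rows)} rows")

    return issues
```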

Alignment & safety

Knowing that AI systems are built with values similar to our own and can provide outputs with traceable origins is critical for building trust. Alignment is perhaps one of the most elusive AI attributes researchers aim to decipher.

Anthropic’s constitutional approach to AI places guardrails on models to always be “helpful and harmless.” Approaches such as this are likely to permeate other AI models, but where they don’t, tools such as those offered by Hive AI harness AI to help identify problematic content. For those wishing to distinguish between human- and AI-generated content, tools such as GPTZero and OpenAI’s experimentation with watermarking are in flight. 
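
As a rough illustration of the origin question, the sketch below scores a passage by its perplexity under the small, public GPT-2 model; unusually low and uniform perplexity is one (imperfect) signal of machine-generated text. This is a toy heuristic rather than how GPTZero or OpenAI’s watermarking actually work, and it assumes the Hugging Face transformers library and PyTorch are installed.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Toy origin heuristic: perplexity of a passage under GPT-2. Machine-generated
# text often scores lower and more uniformly than human prose. Real detectors
# combine many signals; treat this as a sketch, not a reliable classifier.

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

print(perplexity("The die is cast; things are about to get really weird."))
```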

As AI-generated content begins to flood our timelines, the ability to moderate that content and distinguish its origins—both as builders and as users of platforms—will make Responsible AI a reality. Similar to the misinformation and fact-checking tools used at social media companies, tools that address the provenance and origin of AI-generated content may surface large opportunities for innovation.

Charting the path forward

These represent only a portion of the tools we need to manifest a world in which Responsible AI is trustworthy AI. We don’t just need more tools—we need better and more comprehensive tools to ensure that Responsible AI can be a reality for all stakeholders.

In front of us lies an opportunity to propel Responsible AI from an ethereal concept to an approachable reality. What we have seen to date is, in our minds, cause for optimism. Still, this vision requires a tech accord of Westphalian proportions—a set of standards and tools built by the AI community for the world. We’re excited to partner with builders who share this vision. 

Responsible AI is a pillar of our investment thesis and process here at GC; we’re fortunate to collaborate with companies like Adept, Hive, Luma, PathAI, Aidoc, Curai, Grammarly, and many more that share this ethos, and we look forward to welcoming many more to the GC family. As founding partners in Responsible Innovation Labs, we’re also committed to contributing to the underlying mindset and mechanism shifts we need to cultivate shared prosperity in this new era. If you’re interested in building AI responsibly at any stage, please reach out.

Further reading on responsible innovation and responsible AI at GC:
