
Responsible AI’s Role in Product Leadership Efforts at Gong.io

A Discussion with Eilon Reshef - co-founder and CPO of Gong


Generative AI has been making waves in the business world as it reinvents content, processes, and offerings. As product teams look to harness its power to drive innovation and growth, responsible practices around its use are becoming paramount.


In light of this critical balancing act, our CEO, Chris Hackney, engaged in a thought-provoking conversation with Gong's co-founder and CPO, Eilon Reshef. Eilon has made Responsible AI a core component of Gong's approach to AI integration, which has been critical to the company's leadership in Revenue Intelligence.


Here is the full video of Chris and Eilon's discussion. As a companion piece, we also pulled together the top Responsible AI considerations for product leaders based on the topics they covered. Check out the other interviews in the Responsible AI Leaders Series here.




Top Responsible AI Considerations for Product Leaders


1. Establish A Culture of AI Governance by Design

When developing AI-driven products, innovation is just one piece of the puzzle. It's equally vital to weave Responsible AI principles into every phase of AI development and deployment. Emphasizing fairness, accountability, transparency, inclusiveness, safety, and security ensures that, as AI solutions reach the market, they reflect the highest ethical standards and best practices of our organizations. (For more on AI Governance by Design, check out this blog.)


2. Make Responsible AI a Cross-Functional Effort

Responsible AI needs to be a team effort. Leadership from specific teams can provide momentum, but it really requires buy-in and contributions from groups across the organization to ensure that AI governance best practices become part of the culture and DNA of the business. At Gong, an AI Governance Council brings together leaders from across the organization to ensure the right controls are in place systemically to drive Responsible AI across the company. (See here for an example of the responsibilities of an AI Governance Board.)



3. Ground AI Decisions in Reality and Data-Driven Insights

Eilon stresses that meaningful interactions trump mere data accumulation, emphasizing the need to share information that "resonates and educates." In an age where AI advancements are often sensationalized, he advises businesses to discern fact from fiction, urging them to understand "what is real versus what the press says."


4. Ethical AI is Non-Negotiable

As AI's prowess grows, aligning its applications with ethical standards becomes increasingly critical. Most companies already have a mission and cultural values that act as their north star in driving the business forward. Start by aligning your approach to AI with those values, then layer in discussions and decisions tied to the unique ethical issues that AI brings with it. (Examples include an extra emphasis on bias considerations and a dedicated focus on AI transparency. Check out this AI Ethics policy template for more examples and a quick start.)


5. Humans are the Heartbeat of AI

AI might be the tool, but it's the human intellect that fuels its evolution and ensures its ethical deployment. More importantly, human oversight of each phase of AI development is a critical Responsible AI component because it provides a check and balance on the outputs of any AI system employed. This is why the NIST AI RMF covers human oversight in depth within its overall risk management structure. (If you aren't familiar with the NIST AI Risk Management Framework, explore this primer.)


6. Responsible AI Needs Internal Champions

Championing accountability and responsibility is often a thankless task, but it's absolutely critical to proper AI governance in a rapidly shifting landscape. Every layer of an organization should feel empowered to champion Responsible AI practices so that risks are mitigated while innovation moves forward. Diverse voices with complementary expertise can make a huge difference in how AI is reviewed and executed. (This ties back to point 2, but also to the idea of designating an AI Officer; here's a sample description for such a role.)


