A Discussion with CallRail Chief Product Officer (CPO) - Ryan Johnson
Continue to explore responsible AI through AI Guardian Founder and CEO Chris Hackney’s interview with CallRail Chief Product Officer Ryan Johnson. Watch the full video, instant insight videos (scroll to the bottom) or read key excerpts below (edited for length and clarity). Check out the other interviews in the Responsible AI Leaders Series here.
Hackney: CallRail has been quick to announce new AI-powered features within its product suite. As an experienced product leader, how are you thinking about and approaching AI in your work today? Where do you see opportunities to integrate AI into your product roadmap?
First, CallRail’s AI products have been in the market since 2016, so the company has gathered a lot of learnings. That history has made it easier to embrace the fast pace of change - taking things from good to great and unlocking new intelligence capabilities.
Because of the customers they serve, CallRail has chosen not to expose open ChatGPT prompts directly to customers at this point. Instead, the work happens behind the scenes to keep things simple and fine-tune the results, ensuring they deliver value to customers.
I take a two-pronged approach to innovation. My driving force is to deliver real value by making what we have the best it can be with best-in-class technology. Then get things into the market quickly with CallRail Labs to assess what is worth turning into a product.
Hackney: How do you think about AI governance and responsible AI? How do you bring it into the process?
AI is going to permeate every part of our business, and no single function is really looking at AI from this perspective. IT, compliance, and security are involved, but they likely don’t realize all the AI applications the product team is considering and excited about, so the product team needs to engage.
We have to start building positions on this. AI is going to impact every part of the organization - the products we sell and the products we use internally. We need to think about how we use customer data, the approval process for the products we use internally, and how we protect IP. Employees might be trying to do the right thing but not understand the potential implications for the business.
We’ve set up a governance committee.
There’s not a lot of information out there on AI governance - you’re starting from scratch a little bit: how to manage this in a spreadsheet, what the approval process is, how to communicate throughout the organization, etc.
It’s exciting. From a product perspective, it is an opportunity to think of this as a company of products - both internal and external - whereas normally as a product leader I’m focused on what we sell. Now I can actually be part of the overall AI strategy, how that fits into our overall philosophy of AI use, and how we govern around it.
Hackney: What has been the reception of AI governance across the organization?
We’ve always been responsible stewards of our customer data. AI is just another layer of that - certainly the most complex layer.
The people at CallRail are embracing AI governance - they are just like “can you tell us what we need to do” because there is no playbook.
Our overall stance is very pragmatic. We see the power of AI. We want our employees to be able to embrace and use the best and greatest tech that’s out there. But we don’t want it to do any harm. We don’t want it to negatively affect our customers. Or our customers’ customers. It’s that balance between the excitement of trying these things and protecting our customers.
We need to be proactive so we have a response when we get a question (or a plan before an incident arises).
Hackney: Have you started thinking about policies that tell employees how they should use AI (rather than how they shouldn’t)?
Not as much as we should. We’re still exploring how to use AI in some areas. Customer support is probably the first frontier.
Overall, it’s a balance of how much you use AI with human touch. It’s a tool in your tool chest, but not the only tool.
Even the “don’t” guidance is really tough. There are many ways around a “don’t” rule. Take a simple rule against using free ChatGPT - there are probably a thousand tools built on top of ChatGPT, and none of your security checks would be able to protect you because they don’t know what’s behind the curtain. A lot of it is education.
We’re in flight on figuring out how to get the most benefit from this technology while protecting the company at the same time.
Hackney: As a product leader, does it feel invigorating having to learn so much?
There’s so much research out there on AI. There’s a lot of discussion about the Art of the Possible - 99% of everything out there on AI is about what’s possible, and 1% is about production commercial use.
We’re at a point in time that’s pretty rare … SMBs and Fortune 50 companies have access to the same technology at the same time. It’s accessible, it’s democratized, it’s affordable. That’s probably the most exciting thing from a product perspective.
Hackney: It truly has democratized data science here. You used to need a massive data science team for ML and NLP - generative AI has cut so much of that middle ground out. You just need to think about how to use your own private data to create your own LLMs.
Hackney: There are things moving with the EU AI Act, the White House AI Bill of Rights, etc. Are you as a product leader trying to keep abreast of that? How do you approach regulation?
It’s a mix. You have to be curious as you’re building products to see where things are going. Partner with other folks like General Counsel, IT and Compliance to triangulate what you think is going to hit first. Then put it in front of executive leadership.
We need to start thinking about this and planning for it now.
Hackney: In a lot of areas, if regulation is too onerous, you can step back. But in areas like AI, it’s such a big thing that you can’t step back regardless - you have to learn to live within it.
Hackney: As an experienced product leader, what advice do you have for other product leaders on implementing AI and strong AI governance? What do you think is critical?
Product: When developing new products, if you focus on the customer (customer need, customer value, customer challenges and problems), you usually win. That doesn’t change. Don’t let the noise push you into releasing things to market if they aren’t ready.
Product practitioners: How does AI change our roles going forward? As good product managers or product leaders, you have to be super curious. I’m excited because I feel like we are going to be able to reduce cycle times and prove ideas with fewer dependencies. That’s half the battle. If you’re not investing in this stuff daily as a product person, you will get left behind. The role is changing, but it’s changing for the better. It’s allowing us to do more of what we enjoy - building really great products - while giving us tools to help us move faster.
Governance: Start early. Even if it’s just 1-2 people, start forming opinions, bring them to the executive team, and educate the organization. There’s an opportunity for product to be a trusted resource. Be curious. Bring in other folks cross-functionally. It gives people career opportunities.
Hackney: When we talk to folks who are doing AI governance well, they all say it’s got to start with the culture. It takes champions, it takes leaders in the organization to build that culture and get others behind them. Governance is always a tough one - no one wants to be the one raising their hand and saying, “what about that thing a year from now that we might have to worry about?” It takes leadership, and bravery behind that leadership, to rally others around it and build it into the culture. It’s less about slowing down what’s going on and more about accelerating AI as you move forward so you don’t have hiccups along the way.