A Tactical Approach to Responsible AI

A Conversation on Responsible AI with Jim Anderson, Beacon Software Chief Executive Officer

Check out Chris Hackney’s latest interview with Beacon Software’s CEO Jim Anderson. Hear how AI is similar to the Internet and Social Media and get a tactical approach to responsible AI. Watch the full video or read key excerpts below (edited for length and clarity). Check out the other interviews in the Responsible AI Leaders Series here.

Hackney: There’s a ton in the media about both the promise and perils of Generative AI. When it comes to generative AI, what's real and what's not? Where are you seeing the real opportunity or threats / issues around generative AI right now?

Anderson: There's so much to untangle there. I'll start with this: I think AI is substantive and real and transformational. It's a big deal. The early days of the Internet back in 1997... remember the dial-up modems we used then? Everybody was like, “this is wonderful. I wish it was faster.” So we all moved rapidly into high speed and now it's effectively a ubiquitous utility. Think of all the things that were enabled because of the Internet. I feel like AI has got that kind of potential. And also that kind of risk. Look at all the bad things that happen with the Internet, too. So we should not be naive about the challenges and the responsibility we have to learn from past mistakes with technological innovation. But I think it is foundational, transformative and will affect every area of every organization. Exactly how, at what speed, and how much is positive versus negative - that's very much to be figured out.

Do you know what people who predict the future are called? Futurists. Do you know what we call people who predict when the future is going to happen? We call them billionaires. So, if you can figure out when this stuff is going to happen, that's the thing. Being too early in technology is just as bad, in many cases worse, than being too late. So timing is everything.


But if you're not thinking about that, especially in your organization, you run the risk of actually going slower rather than faster because you didn't think through the issues.


I love what you're doing - tackling responsible AI and governance. That's sometimes a dry topic. Some people love to sit around and talk about governance. Most people would rather just do stuff. But if you're not thinking about it, especially in your organization, you run the risk of actually going slower rather than faster because you didn't think through the issues. And then you end up with a backlash or a set of unintended consequences because you didn't think it through.

Hackney: How are you thinking about generative AI at Beacon?

Anderson: We think about it a lot and in a couple of very tactical ways. One is just something like this. You're recording this obviously for use later. You probably have a tool that will automatically transcribe it, right? That's effectively using AI for transcription. Well, there are tools that automatically distill the meeting, right? Take this long transcript and distill the essence of it. It's maybe half good depending on what tool you use, just like meeting notes taken by people are sometimes half good. I think a lot of people expect perfection. If it's short of perfection, then the glass is half empty. I look at it the other way. I say, “the glass is half full. Would you rather start with the proverbial blank sheet of paper or would you rather have something transcribed and distill a meeting you were in?” And you're like, “yeah, that's right. I just did a couple of tweaks then in 5 minutes I've made a pretty good set of meeting notes instead of taking 25 minutes to make a pretty good set of meeting notes.” Those kinds of things are what we're doing.

On the software development side, GitHub Copilot has gotten a lot of traction. It makes developers vastly more productive. Roughly half of code seems to be completed by GitHub Copilot in some way, shape or form. I'm not sure that literally translates to a 50% productivity increase for developers, but it is a measurable productivity increase.

Hackney: You've always had the myth in tech circles of the 10X developer. But I really think that generative AI is going to create the 10X everything. You're going to have the finance person … your analyst might even become 20X.

Hackney: Are you guys thinking more about generative AI in the back office now, things that aren't facing customers?

Anderson: Yes we are. First off, we're an enterprise software company. We know B2B. There's no reason for a consumer to log into Beacon. So, of course, it's “what can we do to help our clients with their software projects?” and help improve those statistics. We do a lot of work with governments. Various people have written that 70% of software and IT projects in government fail. We don't believe that has to be the case. We bring a variety of things to bear - our own software, but included and embedded in that is all of this thinking about how to use AI smartly. Ways to actually help all of this work better, but in a compliant way. There are lots of details around that, but it's not hard to imagine half a dozen ways AI can contribute to that mission.


But it's naive, and I think not even just naive - irresponsible, to say, “Oh no, government should not be involved. Let industry figure it out and then we'll let you know when we're ready for government to be involved.” We've seen how that works, right?


Hackney: Since you mentioned your work with government entities … we had big news this week. Both the White House executive order on Monday and the OMB piece yesterday on federal agency oversight. How are you seeing those regulations? Has this been a subject that's starting to be talked about on your end?

Anderson: Not much with our customers, because this is all very new and you know how long it takes to propagate from an executive order or OMB action all the way down to the agency level. So I think this is a months-, if not years-long conversation. But I will tell you my personal opinion. I view it as a step in the right direction. Is it exactly right? I'm sure it's not. Are there legitimate criticisms that maybe we shouldn't be governing by executive order, that instead Congress should take action? Well, sure. That's a fair point. There might be bigger issues with Congress than them not taking action on AI right now. We live in the world we live in - not the world we wish we lived in. And so I welcome these kinds of conversations, even debate, about “hey, this executive order is wrong and here's why.” We're never gonna get anywhere without talking about these things. And as we've seen before, executive orders can be litigated quite extensively. So we'll probably see no small amount of litigation going on around that, and that's fine. It's a very messy process, but at least the process is going and we're having these conversations.

I contrast that with the area that you and I know quite well - social media - where I think we were naive about the negative effects, or overly optimistic. “Oh, that won't be that big a deal.” Well, guess what, it was that big a deal and there wasn't regulation. You can overdo it on regulation. You can underdo it on regulation, obviously. Like Goldilocks and the Three Bears, finding the one that's just right is the art of it.

Hackney: I remember the days when Mark Zuckerberg could go to Congress and be treated like Sam Altman is today. Lauded genius. “What do we do next?” And then years later, he's the most vilified person ever put in front of Congress.

Anderson: He was probably not as smart or as genius the first time around and not as dumb or malicious as he was the second time around. Everything's in the middle. But let's not try to do that exact same thing. We won't make the same mistakes again. We'll make a whole new set of mistakes. That's another one of my favorite sayings. So of course we're not gonna repeat the exact past, but let's at least be smarter about it. And I think the executive order, and the action in general from governments, is a good step in the right direction. But it's naive, and I think not even just naive - irresponsible, to say, “Oh no, government should not be involved. Let industry figure it out and then we'll let you know when we're ready for government to be involved.” We've seen how that works, right?

Hackney: Yeah, that does not work well. So are there parts of regulation or responsible AI areas where you're interested in seeing advancements? Other areas that you really feel like they should be tackling at this stage?

Anderson: No, not really. And I say that because I don't think too much at the policy level. When you're building a company, it's all about executing well on behalf of your clients, and most of that is tactical. “A bad strategy, well executed, is better than a great strategy, poorly executed” is one of the aphorisms. I generally believe that. We spend most of our day, all day, every day, on those tactical items. So I'm not thinking so much about broad, sweeping AI policy; I'm thinking very tactically.

I'll give you an example. We're recording this meeting. So imagine going into a government agency and saying, “hey, we want to record this meeting.” Recording still has this air (even today, in 2023, post-pandemic and post-Zoom, where we're all way more comfortable) of “is this video going to be used against me in the future? Is this going to become part of a deposition and a court proceeding?” And I guess it's possible. It's all digital information and who knows what'll happen in the future. But if you view every video recording as a potential deposition and therefore don't want to have any video recordings, then you're effectively shutting off an avenue of productivity. “OK, I can't get an automatic transcription of the meeting. I can't get an automatic distillation. I can't automatically flag action items and follow-ups.” Again, very boring tactical details. Who cares, other than in the moment, about the list of action items three years from now? In litigation, maybe. I guess it's possible that would become relevant. But that's the concern I have: that we end up shutting off avenues of innovation if we're not careful, for almost mundane reasons. That video is weird and makes people feel uncomfortable is a good example.

Hackney: It's one of those interesting areas where, when we've talked to leaders, they're starting to get nervous, particularly around things that can move markets. The idea that, “hey, I've got an earnings call and someone takes that earnings call and manipulates it. Takes one of my answers and changes it. And before anyone can validate it, the market's already moved on what ‘I said.’”

Anderson: That's a good example, especially when you talk about markets and seconds, right? Seconds matter in markets. So a quick manipulation that takes five minutes or five hours or five days to correct is a big deal in the context of markets. I will say most of what we end up doing is not market moving. Most of the projects that I'm involved in are day-to-day work with clients. If something gets confused for five minutes, it's just not that big a deal. If something gets confused for five hours or even five days, it's probably not that big a deal. And frankly, a lot of what we do is not interesting enough that somebody's going to come in and manipulate the action items from a meeting. I don't want to be naive to the misuses, and I'm sure there are things I'm not thinking about, but by and large, I don't think you want to make every decision based on the market-moving potential of a deep fake video of an earnings call, right? That's a particular use case that's sort of high pressure and high stakes.

Hackney: I think that's where you'll see those pressure points start because they always follow where the money is. And if there's money there, that's probably where it'll get tested first.

Hackney: So let's dial this back to you guys then. So you're a typical tech company, IT company. How are you guys thinking about Responsible AI now? On a scale of 1 to 10? When you are thinking about it, what are you doing or thinking?

Anderson: On a scale of 1 to 10, probably 3. In a tech company, things move quickly. It's great President Biden issued an executive order. Let me know when it matters to me is the question. So we keep an eye on it. That's not to say that we don't try to behave responsibly every day. You want to anticipate, and so we definitely look at things through the lens of how AI can help what we're doing. I need to write a proposal to a client. I can go find an old proposal I wrote on my hard drive or in the cloud somewhere. Or I can go to ChatGPT or some other large language model and say, give me a draft of a proposal to do X, Y and Z for this type of client. And if you've done that kind of thing, you're like, holy cow! Of course it doesn't have the content, but the structure and the outline and the sections and the greeting and the closing and all that - you're like, “wow, that's a real time saver.” That's one example. I need to write a safety plan for lane closures on a traffic corridor. OK, well, guess what, a thousand of those have probably been written just in the past year. I could look on my hard drive or in my cloud and find some, but these large language models can find a lot more. So in many ways, those bigger areas that are slower moving, that have more documentation, standards, guidance manuals, design manuals … those heavyweight kinds of things, requirements documents … the more of that there is, it's an information distillation exercise. So that's the more useful end of the AI spectrum versus “we're just doing everything by the seat of our pants and there's not a good written road map for any of this.” You're going to predictably get less value, I think, out of most AI queries than if there's a lot of data on which they can do their thing.

Hackney: I would agree on that. The summarization, or at least the finding and distillation pieces, are probably its easiest early strength, especially in arcane areas like the ones you might be going into.

Anderson: Yeah, exactly. I mean, it's not arcane to the people who do it, but it's certainly arcane to people who are like, “oh wow, I've never done one of those.”

Hackney: Have you or your organization looked at any policies around AI yet? Or even just a memorandum on what the use of AI should be? How are you just thinking about how people at the company should use it? Because you trust yourself to do that, but you have to manage a whole organization around this. How are you thinking about that?

Anderson: That's where being on the relatively small side benefits us. Some of your listeners may be in organizations with hundreds or thousands or tens of thousands of people. And I think that does open up a whole new set of challenges. I know I'm behaving responsibly, but how can I make sure the organization, the people on my team, my peers are? I think that's actually a very hard question, and it's a very different question because it gets quite a bit into organizational dynamics. How heavy-handed do we want to be? And just like I was saying with the video … we should have a policy about when and how we record meetings and what we do with that. For a large organization, that sounds like a six-month exercise with multiple committee meetings and probably a 60% chance that it ends in failure and lack of consensus, right? That doesn't mean we shouldn't talk about it or do the work, but it's gonna be hard. That's very different than a small, 20-person organization that can move more quickly and make decisions with less of that consideration, just because of the different nature of the company. So we tend to be on that smaller end of the spectrum, where we're not thinking about the longer term. You know, “wow, this is a policy that's gonna live on for years and we need to get it right across multiple functional areas.”

Hackney: That's always the toughest thing for small and medium business. And it's actually one of the things I liked about the executive order: it bifurcated requirements between large organizations and smaller organizations. Regulation is easy for larger organizations to handle, because they've got the dollars. They can go hire McKinsey or Deloitte to come in and implement it, or SOC 2 compliance is easy at that level. But when you're a small organization and you've got an RFP that requires SOC 2, you look around and go, how do we get this?

Anderson: Yes, it's a little bit more of a challenge. And that's true, and it's completely independent of AI. That's been the challenge for small companies all along: I don't have the big infrastructure, the compliance history, the certifications that will reassure you. But small companies have grown into big companies all the time. That's part of the journey - you've got to work your way through that and effectively punch above your weight class if you want to be one of those big companies. That downside is hopefully offset by a focus and agility that may be harder for a bigger company to match.

Hackney: I want to drill into data privacy. The pathway on data privacy had Cambridge Analytica and events like that, which led to GDPR and CCPA. Do you see this path going the same direction, where we'll have some events that blow up and speed things up, or will governments speed up on their own? How do you see this comparing to how we saw data privacy legislation?

Anderson: I sure hope it doesn't map to that, because I don't think we had a very good collective path around data privacy. And some would argue there is no data privacy or there's almost no data privacy. Clearly the EU is taking a leadership role on trying to change that.

I spent a lot of time with data privacy lawyers in my previous life. I had a data privacy lawyer who said, “hey, you need to quit thinking about this as privacy and start thinking about this as compliance.” And he was talking about GDPR in particular. And I thought that was really interesting, because I think most of us want to jump to what it's trying to accomplish, which is privacy. And in many ways that's a mistake, right? It's nice to understand what it's trying to accomplish, but what you have to comply with is not what it's trying to accomplish, but what it says. You'd love to think there's no distance between those two, but there often is. In fact, there always is. Law-making and regulation-making is an imperfect process at best and an awful process at worst. So you've got to understand both of those. What do I have to comply with? What's the intent? And that's a little bit of a forward-looking risk-reward. I don't think too many people wake up in the morning and want to be malicious or irresponsible or act badly. And yet a lot of bad behavior ends up incrementally coming out of “oh well, the law doesn't require me to do this. This isn't so bad.” That progression, and then, before you know it, you end up in a Cambridge Analytica situation, where if anybody were to sit down and describe that from scratch, I think everybody would have been appalled and said we can't let that happen. But it didn't happen all at once. It happened a little bit at a time. Progressively, people just kept pushing the boundaries. And at the risk of dating this podcast, if you're a college football fan, which I think you are, you look at what's going on at the University of Michigan with stealing signs from other teams - it's kind of fascinating. I think that's another one of those things that started small. And then before you knew it, you had this whole systematic intelligence-gathering operation that everybody's looking at and saying, “holy cow, how did that get to be so big?” And I guarantee it didn't start that big. It's just those things - they start small, and then before you know it, they've gotten out of control.

Hackney: Yeah, it's a classic case of you start in a grey zone and everyone's like, “oh, that's not a bad grey zone. Everyone does it.” And then you just keep moving.

Anderson: Not to make it about that emerging scandal - we don't know how that ends. But you've got a very enthusiastic young person who wants to overachieve and it's hard to communicate … “hey, I love your enthusiasm. I love your passion for doing your job. I don't necessarily love the judgment you're showing about how far you take that.” We've all seen situations like that where somebody who's enthusiastic just goes off in a direction without the controls, without the constraints, without the supervision and you're like, “wow, that's not what I expected.” And so we shouldn't be surprised by that kind of thing.

Hackney: You could have verbatim described our journey through social media. It's the same way, right? The saying was, “if you aren't buying the product, you are the product.” We all knew that in the industry. But it was always for a good purpose to start. It was always “hey, I know this about you. I can serve you better ads. I can serve you better products.” But then it became so pervasive over time, with so many third-degree applications, that it became a very different beast.


We are all going to find day-to-day tasks that we have suddenly just started to rely on AI to make better.


Hackney: Where do you see things in 12 months out with generative AI? What's your view of this world?

Anderson: I again am going to get tactical on you. I'm not going to give you a big sweeping prediction. I'm going to give you this prediction: We are all going to find day-to-day tasks that we have suddenly just started to rely on AI to make better. So if you're a software developer, you're using GitHub Copilot and becoming more productive. You're probably already doing that if you're a software developer. If you spend a lot of time in Excel spreadsheets, you've found this little AI trick that has made you vastly more productive. If you write a bunch of documents, you'll find some new tool that helps you do what you do better. And that's one of the exciting things - these new tools are coming and going every single day. How many of them get folded into the core capabilities of the big large language models and the big tech players? How much room is there for innovation by smaller companies on top of that? I think that's all very much in flux. But we are all going to use AI. Which of us wouldn't use something that makes our job easier or better on a day-to-day basis? So looking at that 12 months out … what am I doing today, and how am I doing my job differently than before AI came along? I think we will all have probably one, two, three examples of “yeah, I didn't even think about that. I do this, I do that, I do the other. I couldn't do that 18 months ago.”

Hackney: Yeah I think you're 100% right on that. It's very easy for us to find the efficiencies with AI right now. Like what are the things to do today. And I think that's going to be great for bottom lines, it's gonna be great for leaders like yourself where you're like, “hey I can now 10X what we're doing.” I get really excited about the things we don't know, which is what's the innovation. It's when the marginal cost in these things goes to zero. What are the things you could never do before that now suddenly can be done at scale at a reasonable cost?

Anderson: I love that too. That's the billionaire point from earlier. If we could figure out what that is and when to do it… it requires a lot of imagination. Predicting the future and imagining what's possible with capabilities that you're still grappling to even understand. How could we ever have imagined social media before the Internet? How could we ever have imagined artificial intelligence before the Internet and computers? You've got these things that you have to take for granted before they can bake into your creative process. At least if you want to be credible with it. Maybe there's some small subset of people who can imagine and look around corners and see what's gonna … it's almost like you're playing three-dimensional chess at that point, trying to figure out an infinite number of permutations. So I think for most of us, that starts to become real when all of a sudden you're in a brainstorming meeting one day, or you're trying to solve a real client problem, and you're like, “I could do this - and wow, if I'd thought about this 18 months ago, I would never even have had that idea because it wouldn't have been possible.” Beyond just efficiency and “I did a 30-minute task in 20 minutes,” the “wow, I never could have even done that” is far more exciting.

Hackney: It's been a long time since it's been this exciting to be an innovator in a field. Your next billion-dollar-plus companies are going to come out of this period right here. It's a bit scarier to be a big company, because I think consumer behavior is going to change dramatically, just like it did with the Internet. People get a hold of it … you don't know what it is, but people take control of information and other things in ways they never did before. And how does that change behavior and buying patterns, and create the Googles and Amazons of the world?

Anderson: It's funny, I'm reminded… one of my brothers made a comment back in the early days of the Internet like “oh, I would never give my credit card number to anybody and buy something online.” And that was a reasonable position at that point. It didn't age very well as a prediction. I’m fairly confident that now he buys plenty of things online, but it was illustrative. It's like at that point you know the whole risk profile looks sketchy. HTTP versus HTTPS. Secure commerce. Entire industries built around that and making people more comfortable. Look at where we've netted out.

Hackney: Yeah, if you can nail that, then you're the next … I remember the same things about “I would never get in a stranger's car” or “I'd never let strangers in my house,” and Uber nails it. Airbnb nails it. And those are billion-dollar-plus ideas.
