Is this How the World Will End or How Smarter Business Begins? A Practical Discussion about AI (Part 2) - November 6, 2023
For this episode of The Bridge, we’re bringing you part two of our live panel discussion from the 2023 Bridgepointe Tech Summit. We answer some of the most pressing questions about artificial intelligence and the pressure leaders currently face to adopt this technology.
For this practical discussion on the hype and realities of artificial intelligence, I was joined by:
- Dan O’Connell, Chief AI and Strategy Officer, Dialpad
- Karen Bowman, Vice President, Channel/Alliance, Level AI
- Tushar N Shah, Chief Product Officer, Uniphore
- Raghu Ravinutala, CEO, Yellow AI
During part two of our discussion, we get into how businesses are using AI technology today, where this technology is going in the future, and so much more.
Topics covered in this episode:
- The concept of superintelligence and the singularity, with varying opinions on its proximity.
- The idea that technology is inherently biased due to human input, and AI might help reduce bias by relying on data and algorithms.
- The emphasis on practical AI applications and real-world outcomes, such as automation and improved productivity.
- Speculation about the possibility of AI becoming self-aware and the implications of such a development.
- Concerns regarding the development of autonomous weapons capable of making decisions without human intervention and the need for regulation in AI.
- Addressing biases and ensuring transparency in AI systems.
- The potential misuse of AI by bad actors and the challenge of normalizing innovation and regulation.
- The importance of transparency in data usage and sharing among companies and consumers.
- The balance between innovation and regulation, with a focus on transparency and accountability.
- The role of both software and hardware in AI regulation and the responsibility of various stakeholders in the AI ecosystem.
- The need for regulation to prevent catastrophic events and ensure human intervention in AI systems.
- The importance of accountability in AI systems and the intention of regulation to drive accountability.
- Practical applications of AI, particularly in the area of customer experience.
- The impact of AI on cognitive decline and emotional intelligence (EQ).
- The potential for AI to improve customer service and business efficiency.
- The AI Act in Europe and its implications for AI regulation.
- The responsibility of individuals and organizations in determining how AI is embraced and used.
- AI’s impact on human interaction and generational challenges.
- The potential for AI to be both a solution and a challenge in the context of education.
- The importance of regulating social media and addressing its impact on youth.
- The opportunity for businesses to capitalize on AI technologies.
ABOUT DAN O’CONNELL
Chief AI and Strategy Officer, Dialpad
Dan is the Chief AI and Strategy Officer at Dialpad. Previously, he was the CEO of TalkIQ, a real-time speech recognition and natural language processing startup that Dialpad acquired in May 2018. Prior to TalkIQ, he held various sales leadership positions at AdRoll and Google.
LinkedIn. https://www.linkedin.com/in/droconnell/
Web. https://www.dialpad.com/
ABOUT KAREN BOWMAN
VP of Channel and Alliances at Level AI
LinkedIn. https://www.linkedin.com/in/karen-bowman-5a52073/
Web. https://thelevel.ai/
ABOUT TUSHAR SHAH
Chief Product Officer, Uniphore
Tushar Shah has held numerous leadership roles throughout his career. He joins Uniphore from NationsBenefits, where he served as President and CEO of Fintech, driving approximately $1B worth of spend. Prior to that, he was a senior vice president at PayPal, where he led product and engineering and managed credit, identity, risk, compliance, customer service, and machine learning and AI platforms. Shah also played a role in critical aspects of the separation of PayPal from eBay in 2015 and in the rapid growth of PayPal's market cap since then. His other leadership roles include a decade at Bank of America as a senior technology executive for consumer banking, e-commerce, sales and service, and software engineering. Earlier in his career, Tushar worked in IT for Teleglobe (Bell Canada) and in customer care for Broadslate Networks and WorldCom (Verizon).
LinkedIn. https://www.linkedin.com/in/tushar-n-shah/
Web. https://www.uniphore.com/
ABOUT RAGHU RAVINUTALA
CEO, Yellow AI
LinkedIn. https://www.linkedin.com/in/raghuravinutala/
Web. https://yellow.ai/
Tushar Shah:
I think the singularity is near, because it actually impacts biology. The fact that AI is now moving into the biological ecosystem, with DNA manipulation and those types of things, brings in this concept of superhumans. So it may not be the machine that takes over; there could be superhumans. If we want to think out of the box a little: Captain America, right? Getting that injection of that serum, and the scrawny guy becomes Captain America. That's essentially what AI is looking at and potentially bringing in: how do you manipulate DNA?
Raghu Ravinutala:
Okay, so you're a yes. I believe the singularity is a long way away. The largest application we are working on is LLMs, and if you just break down what an LLM is, it's a next-word predictor at the end of it, right? We have been working on AI research for the last 30 years, and it's very hard to get large, highly adopted AI applications; with LLMs, we are seeing that at scale for probably the first time. Getting to singularity, to this kind of superhuman intelligence, needs a ton of adoption. It's not just about the basic algorithms but about how well things get adopted, so you keep collecting data and improving. I believe that's quite a long way away.
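For readers who want Raghu's "next word predictor" framing made concrete, here is a minimal sketch of what a language model does at inference time: score the whole vocabulary and append the single most likely next token, over and over. It assumes the Hugging Face transformers library and the small, publicly available GPT-2 checkpoint, chosen purely for illustration.

```python
# Minimal sketch of "an LLM is a next-word predictor".
# Assumes: pip install torch transformers; the small GPT-2 checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The singularity is", return_tensors="pt").input_ids

# Generate 10 tokens greedily: each step scores the entire vocabulary
# and appends the single most likely next token.
with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits      # (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()      # most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Everything more impressive an LLM appears to do emerges from repeating that single prediction step at enormous scale.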
Scott Kinka:
We might as well. Let’s just get everybody’s score on this one. We’ll see who wins.
Karen Bowman:
I'll chime in on this one because, first of all, I was a physics major, so singularity to me meant the thing that happens past the event horizon, the death of a star into a black hole. I thought, huh, I don't know, maybe that does relate to AI. So I looked it up, and it is kind of that: when this technology takes that next step and starts developing things we couldn't even possibly think of or do, somewhat autonomously. So I think there could be things we can't even imagine that could come from that singularity, and bias matters too: at the end of the day, when you're building these models, you're training them; you're giving them the data and the input to use. I think there's a good long time where we're going to have a lot of practical applications: automating things, doing deeper analytics than we've ever been able to do before, and providing true value back to our customers based on that. So for the strategists and the folks here, those are the concepts that are important to drill down on with customers. Yes, there's hype; yes, there's this thing we've learned about called a singularity used in a different context; but there are also real-world outcomes you can get, if you do a hundred percent auto-scoring on all your agents, for example. We had a customer that went from 30 minutes to find something to score down to five minutes; that's six times the productivity. I think that's where this conversation becomes real for all of us.
Dan O’Connell:
Yeah, I was going to say, I think it’s possible, probably not near, so I agree.
Scott Kinka:
You’re in the middle.
Dan O’Connell:
I’m in the middle.
Scott Kinka:
If you’re keeping score, we didn’t come to an answer by the way. We don’t have agreement one way or the other here, but fire away.
Dan O’Connell:
But I think what's important is that every technological step always takes more time and is harder than we initially think. And I think we are really enamored with large language models, rightfully so; there are some really practical use cases we're already seeing. But you can look back at autonomous driving: we thought 10 years ago that autonomous driving would be solved, and it turns out that's a really difficult problem, whether it's in an urban environment, let alone driving on rural roads you can't actually map. So yeah, these things are possible, but they're going to take more time. In our lifetime? I think there's a good shot at it.
Scott Kinka:
And I probably should have explained up front how the singularity is being used as a term around AI. Ray Kurzweil wrote about it in a book, The Singularity Is Near, nearly 20 years ago, but this is basically the moment at which AI moves from being referential to self-referential, meaning it wakes up and knows it exists, as opposed to just executing our commands. And we'll leave that there. The fun behind this for the science fiction nerds in the room, me included, right? I love that. Even if it gets there, we probably just won't know, which is really the issue, right? I mean, it's not as if it's going to say, "Hey, I woke up today."
Karen Bowman:
Well, think if it creates different dynamics and decides that humans could be useful to it in a different way. Classic, like The Matrix: we could all be batteries. "That would be a good thing, let's plug them all in, they won't even know." So I think those things are postulated and possible, but I agree, I think it's a way off. Maybe not as far as we think, though.
Navigating AI Regulation: Balancing Innovation and Security
Scott Kinka:
Okay. Alright. I have RoboCop up here, but I pulled the quote, right: "Development of autonomous weapons that can make decisions without human intervention raises concerns about the potential for AI to be used in warfare and other disruptive applications." And I'm not going to dig into that so much as to say this is where we cross into the regulatory thresholds. Alright, we already have Tushar saying you're supportive of legislation. Can you talk a little bit about what's going on here, and then we'll all unpack it together?
Tushar Shah:
Yeah, I think for us it's more about ensuring that bias doesn't exist. This is a significant moment for technology; it's almost, like you said, cloud, mobile, and the web all rolled into one, and now we've got this explosion of technology enablement. I mean, look at what happened with GPT-2: the biases that existed when you put in "what does a white male do?" versus "what does a white female do?", and the responses that came back based on the data, the knowledge, it had consumed.

So for us, as we think about the work we execute with our clients, making sure they can actually explain where the data came from, the explainability, the ability to trust the information that's been consumed and the knowledge incorporated into their ecosystem, is extremely important. And I think that data lineage and understanding is going to be a critical point in the regulatory landscape, because even now, as soon as OpenAI came out, there were lawsuits in California saying, hey, you didn't have authorization for my data; where did you get it, how did you get it, and so on.

So as we move into this realm of understanding, that's why, when you think about the enterprises and suppliers you interact with, those suppliers with large swaths of customers in different industries can't just leverage large language models to a certain degree; they have to leverage the data they have, data they've tokenized and kept secure, in a way that can show that lineage and say to a regulator, here's how we made a credit-based decision, for example, or how we decided whether a customer got an upgrade. That explainability is going to become extremely important, because, to the point Raghu was making, there's a community of people who are trained to go do this, but the larger population doesn't understand the nuts and bolts, the ones and zeros, that actually went into creating it. And when you embrace the regulatory conversation, you're part of the decision, versus pushing back against it. You see so many times where companies push back and the regulators squeeze even harder, because they're like, hey, what are you hiding? Why are you not part of this? Why do you not want to solve this? Ultimately, everybody needs to understand why you made the decisions you made.
Scott Kinka:
Personally, I agree with where you are. But here's the challenge, and anybody else can add to that answer while I throw this other question out there: the bad guys don't care about the regulations on AI, and they have access to the same large language models. How do we reconcile those two things at the end of the day? Anybody else?
Karen Bowman:
I always look at this question as: what's one of the worst things out there? Atomic weapons. Do we have regulation on that? Yes. Do we try to figure out who else can do it? Yes. And will we potentially have to develop ways to combat it? Yes. Because given current world events, I don't think this is going to change human behavior anytime soon. I love to think of all the positive things, but we need to be cognizant that in the wrong hands, with a different bias or perspective on the world, a lot of stuff could happen. And then how do you regulate that? Do you look at who's buying those NVIDIA chips? Do you look at what other types of things they're producing? To Tushar's point, there needs to be something out there. And even if there isn't, we as a country need to be aware that this could be creating super chemical weapons, or nanotechnology weapons; you could get way out there on it. I don't think we should sit back and say, oh, this is cool and fun, and not be attuned to the fact that it could be problematic.
Dan O’Connell:
Yeah, these are all fantastic points. The thing I would add is that there's always this balance of innovation versus regulation and protection, and that's always a challenge. On the regulation side, I'd really like to see not just unilateral decisions, but ones that have buy-in from everybody around transparency around models. I think the onus is on companies to be clear: here's the data, here's how we leverage it, here's how it's being used. If there are other parties they're sending the data to, everyone should know that. Is your data being used and trained on? So the regulation I personally care about is transparency for all of us as consumers. And then, getting into some of Karen's comments, I think it's a combination of regulation on the software side and on the GPU providers, the NVIDIAs, AMDs, and Intels of the world. It can't just be software protection; you can't just say, hey, our large language model is never going to answer how to make anthrax, because you can go to Google today, ask that same question, and get a similar answer. So there's a software layer that needs to get regulated and there's also a hardware layer, and it provides me some reassurance that at least NVIDIA, AMD, and Intel are having those conversations as well.
Scott Kinka:
Fantastic.
Dan O’Connell:
Regulation.
Scott Kinka:
Absolutely, I think it's here. It's probably paramount.
Raghu Ravinutala:
Both post facto and pre facto. A really bad event can happen and we can explain it afterward, but a lot of regulation is needed to keep that event from happening in the first place. Take the example of GDPR: it makes sure, up front, that privacy is guarded for individuals. So one part of the regulation is going to be about where the responsibility lies if something bad happens: is it the implementer of the AI, the model creator, or the end user? And unlike historically, where regulation lives on paper, I believe AI regulation is going to exist as AI software that goes and checks all the systems using AI, verifying that seriously threatening events have clear human intervention before those events are triggered, before the abuses come in. That's why it's even more important that companies like us, and all the AI companies, work towards creating this regulation. In my opinion, the regulation has to be in terms of software, not written documents.
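As a sketch of Raghu's "regulation as software" idea, here is a hypothetical human-in-the-loop gate: actions an AI system scores above a risk threshold are blocked until a person approves them, and every decision is logged. The risk scores, the threshold, and the console prompt are all illustrative assumptions, not any real regulatory standard.

```python
# Hypothetical sketch of "regulation as software": a human-in-the-loop
# gate in front of high-risk AI actions, with an audit trail.
from dataclasses import dataclass, field
from datetime import datetime, timezone

RISK_THRESHOLD = 0.7  # illustrative cutoff, not a real standard

@dataclass
class AuditRecord:
    action: str
    risk_score: float
    approved: bool
    approver: str | None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[AuditRecord] = []

def execute_with_oversight(action: str, risk_score: float) -> bool:
    """Run low-risk actions directly; escalate high-risk ones to a human."""
    if risk_score < RISK_THRESHOLD:
        audit_log.append(AuditRecord(action, risk_score, True, None))
        return True  # autonomous execution allowed
    # High-risk: require explicit human sign-off before proceeding.
    answer = input(f"Approve high-risk action '{action}' "
                   f"(risk={risk_score:.2f})? [y/N] ")
    approved = answer.strip().lower() == "y"
    audit_log.append(AuditRecord(action, risk_score, approved, "on-call reviewer"))
    return approved

execute_with_oversight("send routine status email", 0.1)     # runs automatically
execute_with_oversight("initiate large wire transfer", 0.9)  # waits for a human
```

The audit log is the piece that speaks to accountability, which Dan raises next: every autonomous action and every human approval is recorded with a timestamp.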
Scott Kinka:
Got it.
Dan O'Connell:
Sorry, I was going to add to his point: part of the intention around regulation is to drive accountability. If the system does something wrong, who do we get to point the finger at? As humans, that's what we want; we want to know that somebody is accountable for what's happening.
AI’s Influence on Education and Human Interaction: Balancing Benefits and Risks
Scott Kinka:
We could be on this one forever, so I'm going to jump ahead a couple of slides, and then we're going to open this up for questions. So for our runners, get ready; we're going to come to you in about five minutes. Let's talk about some real-world applications. This is a chart from Gartner, which you certainly don't need to read, but if you're paying attention out there, you'll find that one of the areas where the practical applications are largely happening is democratized use of the tools, people using ChatGPT for personal productivity. And in the business sense, much of the money is moving into customer experience. So the question is sort of a two-part question, and I'll let everybody grab it. Why is everybody moving towards customer experience? Is it just that we feel differently about customer interaction post-pandemic, and the technology of the moment happened to meet that? Or is it that AI really is that well tuned to do that job first, of all the jobs we want it to do? And then talk about your business and how you're approaching it; in that vein, give us a real-world story. We'll just go right down the line; we'll start here and go that way.
Dan O’Connell:
Yeah, so at Dialpad we're a customer intelligence platform, just for the two-minute spiel. We can power communications on any device, and the way I talk about us is that we can understand those conversations to drive automation, assistance, or insights. We focus a lot on the sales and customer success or support side, because that's ultimately the fastest way to drive revenue for a business: you help a business figure out how to churn less or sell more. In today's macro environment, it's obviously harder to sell more than it is to take an install base and drive efficiency there. And as we've talked a little about these applications for AI, these technologies are really pertinent and can up-level both the sales experience and the customer success experience, whether it's automating the quality assurance process or inferring customer satisfaction from every conversation, so nobody ever has to send out a survey. These are all very real-world examples that are easy to pull off, that are simple to talk about for all of you as you engage with different businesses, and that provide very real value to the businesses out there. Karen?
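To illustrate Dan's point about inferring customer satisfaction without surveys, here is a minimal sketch using the Hugging Face sentiment-analysis pipeline; the transcripts and the satisfied/at-risk mapping are invented for the example and are far cruder than what a production platform would do.

```python
# Minimal sketch: infer customer satisfaction from call transcripts
# instead of surveys. Assumes: pip install transformers (plus a backend
# such as torch); transcripts and the CSAT mapping are illustrative.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a small default model

call_transcripts = [
    "Thanks so much, you fixed my issue in two minutes!",
    "This is the third time I've called and nobody can help me.",
]

for transcript in call_transcripts:
    result = sentiment(transcript)[0]  # {'label': 'POSITIVE'|'NEGATIVE', 'score': ...}
    # Map model output to a crude inferred-CSAT signal; no survey needed.
    inferred = "satisfied" if result["label"] == "POSITIVE" else "at risk"
    print(f"{inferred:>9} ({result['score']:.2f}): {transcript}")
```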
Karen Bowman:
Yeah, I would agree totally. And I think some of it, too, is our more modern technological world, and maybe some of it is driven by the pandemic: I never bought anything off Wayfair until the pandemic, when I couldn't go anywhere. Customer service is really the modern front door to most businesses, and there's a ton of investment and money put into it. Generally speaking, one of the largest expenses is the human capital, the human beings doing it. So when you look at that, you ask: how do we get better at doing that? Take some of the simple things, like call summarization, and this gets into bias in a different way. If an agent summarizes the end of an interaction in a call, that's the agent's perspective. Now we can have AI summarize the call with much less bias, sticking to the factual things, the natural language understanding of the words and the intent of the words that were said. That gives you a much more accurate summary of what happened than an agent might. The other thing is auto-scoring. If you have to go into a large contact center with tens of thousands, who knows, hundreds of thousands of interactions to find what to score, it's almost like, why bother? It's going to be such a slanted subset of what's happening. We had a customer, a global faith-based organization that does a lot of fundraising, that was only able to get to 4% of their interactions. They were able to get 75% more reviews, which led to more coaching and a better understanding of the agents and how to handle calls, and in one year they realized an ROI on just that of $24 million in revenue. These are real things you can go to your clients with and talk about, and the CX space is just ripe with them. And I think yesterday Prater and Casson did a really good presentation talking about this: think of everything your customer is looking at and buying; this is the tip of the spear for you. You get in there with this conversation, at the front door to their customers, and you will win the pull-through business, because this is that critical to them. And it's become much more critical since the pandemic; more people have really gravitated to online.
Scott Kinka:
We shared quite a bit of data in the opening session and the keynote yesterday about how the top priorities CEOs have for IT are not really IT projects anymore, with the exception of cybersecurity. It's "make me more efficient, improve customer experience," those things. That, I think, to your point, is partly a result of the pandemic. Buyer sentiment and expectations are different, but also the elevation of the IT guy, who's now sitting next to the CEO being asked to solve business problems, is very different from pre-pandemic in a lot of ways. But let's keep moving down the line here too.
Tushar Shah:
I think we're somewhat unique in the fact that we've got three product sets. One is our Q for Sales product, which I think plays to the right of this graph, more around machine learning and sentiment analysis for sales. Especially during and after the pandemic, with people traveling less, being able to support sales teammates and agents using technology and AI to drive sales opportunities is a real area, and that's what Q for Sales does. Then, within our "U" family of customer service and engagement products, just like Karen and everybody else mentioned, there's clearly that ability to support the customer but also the agent, and to be a co-pilot to the agent during that customer engagement. As a buyer of this type of technology, one of the big reasons is, to your point, that it's not only the gateway to your company and how you support the customer; it drives loyalty and retention. The cost to acquire a new customer is so great for many enterprises that the ability to create a great customer experience when your product didn't perform the way it needed to, which is ultimately what drove the call, really matters: leveraging tools, either through self-service or when an agent is engaged, to give that agent the knowledge to fulfill the customer's inquiry and maybe even turn it into a sales opportunity. That is the benefit we deliver at Uniphore. And the third big thing for us is, as I mentioned, the data. We fundamentally believe that without helping the enterprise solve their data problems, you can't get the other stuff. So when we think about our U-Capture product and our X platform, it's essentially about the consumption of unstructured data: Excel spreadsheets, PowerPoint, PDF, applications and platforms, video, all of those types of things, structured in a way that begins to build your knowledge set. I think that somewhat sets us apart, because we actually have that layer. And ultimately, when you think about call recording from a compliance and regulatory perspective, that is also a big demand out there, and we satisfy all of those areas.
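Tushar's "solve the data problem first" point is the standard ingestion step behind most enterprise AI deployments: turn unstructured documents into a searchable knowledge set. Here is a minimal, hypothetical sketch of that pattern using the sentence-transformers library; the documents, model choice, and query are illustrative assumptions, not Uniphore's implementation.

```python
# Hypothetical sketch of building a searchable knowledge set from
# unstructured text. Assumes: pip install sentence-transformers numpy.
import numpy as np
from sentence_transformers import SentenceTransformer

# In practice these chunks would be extracted from PDFs, spreadsheets,
# slide decks, call recordings, etc.; here they are made up.
documents = [
    "Refunds are processed within 5 business days of approval.",
    "Premium support is available 24/7 by phone and chat.",
    "The API rate limit is 100 requests per minute per key.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents, normalize_embeddings=True)

def search(query: str, top_k: int = 1) -> list[str]:
    """Return the documents most similar to the query (cosine similarity)."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # dot product == cosine on unit vectors
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]

print(search("How long does a refund take?"))
```

Only once content like this is captured and structured can an assistant, a summarizer, or an auto-scorer be informed by it, which is the point Scott picks up next.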
Scott Kinka:
And I think that's really the third point, a really important point to make. You beat me to it; I was about to ask you that question. How often have you had to solve the data problem first? It's a really important thing with these technologies. For all the things we're talking about, everything we've seen on the screen, you still have to solve the problem of: does the customer actually have everything you want to inform the AI written down someplace? Is it in a document that can be consumed to actually inform this? I'm going to throw that to Raghu, though. You can answer the question the same way and layer in anything you like.
Raghu Ravinutala:
I think the fundamental principle of scaling any technology is whether there is going to be massive user adoption and whether users are going to love it. What ChatGPT has proven, and what every enterprise sees, is that users absolutely love automated chat interfaces. They want that, and enterprises are well positioned to provide it. It's kind of obvious; it doesn't need a lot of intelligence to say that if I provide ChatGPT-style automated interfaces and virtual assistants to my end customers, adoption is going to be massive, we're going to improve customer experience, and at the same time we're going to significantly reduce cost. Customer experience, virtual assistants, chatbots, and voice assistants are at a point where some of the most powerful forces are aligning. One, they're loved by end users, so there's no adoption problem; we see with all of our customers that they launch it and usage goes up like crazy. Second, it is the area of the most massive investment for enterprises; per Gartner itself, they reserve most of their budgets for employee and customer experience. So that is a key force that is aligning. The third is the LLMs: the biggest change in technology is completely aligned to providing that experience. You have all three forces combining, and that's why you see chatbots and virtual assistants out there, and why companies like us are seeing phenomenal, rapidly growing interactions on our platform.
Scott Kinka:
Amazing answer. So with that, and I'm sure marketing is already freaking out at the amount of time left, or not left, on the timer, we will take a couple of questions. If you have a question for the panel, this is your opportunity to broadcast it to the world. Anybody got one over here?
Audience Member #1:
From a regulatory standpoint, how is the government addressing the use of AI to purchase stocks and create wealth in nanoseconds, as opposed to holding up a piece of paper and saying "sell"?
Scott Kinka:
Anybody want to take that?
Dan O'Connell:
I do not know.
Tushar Shah:
Like I said, the only real regulatory position out there is the AI Act out of Europe. I would definitely recommend that people read it; I think it's extremely well thought through about where they're thinking and where they're headed. To the broader question about some of the things you mentioned, I don't know that they've actually thought about that yet, but it's a precursor to what's potentially out there. You could probably find it on ChatGPT or whatever; just Google the AI Act out of Europe. If you read it in detail, it really talks about what they're trying to do from a positioning perspective around controlling the breadth and the nature of how AI is going to be used in everyday life, and ultimately the information that's required to manage it.
Dan O’Connell:
Sorry, the one thing I would add is that many of those financial systems, if anybody's on Wealthfront or similar robo-advisors, are traditional data models doing automatic trading. If you're just automatically putting a deposit into Wealthfront, it's automatically managing and doing the trades, and there are obviously data models behind that. So I just think of the large language model as a bigger, smarter, more flexible model.
Karen Bowman:
No, I was just going to say, I think you're going to see more of that. We just saw the Hollywood Writers Guild get protections against AI in their settlement. So I think it's all new to a lot of folks: how do you manage that, and what does that look like? I really liked the idea, on the slide you showed, of it having to cite its sources. Where did you get that from? I think there should be a little more enforcement of that if you intend to take material and use it in a way that monetizes it.
Scott Kinka:
I do like your example around the Writers Guild. It's something I was going to say: the way we legislate here in the United States is going to be based primarily on lawsuits, and then we'll come in behind them. I think the AI Act is a bit of a follow-on reaction to GDPR. They're already on the hook saying this is what you have to do, and then AI comes along on top of it. So I almost feel like the AI Act is sister legislation to GDPR; they're already way more serious about this in Europe than we are here in the US. But I guess we'll figure out where that lands. Any other questions from the audience?
Audience Member #2:
I don’t deny the many benefits of AI, but I also think that the considerations are very real and one of the ones I didn’t see was cognitive decline. So I’m just curious, as these tools become more ubiquitous, especially for younger generations, is that at all a concern? And then also I’m curious if it would have any impact on developing EQ.
Scott Kinka:
That’s a really great question. Who wants to jump on that grenade?
Tushar Shah:
Yeah, I mean, I would say, listen, I think the point you're making is: how do you embrace it in the world in which we operate? Take teachers, for example, not allowing a student to leverage ChatGPT. There's a positive and a negative, right? The positive is, okay, they're not just getting the answers; they're doing their own analytical thinking. But they're also missing out on how to actually use the technology and how to advance the capability. And maybe what ends up happening is that student says, you know what, I can actually make this thing better, and here's how I'm going to make it better. So I think it's about how we as individuals embrace technology and how much we allow it to take over our lives. If we all sit in front of Netflix and just keep taking the recommendations it makes, then that's on you; you're essentially taking the approach that says, you know what, I'm just not going to think, I'm going to let Netflix decide everything I watch. It's upon all of us to determine how we want the technology to interact and interface with us, and ultimately that will drive the cognitive outcome. But I can also see the opportunity. My mother has Alzheimer's, and my dad is having her use ChatGPT to help her be much more engaged with what's happening, and I see some positives, because she can't vocalize as much anymore, but she can read text. So there's real benefit in the fact that she can now get answers to things she has questions about, just by my father typing the questions into ChatGPT so she gets the response back. That's amazing. It's an interesting dilemma, honestly.
Scott Kinka:
It is. And what I will add to that: there was a great conversation in the communication panel last night that probably answers your question about EQ as much as anything. There's also a generational challenge we have right now with AI that is certainly exacerbating an already existing problem: the generational decline of human interaction, and the growing preference for interacting with the system in front of you. We have time for one more question.
Audience Member #3:
A big thing about AI is the fear of bad actors using it, right? There's a lot of talk about, for instance, our power grid being compromised. But I would think that AI can equally be used to find bad actors as to support them. What are your thoughts on that?
Karen Bowman:
Well, I would totally agree with you on that, and it gets back to how we regulate, how we do those things. What would be the components you could even track, like the chips or things along those lines? But we have to, because we know that will be a scenario we have to defend against. But I do want to go back really quick to the last question. I think they should teach plagiarism in college, right? Because once you get out of school, how many of us really create any unique slideware or unique materials that we present to customers? We've evolved in a way where it's acceptable to use our materials amongst ourselves, or give them to partners, that kind of flow. Same thing: schools thought CliffsNotes was going to destroy kids' reading, critical thinking, and summarizing of books, but none of those things really happened, and I think we're going to see that with ChatGPT. The youth of today are a lot smarter than we give them credit for, and maybe more of the regulation should be aimed at social media. My daughter teaches, and she was auditing a class yesterday with 37 middle schoolers trying to learn science. They had real tactile things, baleen whale stuff, all kinds of stuff they could do, and the kids were on their phones; there was no control; the teacher just kept rambling on, and she said it made her sick. So rather than being afraid, I think we need to embrace it and run after it, in all these other places but especially in your business, because you are on the forefront, in the right place at the right time to capitalize on this. And as bewildered as some of us may be, imagine where your customers are with contact center technology. Some of them are still using Dark Ages stuff, right? They're trying to figure out how to get their contact center to the cloud, let alone what to do with AI. That's what I think the real opportunity is here.
Scott Kinka:
And I think that's a great breaking point, because we're over by five or ten minutes at this point. I'd like to thank all of you sitting at home for tuning into this episode of The Bridge, and our live studio audience here in Palm Springs. Can we hear it? Let's hear it again. All right, this was a lot of fun. I don't know what the topic next year is going to be, but I think we'll do another one of these. This was great, and thanks for taking on a heavy topic early in the morning. Thank our panelists here, if you would, and certainly engage them early and often. And for those of you at home, we look forward to seeing you on another episode of The Bridge. Thanks.