Author: Scott Kinka

On this episode of The Bridge, I'm joined by Brad Reynolds, Senior Vice President of Artificial Intelligence at Expedient. We're talking about productivity AI and so much more.

Expedient is a data center and cloud services provider offering infrastructure as a service (IaaS) with local operations in Pittsburgh, PA, Baltimore, MD, Boston, MA, Cleveland, OH, Columbus, OH, Denver, CO, Indianapolis, IN, Memphis, TN, Milwaukee, WI, and Phoenix, AZ. Ranked as one of the Top 3 managed services providers worldwide on Channel Futures' MSP 501 list, Expedient's converged solutions enable clients to focus on strategic business innovation while the Expedient team handles the operation of the information technology needed to support it.

During our conversation, we discussed how business leaders get ahead of their competitors and unlock the promise of productivity AI without exposing the company to risk, the current state of AI adoption and the parallels to the early days of the internet, the role of corporate policies in guiding the responsible use of AI technology within organizations and more.

Topics covered in this episode:

  • Brad’s role at Expedient.
  • The parallels between music and technology.
  • Expedient’s history and evolution from an access aggregation business to a colocation and cloud-centric provider.
  • The current buzz and interest surrounding AI in the tech industry.
  • The role of AI as a transformative technology for business operations.
  • The importance of considering how to become experts in AI as it represents a significant part of the future in technology careers.
  • Public large language models (LLMs) and their impact on AI’s public perception.
  • The challenges created by corporate use of public LLMs and the need for controls and wrappers.
  • The need for businesses to provide access to AI and the importance of enabling employees to explore and discover AI’s potential uses.
  • The current stage of productivity AI adoption in businesses.
  • The emotional impact of AI on individuals and its potential to touch various aspects of life.
  • Exploration of the concept of the singularity and AI becoming self-aware.
  • The challenges of understanding AI’s decision-making process and neural architecture.
  • The challenge of explaining AI in the context of human processing and language.
  • The difficulty of using existing language to describe AI’s workings.
  • The practical product developed by Expedient for corporate AI adoption.
  • The importance of preventing the sharing of proprietary or private information.
  • Shameless predictions for the next 18 months.

ABOUT BRAD REYNOLDS

Bradley Reynolds is a recognized leader in the AI industry, with a deep understanding of the transformative power of Large Language Models (LLMs) and Generative AI. As the Senior Vice President of Artificial Intelligence at Expedient, Bradley spearheads efforts to simplify the operational deployment of private AI infrastructure for enterprises with strict data governance and regulatory requirements.

Prior to Expedient, Bradley served as an AI Consultant at Yellowfin Holdings, where he played an instrumental role in helping enterprises harness the power of private AI models. His expertise spans from strategic AI planning to specialized areas such as vector database design, language model pre-training, and model fine-tuning.

He is a staunch advocate of the idea that “AI Models Are the New Compute,” and believes that our grasp, personalization, and expertise with these models will guide us through the next computational revolution. Based in Cleveland, Ohio, he is passionate about making AI more accessible and helping organizations explore the vast potential of this emerging technology.

CONTACT BRAD

LinkedIn

Web

YouTube

Twitter

 


 

Scott Kinka:

Hi, and welcome to another episode of The Bridge. My guest this week is Brad Reynolds. He is the Senior Vice President of Artificial Intelligence at Expedient. Brad, that’s quite a title. We’re going to get into that, but welcome to the show.

Brad Reynolds:

Thanks so much, Scott. Appreciate you having me.

Scott Kinka:

Fantastic. Well, let’s start with you and then we’ll get into how AI works. Brad, just tell us a little bit about your background and a little bit about you, where you live, what goes on in your life. Fire away.

Brad Reynolds:

Yeah, for sure. So I’m in Cleveland, Ohio, the virgining technical metropolis of the United States. So my background’s serial entrepreneur and I’ve founded and sold a bunch of companies. Super nerdy space has always been, technical coder engineer. Actually, one of the companies I founded and sold was Expedient where I’m working at now. And so I’ve always been in that mode. And my current role as that voluminous title, SVP of AI is really about an AI startup inside of a larger organization Expedient. So I get to operate in a way I’m familiar with and move very fast and build things, but within the construct of a much more established organization.

Scott Kinka:

I love that. Well, there are a couple of areas where I think we have some potential shared history. Our listeners know I was a tech co-founder and CTO for a long time as well before hanging that up, so I call myself a reformed CTO now. I love that my wife calls me a tinkerer. Does that phrase fit you? Are you a tinkerer?

Brad Reynolds:

Yeah, I’m not only a tinkerer but a collector. I have these collections of everything, whether it’s guitars or books or puzzles. I’m constantly experimenting and it definitely plays into the AI story because there’s just a lot of tinkering going on there. But yeah, I’m constantly doing that type of stuff. What’s your tinkering obsession?

Scott Kinka:

I just mess with everything. That's the problem. My current tinkering obsession is automation. I just want to be able to walk into a room and speak to whatever thing is listening, because everything's listening, and then just have everything change. I want to be able to voice activate that scene from, what's the movie I'm thinking of, where they're in the dorm and the bed pops out and the thing comes down? Animal House, that's it. I just want to be able to voice automate the room turning into what I want it to do. So I'm completely obsessed with it now, but I'm also tinkering a lot in music. And I understand that you are a semi-reformed rockstar. Is that not accurate?

Brad Reynolds:

I definitely am, the backup band. I’m not the main singer like you.

Scott Kinka:

I appreciate that. But tell me about music. How long have you been playing? What do you play? What do you do?

Brad Reynolds:

Yeah, so I just think there's a lot of interesting parallels between music and just creating things, maybe in code, where we tend to think of systems and code as very formulaic, but at the highest levels they become artistic because it's how you put large things together. For me, I love jamming. Practicing the same song all the time is just boring to me. It's what you need to do to be a professional or to go out and play. But I just like getting friends together who are good musicians. You never know where music is going to go. So my strongest suit is guitar, but I'll make music. I'll use the keyboard with the DAW to create music. The digital stuff that we can do now, the one-man-band kind of post-processing, is phenomenal. But yeah, I have a guitar obsession. My wife is very upset at how they keep accumulating. I keep hiding them in different spots in the basement and then she finds them. She's like, this one wasn't here. I'm like, how do you know? Do you really know? Do you have a picture? Yeah, I like guitar, bass. Those would be the main ones, but I play the drums, play the keys. It's kind of like whoever comes over. We have Southern Live in Cleveland. We have a dad band night of all guys who used to play out and be in bands, and we don't get to play that much, but everybody brings something pretty unique to the table. So if a drum expert comes over, we're not playing the drums that night. I mean, that guy's taking the lead on that type of stuff. And so it just becomes a rotating, movable feast of a jam.

Scott Kinka:

I love your conversation about the parallels between music and tech. I also have an obsession. My problem is that I'm a piano player, so when I collect, they take up way more space, as people know. There are like three full-size pianos in my house, not to mention all the electronic keyboards, but that's just my obsession. But I do like what you're saying. I mean, music is emotional to listeners, but they don't realize how mathematical the job is, how pattern based it is. I think it's one of those things where, almost autonomically, you've got to understand all the patterns and the math before you can make it artistic. And I love the parallel to code, and I agree with you. Anybody can make a thing that does a thing, but to make it sing is a totally different scenario. Let's use that to launch over then. I had a conversation in Miami a couple of weeks ago, and the last couple of weeks of the show have been AI centric. We did a big panel at our summit. We broke it up into a couple of sessions. I had a data center provider on last week talking about how AI is changing the physical footprint of their data center facilities. It's just been this ongoing thing. And I asked to slide you in at this spot because I feel like the timing's perfect. I was with your CEO Brian Smith a couple of weeks ago and he was chatting with me about some of what you guys were thinking about in AI. And I think with the way people think about Expedient, and then saying that you have a VP of AI, maybe they don't get that correlation. So if you could just start with first, give me the minute-long overview of Expedient for those who might not be familiar. And then if you would, can you just explain to me a little bit about your initiative in AI there, and we'll sort of use that to jump in.

Brad Reynolds:

So Expedient, I started it in 2001 as a rollup. I bought a bunch of companies. Not a great time for telecom, 2001, 2002, but if you had cash, it was a great time to build an aggregate telecom company. So I ended up buying a bunch of fiber and wireless asset companies and then just general internet access companies, cobbled those together, and then sold it in 2005 to a very large family office. But the business at that point was evolving from an access aggregation business, super high speed access. Cogent would've been a similar type of company to what we were doing at that point. We had accumulated these data centers that we couldn't pay somebody to move equipment into. They were mostly acquired from Blue Cross, big medical players. We just got these things. We had our equipment there. So when I sold it, the business was an access business, and the second iteration of Expedient, we'll call it, was going from access to colo centric. And there was a big push in the mid two thousands of people moving out of either their own data centers or frankly telco closets at their office into a solid facility. And so that ran its course. The next layer of that was, hey, we have our stuff there, or at a different facility, and we're going to virtualize. So the next piece of that was, okay, now we're going to get a lot more efficiency out of our servers through virtualization. Being able to move servers like software instead of pulling boxes and moving hardware. I was gone by that point, but V2 or V3 of Expedient was cloud, which puts it up against AWS and all of that. But Expedient has a very bespoke kind of managed infrastructure that fits a certain type of company that needs a little more handholding than you're going to get with one of the hyperscalers. So there's been a series of evolutions. The thing about tech is that, I don't know that tech has ever been sexy, but the sexiest I ever saw it was when the internet first launched, the real start of my career. Everybody was like, we need this internet thing. We don't know what we're going to do with it. I just remember setting up people like Jones Day with email. They're like, we don't know how, we don't have a website, but we need this email thing. Nobody knew where it was going to go, but everybody kind of knew that it was an important thing. So this is the first time since then, frankly, it's been the only time in my life, where if I go to a cocktail party and say what my title is, everyone wants to talk about it. No one knows what they're talking about.

Scott Kinka:

Even more so than being a musician. You’re getting all the chicks with AI now, is that what you’re saying?

Brad Reynolds:

Yeah. I don’t know if it’s paying off that way.

Scott Kinka:

You're the most interesting guy in the room now thanks to AI.

Brad Reynolds:

Yeah. So basically for both you and I, let’s milk it while it’s here.

Scott Kinka:

Right, exactly.

Brad Reynolds:

But I think the important part is that it's a door opener. For Expedient, I look at us as being at the earlier part of it. I see Expedient as a series of stair steps, but the earlier technologies were more like efficiency technologies and reliability technologies, not transformative in terms of business operations. It's very rare that a technology, like electricity, is transformative to business operations. And so I look at this as just another dot, a very exciting dot, on a kind of Expedient stair-step journey. And this dot is for all of us, whether it's Expedient or your career or other people's careers. I look at this personally as, I'm halfway through my career. I'm 45. The first half of the book, the first part of the career, was access and data centers. I'm looking at the second half as being all AI stuff. And so everybody that's listening should be thinking about how do I become an expert in AI? I don't exactly know all of the pieces, but it's their stair step too. And so I would say we're on the bandwagon with that for sure.

 

The Emotional Impact of AI: Unpacking the Corporate Push Towards AI Adoption

Scott Kinka:

You said something earlier that I thought was an incredible parallel to where we are with AI right now. You said back at the beginning of the internet, people knew they needed it. They had no idea what they needed it for. There weren't use cases yet, but they were like, I gotta get me some of this stuff. And you made the parallel that that's sort of where AI is right now. I think when people are thinking about AI, there are a million different kinds of applications that are driving off of that right now. But the public persona, let's say, of AI right now is the public large language models that are out there. It's the ChatGPTs of the world, and now they're starting to get a little bit of that attached to Office 365, with Copilot beginning to pop up on desktops, and people are starting to feel it and understand what prompts mean and all that kind of stuff as businesses are trying to figure it out. Corporate use of public LLMs is a blessing and a curse in some ways. I think people understand the potential blessing, but what are the challenges that creates in your mind?

Brad Reynolds:

Yeah, I think we'll look at that as a stair step too. So we'll just call it AI adoption for humanity, let alone corporate AI adoption. And the question that I would ask, and we're all looking at the business side, is what inning of the ball game are we in? There's all kinds of interesting conversations about use cases and applications and how AI can transform a business, but is that really where we're at? Are we at the point where we're implementing those things? Or if we all look at our individual companies and we say we have 400, 500 people, how many of those people are using AI on a daily basis? How many are using it effectively? How many are using it under corporate policy, or are they just buying ChatGPT ad hoc on their own dime? It's a much smaller population than all of the conversations would have us believe. So why is that? For me, that baby step, that first step, that crawling aspect is: what are the things that we need to have in place as businesses to give everybody access to AI? I call it like a gym membership at our company. It's like, here's your gym membership. We're underwriting it. You don't have to pay for it, but we're going to put some controls and wrappers around it so that we have logging and monitoring. We integrate it with our authentication, so it's single sign-on. We redact PII. We need some corporate controls. We can't just say, hey everybody, here's access to AI, good luck. We need policies in place too. So you need something there to get control around AI as a business, but your end goal should not be so much the strictures and controls. Those are just things that you wrap around it. Your end goal for step one, for the crawl step, is how do I ensure everybody in my business has access to it and is using it as effectively as possible? Because I think one of the stair steps that we're getting distracted with is a couple of steps down, which is the transformative use case. I have always felt this. I've built a bunch of companies, and all of my transformative use cases have come from my clients. I had ideas beforehand about, oh, this is going to do this and that and we'll raise money, and it never worked out that way. So if you can get your employees learning how to fish with AI, they're going to find those use cases. Some could be transformative and some might just save 30 hours a month worth of work, but you're essentially enabling them to do that. So how do you do that? You put some controls and wrappers around access to the public models. That's a product that we have, but it's also just a general concept you need to have, because that's why folks aren't giving this as a gym membership to their employees. They're a little nervous. They don't have controls, they don't have audit logs. Just very basic blocking and tackling type work for any type of enterprise software system, let alone something as unsettling or interesting as AI.
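
To make the controls-and-wrappers idea a little more concrete, here is a minimal sketch of what such a gateway could look like. It is illustrative only, not Expedient's product: the names (User, redact, call_public_model) and the single SSN rule are hypothetical stand-ins, and a real deployment would hook into the company's actual single sign-on, PII detection, and audit logging systems.

```python
import logging
import re
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

# One simple, illustrative PII rule: US Social Security numbers.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


@dataclass
class User:
    username: str  # would come from single sign-on (e.g., Active Directory)
    role: str      # used later for role-based access to models and data


def redact(prompt: str) -> tuple[str, int]:
    """Strip anything that looks like an SSN before the prompt leaves the company."""
    redacted, count = SSN_PATTERN.subn("[REDACTED-SSN]", prompt)
    return redacted, count


def call_public_model(prompt: str) -> str:
    """Stand-in for the real call to a public LLM API."""
    return f"(model response to: {prompt!r})"


def ask(user: User, prompt: str) -> str:
    """The 'gym membership' path: redact, write an audit record, then call the model."""
    safe_prompt, redactions = redact(prompt)
    log.info("user=%s role=%s redactions=%d", user.username, user.role, redactions)
    return call_public_model(safe_prompt)


if __name__ == "__main__":
    hr_user = User(username="jdoe", role="hr")
    print(ask(hr_user, "Summarize the onboarding doc for the employee with SSN 123-45-6789."))
```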

Scott Kinka:

I want to come back to sort of your brokerage services, probably the wrong word to use; I'm going to let you explain it in your words. But I want to take a step back for a second on one of the statements that you made around the technology outpacing the understanding of the use cases right now. Are we very much in the board-level corporate edict phase, in the same way that we were around cloud 10 years ago? It was like the board saying, go to cloud. I have no idea what that means, but take everything we have and throw it out there, a corporate edict to make it happen. And then today we find, at Expedient, and we do it with you guys as well, that we're unpacking cloud, repatriating back into colo or into private cloud, things like that, cloud applications. Are we in the same spot with AI right now? It's sort of like the corporate edict of moving to AI. I don't really know what that means, but the board's saying go do AI stuff. Do you run into that a lot?

Brad Reynolds:

Yeah, so I would say partially yes and partially no. It's definitely the first time in my career I've ever seen a technology, and maybe cloud was one of them, but I kind of wasn't in the cloud zone back then, being pushed from the CEO level down as opposed to being built from the developer level up, as in, hey, we need this technology, this will improve stuff. So I could see some similarity if cloud was pushed from the CEO level down, and AI is definitely being pushed from the CEO level and board level down. The difference is we can all talk to AI, at least the generative component. We can have a frankly emotional experience with what this technology is. What's cloud? How do I grab the cloud? I can grab AI, and I might not really understand what's going on under the hood or why these outputs are this way, but I can see something that I've never seen before. I saw a YouTube video today from one of the AI guys I follow, and it was like, oh, I have an AI avatar. Okay, great, I've seen all these avatars, they don't look real. But this one was indistinguishable from him as a human, and it took him 10 minutes to do. I'm like, oh, I guess we have AI avatars to do all of our marketing now. I didn't know that until I watched that video today. But it's tactile, and the stuff that we can touch seems like it's going to touch our lives in ways that have nothing to do with business. When in tech has anyone been emotional about the cloud? I mean, unless their business depends on it. I don't think so. People are emotional about AI. It has implications for humans as well as businesses. I mean, we all have the business hat on. We're thinking about business applications. But yeah, we've never experienced something like that before. And like you were saying earlier, the only parallel would be the early internet, where there was a notion that if we democratized access to data, we'd become smarter as humanity because now we don't have to go through a few sources. We can kind of get it from anywhere. That was the dream that was sold. It kind of turned out a little different. But yeah, I would say that's where the big difference between cloud and AI fits on the evolution of technology.

Scott Kinka:

Got you. Brad, am I going to find out later that you actually sent your avatar in to do this interview? I am actually looking at you right now, correct? I just want to be sure.

Brad Reynolds:

One of my buddies has an assistant, and all the scheduling was going through this assistant. I met him in person, and there was a 50/50 chance that assistant wasn't a human, but it was a human. So I felt good.

Scott Kinka:

Is the singularity near? Let me just ask you that question, Brad. Where are you at on that? Where are you on that spectrum?

Brad Reynolds:

What’s the singularity?

Scott Kinka:

Well, meaning conscious AI. I don't want to say AI run amok, but AI being self-aware.

Brad Reynolds:

Yeah, so I actually gave a talk about this last Friday on the existential and philosophical implications of AI. It's pretty rare I get to talk about something that doesn't have that much bearing on reality and making money, but that was certainly one of the topical areas that came up. I would say I don't have an answer to when a singularity is happening or whatever, but I do have an answer, having dug under the hood of these models and built models: the thing that we've engineered, and it's made by humans, is our first contact with aliens. And not in a weird tinfoil hat way. It's more that the way it thinks and processes is just not the way that we've evolved as humans to process. So the singularity of it taking over or whatever, I don't know when that's going to happen. I just know that when we try to look in our own minds to say why it's doing a certain thing or where it's going to evolve, it has its own mind, a whole different digital kind of neural architecture than we do. And so we try as humans to anthropomorphize AI and be like, oh, it's doing this like the human brain, or it's doing this like people do. It's doing it the way its alien intelligence has it do things, and it's very difficult to get under that black box and say exactly why, which to me is intriguing. It's like the blank areas of the maps in the 1500s where there was just nothing out there, one of those exploration maps. We've mapped all of the continents, but we have not mapped the brain space of these models. And this is a huge opportunity, an interesting one.

 

The Future of AI: Accessing Multiple Models and Ensuring Responsible Usage

Scott Kinka:

And we could spend the next 40 minutes philosophizing about this, but it's sort of a double whammy too, right? Because not only are you trying to explain it in the context of the way we process, but you're trying to explain it using the context of our own language too. Even if you sort of figured out the route it was going, would you be able to describe it with the words we have? Logically speaking, you don't even have the context by which to have that conversation. At the end of the day, it's just producing something completely new. So speaking of producing something completely new, I want to jump back quickly into a little bit more about the product that you're developing right now, and then we're going to have some fun at the end. But let's jump back to the product for a moment. At Expedient, I'm sure there are a million things that you are responsible for exploring around AI. But right now, what you guys have, or are in the process of releasing, from my understanding, is a very practical product where I can leverage my SSO to log in and it will give me corporate-sponsored access to the multiple public models that are out there. We will use ChatGPT as an example, but there are others, and you can describe those, and also apply some governance to it: how you can use it, why you can use it, let's make sure you're not sharing corporate secrets, let's make sure that you're not putting personal information in there. I mean, is that a good way for me to describe what, in my mind, is a very unique thing that you guys are building?

Brad Reynolds:

Yeah, you got it. And it comes back to the baseball analogy or the stair steps or the innings or whatever you want to say, where there are all kinds of interesting use cases and interesting infrastructure you can build around AI. But where are people? We need to get people using AI to learn about AI so they can understand what applications they want to build. So we're building infrastructure for applications, and that happens to be underneath our product, but people just aren't ready to have that conversation in depth at scale yet. You've got to get them comfortable. So we're like, okay, we've built all this infrastructure to service AI applications. Let's add a chatbot on top of that, but a chatbot to help people get to the goal. The goal is to give every employee at our clients a gym membership to the best public models available, with governance, after which there's a long journey down the AI path as they start to spread their wings and understand it. You're right. And so the question would be, okay, if we built a chat interface that is branded for clients, what are some of the key elements that an enterprise would need to wrap around it? One is that authentication mechanism. For us, we use Active Directory. Being able to have single sign-on is good from a corporate policy standpoint, but the important offshoot of that is it knows my role. So whether my role is an HR person, an operations support center person, or a coder, as you log into the system, it gives you different context for how you approach the world. In the V2 of our product, it'll give you access to different sets of data. So if you're in HR, there's a set of proprietary data that you have access to that somebody as a technologist doesn't. But the seed for that is integrating into some sort of one-login-type system, like Active Directory, so that you can inherit all of that role-based permissioning. Super, super important. I mean, role-based permissioning comes back from the data side, but it also goes into what models you have access to. There might be folks like, hey, the marketing department has access to GPT-4, but no one else does, for whatever reason. I'm just picking a model out there, that type of stuff. So that's kind of the starter point. The next portion is about private information. If you're using public models, you want to make sure that nothing proprietary goes out. And so we have our own, it's actually a model that's part of the chatbot. So when a prompt comes in and they ask, hey, I want to do this, or here's a Social Security number, it will either filter that and say, hey, there's PII in this and we're not sending it out, do you want to rephrase that? Or the secondary part of it is it will actually substitute dummy data in, so dummy data will go out. So for Social Security numbers, it's a format. It'll sub in a fake Social Security number, send it out to OpenAI's GPT-4, it'll come back, and then it'll swap it back on the way in so that the model doesn't get confused about what it's trying to do. If you just swap in some long UUID or something like that, it'll get confused. So that's a piece. The third piece is the access to multiple models. Different models are just different flavors of ice cream. OpenAI's models are awesome. Their image generation models are awesome. But Anthropic's Claude has a lot of awesome capabilities for business too. And Cohere, which we have access to, is one that's very business built.
And so as people are using these models, they have a couple of choices of ways that they want to process, and that's an option in one of the products. And the last part is the logging and the telemetry. As people are using this and they run into issues, you can debug it, but you also have an audit trail and history. So if there's a concern about what people are doing with the system, you could say, oh, how many redactions has this particular person had? Or do we need to look at what kind of prompting is going out there? So our thought is that in the V1 of our product, those are enough controls, in addition to a client setting a corporate policy about how you use these things. The reality is technological controls can be circumvented by bad actors or maybe uninformed actors. And so you have to say, at the end of the day, we have these strictures and controls, but be a good citizen. Here's a policy of things that you should be doing with this and things that you shouldn't be doing with this. And I think you need to have that component in place to be able to unlock the capabilities of some sort of enterprise-grade chat solution.
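
For readers who want to picture the substitution step Brad describes, sending format-preserving dummy data out to the public model and swapping the real values back in on the return trip, here is a rough sketch. The function names, the fake-SSN generator, and the stubbed model call are hypothetical stand-ins, not Expedient's implementation.

```python
import random
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def fake_ssn() -> str:
    """Generate a format-preserving dummy SSN so the public model still sees a plausible value."""
    return f"{random.randint(100, 999)}-{random.randint(10, 99)}-{random.randint(1000, 9999)}"


def substitute(prompt: str) -> tuple[str, dict[str, str]]:
    """Swap real SSNs for dummy ones and remember the mapping for the return trip."""
    mapping: dict[str, str] = {}

    def replace(match: re.Match) -> str:
        dummy = fake_ssn()
        mapping[dummy] = match.group(0)
        return dummy

    return SSN_PATTERN.sub(replace, prompt), mapping


def restore(response: str, mapping: dict[str, str]) -> str:
    """Swap the dummy values back to the originals before the user sees the response."""
    for dummy, original in mapping.items():
        response = response.replace(dummy, original)
    return response


def call_public_model(prompt: str) -> str:
    """Stand-in for the round trip to a public model such as GPT-4."""
    return f"Here is a draft letter based on your request: {prompt}"


if __name__ == "__main__":
    outgoing, mapping = substitute("Please draft a letter for the employee with SSN 123-45-6789.")
    answer = call_public_model(outgoing)  # only the dummy value ever leaves the company
    print(restore(answer, mapping))       # the user sees the real value again
```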

 

The Intersection of Technology and Creativity

Scott Kinka:

That’s an amazing story, and there’s a lot more there, and I’m sure that listeners are like, how do I get my hands on that? We’ll give you an opportunity to explain that in a minute. Let’s have some fun. Before we wrap though, this has been super fun and informative. I’m going to hit you with a couple of quick questions. Is that okay?

Brad Reynolds:

Yeah, for sure.

Scott Kinka:

Alright, this one’s less fun, but I always find it interesting. What’s on your end table reading right now? What are you into reading right now?

Brad Reynolds:

Well, I'm a big reader, so I'm reading about the creation of the Oxford English Dictionary, and I forget, I think it's called A Doctor and a Madman. Back in the day, we didn't have dictionaries. So William Shakespeare, when he wrote his plays and used a huge number of words in the English language, there was no reference for him. I don't know how he knew those words were being used, right? Maybe he was making up how they were being used. But the OED was kind of the first compendium of the English language. And how was that created? It was created by a doctor on one side in England and a man he was corresponding with on that volume, who he never realized was an inmate at an insane asylum. And so it was the interplay of those two people, how they ended up combining together. And the kind of preface of that story started with him visiting this guy after 30 years of correspondence and finding out that he was an inmate at the asylum. So that's amazing. I like the nonfiction type stuff more than not, and unique, eccentric stories are super cool. So that's on the nightstand right now.

Scott Kinka:

Alright. I'm hoping you have an eccentric answer to this one. Let's just say, for the sake of argument, that the robots do take over, or the next COVID-level event, whatever it is, happens in some dystopian future. There's only one app that works on your phone and you get to choose. Which one is it?

Brad Reynolds:

I mean, it would be some sort of AI front end. So yeah, it would be something that I can essentially outsource my interaction with the world to. Right now that's an AI app, and right now I'm not going to Google very much, but it would be something like that. What would yours be?

Scott Kinka:

That’s a really good question.

Brad Reynolds:

It’s not that eccentric of an answer. Sorry.

Scott Kinka:

No, no, it's a good one. I'll tell you what some of the best answers have been. I ask this question a lot, and one person said, just make sure the flashlight still works, which I think is kind of brilliant. You have no idea what's still going on. That's brilliant. For me, I answered the music app. I need something to do, and if the world was cratering around me, I would need a place to mentally go and not physically go. So music would be it. Again, a playlist answer, but it's honest.

Brad Reynolds:

Well, what would your dystopian world playlist be? Who would be those? Would it be songs or bands? Who would you be listening to?

Scott Kinka:

I think, to be honest with you, it would probably be based on what kind of dystopian future we were in. If I had to be angry to survive every day, then I'd be jamming out to something harder. If the world is cratering around me and I need to go to my personal reflection zone, I would have something completely different. But I listen to everything from classical music to jazz, pop to rock. I mean, I'm kind of all over the place. It depends. I'm in a little bit of a country mode right now, to be frank. I saw a great band with our producer back here, Gene, on Monday, Needtobreathe, who I really like; their new album has been in my car. So that's really good stuff. Right on. Alright, so okay, one more. Here we go. Last one, I just need a shameless prediction. Give me anything in the next 12 to 18 months, something that's going to happen in the world. Could be sports, could be technology, could be in your life, doesn't matter. Just be Nostradamus for a moment.

Brad Reynolds:

Yeah, so I mean, all I do is AI. Now, I have a family, but all I do is AI. The thing that I see in terms of evolution is we're all talking about AI models as these monolithic things. So hey, it's GPT-4 behind ChatGPT. Well, the thought process on that is that it's actually eight models right now. But as I look at enterprises, and I don't know if this will be 12 to 18 months, it might be three or four years, we're going to have a bunch of models inside of these enterprises that are going to be specially tuned to different functions. We already see it now with coding models versus general models, but the notion is, hey, we have all of these finely tuned assistants with niche domain expertise. If we have two of them, why can't we have 10 of them? If we have 10 of them, why can't we have a thousand of them? Uber's running 5,300 models currently, so they're at the forefront, but how can we democratize that kind of agent intelligence to business? We've got to figure out how to scale it. And Ben Horowitz, when I was out at a conference a couple of months ago, said that these models are the new computers. So as we think about virtualization and all of those evolutions, the fact is that these models are the new definition of compute. However we've scaled and deployed compute, think about those same orchestration, structure, and architecture thoughts for models, because that's the brains of the operation going forward. And so I think there'll be this explosion of customization and hosting, and frankly, there's a ton of open source stuff going on and there's a big fight between open and closed source. It'll be interesting to see how that boxing match works out. But I'm definitely on the open source side. I like to see people using tons of models.
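
As a rough illustration of the "many specialized models" idea, here is a tiny sketch of how an enterprise might route requests to differently tuned models. The registry and model names are made up for illustration; a real orchestration layer would point at deployed model endpoints and handle scaling and monitoring the way a compute scheduler does.

```python
from typing import Callable

# Hypothetical registry mapping business functions to specially tuned models.
# In practice each entry would be a deployed model endpoint, not a lambda.
MODEL_REGISTRY: dict[str, Callable[[str], str]] = {
    "coding":  lambda prompt: f"[code-tuned model] {prompt}",
    "hr":      lambda prompt: f"[HR-policy-tuned model] {prompt}",
    "general": lambda prompt: f"[general-purpose model] {prompt}",
}


def route(task: str, prompt: str) -> str:
    """Send the prompt to the model tuned for this function, falling back to a general model."""
    model = MODEL_REGISTRY.get(task, MODEL_REGISTRY["general"])
    return model(prompt)


if __name__ == "__main__":
    print(route("coding", "Write a unit test for the billing service."))
    print(route("legal", "Summarize this contract."))  # no legal model yet, so it falls back
```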

Scott Kinka:

Yeah, I love that. So if somebody wants to leverage some of your thinking right now and they want to learn a little bit more, in particular about the products that we were talking about around AI governance and management, where would they go? How could they learn a little bit more?

Brad Reynolds:

So at a cursory level, you can take a look at the Expedient website, and we have a couple of webinars, one talking about AI strategy and another talking about enterprise AI. They're not sales pitches at all; they're just level sets and how-do-you-get-your-compass-oriented type things. We also have information documents attached to that in a success kit for, again, more bedside reading about how to get up to speed. We talked about AI use policy; we have some examples of AI use policies that they can employ in their business, and again, nothing Expedient-specific. If there was an interest in deploying enterprise chat, V1 with public models or V2 with your own private data and private models, then you could reach out to me. My stuff's on the website, but my email is just bradley.reynolds@expedient.com. It's probably in the podcast notes as well. But yeah, if you're interested in that spot, we're currently taking on beta clients for the V1 of the chat. And the bonus is, if you get in the beta program for V1, you're first in line with right of first refusal when the beta for V2 opens, which is private data access, which is really hot. There's not an unlimited number of beta slots.

Scott Kinka:

Understood.

Brad Reynolds:

Yeah, so we’re in that zone of beta right now and in January we’re going to release it in public. But yeah, we’re accepting beta clients right now for V1 that we talked about earlier.

Scott Kinka:

Fantastic. And of course, at Bridgepointe we're significant partners with Expedient; we do quite a bit together. So reach out to your friendly neighborhood Bridgepointe strategist to help broker that communication. We're happy to do that for you. Brad, this has been super interesting. We went to a lot of places. I can't wait to have another conversation with you in the future about this, but it was really a great time. I appreciate your insight, I appreciate your creativity, and I'm really, really excited about what's to come. Thanks for the time today.

Brad Reynolds:

Well, thanks so much, Scott. I appreciate your approach to getting good information out. It was a pleasure to be on the podcast.

Scott Kinka:

Fantastic. Thanks Brad.