A conversation with Reed Coke, Principal Machine Learning Engineer at KUNGFU.AI
“I mean, frankly, in a lot of ways it’s kind of a frightening time. So both of my parents are artists and I’ve grown up surrounded by musicians and artists of all different kinds, and a lot of them are reaching out to me and asking, with things like DALL-E, or Midjourney, the website most people know of that kind of aggregates a lot of these generated images: what’s going to happen to their livelihood? This is a very real question that a lot of people have. My father, who’s a jazz musician, played a show recently where the score was composed by an AI, and he’s sort of live-texting me like, oh, this one was kind of interesting, it sort of had these shortcomings, but it’s not bad actually. And I think that the problem we’re going to have to figure out, in order to understand if we skew exciting and optimistic with this or a little bit fearful and, in the extreme, maybe a little dystopian, has to do with: what really is the goal of these systems?” – Reed Coke
In this episode of Control the Room, I had the pleasure of speaking with Reed Coke about his decade-plus of experience teaching others about AI. He begins with reflections on how he got started. Later, Reed explores parallels between human learning and machine learning. We also discuss what new roles may emerge for humans. Listen in to reflect on what makes humans inherently unique.
Show Highlights
[1:50] How Reed Got His Start In Computer Science
[10:35] The Art Of The Possible
[17:25] Human Language Acquisition And AI
[24:00] Freeing Up Time With AI
[32:25] Generative Systems And AI
Links | Resources
Reed on LinkedIn
About the Guest
Reed is a Natural Language Processing expert with an industry and research background in conversational AI, text mining, and linguistics. He has a deep passion for how language, communication, and education about AI shape our world. Coke holds an MS in Computer Science from the University of Michigan, Ann Arbor.
About Voltage Control
Voltage Control is a change agency that helps enterprises sustain innovation and teams work better together with custom-designed meetings and workshops, both in-person and virtual. Our master facilitators offer trusted guidance and custom coaching to companies who want to transform ineffective meetings, reignite stalled projects, and cut through assumptions. Based in Austin, Voltage Control designs and leads public and private workshops that range from small meetings to large conference-style gatherings.
Subscribe to Podcast
Engage Control The Room
Voltage Control on the Web
Contact Voltage Control
Full Transcript
Douglas: Welcome to the Control the Room Podcast. A series devoted to the exploration of meeting culture and uncovering cures to the common meeting. Some meetings have tight control and others are loose. To control the room means achieving outcomes while striking a balance between imposing and removing structure, asserting and distributing power, leaning in and leaning out, all in the service of having a truly magical meeting. Thanks for listening. If you’d like to join us live for a session sometime, you can join our weekly Control the Room Facilitation Lab. It’s a free event to meet fellow facilitators and explore new techniques so you can apply the things you learn in the podcast in real time with other facilitators. Sign up today at voltagecontrol.com/facilitation-lab.
If you’d like to learn more about my book Magical Meetings, you can download the Magical Meetings Quick Start Guide, a free PDF reference with some of the most important pieces of advice from the book. Download a copy today at magicalmeetings.com. Today I’m with Reed Coke of KUNGFU.AI, where he is a consultant who strategically partners with clients on large cycles of change and innovation to help them compete and lead in the age of AI. He’s been a machine learning practitioner and educator for over a decade and specializes in natural language processing. Welcome to the show, Reed.
Reed: Thanks, Doug. It’s good to be here. Thanks for inviting me.
Douglas: It’s great to have you. And I’m so excited to be having this conversation, because I’ve been good pals with Kung Fu ever since the very beginning, when I was starting Voltage Control and Steven was starting Kung Fu and we were both at Capital Factory, getting things off the ground literally at the same time. It’s just been fun to be on my journey and watch y’all’s journey along the way, and to partner on a few things, including a course that’s coming up. So I’m really excited about this conversation today.
Reed: Yeah, I’m delighted we’re going to get a chance to talk because so far we’ve only really seen each other in passing.
Douglas: For sure. So let’s talk a little bit about how you got your start in this work of AI machine learning, and I believe even teaching kids what AI is.
Reed: Yeah, yeah. So funnily enough, I kind of fell into computer science unintentionally. I was originally very, very interested in really the question of language acquisition and how people learn foreign languages and what it takes to learn a language, what kind of brain processes you need as a baby to learn a language. And at some point I spent a long time studying psychology and neuroscience and education and linguistics. And then somewhere in there because of a gen ed requirement, I tried out programming. And then I sort of realized that by studying natural language processing, I can kind of answer the same question of what’s this fundamental thing of language, but use computers to drill more towards that answer than human studies or new teaching curricula or anything like that.
Douglas: Super fascinating. So did you find parallels between learning a spoken language, a written language, a human language, and learning computer languages?
Reed: So one of the most important differences, I think, between human language and computer language is the ambiguity, right? Well, let’s take the other side first. Computer language is almost totally unambiguous, and there are a couple of rules here and there to resolve potential ambiguity: math has things like order of operations, which determine what order everything is going to happen in, and computer programming works basically the same way. Human language is this evolved thing that is meant to be as expressive and communicative as possible without wasting energy to do that. And so we end up with silly things like homophones, all these things that make learning other languages hard, or just make using language at all kind of ambiguous and difficult. So it’s a good question in terms of what the similarities are, but I think that’s the key difference that separates them.
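(A quick aside for readers who want to see that lack of ambiguity concretely. This is a minimal sketch of our own, not something from the episode: Python resolves an arithmetic expression by fixed operator precedence, so there is exactly one parse, whereas a human sentence can often be read several ways.)

```python
# Minimal illustration: programming languages have one unambiguous parse.
import ast

tree = ast.parse("2 + 3 * 4", mode="eval")

# The parse tree shows multiplication binding tighter than addition,
# by a fixed precedence rule; there is no second reading to argue about.
print(ast.dump(tree.body))
print(eval("2 + 3 * 4"))  # 14, never 20
```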
Douglas: Yeah, for sure. And your point about this force towards simplicity in language is really fascinating, because I think about jargon a lot. When we’re facilitating, one thing you have to be mindful of is who in the room doesn’t understand what was just said, or whether people are walking around with different understandings of what’s been said because of a word that has a strong meaning but can be misinterpreted or interpreted in different ways.
Reed: Yeah, absolutely. Well, and it’s been fun too, thinking about words and their power. To jump ahead a little bit to something we might talk about later, this is sort of the power of what ChatGPT can do better than a lot of systems that have come before it. I won’t get into that too much. But going back to my beginnings, getting into natural language processing, I really learned that a lot of it was about: predict what word will come next, predict the three most likely words that could fill in the blank at the end of this. And to me, that’s just not what’s important about language. Even if you ask me just a yes-or-no question, I could say yes, I could say yeah, I could just grunt, and it all means the same thing. No one cares what words I’m using as long as we understand each other. The thing that has always fascinated me about words is that it’s not about the words.
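(To make that “fill in the blank” framing concrete, here is a minimal sketch of our own, not Reed’s code: a toy bigram model that counts which word follows which, then returns the three most likely next words. Production language models are vastly larger, but this is the flavor of the prediction task he is describing.)

```python
# Toy "predict the next word" model: count word pairs, then rank candidates.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # how often `nxt` follows `prev`

def top_next_words(word, k=3):
    """Return the k most likely words to follow `word` in the toy corpus."""
    return [w for w, _ in following[word].most_common(k)]

print(top_next_words("the"))  # ['cat', 'mat', 'sofa'] for this corpus
```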
Douglas: And there’s even this regionality to language. Even if we’re speaking the same language in the same locale, there can be regionality and maybe location awareness; that’s not quite the right way to describe it, but in my house we have inside jokes, we know things about each other, and so our language can be more terse. And it’s fascinating, because I think that kind of thing shows up a bit in computer science problems and coding: if we can constrain the problem, it’s easier to solve the problem. So even though the language of the code is different, the approach to the solution is simpler. So there are some similarities there too.
Reed: Yeah, absolutely. And I think literally so, in the sense that computer science is all about setting up variables and substituting a single variable for complex ideas. That’s sort of exactly what you’re getting at with the inside joke, or even what comes next: often a single word that stands for that whole inside joke, or something like that.
Douglas: That’s the simplification you’re talking about, and how a language evolves.
Reed: Yeah.
Douglas: So let’s talk a little bit about the course that you just developed and we’ve launched out into the world together. To me, the thing I walked away with was how easy it was to just understand the basics, and for folks to think about how it might apply to their work. So if you’re an executive who is curious about AI, potentially wondering if you’re late to the game, or even wondering whether this applies to you at all, I’m curious to hear some thoughts from you around what you found to be the most important things to think about when starting to have those thoughts.
Reed: Absolutely. So I’ve been a consultant with Kung Fu for two and a half years, but before I joined Kung Fu I actually ran my own consulting company for quite a bit longer than that. So I’ve done a variety of projects in quite a few different settings, and I’ve definitely found that, early on, adding AI to your business was this very hyped-up idea. A lot of people would pursue something like this with essentially the goal of having added AI, not the goal of AI having added value. And sure, there’s some value there in terms of the PR you would get from it, but AI is pretty expensive to develop if you want to do it right. At the end of the day, you really do want a clear view of how this AI is actually going to create value, capture more value, or better capture existing value for your business.
And the tools have improved a lot since I started consulting, so I think it is much easier and much more broadly applicable to everybody; there’s most likely a way that AI could improve your business at this point. This is obviously going to be a biased statement, but I do think talking to experts about this can help you find the right way to do it, and that’s absolutely worth it compared to seeing something on TV and trying to come up with a version of it yourself without a solid plan for how it’s going to result in your business running better.
Douglas: Two things kind of struck me. One was really thinking about the centers of value, or what’s the value stream in my organization: where are we deriving value today, or where do we see potential value in the market that we’re not exploiting? Plus, what data do we have access to? You marry those two up, and that seems like fertile ground to start investigating.
Reed: I certainly think so. Every time that I’ve seen AI be very successful, it’s usually because the company was sitting on a treasure trove of data that they knew that they needed to collect but didn’t necessarily know what to do with at the time when they started collecting it.
Douglas: And there always seems to be a data engineering problem for folks as well. When they have that treasure trove, it may not be organized or accessible in ways that are most useful at the time. So I’m really curious: is it still commonplace for that to be the starting point for Kung Fu and organizations like yours?
Reed: I would say extremely, and in fact so much so that Kung Fu has its engineering solutions group, but also a group that is purely strategic advising. It is possible to engage Kung Fu and get more strategic advice, with no code written, to help build up your understanding of how to make better decisions now, even if you’re not necessarily ready to do some big engineering project: how can we better organize our data, how can we capture the right data, all these kinds of things. And I was, inspired is kind of a strong word, but I think an accurate one, I was very encouraged when Kung Fu decided to start up the strategy practice, because in all of the less successful projects I’ve seen throughout my career before Kung Fu, truthfully, that was the piece that was missing. It wasn’t an engineering problem, and it was often a data problem, but in the sense that better planning would’ve led to better data.
Douglas: And that’s the place where we often like to partner, when we’re working with you and ideating around where things might go. I’m always fascinated watching the light bulbs go off in these workshops where we’re exploring the art of the possible, and some executive has the epiphany: wait, look at what I’m sitting on. It’s super powerful when that moment strikes, and to be there facilitating that moment is pretty incredible.
Reed: It’s pretty fantastic. And I will say, sometimes in the more corporate setting that moment can be disappointing, because I taught kids AI for so long, and with kids that moment sounds exactly the same every time. It turns out CEOs don’t really do that publicly. But it’s nice either way.
Douglas: No doubt. And so the kids that you were working with, how did that come about? What age groups were you working with and what was the nature of that?
Reed: Yeah. So the company that I taught through, Hello World Tech Studio, is an Austin-based education company, and their goal is not necessarily to train the next generation of engineers but to give students the ability to solve a problem with an engineering mindset. I really like that, because I think what we’re seeing about AI more and more every day is that it can apply to anything; the AI is part of that, but the domain expertise is part of that too. Even for me, pairing AI with language and education has been really powerful compared to just knowing AI in isolation. So I met the founder of Hello World pretty much through happenstance, and it turned out they were looking to design a Python course, and specifically an AI course. So I hopped on board, and we started this pretty wild adventure of developing a curriculum that we taught.
Hello World serves kids between third grade and the end of high school. I did not necessarily think I would get to a point where I was teaching machine learning to kids who haven’t taken algebra, but going back to your idea of coming up with different ways to represent the same thing, that was a hugely satisfying and rewarding challenge: coming up with effective ways to get these ideas across. For me, it was take six math classes and three stats classes and get a college degree and blah, blah, blah. For these kids, it’s: they know long division, what can we do with that?
Douglas: Yeah, that’s amazing. And it’s interesting because it’s going to create an entirely new ecosystem; not everyone is going to be sitting in a role where they’re doing complex calculus and other deep math to maintain these systems, to grow them, to think about how to use them. It’s going to take all sorts of skill sets. And when I hear how you’re working with the students, maybe before they’ve had algebra, I would imagine some of them were probably inspired to go learn more math after taking the courses, because it opened up a whole new possibility for them.
Reed: I sure hope so. For myself too, I definitely remember days in math class thinking, when am I ever going to use this? And ironically, those are the exact days now where I’m like, I really wish I had paid more attention that specific day, because I’m currently trying to use this. So yeah, anyone I can prevent from having that experience, that’s definitely a win.
Douglas: The one for me was linear algebra.
Reed: That’s exactly it.
Douglas: I was in linear algebra thinking to myself, when am I ever going to use this? And then fast forward to me being a software developer years and years and years later and I’m thinking to myself, oh wow, now I finally see an opportunity. But it took many years for me to find one, and now I’m sure the applications are boundless when you’re talking about AI and data science kind of applications.
Reed: Oh yeah, linear algebra together with statistics and multivariable calculus, those are pretty much the foundation of all machine learning. And linear algebra was exactly one of the classes I was thinking of when I said that, because at that time I was still an education major interested in language acquisition research, thinking, well, I like math, I’ll just do one more. But there was no real purpose or motivation behind it. So much so that I was recently talking to a friend of mine who is also a machine learning researcher, and we were talking about how I came to linear algebra backwards, by already understanding machine learning and NLP and then having to figure out the linear algebra part.
So I end up in this weird space. We were joking that it was like, oh, I don’t really know what a determinant is, but if you could give me a text document, do these NLP things to it, and then tell me what the determinant represents, then I’ll understand it. And he’s a very strong math guy, laughing about just how backwards-derived a lot of my knowledge in the area is. So I imagine I probably passed some of that on to my students.
Douglas: Amazing. I think that to a degree we all have that. We all have the ways that we learned things, the way we laid down those understandings in our brains, and procedurally how we approach stuff. All that conditioning is going to impact how we reason about stuff.
Reed: Yeah, absolutely. And I wonder, would it be interesting to see if that’s reflected in machines?
Douglas: Yeah, it’s really fascinating. I was going to come back to your love for language and how humans learn language, and talk about general intelligence for a second, as I learned that term: when we move past AI and ML just being able to solve business problems, clustering, and these kinds of things, and now this thing is approaching and surpassing human intellect and the ability to reason about things generally. As I think about it, it’s easy to consider it learning almost like a child would learn, but probably more rapidly. So I’m just curious how much you’ve taken that early passion around how humans learn language, and how much it shows up when you’re thinking about training models, or how you dream about even more sophisticated models in the future.
Reed: I am so thrilled you asked this question. I have a lot of strongly held beliefs about this, but to sum it up the best I can: I believe very much, me personally, that the reason babies can learn language so efficiently is because they can do the statistical pattern matching that is the main driver behind a lot of machine learning of language, plus they understand the concept of social reward. They want their parents to be interacting with them, they want their parents to be happy and engaged by the things that they are doing, and they want, whatever they did, the food to arrive. They have these needs, and this kind of social outcome of the actions they take. I like to jokingly call it a conspiracy theory, but my own personal hypothesis is that this is the thing that really matters.
And this is the thing that, up until a year ago or so, machine learning was not really looking at in particular, because it was all about predicting words: just get this word right, get this word right, get this word right. But what’s so interesting about the perpetual elephant in the room in my life, ChatGPT, is that it does a little bit of the what-words-should-come-next in terms of how it’s trained and what kind of information it’s latching onto, but the actual signal it’s learning from is: of these four ways I could answer the question, which one is most useful to the person or system I’m talking to? And I think that is actually a really key difference in the goal the system is moving toward. It’s much more of a social goal than it really ever has been.
Douglas: That’s super fascinating, and I hadn’t really thought about it from the perspective of it trying to align with that audience’s needs or desires. But I guess, is that really the goal of the adversarial part, the adversary trying to say, no, that’s not in the best interest, and kind of shooting it down if it doesn’t meet that criteria?
Reed: I think where it really comes from is the ranking, in terms of how the actual training data are labeled by all of the people who contributed to the data set. I think the key fact is that there’s no one single right answer. There are just relatively better answers and worse answers, but all of them might have some merit.
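(A concrete aside, our sketch under stated assumptions rather than OpenAI’s actual code: reward models for this kind of human-ranked data are commonly trained with a pairwise preference loss, which only pushes the preferred answer to score higher than the rejected one, exactly the “relatively better, not uniquely right” framing Reed describes.)

```python
# Pairwise preference loss (Bradley-Terry style), as used in RLHF reward models.
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """-log sigmoid(chosen - rejected): low when the human-preferred
    answer already outscores the rejected one, high otherwise."""
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

print(preference_loss(2.0, -1.0))  # ~0.05: ranking agrees with the human
print(preference_loss(-1.0, 2.0))  # ~3.05: ranking disagrees, big penalty
```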
Douglas: Now that we’ve gotten on the topic of ChatGPT, what do you think about the landscape? It’s really fascinating to me, because there are so many different tools emerging: folks that are building their own models, folks that are building stuff on the GPT-3 API. I even got an invite just yesterday, because I’d been on the waiting list, for Notion’s AI service. So literally when I’m writing documents in our wiki, I can have it suggest things to me. It’s super fascinating to play with. So I’m just curious, what are you seeing as far as where the landscape will shift? Certainly Microsoft’s invested pretty heavily in OpenAI, Google’s got its own stuff it’s working on, and there are rumors it’s going to launch some things around DeepMind or what have you. As someone who is deeply in this world, where do you think things are headed?
Reed: So one way to look at this, using a framework borrowed from Power and Prediction by Agrawal, Gans, and Goldfarb, is that they describe three different classes of AI solutions: a point solution, an application solution, and a system solution. A point solution is kind of plug and play: drop this in, take out whatever manual process existed, put in the AI, and it does it better. I’m only halfway through the book, so we’ll go with what I’ve read so far. But they talk about how these are often the easiest solutions to envision, but they also depend a lot on the rest of the context at the business facilitating this AI doing better. There are a lot of cases where, if you drop an AI in where a human was making decisions and just tell the human to do whatever the AI tells them, practically speaking, people are not going to be very happy with that, and that’s probably not the best way to do it.
And in the book, they sort of claim that’s because the system, the larger context, is not set up in a way that facilitates this point-for-point replacement. The next level they talk about, application solutions, I think kind of represents ChatGPT: it’s sort of a self-contained thing whose output can be useful, but it’s not quite plug and play in the same way, because of the degree to which all of its own information is already contained; it just comes in a box and you can use it. I haven’t gotten to the part of the book about system solutions yet, but it sounds like it’s going to be exciting. The one thing they have mentioned is that designing a system solution usually requires a pretty complete overhaul of how you even think about the thing you’re trying to address.
Douglas: And I would imagine it’s going to involve a lot of systems change.
Reed: Oh, yeah.
Douglas: Because anytime we’re overhauling systems or introducing new systems, it’s going to have, by definition, a system effect. And it’s going to be temporal in nature: there are inputs, outputs, and variances, and we have to think about all of them. And there are a lot of people in the mix who are going to be impacted, both emotionally and maybe even physically, if we’re talking about factory changes and things. So you have to be really mindful of that level of change.
Reed: I mean, frankly, in a lot of ways it’s kind of a frightening time. So both of my parents are artists and I’ve grown up surrounded by musicians and artists of all different kinds, and a lot of them are reaching out to me and asking, with things like DALL-E, or Midjourney, the website most people know of that kind of aggregates a lot of these generated images: what’s going to happen to their livelihood? This is a very real question that a lot of people have. My father, who’s a jazz musician, played a show recently where the score was composed by an AI, and he’s sort of live-texting me like, oh, this one was kind of interesting, it sort of had these shortcomings, but it’s not bad actually. And I think that the problem we’re going to have to figure out, in order to understand if we skew exciting and optimistic with this or a little bit fearful and, in the extreme, maybe a little dystopian, has to do with: what really is the goal of these systems?
And I don’t mean the objective function of the AI; I mean the goal of the company employing the AI. If the goal is simply to create as much short-term profit as possible, and to throw however many GPUs or TPUs at the problem in order to do that, then personally, I’m a little afraid of where that’s going to lead us. But if the goal is to do these tasks that are to some extent drudgery and free up people to do more interesting things, short term that’s still disruptive and very challenging; long term, it’s a little easier to see how that might lead things in a good direction.
Douglas: It kind of comes back to the value piece you were talking about: thinking about the value we’re trying to derive for ourselves, thinking about the system we’re in and the impacts we have on that system. I hadn’t really thought about your point: what’s the approach we’re going to take to get there, and how mindful are we going to be in taking that approach?
Reed: Absolutely. It also, I think, is going to highlight this very interesting thing. As I mentioned with ChatGPT, one of the most interesting things about it is that it’s basically trying to generate a chat response that one of its annotators would rank highly, according to whatever annotation guideline the OpenAI workers use. As we get systems that can more and more powerfully optimize for the thing they were trained on, we will run into this situation where the output of the people who create the data has this immense effect, a systemic effect like you’re saying. So how the objectives are defined, how usefulness is defined, what the guidelines are for ranking something: I think these are all things that will become tremendously impactful as new, incredible technologies like ChatGPT come out and are easily applied to all these different fields and problems.
Douglas: There’s also a giant movement happening around titles and roles that are emerging in this space. It was just a month ago, I think, maybe a little longer, that I first heard of the role of prompt engineer, and it really spoke to me in a big way, because in the world of facilitation our job is to craft really good questions. And now, in the world of AI and machine learning, or specifically in this ChatGPT sector of that world, there’s this prompt engineer whose job is to think about how we craft prompts that are going to make the system more usable and generate the best results possible. So I’m just kind of curious, what roles have you seen pop up, and any thoughts on what’s emerging and what’s happening there?
Reed: Sure. Well, for one, you’re nailing it as a prompt engineer. These have been great questions and I really appreciate all the thoughtfulness in them. The other thing is, I actually had to address this in the course itself, because one of the topics in the course is who you need on your team in order to do something like this. Not to give it away, it’s worth watching, but the ultimate takeaway is that we’re not at a time in the world where the titles are set. You really can’t just look for titles and say, oh, I need two data scientists and one data analyst and three data engineers, and then we’re good, anyone with that title, we’ve got it. That piece of the course talks about how these titles mean different things in different places, that what you really need is to cover the competencies, and it breaks down a little bit what those competencies are.
But it’s funny you mention titles that don’t exist in other places, because up until reasonably recently, in her first job out of college, my wife had the title of AI trainer, which is an insane job title if you go back even 10 years, and it’s not clear at all what it means. I’m sure there are other people somewhere out there with that title who’ve done completely different things than she did. So yeah, prompt engineer makes sense, and AI trainer is a whole other interesting thing. I think you’re going to be right on the money: there are going to be a lot of new, weird titles, and, to tie it back to the book, as these new systems come out with fundamentally new structures, we’re going to get new titles to match.
Douglas: And I think people are going to need to rise to the occasion.
Reed: Yes.
Douglas: What is inherently unique about us as humans that we can contribute to the new scenario that is emerging? Because one thing about us is that we’re very adaptable. So when the systems emerge and get really good at evolving and doing what they do, how are we asked to show up, and how can we respond to that request? And to bring it back to your point a moment ago around value: really focusing on what the company is trying to achieve through AI, and even the assessment of what is accurate, what is good, what is helpful toward that value, that’s how we show up as humans. And that doesn’t require deep mathematics. I mean, sure, there might be deep mathematics required to implement some of it, but to make the decisions and go through the process of deciding how we should shape it and why we should shape it, I think that’s how we will show up, and those will be the nature of some of the roles that will be requested of us in the future.
Reed: I completely agree, and I’m sure it won’t surprise you that I’m about to say this, but I do think it is kind of our human responsibility to be able to make relatively informed decisions. Not in terms of learning all these deep mathematics or anything; you can get by just fine without that. I didn’t start there, and I have it now. But I do think it’s important to understand the fundamentals of AI, because it’s going to be everywhere and it affects everybody today. This is another thing we talk about in the course: whether you’re aware of it or not, plenty of the things you interact with are AI-based. Another way of looking at this is the issue of students using AI tools to cheat in their classes, which I’ve read probably at least 10 news articles about in the last week, really since ChatGPT came out.
And there’s this interesting issue of, well, what do you do about that? Some professors are just saying using AI is cheating. Other professors are saying, yeah, you’re going to have to use AI, and I’m going to do the work to make it part of the course and teach you how to use it appropriately in this field. Most probably just don’t have the time to even address the question, because that’s the reality of teaching. But I think the lead engineer at OpenAI compared ChatGPT to a calculator, which in some ways is a convincing comparison, because computing, for example, is not a job anymore; we have computers and we have calculators, and the actual math can be done automatically. But I do think it’s a slightly misleading comparison, because the thing about a calculator is that whatever the calculator tells you is correct, and that’s not really the case with ChatGPT. It’s doing something very different: it’s trying to be useful, whatever that means.
Douglas: I will say this, a lot of people carry the opinion that ChatGPT is just really good at bullshitting.
Reed: Yeah. I don’t disagree with that.
Douglas: And so if you think about it, I’ve always been a fan of generative design. Autodesk did this years ago, where they had systems that would generate 500 different chair designs and then a designer would flip through them and be inspired by them. What a great way to brainstorm and ideate: have this generative system give you these moments of inspiration. That’s really powerful. So if the system’s bullshitting and your job is to look through and go, this is compelling; and coming back to your point around AI-generated scores, as a composer, what if you had the AI generate lots of variants of things and you reassembled them? It’d be no different than Kanye West doing a bunch of sampling and being really good at it, right? So I think it’s kind of forcing us to get creative in new ways, and I think that’s really powerful. And I agree, assuming that it’s correct is probably not a wise thing to do.
Reed: Yeah, I do think that trying to gain enough understanding of these systems to see the many different ways in which they’re bullshitting simultaneously is going to be an important thing for human beings in the future. I think we do ourselves a disservice to claim that it’s just going to replace composition, that it’s dangerous and bad and there’s nothing good to come of it, because, exactly like you’re saying, for someone who’s using it intentionally it can really be a big help. And on the note of generative solutions in general, there was a big thing that happened recently: a system that can generate protein foldings in a much faster, more useful way than people have been able to. And while it’s a little bit of a stretch to compare those two, there are a lot of important things that can be done better and really don’t need a human hand doing them. I think it comes back to the question of the goal.
Douglas: And how are we tuning those detectors to make sure our outputs are aligned to the goal? I just started thinking about combining systems too. If you’ve got a generative system that can generate output, and you’ve got another system that can help you verify correctness, it’s almost like different grits of sandpaper. You’re just using different tools and layering them, and that’s a form of craft. It’s not that you’re not a good woodworker if you’re not using hand chisels and stuff; that’s my belief. But to your point, we’ve got to be careful with some of the metaphors and analogs. People might take the wrong meaning away, because you’re right, a calculator is accurate and gives you the right answer.
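(A minimal sketch, ours alone, of that generator-plus-verifier layering: a cheap “generator” proposes many candidates, and a strict checker keeps only the ones it can verify. In a real pipeline the generator would be a generative model and the verifier a domain-specific correctness check.)

```python
# Generate-and-verify: plentiful, unreliable candidates filtered by a strict check.
import random

def generate_candidates(n: int) -> list[float]:
    """Stand-in for a generative model: cheap, plentiful, often wrong."""
    return [random.uniform(0.0, 2.0) for _ in range(n)]

def verify(x: float, tol: float = 1e-2) -> bool:
    """Stand-in for a correctness checker: here, is x close to sqrt(2)?"""
    return abs(x * x - 2.0) < tol

candidates = generate_candidates(10_000)
accepted = [x for x in candidates if verify(x)]
print(f"kept {len(accepted)} of {len(candidates)}; sample: {accepted[:3]}")
```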
Reed: Right. I was on a South by Southwest panel, I think two years ago, I have trouble with time these days, where one of the other panelists was studying students’ perceptions of AI. One of the things they were really interested in was: do kids understand that AI doesn’t necessarily have a chrome body that sort of looks like a human and probably has a round head? Do they understand that it is this abstract thing that can exist, that it’s not a robot, that they’re not the same thing? So that was a very interesting question that ties a lot to what you’re saying here.
Douglas: Yeah. The mental models that people have for anything in the world, it’s super fascinating. In fact, there’s this body of work that we like to use called clean language, and it’s not really about avoiding profanity, it’s about avoiding metaphor.
Reed: Okay.
Douglas: And metaphor can often obscure. It’s kind of your point earlier that I really latched onto, which is how language morphs to become more simple and reduce cognitive load, if you will. So people use metaphor. Have you ever heard someone in a meeting say something like, it should feel like magic, and someone else say, it shouldn’t be magical? They could be describing the same system but meaning totally different things; they’re not disagreeing when they say those two things. And that’s where metaphor can really get in the way and cause that ambiguity you were talking about with language.
Reed: Yeah, I definitely think so. And as someone who’s been an AI educator for a long time, it’s something I think about often, and in more worried moments, it’s something I think we need to get better at. Not necessarily clean language, but just being a little more thoughtful about the metaphors we use for AI, because the metaphors are everywhere, and I assume, in fact I know from a lot of my conversations, that it’s unclear to a lot of people where exactly the line is between sci-fi and reality these days.
Douglas: Yeah, for sure. Sometimes it takes a while for the light bulb to go off for folks; you have to see things from different perspectives. It’ll be really interesting to see how the metaphors adapt as we go. And with that, because we’re running out of time quickly, I want to make sure we shift a little bit and think about the future. By nature, this topic is fairly futuristic, and you already mentioned that a lot of people have a hard time even talking about what is real versus sci-fi. Still, I want to talk a little bit about where things are going when we look out on the horizon 5, 10 years. Putting your forecasting hat on, where does this all lead?
Reed: Forecasting is something I often try to avoid, having come from a hard research background, but I’ll do my best. I am actually pretty interested in this point-application-system solutions framework that the Power and Prediction book lays out. My suspicion is that with everything we’re seeing right now, it’s hard to take more than one step forward at once, besides, I guess, a few notable great leaps for humankind over the years. I do think the pace of these steps will continue accelerating, and if you look over the last year or so, you can already see that these major releases of new AI tools are happening faster and faster. I suspect that means that even within five years, we’ll have more self-contained AI solutions that actually work, and we’ll start to have some system-level things that never could have existed before AI got to the level of maturity it will be at by that time.
The example they give in the book of a system-level change is the electrified factory, because factories used to be powered by steam, and in order to build an efficient factory, you had to build it close to the power plant where the steam came from. Once electricity became a thing, you could put your power plant anywhere, redesign your floor plan, and do all these things that wouldn’t have even made sense to ask about until electricity was in the picture. So, and this is the classic researcher prediction, I don’t think we can predict this right now, because the fundamentals that go into building these things will be so different even within five years that it’s just hard.
It’s impossible to say right now. However, you mentioned at some point: is it too late to get into this, or is it too early? Implied in what I just said is that if you aren’t taking steps in that direction, you can’t even make sense of the things you’re going to need to do in the relatively near future. So I think it is crucial to keep stepping in that direction in order to be poised for whatever this crazy future is that we have coming.
Douglas: Yeah, I think the factory analogy makes a ton of sense, and it would be a great activity for people. We actually do something really similar; we call it 1876.
Reed: Oh, yeah.
Douglas: And basically we have people get in groups of two, and one person in each group will be from 1876, and the other person has to explain to them what an iPhone is. Going through that activity is really eye-opening: to think about, wow, they were in a totally different mindset. And the rapid pace of advancement basically means that a few years from now, people could look back at us and say, wait, you weren’t thinking about this? You weren’t even considering this as an option for something to do?
Reed: Oh, I drew a colon and a closed parenthesis on a whiteboard once for kids, and they were like, what are those? Because they only know emojis. That’s a very silly example, but I think it’s exactly what you’re talking about: there are just going to be these things that we can’t even conceive of at the moment.
Douglas: Yeah. Your mind won’t allow you to go there, because it’s almost like we treat certain things as assumed, like they’re physics. And as soon as the paradigm shifts, it’s like, oh wow, we’re immediately open to new possibilities. And when we’re working with those new, quote-unquote, laws of physics, we start to imagine whole new solutions. Using that system from the book, you can imagine that when systemic solutions start to emerge, entirely new point solutions start to become reasonable, and it becomes this virtuous cycle of progress.
Reed: Yeah, I think it’s going to be quite something, and I just sincerely hope that we’re aiming for the right goals as we get closer to it.
Douglas: No doubt. Be careful what you wish for.
Reed: Yeah.
Douglas: Well, I want to make sure that we leave you an opportunity to give our listeners a final thought.
Reed: Well, thank you. I’m terrifically excited about this course that we’re launching together, and I think one of the reasons I wanted to help make it, to kick back to metaphors, follows the central metaphor of Black Mirror: it just shows you what’s there. AI shows you what’s already there; it creates more of whatever you teach it from. And as a result, I think communication, and especially cross-functional communication, is only going to become more important as we move into more and more of an AI-driven world.
And what I really like about our curriculum is that it aims to teach you enough AI to understand how you might begin to implement it at your own company, while emphasizing that communicating with the people who are really in the problem, the people who really understand things, and the people who are going to be affected by your solution is an integral piece of actually building something that’s valuable, not just something that looks good. It’s not that different from generally good software design principles at this point, I think, except that it’s even more critical, because the AI is going to be this cold reflection of whatever you give it, even if your intentions were good.
Douglas: Yeah, I invite everyone to be reflective on that point, and as you embark on using AI and creating your own, take Reed’s words seriously, and also be optimistic about where things might go. Reed, I highly enjoyed working on the course with you; I’m glad it’s out and hope people enjoy it. And I wanted to say thanks again for being on the show. I really appreciated chatting with you.
Reed: Yeah, thank you. It’s been a blast and I hope our curriculum helps a lot of people.
Douglas: Thanks for joining me for another episode of Control the Room. Don’t forget to subscribe to receive updates when new episodes are released. If you want to know more, head over to our blog, where I post weekly articles and resources about radical inclusion, team health, and working better: voltagecontrol.com.