A conversation with Stanford University’s Christina Wodtke


Christina Wodtke, author of best-selling book Radical Focus and a lecturer at Stanford University, has helped grow companies such as LinkedIn, Zynga, Yahoo, and The New York Times. She speaks worldwide about humanity, teamwork, and the journey to excellence. Christina describes herself as a “curious human with a serious resume.”

I had the pleasure of speaking with Christina about the work she does with her students, the importance of lining up one’s values with the values of the company they choose to work for, and the ethics of innovation.

Douglas: Let’s start with this – tell me a little bit about how you got started in the work that you do.

Christina: Oh, that’s a very difficult question considering that I’ve taken the scenic route to get here.

Douglas: Yeah, that’s great.

Christina: I went to art school and moved to San Francisco, and painted, and waited tables for a few years, just chilling out until the internet showed up and a friend of my boyfriend said, “CNET’s building a Yahoo killer. Do you want to work on it?” I was like, “I’m sick of what I’m doing. I’ll try it.” I was reviewing 50 websites a week for the directory and I just fell in love with the web so hard. It was the best way in the world to find this universe where people are creating things as fast as you discover them. One day I wondered, just how hard is it to make a webpage? So, I taught myself some HTML and I started working at eGreetings, and from there, I became an information architect and a design manager at Yahoo, and built a startup and sold it to LinkedIn.

Douglas: Oh, cool. What startup?

Christina: It was a tool for blogs that have many people working together in an editing process. At the time, we were looking at all the blogs and we figured the ones that made money… like say TechCrunch, which I don’t think anybody thinks of as a blog anymore, but they had editing processes and no blog software supported it. So, we built PublicSquare and then… I actually have a silly story, which is I had just read The Four Steps to the Epiphany, and this is a long time ago. This is before lean startup, before all that stuff. I’d read it and I was like, “Shoot! I think we’re doomed.” I’ve always felt that authors are just people, which means I have a hope of talking to them.

So, I try to look up Steve Blank and I find a phone number for him and I call and it turns out it’s his home phone number. His wife answers it, she goes, “I think it’s one of your students, Steve.” I talked to him and he says, “Do you want to come out to my place in Half Moon Bay?” I drove out there. I must’ve talked to him for two hours and at the end, I was like, “I’m definitely doomed. I think I’m going to try to sell it.” So, I shopped around and I was lucky enough that LinkedIn was under 200 people then. I met with Allen Blue, and met with Reid Hoffman and just really loved them. They mostly just wanted me and my engineer, although we ended up using the base of the software to build an events product, which is now gone. Funnily enough, they just released another events platform. Anyhow, it all started with me saying, why not try talking to Steve Blank?

Douglas: Yeah, that’s interesting. I’ve recently stumbled on that just in the journey of writing and looking for mentors and I found that it’s really fascinating. I guess maybe not many people reach out to them. Especially when you tell them that you’re writing, then they’re like, “Oh, yes. I want to talk about writing. No one else seems to talk about writing.”

Christina: Oh, yeah. And it helps that you have platforms like social media. In my case, I’ve just always felt really comfortable reaching out to whomever. And people do respond, as long as I do the research and get all the dumb questions out of the way. I find that almost anybody will talk to you if you’re asking really smart questions.

So after I worked at LinkedIn, I worked at MySpace, and I worked at Zynga. I got so burned out and I quit Zynga. This was the first time I quit without having the next thing lined up.  I was like, “Oh god, what am I going to do? I hate everything and I hate everybody.” I was really burned out. But I’ve been a lean startup person most of my life and I started coming up with hypotheses for my future like maybe I’d work in the food industry. So, I took the six-week culinary program and I’m like, “No, this is too hard on the body.” And I’m like, “Maybe I want to work in a food startup,” and then I worked as an adviser for a food startup. I’m like, “No! This is ruining my love of food.” Margins are really tight in the food industry.

And then I thought, “Maybe I’d like to teach.” So, I taught a night class at General Assembly and I’m like, “Yes! I love teaching.” So, from there, I went to CCA, California College of the Arts, and now I’m at Stanford. I feel like a lot of people would be like, “I hate my industry. I’m going to go become a teacher,” and then get a PhD. I found that it made a lot more sense to figure out if I liked teaching before I went down that six-year path, and it turned out it wasn’t really needed. I feel so much like an imposter at Stanford, but my life experience was enough for them to feel like I had something to give to students.

Douglas: Very cool. How is the work you’re doing with students different from what you see in the industry? Because I’m always fascinated by what academia is looking at versus what practitioners are doing and what startups are actually up to.

Christina: I think my current life has more in common with my past life than it is different, although I miss launching things. I really love putting product into the world. That’s why I write books, I think: there’s no feeling in the world better than working on something for months, and months, and months and then releasing it to the world to see what happens. I think books scratch that itch for me now. But a lot of what I was doing … by the time I joined LinkedIn I was a product manager and then I was a general manager, and if you’re a manager, so much of your work is about understanding people and growing people that I feel like there are a lot of connections between managing people, teaching people, and coaching people. I’ve managed to take a lot of techniques from managing and from coaching into the classroom.

So, a lot of people think that you learn by being talked at and that has been proven not to be true at all. We don’t learn with our ears, we learn with our hands. So, I teach in this classic art school method of studio where I give students projects and they have to go out and interview potential users, and understand what their needs are and what their problems are, and then turn that into an idea for a product to design. But within that structure, it’s always about being really committed to the individuals who have come into your classroom, and understanding what their goals are, and how they want to grow, and who they want to become, and then teaching them the things that they don’t even know they need to be successful.

It makes me think about back when I was managing, the difference is I can’t fire a student. I can flunk them, but I can’t fire them. But when I was managing people, a lot of times, I’d have a problem person and just like with a problem student, you want to get down to why aren’t they succeeding in the way that they want to? They want to be good at their job. I want them to be good at their job. So, there’s this interesting mystery of what’s going on here, right? Are our goals misaligned? Did they not fully understand what the goal of the project is? Are they missing some skills? Do they need a little extra help? Is there some cultural divide where their model of how the world works is not matching my model of how the world works?

So much of what we do in tech is working with large teams, right? And so, when you’re in the classroom, it’s still the same question of how do you try to make sure that everybody’s meeting their goal? My goal for them and their goal for them. I think that if people would change how they think about management and stop thinking of it as do what I tell you to, but think of it more as a group of really smart, amazing individuals that you are providing leadership and coaching to, I think most companies would do better as well.

Douglas: Yeah. It made me think of complexity-informed organizations and self-organizing teams. I’m curious how that … I mean, I’m totally a fan of everything you just said, so I’m curious how that relates to self-organizing teams, because they have that extra dynamic of there maybe not being a person who is in charge of figuring that out.

Christina: Well, I’m also a fan of self-organizing teams to the degree that after my first book, Radical Focus, did well, a lot of people came and said, “Well, what about this and what about that? How do we deal with conflict and how do we deal with performance reviews?” So, my new book, The Team that Managed Itself, is much more about that and it is … How can I put this? You always have to be aligned with something, right? So, if you have a self-organizing team, they are organized, brought into a team in service to the company in some way. So, if you’re a team that’s been brought together to improve the acquisition of new customers or if you’re a team that’s been brought together to perhaps create a new offering and a new market that you’ve never seen before, teams are usually organized for some purpose, but then often you have to reflect back to your manager, your boss. What does that really mean? How do we make that concrete?

That’s a good place for OKRs, where you say, “Okay, our objective is to move into France.” Well, it’s not enough just to be in France; you’ve got to actually have customers who love you, and are excited by you, and doing good word of mouth, and you have to be making money. That way, if you say the objective is to be embraced by France, one key result is going to be word of mouth and another key result is going to be revenue, et cetera. You can bring that to your boss and the boss will say, “I agree with this, but I don’t trust this metric, or I don’t think that’s important. Don’t worry about revenue right now. Let’s really focus on awareness in the market.” And it becomes this healthy conversation where the team still owns the goal, but they’re not misaligning with what the company needs from them.

In the classroom, you’ll get a team and they’ll be working on a project and I think a lot of teachers make the mistake of just saying, “Okay, go. I’m going to grade you later.” And I feel like it’s the same thing. I need to check in with them and say, “Okay, here’s our learning goals. Are you really thinking about this,” or, “Is interviewing really the right thing to be doing here? Maybe we want to do a cultural probe instead.” But I think that in every case, even though the manager is not telling you what to do, the manager is constantly pruning, making sure that you’re pointed in a good direction and that you’re aligned with whatever organization you’ve attached yourself to, right?

Douglas: That’s interesting. It reminds me of General McChrystal talking about how, in this space, the leadership becomes more like a gardener. And your talk of pruning made me think of topiaries. You don’t actually make it grow, but you push it in that direction, and it doesn’t sting.


Christina: My daughter just got into bonsais recently, which is crazy, she’s 14, and it’s really interesting to watch her learn, because you don’t water bonsais the normal way. You put them in a couple of inches of water, let them sit there, and they soak up what they need. Then you give them a little bit of plant food, because in that little pot they’re in such a weird environment that they’re not getting a lot of nourishment. And you get to choose whether you want to trim them or not. Do you want to let them grow in their natural way, or do you want to trim back the bad habits?

And I feel like so much of that sounds like teaching or managing, where you provide resources and you support people as they need to be supported, considering the situation. Maybe you’re a good team in a toxic company, where you as a manager might need to provide a little more coaching and support about how to navigate the complexities of an unhealthy situation. And then the question is, do you have somebody you can just let go? Not let go as in fire them, but let them just go crazy and be their own beautiful self. There are people like that, but then there are other people you really have to be giving clear feedback, because their behavior is perhaps not serving the needs of the team and the organization. So, I thought of bonsais as being a lot more hands-on, and it turns out they are, and it turns out that it’s really about coaxing them and supporting them much more than trimming them and cutting them back.

Douglas: Yeah, that’s great. You touched on purpose gracefully, and that’s something that I think is so important and so critical. Sometimes I think it comes from a sense of insecurity or laziness. But you encounter so many students, and you mentioned purpose, so I wanted to just hear any experiences you have around that.

Christina: Yeah. Well, one of the first reasons I got into OKRs is because before that, if you have a KPI, it’s like, “Oh, we needed to make $5 million in revenue,” and you’re like, “Uh-huh, okay, $5 million in revenue.” It doesn’t feel very purpose driven. But if you say that you want to delight customers or you want to reinvent healthcare so it’s more humane, that’s a purpose, and then you still have the metrics to know whether you’ve achieved it or not. So, that’s one thing that I like about OKRs.

I think finding purpose is always tricky. I see with my students, while they’re here, they so want to make the world better. Sometimes they’re angry, and sometimes they’re frustrated, and sometimes they’re excited. The students I have are just full of all this energy and they want to make the world so much better, and then when they try to step into the work world, they really struggle to keep that, and it’s so hard for them because they’re like, “I guess I’m working on the AdSense team at Google now to pay my student debts.” They just get deflated because they went from, “Hey, how do we make meaningful tech that’s going to fix the electoral system,” in some sort of class project, to, “Is this shade of blue or is it that shade of blue?” It’s monetized.

It’s heartbreaking, and I think that it’s really important as a person new in the workplace to try to find the meaning within your work, and if you can’t find the meaning, really ask yourself, “Can I wait a little bit longer before I take that first job?” As an example, maybe you’re working at AdSense, which maybe you hate because you think ads are evil, well … let’s say you’re very excited to be part of Google and you know that they’re doing these crazy, amazing things with machine learning to do early detection of disease, which a friend of mine is doing over in another corner of Google, and you’re like, “Wow, I’m on AdSense.” You could be depressed. But on the other hand, the only reason Google has money to do these really amazing, speculative projects that could save lives is because they’re making money off of AdSense.

So sometimes, you have to recontextualize yourself and say, “The work is not directly positively impacting, but I’m part of something that’s good.” The problem is when you’re part of something that’s bad and that can be a lot harder. I see students who join big companies and then they quit because they’re like, “Oh my god, I had mixed feelings about this company before I went there, but then after I got there I realized that this is a company that just doesn’t want to make the world better and isn’t interested in that as a problem.”

If your values are misaligned with your company’s core values, you will experience cognitive dissonance and you will eventually become ill and even quit. I’ve seen it so often. I experienced it when I worked at Zynga. A student experienced it. I don’t want to say one set of values is bad and one set of values is good, but if you’re working somewhere where people are very interested in just making money because they want to feel safe and have stability for their family, but your values are all about making everybody in the world safe or providing healthcare for somebody, you’re just going to have stressors. You’re just always going to be exhausted and frustrated because you don’t share the same core values as your company.

So, we talked about purpose, but I don’t think we talk enough about values. Purpose is borne out of values, and I think it’s well worth it to spend the time looking at a framework, perhaps like Shalom Schwartz’s ten universal human values, and ask yourself, what are the ones I really value? Do I value safety? Do I value making the world better for everybody? Do I value security? What is it that matters to me? And then look at these companies and say, “Considering the choices they make in everything from financial structure to what products they ship, what do I think their values are, and am I going to have a hard time being there?” I know that not everybody gets a choice of work, but if you can find your way to working somewhere that shares your values, you will be happier, you will be healthier, and you will be more successful.

Douglas: Absolutely. Is that something that you actively work with students on?

Christina: Yeah, this is something that I’ve … it’s actually a fairly recent development over the last year or so where I’ve been using Schwartz’s ten universal values to start talking about what your values are and what company values are. And I’m really interested in going further in exploring software values. So, let’s take Twitter, for example. They clearly value an individual’s right to express themselves way, way higher than everybody’s need to feel safe. They choose to allow very controversial individuals who say things that are very hateful and very painful for a lot of people. They give them a platform to speak, and that results in a lot of women and underrepresented minorities being shut down and leaving the platform.

For me, that’s not my values at all. My values are getting a lot of diverse voices and a lot of different points of view, and making sure everybody has enough psychological safety to express themselves. And so, I should never ever work for Twitter, but it doesn’t mean that somebody else who believes so much in the freedom of speech and believes that it’s just words and you got to figure out how to take care of yourself in this hard world, that could totally be a value somebody else has and that could be the best place in the world for them to work.

It’s really hard to get out of the judging business where you say, “These values are good and these values are bad,” and move into a place where instead, you respect people and respect their values. For working with students, it’s absolutely vital that even if I don’t share their values, I have to respect that they have them, and that they matter to them, and that they’re going to be part of who they are, and as they go on their journey, those values may evolve. But I try to help them see and understand other students’ values and learn how to work with people who aren’t necessarily like you. I feel like I’m the first person who has to model that every day. If I’d come in and go, “Damn Republicans, blah, blah, blah, blah,” then I’m going to lose some of my students.

So, I’d rather just speak much more thoughtfully and cautiously about being respectful of different people’s values. I do think there is something called the truth, and I do think there is something called science, and there are things called facts. That’s not the same as disagreeing with people’s values. But I don’t want to cast anybody as a bad guy just because they don’t share my value system.

Douglas: Yeah. I always like to catch myself if I feel like I’m painting someone as the villain. That’s a dangerous territory.

Christina: Yeah. Once we start othering, that path literally leads to genocide.

Douglas: Yeah. That is the dead end of that, you’re right.

Christina: That’s where we’ve seen it go before. So, I think in a workplace … It’s so funny, my friend Laura Klein, who I think you’ve spoken to, she’s working on some interesting questions about the problems between product managers and user experience designers and I think a lot of that comes down to values as well. The product manager is like, “Hey, if this product doesn’t meet its numbers, I’m not going to have a job anymore. I need this to happen,” and the UX designer might be like, “But I don’t want to hurt users even if that’s the right thing financially.”

If they dismiss each other’s values, they’re just going to yell at each other. But if they say, “Oh, your value of really needing some safety and security is a real thing. You got family, you got kids. And my value of being good to people is a real thing. Why don’t we work together and try to find a solution that’s going to reflect both of our values and create something that’s going to create real value for the business and the customer?” Instead, you see these like, “They’re terrible. PMs suck. Designers are such big babies,” blah, blah, blah. It’s useless, right? You’re not moving forward.

Douglas: I feel like when you apply those constraints, that’s one of the amazing things. All these solutions present themselves.

Christina: Oh, yeah. The Eameses said design is constraints, and I always liked that. If you don’t have a dozen constraints, then you’re probably not doing design work. But yeah, I think you’re right. I think it’s when you have a really hard problem that you get really extraordinary solutions, and so, making sure that the product hits everybody’s values is a really hard problem and it will create extraordinary, extraordinary things. New business models, better technology, all sorts of wonderful things.

Douglas: Yeah. And another thing that struck me is that it felt fresh. When I’m talking with folks about innovation, or future-of-work and business-transformation stuff, culture comes up a ton, and as you were talking about values, what struck me is that it’s a more nuanced way of talking about the importance of culture. When people hear “culture,” they can think of the more superficial things, like, “Do we have ping pong tables?” whereas I think values are a way to really get at it.

Christina: I think all culture is a collection of norms and a norm is an unspoken rule of behavior. So, one of the things I do when I do work with clients is I get them to get their norms out in the open where they could say, “Yeah, this is great,” or, “Ooh, we actually really don’t want things to be this way.” And so, I think we all struggle a lot to talk about culture, but if we know what our company’s values are and then we look at the behaviors that come out of those values, we’ve basically described our culture and it’s up to us.

Okay, here’s the thing: technical digital products are never finished. In the old days, you used to spec something out, and then you’d build it, and then you’d ship it, and you’d sell it. But now, we have these things online that are constantly getting updated and growing. As the markets change, the features change; as your customer base changes, you evolve it. But we’re not doing the same thing for our companies, and the reality is that a small startup will set a company culture, and then that culture will drift off in all these strange directions, and we don’t spend enough time to just stop and say, “Okay, are we really being the people we want to be? What do our employees need? What does the market need? What do our stockholders need?”

Really treat the company culture as if it were a really critical product that you constantly have to be caretaking and evolving. You can never just leave it there and go, “We’re done now.” And that’s something that I think is really overlooked and forgotten: culture, especially if you want to change it, like, say, Uber’s, is something that needs to be actively taken care of, like a bonsai.

Douglas: That’s incredible. I love that. I’ve been seeing a lot of HR folks showing up to conferences and it’s really fascinating to me. I started thinking along the lines of you could see these design practices work really well for some of that work and as it starts getting pointed inward, you could potentially see it bifurcate. Here are the tools and the approaches that work really well for the products and here are the tools and the approaches that work really well for our culture and for our internal structures.

Christina: Yeah. I love a lot of things about design thinking, but I think the way it’s mostly taught, it is like let’s just get together and have a big sprint and then we’ll have this solution. A lot of the design thinking literature and workshops and stuff don’t recognize the ongoing process. I agree completely with you that there’s so many tools and techniques out there that are so good at getting at what do people really need, ideating more widely, evaluating what you’ve got. But we have to accept that that is never done. Our products are never done. Our culture is never done. It’s something we have to keep growing and evolving all the time or else this is going to devolve.

Douglas: It’s also interesting. You talked about a great example of a product manager versus the UX designer. I think it happens in other areas too. I even see it happen between sales and engineering, or vice versa. So, when you have these stovepipes, these silos, you end up with what I would say is a mismatch in values, or maybe there are different cultures in each place. In fact, I remember this startup in Austin that had tons of salespeople, because it was part of their model, and it was just bad. It was a very frat-boy culture, so in order to retain and grow a serious engineering organization, they had to put the engineers in a separate building with an entirely different culture.

I thought that was a really strange solution to that problem because it’s like we realized we have this problem so let’s just isolate it rather than fixing the problem … because they were benefiting from it, right? It’s like, I don’t know. Anyway, I’m just curious about your thoughts on this notion that the company can have multiple cultures and that seems like a bad thing to me, but what are your thoughts there?

Christina: I mean, groups within companies can definitely have their own cultures, and there can be problems from trying to standardize them too much. A couple of quick examples from Zynga. I feel like I can finally talk about Zynga because it was eight years ago when I was there and I hear it’s a different company now, so I’m like, “So, here are some things I liked and some things I didn’t care for.” But on Fridays, around say 4 o’clock, people would celebrate whatever they’d accomplished that week. It’s an agile culture, so there’s the whole idea of Friday demos, and usually, in our group, we were working on platform stuff, so we called it “wine down” and they’d bring some wine, and cheese, and fruit, and we’d demo, and we’d chat.

One day at that time, I had to go find somebody in biz dev so I went down to the biz dev group and they were standing in a circle and each of them had a shot of tequila and everybody had to brag and if you bragged about something you’d accomplished that week you’d do your shot, but if you didn’t have anything to brag about, somebody else would steal your shot. It was a very, very different culture because the platform work is inherently super collaborative, so our Friday was all about celebrating. But these biz dev folks, much like sales folks, were very much about closing deals, and competing with each other, and constantly trying to be the best at things.

And so, the different celebrations took different tones based on the culture of a given group. As long as I can respect that biz dev is all about winning, and crushing it, and closing deals, and making relationships happen, and they can respect that us platform folks are about connecting the company together and looking at the big picture, and as long as we both respect that both cultures are valuable and valid even if they’re not our own, then we could always talk to each other. We could always get on just fine. The problem is when you’re like, “Well, you folks should be more cutthroat.”

At the time, I don’t know what it’s like now, Mark was compensating everybody in the company the same way you compensate sales people, which is if your numbers go up, you get bonuses. But the problem is really, really creative people, like a lot of the game designers, don’t work that way. As soon as you start putting a financial number in front of them, their creativity shuts down. And there’s a lot of writing about this in books like Punished by Rewards where you can look at the way reward systems can backfire and can actually be terrible, terrible, terrible for getting quality work out of people.

And that was one of the things where they thought, “Well, everybody likes a nice bonus,” which is true, but there was no thought to realizing that creative people work in a very different way. They’re very intrinsically motivated and it became this paycheck gun to their head – make us something brilliant that makes a lot of money or you can’t feed your family. People can’t really be creative like that. So, using a sales culture in a creative culture was a giant backfire. I wish I could just say, “It’s very simple.” The only thing that really matters is having empathy for people who are very different from you and the better you can get at that, the more likely you are to get the company to work better.

Douglas: I want to ask a few questions about innovation programs. What approaches to innovation or what about innovation programs can be wrong-headed or potentially backfire?

Christina: Oh gosh. From what I’ve experienced, things that are really deadly are these ideas that there are … I just said there were creative people and you have to treat them differently. Well, I think you do have to realize that creativity can be everywhere in the company and that there shouldn’t be one special group that’s like, “Okay, this is our innovation group. They’re the ones who come up with innovation and everybody else just keeps the lights on.” I think innovation programs need to include the breadth of people in the company. I mean, maybe somebody in PR only has one good idea but damn it, one good idea can make you a lot of money and make the company amazing. So, you need to make sure that everybody has access to it.

I think the best way to do it is what I’ve seen in quite a few companies: they treat innovation programs like investments. So, you go in, you pitch your idea and they’ll invest in it, but they’ll often invest in it with human beings. So, let’s say you’re a product manager, you’ve got a great idea for a new product. They’ll say, “Okay, we’ll give you a fourth of this designer’s time and one engineer,” or whatever, however that works. And then you go off and you get a little bit of proof, lean startup style. Maybe you do some customer validation, maybe you put out an MVP, and then that gets funded again. It’s a system where anybody can pitch and anybody can get funded, and then once you’re funded it gets built out, and then you may need new skills.

You may need somebody with more management experience if the group gets bigger, et cetera. Just because you have the idea it doesn’t necessarily mean that you have the skills to bring it to market or the skills to scale it. But really I think the company’s acting almost like a VC except with employees rather than just raw cash. That approach seems to work very well in companies I’ve worked at as well as other companies.

Douglas: Yeah, the portfolio approach.

Christina: Yeah. I guess I didn’t say anything particularly innovative, but sometimes appropriate is better than innovative.

Don’t do something new just because it’s new. Do something new because it’s something better or it has the potential to be radically better.

Douglas: Yeah, absolutely. I’m always curious about how people think about measuring innovation because if we focus too much on short-term gains, like revenue and things, then programs can get shut down. If we focus on the number of ideas generated, that’s not necessarily great. So, how do we measure it?

Christina: Number of ideas. Good lord, does anybody do that? That’s just … that’s up there with page views for the worst ideas ever.

How do you measure the success of an innovation program? I’d say are there things coming out of it? Are there products coming out of it that are getting adopted by users? I think you have to again think like a VC, which is if you talk to a VC, they’re going to tell you that they’re going to look at hundreds and hundreds of companies, and then they’re going to take a second look at a handful, and then they’re going to invest in a smaller handful, and then they’re going to reinvest in a still smaller handful. So, you just have to realize that the percentage of individuals that will produce something that’s going to be super successful is going to be small and of course, a unicorn, as a VC would call it, a billion dollar company, those are going to be ever more rare.

So, it’s hard for companies because they want to micromanage processes, but you really can’t. You have to be like a VC. And I have friends that are VCs because I live in Silicon Valley and their attitude is you’ve got to look at the people and look at the idea and say, “are these the kind of people that I’m going to take a leap of faith on? Are these the kind of people that I believe are going to be able to bring something to market?” So, it’s a trust but verify point of view where you basically leave them alone. You give them their head, you give them the money and then you say, “go to town,” and then you check in and see where they are. And then if they failed at the end of that deadline, six months, one year, what have you, you just don’t reinvest in those people. In an intrapreneurship situation, those people can go back to their day job. But maybe you do reinvest because they’ve proven whatever their hypothesis is.

And this is one of those places where getting the team to name what their goal is and what success will look like is absolutely vital because if the team can’t say what success will look like, the manager will make something silly up like 10 million dollars or whatever revenue. So, it’s worth taking the time to really set SMART goals, really good OKRs, and then be really good at measurement, and then decide at regular intervals whether to continue. I’ve worked with a lot of startups in my life and one of the things that I’ve noticed, and nobody talks about this, one of the hardest things about being in a startup is knowing when to quit, right?

You’re like, “I’ve got a little traction, but not that much traction. I’m not out of money but gosh, I’m getting tired of ramen.”

Douglas: Yup. We call that the walking dead.

Christina: Yeah, the walking dead, and the thing is, in a bigger company, you don’t want the walking dead either. So, that’s why really clear goals and really clear check-in periods are just vital.

Douglas: Yeah. That’s something I’m a big fan of. Some of the people doing metrics are missing the boat entirely, but the ones that do it well seem to focus on the positive outcomes they’re trying to seek, and I think there’s room to actually document how we will determine if we’re going to pull the plug. So, when you get there, you know, “well, we said we were going to pull the plug, so here we are.”

Christina: Yeah, time to … And this is one of the places where I think it is easier inside a company because your boss is like, “Oh no, you’re done now.” As an entrepreneur, you’re like, “well, I didn’t get any more funding, but I’ve got a credit card.” It’s too easy to lie to yourself. Your boss isn’t going to lie to you.

Douglas: Yeah, no doubt. Let me just ask you one final question, which is what excites you right now?

Christina: I’m really, really, really excited by the rise of ethics. So, the students I have, they are asking so many extraordinary questions about what should get built and how do we know if this is actually going to be good or not and how do we start thinking about unintended consequences. And the students themselves are just so committed to not putting worse things into the world and my faculty colleagues are so stoked to integrate ethics into the classroom, even places where it’s going to be a little bit harder. I’m in the CS building and they’re like, “what does ethics look like for systems and what does ethics look like for security?” How do we work in the question of how we know what the right thing to do is?

Technology is not neutral. That’s a lie. And again, I’m going to use the word values, all technology is the productization of values. Our values are written into the code and we have to examine our values, and question them, and say, “are these the values of a bunch of weird ass, rich, white people in Silicon Valley? Are we imposing our values and culture on people who have a very different idea of the world? Do we have the right to do that? What does that look like?” This is such an amazing moment and I feel like I’ve been waiting for it forever. But I’m so happy that we’re working on these really hard questions and anybody who thinks it’s boring … if I had to choose between machine learning and figuring out how to make ethical products, I think figuring out how to make ethical products is way harder.

You have to be a futurist. You have to be a student of different cultures. The complexity of bringing something into the world that makes the world a little better and not a little worse is really hard and I think it’s exciting, and magnificent, and interesting. And dare I say it, it’s even going to be a producer of much more innovation because it’s a massive constraint. I think right now, what really gets me excited is people are starting to really ask, “am I making the world better or am I making the world worse?” And that’s a question everybody needs to be asking. So, I’m so excited, I’m so thrilled even if it means that we get into long, stupid, semantic arguments. It’s worth paying that price if we can just make a few better decisions for the future.

Douglas: You know, this is awesome to hear because I don’t spend any time inside academia. I see what’s happening culturally with the backlash against startups’ bad behavior, as well as politicians, and movie executives, et cetera. So, it’s interesting to hear that this is going into … and it makes sense that it’s top of mind for students. It’s great to hear that you guys are having that conversation.

Christina: Oh, yeah. We’re in the ’60s again, people. Students are like, “No. We need change. We need it now.” It’s really an exciting moment to be here.

Douglas: And it also made me think that doctors have the do-no-harm pledge, but there’s no pledge for the work that we do, especially if you think about how software is eating people’s brains. I think we could see that there’s not that big of a divide between some software and some medical procedures and the impact they could have.

Christina: Most medical procedures are software these days.

Douglas: Yeah, exactly.

Christina: It’s just everywhere. It’s really fun to be somewhere too where you get really crazy, weird projects. The lab down the street has been analyzing 18th century texts and they discovered – and they can only do this with software and algorithms – times when authors are pretending to sound like textbooks. You can say, “well, who cares if Sherlock Holmes right now sounds like an anthropology textbook,” and then it goes back to sounding like a mystery. But I think there’s something wonderful about something as old as literary criticism getting a new lens through work that is just too boring and too slow for humans, analyzing thousands of lines of text, and it’s providing a lens for us to look at our own humanity and I think that’s really exciting too.

So, I think the way technology is getting distributed across all of these other disciplines, there’s so much cool stuff that’s going to come out of that, but of course then ethics becomes important as well because you don’t want to put crazy technology in the hands of somebody who hasn’t really thought about the consequences of what it could do to various other peoples. So, I think that ethics needs to go together with it, so that as we’re shipping out all this cool technology, like AI and machine learning, we’re also shipping with that algorithm a question: “Hey, training data, are you sure you’re using something that’s not just reinforcing the same old bullshit?” We’re in a place right now where I feel like as we export our CS we’ve got to export ethics, which means we’ve got to freaking have some.

Douglas: Amen. So, let me ask you one other … this has got me really thinking. So, you made a comment there around shipping this and it getting into the hands of folks that might misuse it…

Christina: Yeah. There’s so much cool stuff that people can do in their fields with it, but they have to know basic stuff that tech people grew up with like garbage in, garbage out, right? That’s the answer to the mystery of why various AIs are racist. It’s like garbage in, garbage out training data. So, a lot of people are going to be adopting it, period. We’re not even shipping it to them. They’re coming and saying, “Give it to me.” So, people are adopting it like crazy but we want to make sure that they also know those core concepts about the technology and how to work with it.

Douglas: Yeah, I was thinking that it’s like a lot of the encryption capabilities can’t be … you know, strong encryption libraries couldn’t be exported outside of the US and that sort of thing. And I guess when we think about … it’s this interesting conundrum because I think the young folks, like you’re talking about we’re in the ’60s again, are very concerned about ethics and very concerned about maybe some of the actions, and beliefs, and behaviors of the government and, at the same time, the government does provide a shield against those things potentially getting in the hands of people who might do things against us. And so, to me that’s a bit of a conundrum because there is some protection there, there is a mistrust there, and then there’s the question of whether they’re the right stewards of the ethics, but if not, who is?

So, I guess I’m curious to hear your thoughts on that because I did a workshop for the special operations command once and before I did that, someone asked me, “Are you concerned about your work being appropriated?” I had to think long and hard about that. So, I guess I’m curious because to me that’s a conundrum.

Christina: Oh, it’s one of the hardest things in the world. I mean, I was at a salon, the AI salon here, which is a meeting of the people working in AI, and they had four people working in AI ethics, and the topic was secondary use scenarios. In other words, you design this for this. For example, you designed a machine learning algorithm to understand the spread of diseases so that we can hopefully stop them from spreading, but a terrorist could use it to understand the best place to drop a disease. Every single thing that we do always has secondary use scenarios and we live in this world where borders are really not very firm. Some people can talk about building a wall all day long, but there’s so much stuff that goes over the internet, and you see countries that are desperately trying to keep the internet out and always losing.

So, we live in a moment in time where every single thing we do will be found by somebody and they probably will try to use it in the worst possible way and you just can’t assume anything else. You can’t say, “we’re going to lock this up where nobody can get at it,” because that’s not viable anymore. So then, it raises the question of what can we do? And I think that’s a question we’re still really wrestling with very hard, which is we can do a lot with making other people smarter. We need to get a lot more education, especially into elementary school about how technology works and what’s the appropriate use of it. We just … god, it’s just so hard. Talk about a conundrum. There is no solution there.

If you want a really good, juicy, wicked problem to work on, figure out how to keep technology that has amazing value in a good world from abuse in other people’s hands. I think we haven’t even begun to take a small step down that road of solving it because when I go to these salons, and panels, and discussions of ethics, we know how to say, “here’s how I’m going to try to be a better person,” but there don’t seem to be a lot of solutions about how to handle bad actors, and you see it in things like Twitter. It used to be there were some crazy ass racists in one small town and they were terrible, but mostly they just cussed at the TV. Now, they’re going online, they’re doxxing people, they’re crashing people’s computers, they’re organizing marches and acts of violence. The internet may be the first technology that made wonderful things happen and now enables dreadful things.

I think it’s going to be a really difficult time for us for a long time and the only solution, a lot like what we were talking about earlier with innovation, is we need all hands on deck. We need everybody who might have an idea about how to work on this problem to work on this problem because I’m not going to come up with a solution, I’m just one woman. You’re not going to come up with a solution, you’re just one guy. But if we’re all working on it, maybe we’ll get to the other side of this strange time, just like with the Industrial Revolution, where there were pea-soup fogs and there was ground collapsing underneath people because of overmining. Maybe we’ll get to the other side of this and hopefully we’ll have a better answer of how all this technology gets reintegrated into our lives in a sane way. But it’s going to take everybody. It’s going to take everybody working on it.