
In this installment, Susan delves into the captivating narrative surrounding Sam Altman, the forward-thinking CEO and co-founder of OpenAI. She examines the surprising announcement of his departure, the reverberations it caused within the tech community, and the unprecedented choice to reintegrate him into the organization.

Topics covered

Expansion and Evolution of OpenAI
OpenAI's CEO Transition
Autonomous AI Creation
AI’s Socioeconomic Implications
Advancements in AI Development
Governance in Startups

Susan Sly’s Bio

Susan Sly is a tech investor, co-founder, best-selling author, keynote speaker, entrepreneur, and podcast host of Raw and Real Entrepreneurship. She has appeared on CNN, CNBC, Fox, Lifetime Television, The CBN, and The Morning Show in Australia, and been quoted in MarketWatch, Yahoo Finance, Forbes, and more. Susan is also a member of the Forbes Business Council. She holds a Certificate in Management and Leadership with a focus in AI, a Certificate in Strategy and Innovation, an Advanced Certificate for Executives in Management, Innovation, and Technology, and a Certificate in Artificial Intelligence in Pharma and Biotech from MIT, and is the author of 7 books.

As a highly acclaimed keynote speaker, Susan has spoken for MIT, NVIDIA, Intel, and Lenovo, and shared the stage with Tony Robbins, Jack Canfield, Robert Kiyosaki, and more. She has also been a featured guest speaker for the National Restaurant Association, Executives Next Practices Institute, Forbes Roundtable, CoreNet Global, and the Edge AI Summit.

In 2022, Susan was honored to receive the Rosalind Franklin Society Award in Science and a nomination for Rising Star in AI from VentureBeat.
Susan is the co-founder of RadiusAI, an award-winning artificial intelligence company with offices on three continents.

Susan has completed the Boston Marathon 6X and placed Top 10 in the Pro Division of the Ironman Triathlon in Malaysia. Susan is passionate about philanthropy and has dedicated a significant amount of time and money to working to liberate girls from trafficking and investing in education to support women and girls who have survived trauma and abuse, both domestically and overseas.

Susan is the mother of four children and resides with her husband in Scottsdale, Arizona. Find out more about Susan at

Follow Susan Sly

Show Notes

Full Transcript

Susan Sly 00:01
Well, what is up, Raw and Real entrepreneurs, wherever you are in the world, I hope you're having an amazing day. I have quite the show for you. This is a very special episode. I have been getting texts, I have been getting direct messages, people asking my opinion on OpenAI, the ousting of Sam Altman, the rehiring of Sam Altman, the board, and what is Q* and why should we care? So I'm going to talk about all of that. I'm going to go through the facts, I'm going to go through the data, and, of course, give my opinion. A few things to note. One, I want to acknowledge the team for putting together the show. Tisha and Neville and May and Abby and Diana, thank you, thank you. The show is a labor of love. This is a special extra episode, and I am recording it the day before I travel to Barcelona. I want to give a shout out to Hewlett Packard Enterprise, or HPE, as we call them. They are bringing me to Barcelona to speak on the future of women in AI and technology, and hosting me and other amazing females in our space for a beautiful panel, where we're going to talk about what it's like to be a woman at the bleeding edge of technology. So shout out to HPE, I love you, I love you. And thank you for the privilege of getting to go to Barcelona and share some wisdom and insight. I also have some big announcements in the coming weeks, and there are lots of things going on in my personal life. This was just in the media: I resigned as co-CEO of RadiusAI, so that is in the news. I can tell you now, and I am not going to go into details, but I will say thank you; I've had people reach out from so many different companies, from NVIDIA, from Lenovo, from HPE, from so many people across the technology space. I am doing great. I am happy. It was a choice for me to leave, and I wish them nothing but the best, and I'm very excited about the future. But there you have it, big announcement there.
If you want to find out what is going on all the time, go and follow me on LinkedIn @Susansly. You can also follow me on Instagram @Susansly, and on X @Susanslylive. And of course Raw and Real Entrepreneurship is on LinkedIn, it's on X, and it is of course on Instagram. So check us out. I do not use TikTok, I refuse to use TikTok, don't even get me started about the implications there. Anyway, let us talk about OpenAI.

Susan Sly 02:51
This is Raw and Real Entrepreneurship, the show that brings the no nonsense truth of what is required to start, grow and scale your business. I am your host, Susan Sly.

Susan Sly 03:05
I'm going to do a brief history of OpenAI. So Sam Altman is one of the founders of OpenAI, and he's originally from St. Louis, Missouri. He dropped out of Stanford in 2005 and co-founded a company called Loopt, which was a location-sharing app for smartphones. And that is how he began his entrepreneurial journey in a new sector. He was also a part-time partner at Y Combinator, which is a Silicon Valley startup accelerator. You've heard of different founders who came up through Y Combinator; it's kind of this growth incubator where

Susan Sly 03:50
many genius founders like Altman come out of. So he was there. And then in 2015, he was recognized by Forbes as a top investor under 30, and that is when he co-founded OpenAI with Elon Musk and other entrepreneurs. He had a mission to develop and promote friendly AI that benefits humanity as a whole, and I want to pause there. Those of us in AI, and I have been working in AI for the last five years, we all want friendly AI. We want human-centric AI, humans in the loop. And that is the intent. I'm not saying everyone is like this; I am going to be bolder on this show going forward. There are nefarious forces out there, in different countries and governments, that don't necessarily want happy, friendly AI. And AI right now, I say this on stages all the time, is in a toddler stage. So you know toddlers are very cute, they're adorable. I've got kids myself, and none of them are toddlers anymore, but toddlers also fall down. They cry. My son, when he was a toddler, used to eat dirt. They do all kinds of things that you just don't expect. AI is in that toddler stage. My concern is when AI reaches the teenager stage, where AI becomes rebellious. And that's what we're going to talk about in this episode, along with the possibility of what happens when AI reaches singularity, which is that line where it passes human thinking and abilities. And so the question around Q* is, is that where Q* is going, based on what has happened? We're going to talk about that in a moment. But the whole thing with OpenAI is that it started as a nonprofit AI research organization. Altman and Elon Musk were among the initial board members, and the company intended to collaborate freely with other researchers and institutions by making its research open to the public; hence the name, OpenAI.
In the early days of OpenAI, it really sought to recruit the top talent in AI research, offering competitive salaries to attract all sorts of experts from leading tech companies. And I'll talk about one they attracted out of Tesla, who initially replaced Altman in the big board coup that happened. So in 2019, OpenAI transitioned from a nonprofit to a for-profit model, which allowed it to attract investments and give employees stakes in the company. This transition was part of a bigger strategy to scale the operations and meet the capital demands of pursuing artificial general intelligence. And I will tell you, AI is expensive. As a former co-CEO and co-founder of an AI company, I can tell you that for AI companies, producing products takes significantly more money. And unfortunately, because AI is still broadly new from an investment standpoint, especially certain types of AI, like the AI I was involved in, computer vision at the edge, there are a lot of venture capitalists who don't understand it. They will compare it to a general SaaS model and go, oh, well, why couldn't you create that AI for, like, $2 million, like someone who built an app in their spare time? It doesn't work like that. AI is expensive. You need data scientists, you need data engineers, you have to clean the data, you have to annotate the data, you have to train the models. Then there's the expense of the hardware, things like GPUs and servers and so on. I mean, it is expensive. So they made this transition, and Microsoft became a significant investor in OpenAI, contributing first a billion dollars, a billion with a B, friends, in 2019, and later a reported $10 billion in 2023. And as an aside, I was researching this episode, and so I have a lot of notes. You may hear some papers shifting, but that's okay, because I am reading to you directly from quotes and sources and so forth.
And I want to make sure that I get these facts right for you, my listeners, who I adore, so that you can go out and be more educated in terms of what is going on, and why this particular news is relevant to you as an entrepreneur, regardless of what size your business is. So I want to take you through a timeline. So OpenAI is going along, and we have ChatGPT, and at the beginning of 2023, everyone's so excited about ChatGPT. There's this phrase we talk about in AI called the democratization of AI, meaning AI is available for everyone. Even a couple of weeks ago, I was on the new GPT-4 builder. I built my own generative AI for a project that I'm working on. It was phenomenal. And then I built one for my mentor, Harvey Mackay, who's 91 years old and has written many New York Times bestselling books. I'm not trained; it's just an off-the-shelf AI, but I gave it some guidelines, and it was able to converse and give wisdom just like Harvey gives wisdom. How cool is that? So then I created another one with a friend of mine who's in the healthcare world, and we had fun doing that. It didn't take very long whatsoever. And then there's the ability to create images, and everyone's using GPT. At the beginning of the year, when I was doing talks and asked, how many of you are using GPT, maybe one or two hands would go up in a small audience of, like, 200 people. Now everyone is using it. Even at institutions like MIT, they're letting students use it for papers as long as they verify. GPT is fascinating as a tool because it is so accessible.
So when you think about what they were focused on creating, what they managed to create in a fairly short period of time, major investments, major players coming into the space, of course, when you have that kind of accelerated growth, when suddenly you go from something that maybe only techy people know about to something everyone's talking about, a resource for kids, a resource for older people, then of course things are going to get really interesting. So keep in mind, at an AI company you've got different divisions. There are people working on the existing product, then there are people working on new products, then, of course, there are the people who are focused on taking care of the people who are building all of these products, then there are the people who are selling and getting the word out to the market. And then there are the people who are focused on the day to day, including legal and HR, accounting, and all of those things. So OpenAI grew to a company size that was just north of 800 employees in this time. And that brings us up to where we are now. So I'm going to go back to Friday, November 17. What happened on Friday, November 17, is Altman stepped down. He didn't really step down; he was ousted. Now, when you hear that someone has stepped down as a CEO, it can happen for a few reasons, so I want to add some context. It can happen because they are asked to step down, or they may step down for personal reasons, or a CEO may choose to step down to go pursue building another company. It doesn't mean, when you hear someone has stepped down, that they have been ousted. But what happened on Friday, November 17, for Sam Altman was that there was a review undertaken by the company's board of directors. They felt that Altman was not transparent in his communication with the board, and they decided that he should be removed as CEO. So here is a statement from the board from that day: Mr.
Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. That was the statement OpenAI made, along with saying that the board no longer had confidence in his ability, meaning Altman's, to continue leading OpenAI. So initially it was reported that he stepped down. He was ousted. But, of course, there is more to this story than that, and there always is. So, friends, please, please, please, especially if you follow Silicon Valley news, tech news: when you hear that someone has stepped down, don't necessarily jump to the conclusion that they were ousted, or to other conclusions. There's always a story behind the story. And generally speaking, you're not going to know the whole story, just like we probably won't know the whole story of what actually happened here. The company appointed their current CTO, Mira Murati, as the interim CEO. Now, Mira Murati was previously a Senior Product Manager at Tesla. Remember I said Elon Musk co-founded OpenAI with Altman, so she was working for Elon at Tesla. And she was serving, as I said, as their current CTO before she was suddenly put in the interim CEO position, which only lasted a couple of days. So all of this news breaks. Sam Altman is tweeting; he's on X telling his side of the story. The news is breaking all over the different publications. And on Sunday, November 19, OpenAI says, okay, Murati is out; now we are putting in the former Twitch CEO Emmett Shear as our interim CEO. And then Shear confirmed this was true on Monday morning. So now, within the span of 48 hours, we have gone from Altman to Murati to Shear, and now we see, okay, what's going to happen with Altman?
So the CEO of Microsoft, Satya Nadella, announced they'd hired Altman to lead a new advanced AI research team, and this caused a major stir. On that same Monday, Monday, November 20, nearly all of the employees at OpenAI signed a letter calling for the resignation of the company's board and the return of Altman as CEO. The employees threatened to quit and join the newly announced AI team at Microsoft if their demands were not met. And the letter

Susan Sly 15:28
that was submitted also included Murati, who signed it and said she, too, wanted Altman to return. And one of the things the letter says to the board members is: your conduct made it clear you did not have the competence to oversee OpenAI.

Susan Sly 15:53
Oh, I need to pause there for a moment. I want you all to think about that. I want to celebrate the employees of OpenAI, because so often a CEO steps down or is removed and employees don't speak up. They just accept whatever they've been told. They're not necessarily thinking about what is truly best for the company. And you do, as an employee, have a voice; you can use it, and the OpenAI employees did. And kudos to you if you're listening to this, because I cannot emphasize enough that without the company, especially without all of you, there is no company. And the company isn't the AI. It's the people behind the AI. So shout out to all of you. I think it's outstanding what you did. On Tuesday, November 21, Altman reached an agreement to return as the CEO of OpenAI. And this also included reconfiguring the board. So the new board is supposedly going to include former Salesforce co-CEO Bret Taylor, former US Treasury Secretary Larry Summers, and Quora CEO Adam D'Angelo, who was on the previous board and also voted for Altman to be ousted. So, what? I'm sure there will be a Netflix docudrama or documentary on this. It's just crazy. But then you think, oh, okay, it's settling down. It's Tuesday. Everything's back to normal at OpenAI, it's all good, employees are going, yes, okay, business as usual, Altman's back as CEO. But no, on Wednesday, now we have a leak that there was a secret OpenAI project called Q*, and several media outlets quoted anonymous OpenAI staffers as being concerned that Q* could be the beginning of artificial general intelligence. Artificial general intelligence, for those of you who don't know, is when the AI can begin to generally problem-solve at a greater level, to put it in layman's terms. Artificial superintelligence would be when the AI can create other AI. So I was doing a talk here in Scottsdale.
It was for a group of entrepreneurs, and I gave this example. Artificial general intelligence would be like the Terminator: it was designed for one specific thing. Artificial superintelligence is the AI that created the Terminator, if you remember or watched any of the Terminator movies. So one of the things that Q* allegedly can do is solve math problems it has never seen before. And when you think about that, you might be like, well, that's kind of cool, but that is far beyond toddler, friends. This is now going into teenage territory, and that is why some people are very concerned. So I want to quote Wired. Wired said: Sam Altman's second coming sparks new fears of the AI apocalypse; five days of chaos at OpenAI revealed weaknesses in the company's self-governance, which worries people who believe AI poses an existential risk, and proponents of AI regulation. As someone who has spoken consistently on AI regulation, one of the questions I have regarding OpenAI, and that I'd love to talk to Sam Altman about, is: are we too far beyond being able to actually regulate AI? Does he feel we're too far past that threshold? And what does it mean? What can we do now? Because we can't go back. A lot of AI was being created, sort of, without any kind of light being shone on it, no microscope. And so here it is; what the world saw is ChatGPT. But there were other forms of AI, as you've heard me speak about many times before in many interviews I've done, that were being revealed and being utilized. But they were only really available for enterprise; they weren't available for the average person. And now here we have AI at scale, and models that are learning very, very quickly. So it's just a matter of time before there's a model that can be really and truly self-learning, and able to do things that no other model has done before.
So I thought I would ask ChatGPT-4 the following query and see what it says. I asked: what happens when AI can create its own AI? And I will read you what it said: the scenario where AI can create its own AI without human intervention is often referred to as recursive self-improvement, and could lead to a situation known as an intelligence explosion, where the capabilities of AI could rapidly surpass human intelligence. The implications of this are the subject of much speculation and debate amongst experts. And it gave five points. One, acceleration of innovation: AI could innovate at an unprecedented rate, solving complex problems in science, medicine, and technology. My opinion on that is there can be some really good things. I've spoken often about customized medicine and customized early detection, I mean super early, being able to treat things before anyone even has symptoms. So I think that's all well and good. Conversely, when innovation happens so quickly, faster than humans can keep up, what does that mean? That is a problem. Number two is ethical and control concerns: there might be significant challenges in ensuring the AI has goals that align with human values and interests, which is a central concern in AI safety research. I would say that is very obvious. And we don't know what those are yet, because once AI can start doing things by itself, self-learning, self-guiding, then, friends, what kind of ethical controls is it going to have? Number three, economic impacts: automation could reach new levels, profoundly impacting labor markets and the economy. This could lead to both the creation of new industries and the obsolescence of certain jobs. We're already seeing that; different researchers, whether it's Accenture or whether it's McKinsey, are predicting that hundreds of millions of jobs are going to be displaced by AI. And the acceleration of this means that people are generally not prepared.
And even if you think about it, if you have children, or even think about what your parents told you to do when you went to school and what you should study, a lot of that is already obsolete. Even for my son: when he started university, and he's about to graduate next spring, he was doing a degree in design. It has changed so rapidly, it's all AI-enabled. Will we even have any designers? So he is basically doing a four-year degree where, unless he pivots to something like UI/UX design, like front-end design engineering, he might get a tailwind on that. But the things I am suggesting to him are: let's make sure you learn a trade, let's make sure you get into investment and real estate and things like that, because people are going to need to have places to live, as an example. So there are already impacts we're seeing quite rapidly now. And number four is societal changes: with AI handling more complex tasks, there could be shifts in education, governance, and daily life as humans adapt to living with highly autonomous systems. Good golly, think about that, right? What about a possible shift where enforcement, whether it's of law, just like we've seen in movies, is actually done by AI, not by humans? There are so many different ways that AI, and especially AI implemented into robotics, could possibly affect us. And number five is the risk of unintended consequences. This is coming, again, from GPT: if not properly aligned with human objectives, self-improving AI could lead to outcomes that are not beneficial, or even harmful, to humanity. So, basically, the AI is saying that, yes, it can be detrimental to humanity in a multitude of ways. And I know some of you are going to write to me and say, Susan, that episode, oh my gosh, I don't want to be,

Susan Sly 25:35
what do I do? Right? I've had friends who have said to me, Susan, what do I do? Do I go live in the country and make sure I have land and chickens and, you know, a waterslide, all that stuff? The answer is, I don't know. And the answer is, we don't know, as a society. But one of the things I do know is that it is so important, wherever you are in the world, to begin to speak to the politicians. Ask them what their stance is on AI, ask them how they intend to vote to govern AI, begin to take a look at the way our world is already being shaped by artificial intelligence, and don't hesitate to take a stand. I firmly believe that in the coming years, the electoral issues we're going to vote on will have more of a technology prevalence than we expect. That's just personal opinion. Now, speaking of personal opinion, I want to talk about Q*. If Q* is able to self-guide, so to speak, then, as we talked about, it's going to begin to accelerate known AI development. And so what does this mean? It could mean massive displacement of jobs to AI. What I mean by displacement is that jobs that currently exist will no longer exist, but there will be new jobs created. And Goldman Sachs estimates that almost 18% of the global workforce could be replaced by generative AI. That's depressing, right? So when you think about it, the timeline is going to be impacted by a few different things. One is the level of adoption by enterprises, because the enterprises, the big companies that employ the employees, how fast are they going to adopt the AI that might replace their employees? And what I know firsthand in that sector is that enterprise adoption is not that rapid right now. For a lot of sectors, they're talking about it, but they're not necessarily taking action, because they look at the cost of implementing these large types of models.
They look at things like the cost of GPUs, graphics processing units, and servers and other tech enablers. So that's going to have an effect: how fast is enterprise going to adopt? And then, of course, there are things like the supply chain and all sorts of implications around the supply chain. So that will affect how rapidly jobs might be displaced because of AI. I'm also asked a lot about my opinion on AI development. The other day, and I won't allude to who it was, a friend said to me, and they were genuinely worried, Susan, how far ahead are other countries, like China, in their AI development? And one thing I want you all to keep in mind, and we have listeners from all over the world, is that every country has its own set of priorities. It has its own culture, it has its own reasons for doing things. The thing I will say is that it has been my observation that there is evidence, from what is being shown, and it doesn't mean that this is the whole truth, because, always be cautious, we do not necessarily see every deployment that a government makes, but this was in the news, so I'm going to share it. The Chinese showcased AI-powered drones, and they gave three drones the task of finding a key in a park. And that was the only task; they didn't give the drones any instructions on how to find the key. The drones got together. They came up with a plan, they divided the task, they created a search perimeter for the key. They went about searching, and they had internal rules they created for, whoever found the key, what the other ones would do. And sure enough, they were able to procure the key without any human intervention whatsoever. So it's just a story, friends, that I'm telling you: when you have an AI that can self-direct and solve more complex problems, there will be massive implications. And that is the bottom line.
And whether Q* is the most advanced thing that we have right now in AI in the United States, it's hard to say, because we don't know what other companies like Google or Amazon or Microsoft have in terms of possibly superior AI, and we don't know everything other countries are developing. But one thing I will say for certain, my friends, is that there is definitively an AI race happening. And the only way that we're going to lose is to drop out of the race. It's a very fast-paced race, and it is going to have tremendous implications, and I'm going to keep talking about those on the show. And lastly, just final thoughts on the board of OpenAI and what happens next. This is a cautionary tale for startup founders: choose your board wisely, make sure there are term limits for the board, and know that the board of a company is very, very powerful. When you're first getting started as a startup founder, often you'll put your friends on the board, or, let's say you have a health startup, you'll put a friend who's a doctor on, or the lead investor has a board seat, or whoever it is, people who are just helping initially. Choose your board wisely. The board has the ability to remove you, even as the founder. The board has the ability to steer the course of the company, even if they don't really understand what the company is doing. And it's amazing; I will quote Will Smith, who said money doesn't change who you are, it only amplifies what is already there. So we have this year, with OpenAI, a company that became incredibly, incredibly valuable. We had a company that essentially opened the Pandora's box of accessible AI for everyone: $20 a month for your GPT-4 subscription. And the board was also very powerful, to the point where they could oust Sam Altman, who was the co-founder. And he's not the only co-founder to have been ousted. Steve Jobs was ousted, Jack Dorsey was ousted, and Michael Dell stepped away and later returned; there are others.
And there's something called a boomerang CEO, where a CEO is ousted, they'll go do something, maybe build something cool, sell it, make a bunch of money, and then the investors and employees want that CEO to come back because the company doesn't have the same level of direction and focus. And that does happen. The thing I will say about boards is, when you're choosing your board, make sure they're willing to get to know the employees of the company so you know where their heart is; perhaps interview former employees they've worked with. Select your board wisely, friends. And in closing, I am going to say this: number one, what happened at OpenAI with the board could happen at any startup; it could happen to any business that has a board. Number two, let's pay attention to what is happening in AI, what is happening with Q*, what is happening with other AI development. And let's be smart about it. Let's have the conversations. Let's demand from our politicians that they share what their AI policies are, what their intended policies are, what their governance goals are. Because AI is here. This is not a trend; it is not something that we're talking about today that isn't going to be talked about tomorrow. And it is impacting, not possibly could, but it is impacting, every aspect of our lives. And with that, if this show has been helpful, please share it on social and tag me. Please give me a five-star review. As I said, the show is a labor of love, and I do it because I know there are those of you out there who count on the show for entrepreneurial inspiration and information. And so with that, God bless, go rock your day, and I will see you in the next episode.

Susan Sly 34:51
Hey, this is Susan, and thanks so much for listening to this episode of Raw and Real Entrepreneurship. If this episode, or any episode, has been helpful to you, and you've gotten at least one solid tip from myself or my guests, I would love it if you would leave a five-star review wherever you listen to podcasts. After you leave your review, go ahead and email us and let us know where you left it. And if I read your review on air, you could get a $50 Amazon gift card. We would so appreciate it, because reviews do help boost the show and get this message all over the world. If you're interested in any of the resources we discussed on the show, go to the show notes; that's where they all live. And with that, go out there and rock your day, God bless, and I will see you in the next episode.

Susan Sly 35:44
Are you currently an employee looking to start your own business? Maybe you've been thinking about it for a while and you're just not sure where to start. Well, my course, Employee to Entrepreneur, combines my decades of experience as an entrepreneur with proven methods, techniques, and skills to help you take that leap and start your own business. This course is self-paced, learn-on-demand, and comes with an incredible workbook that will allow you to go through the content piece by piece, absorb it, take action, and then go on to the next module. So check out my course, Employee to Entrepreneur.



Author Susan Sly

Susan Sly is considered a thought leader in AI, an award-winning entrepreneur, keynote speaker, best-selling author, and tech investor. Susan has been featured on CNN, CNBC, Fox, Lifetime, and ABC Family, and quoted in Forbes Online, MarketWatch, Yahoo Finance, and more. She is the mother of four and has been working in human potential for over two decades.
