
In the second segment of this discussion, Susan Sly and Elijah Meeks, Co-Founder and Co-CEO of Noteable, delve further into the profound impact of AI-driven tools in data science and analytics.

Topics covered in the interview

AI ethics and philosophy

AI for data analytics and visualization

AI regulation and its impact on society

AI’s impact on culture and personal interaction

AI’s impact on human experiences

AI entrepreneurship

Elijah Meeks’ Bio

Elijah Meeks is known for his pioneering work in the digital humanities while at Stanford, where he was the technical lead for acclaimed works like ORBIS and Kindred Britain. After that, he joined Netflix as its first Senior Data Visualization Engineer, where he created the charting library Semiotic and brought cutting-edge data visualization techniques to the A/B testing platform and analytical applications for stakeholders across the organization, including algorithms, membership, people analytics, content, image testing, and social media. He is a prolific writer, speaker, and leader in the field of data visualization, the co-founder and first executive director of the Data Visualization Society, and the author of the bestselling book D3.js in Action.

Follow Elijah Meeks

Show Notes


Susan Sly 00:00
Well, hey there, I'm excited for you to hear part two of my interview with Elijah Meeks, who is the co-founder and Co-CEO of Noteable. In the first episode, and I hope you got a chance to listen to it, we really talked about his journey going from a philosophy degree and a postgrad in history to becoming a technical co-founder and someone who people sought out, and his work at Netflix and also at Apple. And in this episode, we are going to do a deep dive into AI ethics, we are going to do a deep dive into the philosophy of AI, and where AI is more than likely going in the near future. And this episode is really one of the most interesting conversations that I've had around AI in a very long time. There is a lot of hype around AI, and there are a lot of people who have opinions around AI. And as you'll hear, Elijah says we're still in the hype phase; we're in a place where, much like with crypto, there are a lot of people who have opinions but don't necessarily have the background. And as many of you might know, I have been working at the forefront of AI, in the very specialized field of computer vision at the edge, for the last five years. And so Elijah and I get into the weeds of AI, we talk about multimodality, and we talk about what it might look like from the perspective of, is AI going to prevent us from having experiences? And so I hope you'll love this conversation as much as I did. And as an aside, I just want to say thank you so much for being a listener or a viewer, thank you for sharing the show, and thank you for your comments and your five-star reviews. I love your five-star reviews. My vision is that anyone can be an entrepreneur, and that entrepreneurs will solve the biggest problems in the world. Not your government, not your legislators, God bless them, but it will be the entrepreneurs that solve them. And it's my hope that you will be one of them.
So with that, let's go ahead and get started with part two of my interview with Elijah Meeks.

Susan Sly 02:20
This is Raw and Real Entrepreneurship, the show that brings the no nonsense truth of what is required to start, grow and scale your business. I am your host, Susan Sly.

Susan Sly 02:34
I was scrolling X, or as some people still call it, Twitter, and I loved your Halloween tweet, by the way. I'm looking at, you know, what has happened with the company and this collaboration, for lack of a better word, with OpenAI. And suddenly, you were on a stage, but you're on a bigger stage now. How did that happen?

Elijah Meeks 03:03
So, you know, four months ago, five months ago now, our Chief Architect Pat Kelly, who's got a strong background in Jupyter notebooks and the whole Jupyter notebook space, showed off this demo that he put together, saying, hey, OpenAI is here, they're allowing you to do plugins, look at this. And it was a chat interface in ChatGPT that was wired up to Noteable. And I said, okay, my sister-in-law, she's a psychologist at the VA, and I know that she has to read these reports that they publish every month. So I told it, I'm a psychologist at the VA, and here's the URL for the data that backs these reports that the US government gives out every month about psychological issues, drugs being taken, things like that. I don't know anything about data science, I don't know anything about coding, I don't know anything about EDA. Make a notebook that analyzes this and gives me key findings. And this thing, you know, it had some bugs in it, but this demo version of it went and accessed the data, visualized the data, wrote the matplotlib code, trained a model, wrote some predictions, and highlighted a couple of different states where these trends were going this way or that way. And I thought, oh my God, it's here. You know, when we're talking to funders, when we talk to VCs, one of the pitches that I love to give is, you guys are getting all of these imaginative pitches: oh, we could do this, or we could do that. We've been doing it for the last five or six months. We've been seeing it. And it's remarkable to meet people who don't code at all doing data science, building data science products, doing analysis, and telling us how incredible it is, how transformational it is. There was this post on Reddit that said, everybody in my office thinks I'm a wizard because I use Noteable, and now my CEO is asking me to come in to talk to him about something or other.
And, you know, these are the kinds of things that, if you had them in a commercial, they'd sound cheesy, right? If you had a paid influencer saying that, you'd roll your eyes. Instead, we just have real people, people we're not paying any money, people we weren't looking for, making YouTube videos talking about how their mind is blown by this tool we've offered. And I think what's interesting to me is that there's the LLM as a model, and there's the LLM as an interface. And there are a lot of folks who think, oh good, now natural language will be the interface, and this will open this up to everyone. And what they're going to find out is that natural language is a great interface for some things, but a graphical user interface is also a great interface, as is a terminal, you know. And so the product can't just be put in text, get back a result. It has to be something that allows people, when you say multimodal, multimodal interactivity. Otherwise, you're going to reach a limit for what you can do just with the LLM, just with that natural language interface, or just with the UI, or frankly, just with coding. Of course, code can do everything, but the limiter is you have to know how to do it. And without all those other pieces, you wouldn't be empowered by it.

Susan Sly 06:37
Where my mind goes is what we were talking about prior to actually starting the recording. I kept saying, Elijah, we need to start the recording. Elijah and I were talking about vision streams, and pulling in the data, and then the LLM interpretation of that data. You know, I'm still thinking in my mind about the various steps, right, and then using Noteable, and thinking about the different multimodal forms of data, including voice, including, you know, noise interpretation, including vision, and all of the ways that can work, in addition to pulling in other data streams, right. And this is what we're actually working on right now, that concept of where we're going in the future. And that future is beginning now. I want to ask you a burning question, philosopher to philosopher. I was sharing with Elijah, if you go to my personal LinkedIn, I had commented on President Biden's executive order on AI, and one of the things I cautioned everyone about is this jumping on the data regulation bandwagon, because for people who are not in this space, it sounds like, yes, let's do it, right? But there's a cost to it. And I want to ask you, where we're going with AI, what we're already seeing, putting on your philosophy hat, putting on your history hat, Elijah. There are a lot of people who have trepidation because they don't work in this world, and there are those of us who are excited because we know we're going to be able to solve problems really quickly. What does your philosophy mind and your history mind tell you about this moment we're in right now?

Elijah Meeks 08:34
Well, you know, I love the authority you've given me. Let's see how I can abuse it.

Susan Sly 08:44
It comes from my natural curiosity about the way people think. So—

Elijah Meeks 08:48
I think we're still in the hype phase, still in this very hype-heavy phase, where we're finding out just how powerful these tools are. And I think that's going to continue for a bit, but it is hiding some of the rough edges. So for instance with Noteable, because Noteable creates Jupyter notebooks, and there are tens of millions, hundreds of millions maybe, of Jupyter notebooks out in the public world, that means all of these LLMs that have consumed everything off the internet are very well trained to produce them. And they're very well trained on code. And because of that, they give the illusion that they're sophisticated and analytical, when really what they are is next-word predictors. And that's fine for a lot of core analysis use cases, because a lot of it overlaps. You can see it, in fact, when you ask the LLM to do something that's less traditional: the code it produces is less reliable. And I don't think we're at the stage where people recognize that. I don't think that's the concern you're expressing, but I think it's a concern people are going to grapple with. As a data visualization expert, I've found that both the LLMs and Stable Diffusion, the graphical models, are terrible at data visualization. They're terrible, they have no idea. The LLM is making its guesses based on what the code is saying, but what the code is writing isn't as easy to analyze for features as the image is. And that's why we have data visualization; otherwise, you would never make charts. And it can't read the charts in a way where it understands the diagrammatical language of the charts, what the chart is encoding in color and length and magnitude, things like that. So I think we're going to get out of that hype phase eventually, and we'll get a little bit more cynical and a bit more focused on what it does best. When it comes to where this is going, I think there's an inevitability to this.
Like, you know, when we talk about regulation, if you want me to get really philosophical, there's this scholastic tradition, and the scholastics are basically, you know, philosophers like Aquinas or Augustine, that says, if you can't enforce a law, then it's an unjust law, right? And so, you know, we're going to get to the point where, and I'm not trying to take a position on copyright or intellectual property or anything like that, but there's an unenforceability, at a certain point, where it just overwhelms our ability to evaluate, much less legislate, the consumption and use of data and personal material. And like I said, regardless of whether or not you think that it should or shouldn't, we've gotten to the point where the floodgates are open. Our government hardly understands the internet, and the internet is 50 years old at this point. So I don't think regulation is going to be very effective. And, you know, as a result, I think we need to be rational about it and just say, this thing's here to stay, and it is more than likely going to become more robust rather than less. Certain companies will learn to jump through some regulatory hoops. But, you know, we all know this: you can go train your own model on whatever data you want. And it's just a matter of, basically, what is the regulation for? Is the regulation just going to ensure that only bad actors train their models, or only the most dedicated nerds who have access to stuff train their models? Or is it about equity? Because that's fundamentally what I think the argument is. You and, you know, your grandparents, people who don't know how to code, should have access to this. And if we put a bunch of regulations in place, then, like any sort of tech regulation, I assume it's only going to keep out people who aren't sophisticated enough to evade it. Was that an answer? Was that the—

Susan Sly 13:13
It was beautiful.

Elijah Meeks 13:14
Was there a different area you were thinking about? Because there are a lot of ethical conundrums when it comes to AI.

Susan Sly 13:19
It was beautiful, and I concur. It sounds lovely, yet the reality is enforceability. I'm not a skeptical person, Elijah, but I'm highly skeptical. Like, one of the key points says, we are going to regulate content created by AI versus humans. And I started laughing because I thought, how are you going to know? You can't. You haven't successfully been able to catch tax evaders. How are you supposed to, even though—

Elijah Meeks 14:00
They can't even register my car properly.

Susan Sly 14:05
Yes, exactly.

Elijah Meeks 14:07
You know, the thing that makes me very nervous, because you're talking about content creation, is what happens in the future. The equivalent, to me, is lab-grown diamonds, lab-grown sapphires, and the only way you can tell them apart is that they actually etch onto them, or put in some kind of signal, that this is lab-grown, because it's fundamentally the same thing. The idea that you'll be able to identify that something's been created by an AI seems unlikely to me, you know, as it gets more sophisticated. But what scares me is that you're going to go home in two years' time, and you're going to go on to Netflix.GPT, and you're going to say, show me a movie that has a young Keanu Reeves and Morgan Freeman and, you know, Cary Grant in a romantic comedy set in Sweden. And you're going to watch that movie, and you're going to love it, and you're not going to have anyone that you can talk to about it, because no one else has ever seen that movie except for you. You're going to read books that no one else has ever read except for you. You're already looking at AI-generated art that nobody else is going to look at except for you. And to me, how does that affect culture? What kind of isolation and inwardness is that going to produce in us, when we live in a world where we're consuming media and the only other thing that has any experience with that media is the AI? You know, to me, that's the scariest part. Not the stuff about our personal data being collected and revealed, because frankly, we've gone so far beyond that. I mean, who knows how much Instagram is listening to this conversation right now and keeping track of my biometrics and whatever. To me, it's more like, regardless of what that phone's security parameters are, the fact that I've got a phone and I spend most of my day on it is the real effect of that phone.
The effect of the AI is that we are going to have a friend and collaborator who's always available. And that's going to push out other friends and collaborators that exist in our life. And it's already very sophisticated and very seductive. You know, it's only going to get better. As we've seen in the past six months, my God, look how much it's improved.

Susan Sly 16:43
Yeah, it's interesting. I hadn't considered that. I've been thinking a lot about healthcare, AI and healthcare, and there's a whole variety of reasons: an aging father, two in-laws who have dementia, being 51 years old. There are so many reasons. So I'm thinking about AI in healthcare, a lot of the work I've done at MIT, and I didn't even think about this. I've thought about customized, you know, health regimens, all sorts of things, but I didn't even think about, hey, you know, Netflix, I want a movie with a young... and I'm going to ask you this question, too. So mine would be like a young Harrison Ford, you know, maybe Michael Douglas, circa Wall Street. Throw in Demi Moore. Like, mine would be a lot of 80s icons.

Elijah Meeks 17:36
That is a lot of 80s. A lot of mine, mine would be Bruce Campbell. And probably, yeah, I'd put Cary Grant and Bruce Campbell and Lucille Ball into some kind of screwball comedy that was just a little bit wild and crazy. Yeah, I love Bruce Campbell. I think he never got a chance. He should have been in many more things, and soon he will be, as the AI, you know... I mean, because I worked at Netflix, I see the tremendous power of culture, right? The power, and the portability, of culture. The things you see at Netflix where you say, oh wow, isn't it interesting, the show that they made in Turkey, they love it in Brazil. Who would have ever imagined? And we have worked toward this point where it is shrinking the world. We've been shrinking the world relentlessly for the last 200 years, you know, these maps of the world that show how fast you can travel somewhere, how much you can communicate. The world has gotten so much smaller. And I think what's about to happen is the world's going to get much larger, and much more filled with wilderness, because we don't interact in person as much anymore. We interact through comments on social media, we interact through our choices and what we consume from these services that produce media. And now, you know, no one in Turkey is going to be watching any TV shows from Brazil, because they're not even going to be watching TV shows that are from Turkey. They're going to watch TV shows that are from the AI. I'm probably being a bit too ambitious about when, you know, saying two years. But we're already seeing it. We're already seeing the first signal.

Susan Sly 19:36
Well, and to your point, Singapore did a digital twin of their city. And I haven't been to Singapore yet; I've been to 36 countries in the world or something. But let's say I am not of our generation, I'm younger, and I'm a little more timid. So I can decide to have an experience of Singapore multimodally, whether it's through VR, whether it's experiential, like, you know, some kind of UI where I say, take me to Singapore. And let's say I don't like it, so I'm not going to go. But what if I would have met my soulmate there, right? I think what we are going to see, my personal opinion, and we already see it culturally, is this try-before-you-buy. Even if you just look at the environmental impact of people returning products, no one ever talks about it, but it's so significant. I think we're going to see a lot more human reinforcement learning of being fearful and not having experiences, and then that me-versus-we. And maybe not in two years, but it's definitely coming. And to the point of how that is going to change us as a culture and society, it's going to be fascinating. There's no question. And as a parent, I would say nerve-wracking too, for me.

Elijah Meeks 21:17
No, and it seems like Gen X is a theme of the podcast, but any kind of curated experience makes me nervous, right? You know, I don't like Tableau, not because Tableau is a bad product, but because Tableau only lets me do the things Tableau wants me to do. Digital Singapore is only going to be the experience that digital Singapore thinks I should have. And, you know, like you said, we live our lives in a way where we're trying before we buy. I had never thought of it that way, in terms of actually going somewhere. There are two things here. One is the zero-sum-game aspect, which is that you're taking the time to do that, and therefore you're taking that time from something else, which is lived experience. And two, you're not going to Singapore, because ultimately, even if you had a great experience, it's like, oh, you had a great experience, and one day I might go there, but, you know, what else am I going to see? It's a 100% digital twin. And that effect, let me get more practical about it. The most practical aspect that really changes the way we think about AI is that what AI offers us is answers. It's like the ultimate 24-hour news cycle. It's there, it's always on, and it will always give you an answer, no matter how hallucinated and bad it is. And what we have seen with our product is that people just want an answer. Even if they throw the answers away, even if it's a healthy use of them, where they say, no, that's not the right answer, but this is helping me think through it. When you talk about how being a great teacher requires you to be a great student, I would say that being a great student also requires you to be a great teacher, and having that relationship with a computational product really goes beyond Stack Overflow, or even the collaborative experience of pair programming.
But it does put you in a certain kind of reinforcement loop that is curated. And that does have structures in place that we've all run into. You know, I tried to play Zork on ChatGPT at one point. I don't know if you've ever had experience with Zork. It was a text adventure where you go east, it puts you in some room, and it's all text, and it gives you some description of the room you're in. And at some point there's a monster that you have to fight, and I said, hit monster with sword, and ChatGPT said, oh, this breaks our content policy because you're being violent, you're writing about violence. And I thought, well, that was unintentional. But, you know, these assumptions are going to be built into these tools, into these representations, the media, and they're going to be much more effective than, like, when you talk about regulation. Government regulation is terrible, but boy, tech companies are great at regulating and censoring content. They've gotten really good at that. And so you can only imagine that, as that content becomes more and more of our lived experience, we're going to be affected more and more.

Susan Sly 24:45
Well, Elijah, I look forward to our next conversation in the Founders Cave with our Une Femme wines. And I could keep talking to you for hours. I think that, you know, this conversation of where we're going, governance, sensibility, that's why one of the things I'm asked to speak about a lot is this whole concept of diversity of stakeholders at the table as we're creating AI, because we do want a variety of perspectives. And that was one of the things I was so attracted to, in preparing for today's show: I loved your background, because I have had a lot of founders who are engineers, who do come from tech, and I'm a non-technical founder as well. And by that, in Silicon Valley speak, it's kind of like, Elijah's super technical, but then we look at the, you know, like—

Elijah Meeks 25:46
No, absolutely.

Susan Sly 25:47
The pedigree of, where did you go to get your degree? Do you have a computer science degree? Do you have a PhD from Carnegie Mellon? You know, all that stuff. So non-technical founders can be quite technical, but they bring something else. It's like being a physician, a surgeon, when your undergrad degree was in music. And the research actually shows that physicians with the best bedside manner have undergrad degrees that are not science degrees. And I would hypothesize that as we are in this world of AI, going beyond hype, some of the best AI co-founders and creators are going to be people who don't come from that world, because they are thinking differently.

Elijah Meeks 26:27
I think so. I think so. And the one thing I wanted to add, you know, we got very theoretical and philosophical, but I think what's exciting is that all of the things we talked about are chaotic. There's a lot of chaos and energy in the system right now. There's a lot that's been disrupted, whether or not you have self-identified disruptors trying to do it. So there's a huge amount of opportunity, I think, for businesses that I can't even imagine, because this is fundamentally, I mean, this is a fundamental shift. AI is not crypto, right? AI is a fundamental shift in the way that we look at tech, and all of the things that tech has touched. So go out there, audience, and find these companies and figure out how to take advantage of that.

Susan Sly 27:20
Absolutely. Well, Elijah, thank you so much for being on the show. And for everyone listening and watching, please leave a comment if you have a question about the show. I read all of your comments and questions. And Elijah and I would both love a shout-out on social. He's very active on X, so follow him and Noteable there, and on LinkedIn as well. All of his links will be in the show notes, and check out Noteable. I think it's awesome. And I'm so excited, I'm excited for you guys to do the next round. And, you know, when I look at the company and where you've gone and where you're going, I would hazard a guess, just after interviewing hundreds of founders, that that stage, as I said, is going to be a bigger stage. So I'm really excited for you, Elijah, and the team.

Elijah Meeks 28:20
Well, thank you, Susan. I really appreciate the time, and some challenging and interesting questions.

Susan Sly 28:26
Oh, thank you. Well, with that, everyone, this has been another episode of Raw and Real Entrepreneurship. Wherever you are in the world, I hope you go and crush it. And with that, God bless, go rock your day, and I will see you in the next episode.

Susan Sly 28:43
Hey, this is Susan, and thanks so much for listening to this episode of Raw and Real Entrepreneurship. If this episode, or any episode, has been helpful to you, and you've gotten at least one solid tip from myself or my guests, I would love it if you would leave a five-star review wherever you listen to podcasts. After you leave your review, go ahead and email us and let us know where you left it. And if I read your review on air, you could get a $50 Amazon gift card, and we would so appreciate it, because reviews do help boost the show and get this message all over the world. If you're interested in any of the resources we discussed on the show, check the show notes; that's where they all live. And with that, go out there, rock your day, God bless, and I will see you in the next episode.

Susan Sly 29:37
Are you currently an employee looking to start your own business? Maybe you've been thinking about it for a while and you're just not sure where to start. Well, my course, Employee to Entrepreneur, combines my decades of experience as an entrepreneur with proven methods, techniques, and skills to help you take that leap and start your own business. This course is self-paced, learn-on-demand, and comes with an incredible workbook that will allow you to go through the content piece by piece, absorb it, take action, and then go on to the next module. So check out my course, Employee to Entrepreneur.

Follow Susan Sly

Check out previous episodes


Author Susan Sly

Susan Sly is considered a thought leader in AI, and is an award-winning entrepreneur, keynote speaker, best-selling author, and tech investor. Susan has been featured on CNN, CNBC, Fox, Lifetime, and ABC Family, and quoted in Forbes Online, MarketWatch, Yahoo Finance, and more. She is the mother of four and has been working in human potential for over two decades.
