Transcript: How to Ask an Ugly Question!

In this donor-only show on November 4, 2025, Stefan Molyneux engages with his audience to explore a host of topics, ranging from Bitcoin market trends to the implications of artificial intelligence in our daily lives. Kicking off the session, Stefan emphasizes the importance of the community's support and invites listeners to direct their questions and comments via the Locals chat platform as he dives into the discussion.

The first point of conversation revolves around the recent significant fluctuations in the Bitcoin market. Stefan reports that Bitcoin has recently dipped below the $100,000 mark, a level that hasn't been seen since June 2025, which has led to widespread concerns about market stability. He elaborates on the backdrop of macroeconomic uncertainty, attributing this volatility to factors such as cautious comments from the Federal Reserve, the strength of the U.S. dollar, and poor earnings reports from major tech companies like Meta and Microsoft. These economic signals are causing a ripple effect in the cryptocurrency market, where Bitcoin and other cryptocurrencies act as high-beta assets sensitive to market sentiment and liquidity shifts.

Stefan goes on to contextualize the current state of Bitcoin, illustrating how the recent downturn is more than just a simple price correction. He points to the psychological factors affecting traders, indicating an "extreme fear" sentiment among investors. Moreover, he notes that the recent pullback caps Bitcoin's worst October performance in a decade. As the discussion progresses, Stefan draws correlations between Bitcoin's fluctuations and macroeconomic indicators, highlighting how the transition from retail-driven enthusiasm to institutional market influence has impacted Bitcoin's value trajectory.

As the conversation shifts towards AI, Stefan reflects on the rapid growth and implementation of artificial intelligence across industries, particularly in terms of business productivity. He expresses skepticism about the pronounced benefits and long-term value of AI, posing thoughtful questions about its effectiveness in creating impactful change versus merely being a trend. The discussion leads to an exploration of AI's potential utility in various tasks, from creative content generation to coding.

Several callers join in, sharing their experiences and opinions about using AI in their daily work, especially in programming. One caller highlights the efficiency AI brings to coding tasks, drastically reducing the time needed for certain activities. However, concerns about AI's limitations are also addressed, particularly the so-called "hallucination problem" where AI outputs may lack accuracy—a significant issue for applications where nuanced understanding is critical, like in healthcare.

Throughout the show, Stefan maintains a philosophical lens over the AI debate, probing deeper into whether AI represents a genuine business revolution or merely a speculative bubble. He juxtaposes the views of tech enthusiasts against the critical skepticism surrounding AI’s actual productivity gains and economic utility, suggesting that true breakthroughs in AI may still be a long way off.

Towards the end of the show, an intriguing philosophical discussion unfolds when a caller challenges Stefan with a personal and poignant question regarding love and relationships. The dialogue takes a serious tone as Stefan reflects on parental responsibility, love's unconditional nature, and the moral responsibilities we carry toward our children compared to our parents. This heartfelt exchange adds depth to the conversation, as it shows the intricate and often painful complexities of personal relationships intertwined with the broader themes of accountability and choice.

As the show wraps up, Stefan expresses his gratitude for the community's participation and support. He also teases the impending release of his new book, "Dissolution," indicating that it will reveal some bold ideas he's tackled in his writing. The session concludes with a hopeful tone as he invites feedback from listeners on his latest work, echoing the collaborative spirit established throughout the evening.

Chapters

0:03 - Donor Exclusive Show
0:55 - Bitcoin Market Update
2:07 - Economic Factors Affecting Bitcoin
5:11 - The Role of AI in Productivity
14:33 - AI's Business Impact
24:01 - Politics and Bitcoin
29:03 - Listener Comments on Bitcoin
32:06 - Parenting and Responsibility
40:41 - AI in Coding and Development
45:57 - Tech and Self-Driving Vehicles
56:39 - Closing Thoughts and Thanks

Transcript

Stefan

[0:00] Good evening, good evening. This is the 4th of November, 2025. We are doing donor-only tonight. So if you want to join and you're watching on the general platforms, it is, of course, fdrurl.com/locals to do that. And we're just going to go to Locals as a home, as a whole, and of course, right here on X. Thank you, everyone, so much for your chats. And I think we are good to go. One of the things that we will be talking about, if you like, with your permission, as you see fit, it is, after all, your show as the donors. Let me just move that down because there's no video on X. All right.

[0:03] Donor Exclusive Show

[0:55] Bitcoin Market Update

Stefan

[0:56] So we're going to talk about whatever's on your mind. I'm going to just go on X here if you want to. Oh, we've got somebody who wants to chat, and let's get that cooking away, and then we're going to talk some Bitcoin and whatever else is on your mind. You can type your chats and questions into Locals as well. All right. Uh, oh, go to my chat, go to people who want to talk, you can do it. And speakers. Oh, uh, Peter, are you on?

Caller

[1:32] Uh, yeah, I'm on, but it was automatically designated as a, as a, as a question, but I'm not, I don't have any, any questions or anything like that.

Stefan

[1:39] So, oh, so you're, you entered involuntarily.

Caller

[1:43] Yeah, I was the second, the first one in the chat. So I think it automatically pushed me to trying to speak. But yeah, no questions. Just here to listen. So thank you.

Stefan

[1:54] I appreciate that. And thank you very much for your support. All right. So let me just remove you from speakers to make sure we're not going to accidentally bring you in. Yes, removed, but not canceled. All right.

[2:07] Economic Factors Affecting Bitcoin

Stefan

[2:08] So we're going to talk a little bit of Bitcoin. Happy to take your questions and comments. So: Friday Night Live, October 10th, 2025, Bitcoin, quote, "crashed" to around $116K. Over the past few weeks, the US dollar price of Bitcoin has continued to decline. Today we briefly saw a price under $100K US, and this is about an hour ago; it's 6:35 now, and as of 5:24 Eastern, CoinMarketCap reports a price of

[2:38] 101.1 thousand US dollars, or right now we are cooking at $142,640 in Canadian. So that's interesting. Again, there's going to be variables at all times. So what's been happening? So Bitcoin's price dip below $100K marks the first time since late June 2025 that it's traded at this level, with a low of just under $100K on some exchanges. This represents about a 5% decline over the last 24 hours and extends a broader pullback that's erased about 20% from October highs near $123,000.

[3:15] This move is part of a risk-off sentiment across markets. Cryptocurrencies are behaving like high-beta assets, sensitive to macroeconomic shifts, equity weakness, and liquidity concerns. So it is a bit of a speculative asset for a lot of people; if they're concerned about stability, they may want to get out of it. So what is going on to pressure Bitcoin downwards? Macroeconomic uncertainty and Fed signals. The Federal Reserve's recent 25 basis point rate cut in late October was overshadowed by cautious comments from Chair Jerome Powell and other officials indicating no guaranteed further cut in December. Add to this the strength of the U.S. dollar, which typically weighs on risk assets like Bitcoin: as the U.S. dollar gets stronger, people aren't concerned it's going to lose value, so they'll stay in it rather than go into riskier assets such as Bitcoin. Again, as you know, usual caveats: none of this is investment advice. Make your own decisions. Talk to your broker. This doesn't mean buy or sell anything. All right. So Fed Governor Lisa Cook's indecision on December cuts and Kansas City Fed President Jeffrey Schmid's vote against the latest cut have amplified doubts about ongoing monetary easing.

[4:29] So, weakness in big tech stocks such as Meta and Microsoft spilled into crypto after their earnings reports highlighted massive AI-related capital expenditures. Meta has spent over $70 billion US, Microsoft $93 billion, raising fears of an AI bubble. This coincided with the government shutdown entering its fifth week, creating policy paralysis and investor jitters. Ether, the second-largest crypto, fell nearly 9% today to around $3,275 US, reflecting similar pressure.

[5:03] So the fact that people are pouring so much money into AI is pretty wild. And of course, you can see that as AI got put into practice, the hiring of people went down. People, of course.

[5:11] The Role of AI in Productivity

Stefan

[5:22] Is going to cut the need for human capital. Whatever cuts the need for human capital is going to increase productivity if the associated productivity increases are there. Just my particular thought, I think the AI has been helpful for us for cleaning up audio, for transcripts, for the, I don't have to do any particular audio tweaking now. I just pay for AI to do it. The clips, I think they're interesting, they're neat, they haven't driven a bunch of views and certainly not donations, but I think it's really nice to have those brief clips to sort of share around. That's all done through AI. And when I had to edit a book, I gave AI a pass at it to see how it would do. It wasn't too, too bad. I still had to do a lot of manual stuff, but when I had to sort of shorten a book. So it's good and it's useful. I do use AI for research, as does James. But of course, you have to double check everything. So that's quite important as well. So it's the old thing that they're investing massive amounts of money in AI as a whole.

[6:31] And the question is, what is AI for? What is it going to do? Now, if it's data neutral, sorry, if it's value neutral, if it's not, you know, left wing, right wing or woke stuff, AI, I think, has real potential to do some very interesting stuff. But whenever politics or virtues or morals come into AI, AI loses its mind. Now, I don't know. It's interesting because I don't know what Grokipedia uses. And of course, if you guys know, please let me know. I don't know if Grokipedia uses like what it uses as its source, but I'd say it's certainly by far the fairest one as a whole. So.

[7:13] Because the way, of course, that AI generally works when aggregating information is it has to try and rank the credibility of particular sources, right? And a lot of them put mainstream media at the top, which gives undue power to mainstream media, of course, and is generally bad as a whole because it's so biased. So, crypto-specific pressures: large whale selling continued, with over $1.3 billion in liquidations across the market, exacerbating the decline. Sentiment indicators like the Fear and Greed Index are in extreme fear territory, around 22 out of 100, signaling capitulation among retail traders. Spot Bitcoin ETF outflows have reduced demand, though some institutional accumulation persists.

[8:00] So, Bitcoin's trajectory had been volatile but predominantly downward since mid-October, breaking its seven-year October streak with a 3.7% monthly decline, the worst October performance in a decade. So what's been happening since last time we talked? October 21st to 24th showed, of course, early signs of weakness. Bitcoin hovered around $107K to $110K US and faced resistance at prior highs. An analyst noted a risk-off signal from on-chain data, with long-term holders showing reduced selling but younger coins driving activity. Standard Chartered's Geoff Kendrick predicted a brief dip below $100K US during the trade war concerns, though he viewed it as a potential buying opportunity if gold prices rebounded.

[8:46] And U.S.-China tensions, of course, escalated after Trump's meeting with Xi offered limited tariff concessions, sparking fears of renewed trade friction. Liquidations began building, with one trader highlighting structural damage from a massive event that wiped out market makers. October 25th to 31st: the October close and the Fed decision. The month ended with Bitcoin down nearly 4%, weighed down by the roughly $19 billion US in liquidations mid-month, October 10th to 11th, profit-taking by OG holders since May, and a lack of conviction in longs. Longs are positions where the price is predicted to go up; shorts are where the price is predicted to go down, of course, right? The Fed's rate cut on October 30th provided temporary relief, but Powell's comments on no assured December cut triggered fresh selling, pushing Bitcoin towards $106.8K. Tech stock weakness added pressure, with data forecasting potential dips below $100K if the AI bubble fears persisted. Is AI going to work? Big, big question. You need AI to be creative, but when you turn up the creativity, you get hallucinations. If you have to double-check what AI does, isn't it kind of just easier to do it yourself? So it's tough stuff. The hallucination issue is a very big challenge.
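The long/short distinction glossed above can be sketched as a tiny profit-and-loss calculation. This is a hypothetical illustration only (the function, numbers, and fee-free assumptions are mine, not from the show), and, per the show's own caveat, none of it is investment advice:

```python
def position_pnl(side: str, entry: float, exit: float, size: float = 1.0) -> float:
    """Profit and loss for a simple directional position, ignoring fees.

    A long profits when the price rises above entry; a short profits
    when the price falls below entry.
    """
    if side == "long":
        return (exit - entry) * size
    if side == "short":
        return (entry - exit) * size
    raise ValueError("side must be 'long' or 'short'")

# A long opened at $110K and closed at $100K loses $10K per coin...
print(position_pnl("long", 110_000, 100_000))   # -10000.0
# ...while a short over the exact same move gains $10K per coin.
print(position_pnl("short", 110_000, 100_000))  # 10000.0
```

The symmetry is the point: the October slide described here is "pain" for leveraged longs and profit for shorts, which is why a lack of conviction in longs matters.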

[10:12] So, if people have poured, you know, let's go back up to Meta and Microsoft, it's just wild what they have dumped in, right? Sorry, let me just find that here. I think it was, yeah, $70 billion for Meta and $93 billion for Microsoft. And in return for, in return for what? But again, I can give you the business case for myself, somewhat personally, for me and for James as well: it's made some of what we do more efficient. So I used to spend half an hour to 45 minutes cleaning up audio. Now that process is largely automated. It's given us nice things like transcripts, but I don't know that transcripts are a huge value add. I just, I don't know. I don't know how many people care; I doubt many people read along. It's nice to have the subtitles, but that's automatically done by YouTube anyway. So, nice to have cleaned-up audio; it certainly is more efficient in some ways. The shorts are nice, but again, you know, how much does this nice stuff translate into economic value.

[11:35] To you, the listener and the caller. I mean, honestly, and James, of course, you can let me know if you think this is or is not the case, but I don't think we've had a bunch of people, I haven't seen any messages like, oh, thank goodness you've got these transcripts, you know, and it's not like the shorts are doing particularly fantastically in terms of views and so on. So the audio cleanup is nice, but it's not a game changer. The transcripts are nice but, again, maybe they help searching and so on, but not a game changer. And the shorts are nice, but not a game changer. So I wouldn't say, like, if somebody ripped away AI tomorrow, it'd just be like, oh, okay, so I have to spend half an hour to an hour extra a day cleaning up and fixing audio. And, you know, there's some enhancement and all of that as well. But it wouldn't be like, oh my gosh, we have to hire five more people because AI is down. So again, obviously there's tons of different ways in which AI could help, but for a data-intensive and audio- and text-intensive.

[12:49] Outfit like Freedomain, it's a nice-to-have. It certainly is not a have-to-have. And I don't believe, I don't believe that, because we have to pay for AI, right? It would be interesting to know, and I don't have a strong sense about this, it's hard to know, but I don't think that AI has been cost-beneficial for what it is that I do. Because for you guys, you don't care. Honestly, you don't particularly care if I do half an hour, an hour extra of audio cleanup or something like that. I don't think the transcripts are particularly important. The shorts do fairly mediocre views and so on. So the price that we pay for access to AI certainly makes my job better and easier, and I can do a little bit more of other things, which are more value-added than cleaning up audio. Is it a game changer where we say, gee, you know, we invested X amount of dollars in AI, but my gosh, are we getting, you know, 2X, 5X, 10X, or 20X the value?

[13:53] No, no. I would say I've become a little bit addicted to it because after spending 19.5 years cleaning up audio and fixing things up and normalizing and cutting out, you know, when people cough and, you know, people have background noises, sirens, like the AI will take that out for the most part. And that's really nice. That's really nice. So I'm happy to not have to do that, but that is not something that matters much to you guys because you still get the same number of shows. You could say the audio quality is a little bit better, but not any kind of game changer because you could hear before, now it's a little bit nicer now,

[14:30] but I don't know that that's a massive deal. So I think that a lot of people, after their sort of initial enthusiasm about Bitcoin.

[14:33] AI's Business Impact

Stefan

[14:42] Sorry, after the initial enthusiasm, sorry, about AI, and like, well, we got to have it. It's got to happen. We got to have it, right? I think people are saying like, okay, so after the fever and the excitement sort of wears off, what is the actual business case? We got to get into sort of the FOMO, right? The AI, we got to get into AI. We can't be left out. Everyone else moving ahead. And that's, you know, that kind of enthusiasm is kind of fun, kind of interesting, and so on. And of course, all of the AI salespeople are like, you've got to get in and productivity and here's our charts. And then after this sort of tulip mania, maybe it's tulip mania. This is, I think, what the market is trying to figure out. After the tulip mania cools down and the feverish, you know, it's usually about six months for dating while you're in the honeymoon period and so on. So after all of this fever pitch stuff calms down, then people actually say, okay, so let's forget that it's, you know, really cool. Uh, let's forget that, uh, we got sold the moon and let's actually look at how productive the AI has made us. And after that initial burst of cocaine excitement, you, you have a crash down to reality. And if people cannot make the case for AI.

[16:06] Then it's going to be like pets.com, right? It's going to be a flash. In the pan, it's going to be, you know, the kind of thing. It's not a general technology. It's good for niche markets and so on. Because, you know, when I'm working in a word processor, the AI pops up and says, let me help you write this. I'm like, please go away. It's like Clippy with a me too moment, right? Get your hands off my writing bowls. All right. So we'll see, right? So, So... Get to the right spot here.

[16:40] All right. So on-chain metrics remained mixed. LTHs, that is, long-term holders, accumulated at the fastest pace since 2013, as did shark wallets holding 100 to 1,000 Bitcoin, but ETF outflows and high leverage led to snowballing liquidations below $108,000. Support levels around $105K to $107K were tested multiple times, with a high-volume node providing temporary defense. So, just this month, an escalating decline: November opened with Bitcoin slipping from $109,560, extending October's losses amid ongoing shutdown uncertainty and weak risk appetite. Analysts like those at OCP Capital pointed to upcoming CPI data as a potential pivot: if soft, around plus 0.2%, it could revive soft-landing narratives; if hotter, liquidity risks rise, right? Whale movements to exchanges and derivative open interest around the $105,000 to $110,000 strikes fueled choppy action. By November 3rd and 4th, the drop accelerated below $104K, and it could be $95K to $100K if supports fail. So the supports in general are both the traders and the automated algorithms that say, if it dips below this, buy, buy, buy. And people, of course, scale that stuff. If it dips below 97, buy, but then they're like, ooh, if it goes any further down, make it 96, make it 95. So then the bottom falls out of the bottom, so to speak, right?
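The "bottom falls out of the bottom" dynamic, where nervous traders keep moving their automated buy levels lower each time the price breaches them, can be sketched roughly like this (the function, numbers, and one-step-per-breach logic are my hypothetical illustration of the idea, not any real trading system):

```python
def sliding_support(price_path, buy_level, step=1_000):
    """Model a buy order that traders keep moving lower as price falls.

    Each time the price drops below the current buy level, instead of
    buying, the trader lowers the level by `step`, so the "support"
    never actually holds. Returns the history of buy levels.
    """
    levels = [buy_level]
    for price in price_path:
        if price < buy_level:
            buy_level -= step       # "ooh, make it 96... make it 95"
            levels.append(buy_level)
    return levels

# A slide from $98K to under $95K walks the 'support' down from 97K to 94K.
print(sliding_support([98_000, 96_500, 95_500, 94_800], 97_000))
# → [97000, 96000, 95000, 94000]
```

Because every breach pushes the bid lower rather than filling it, the price finds no resting demand on the way down, which is exactly the cascade being described.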

[18:08] Some see this as a healthy leverage unwind, with unrealized losses low, only 1.3% of market cap, compared to past bear markets. So, again, we talked about this for the last year or two: Bitcoin has shifted to an institutionally dominated market, less prone to retail-driven hype but more tied to macro factors like rates, US dollar strength, and equities, right? So when the hobbyists and the evangelists were in the Bitcoin space, they were dedicated: this is the end of war, this is the end of human slavery, this is the end of exploitation, all kinds of great stuff. And now that the institutions are in, though, Bitcoin has become outward-looking to macroeconomic indicators. So it's a little confusing when you're in the Bitcoin space to say, well, hang on a sec, why does it really matter what the CPI is? Or why does it really matter what AI is doing? Or why does it really matter, blah, blah, blah, right? Well, if companies have invested madly in AI, but it turns out that AI does not provide the return on investment that is expected, then the companies that invested less will become more valuable, because the company that invested more will not get a return on its capital. The companies who said, eh, you know, maybe it's great, maybe it's not, we're going to wait and see, right? So.

[19:27] It's sort of like the California gold rush, right? So if somebody says, wow, there's a bunch of gold in them thar hills, you know, it's five days' hike away, but there's tons of gold there, right? Then everyone invests in gold-panning equipment and the shovels and the shaky sieves and stuff like that. And then they get the food, they get the tents, the sleeping bags, and the mules, and they go off, right? And if there's lots of gold out there, then that was all a really good investment, right? But if there isn't much gold out there, then that was a really bad investment. I mean, good for the people selling you the gold stuff, but not great for you as a whole. So some people are like, well, the moment I hear about gold in them thar hills, I'm going out to them thar hills. And other people are like, I think I'll wait and see, and maybe we'll head out if it does. So let other people take the bleeding edge, and you can do the second or third wave of that. So if people are invested in companies and are not sure that those companies' massive investment in AI is going to pay off in objective, reproducible productivity gains, that's a challenge. The other thing, of course, is that if AI is not great, then it's been a malinvestment. If AI turns out to be fantastic, then it means that.

[20:47] Those companies will gain some value for sure, but whenever a whole industry is affected, or a wide variety of industries are affected, let's say that AI is absolutely fantastic because it cuts your need for your workforce by 25% or something. It's like, okay, so then 25% of people lose their jobs at a time when there's massive amounts of immigration and at a time when there's a lot of political uncertainty. And so it's not great for businesses when they lay a bunch of people off, in some ways, if those people can't get new jobs very easily. Because companies want to say, hey, we've made it cheaper to sell our goods, but if people don't have an income, they don't want to buy them anyway. So it's one thing if there is a big, robust job market to soak up new entrants to the workplace, but that doesn't really seem to be the case at the moment. I'm sure you know the famous story at the end of the Second World War, when all of the American troops were coming home, the government was freaking out and panicking because.

[21:45] Oh god, who's gonna, who's gonna hire them? They've got all the hundreds of thousands or millions of troops coming back, and they set up this whole big department to try and, but by the time the department was set up, everyone already had their jobs anyway; the market had sort of soaked up all the workers and so on. When you got people kicked off the land in the enclosure movement in the 18th century, well, there were factory jobs available in the 19th century, so they could leave the farm and go and work in the cities, and go from the fresh air to the, well, slightly not-so-fresh air. So there were places to soak things up. When the war ended, there was a big demand for labor and it soaked everyone up. Where do people go who are low-rent workers, especially because AI is AI plus robotics, right? You can't just think of AI; you have to think of AI plus robotics. And so a bunch of low-skilled jobs are going to vanish, and AI is pretty good at a lot of this stuff, right? Because AI

[22:49] Can talk to you in any language you want, without an accent, right? So it's going to be interesting. If AI is not successful, there's going to be a shift towards investing in companies that invested less in AI. If AI is successful, you still have a lot of uncertainty, because where are all the people going to go who've been laid off because of AI? Are they going to have an income? Are they going to live off savings? If you're living off savings, or an expected inheritance, or borrowing against the value of your house, you're not spending much. You know, I don't know if you've ever lived lean and hard and low to the bone, but it's kind of rough. So yeah, we'll see. So when there's this level of uncertainty, you know, if there's some really great investment... like when I first got into the business world, there were secretaries. But then after, you know, Outlook and other forms of scheduling mechanisms and email and so on, there weren't. But there were places for them to go. Where is everyone going to go? We don't know yet. All right.

[24:01] Politics and Bitcoin

Stefan

[24:01] So there are positive undercurrents: sustained ETF inflows earlier in Q3, $7.8 billion net, and MicroStrategy's ongoing buys, 388 Bitcoin in mid-October. Cycle analysts remain optimistic for a rebound, with some models giving a 75% chance of closing October above $114K, although that didn't pan out. And November historically was averaging 42% gains; again, because it's institutional now, I wouldn't particularly look at that as reasonable. Key levels to watch: support at $95K to $100K if $103K to $105K breaks; resistance at $108K to $110K for any relief rally. So this appears to be a correction resetting overextended positions rather than the end of the bull cycle, but sustained macro headwinds could prolong the pain. So it's a little bit of a buzzwordy thing. I would also say that there is a sort of ongoing crisis in American politics to do with the relationship between POTUS and the judges, because this has been going on really since Trump was around the first time, and it seems to be more intense this time. And sorry, I see, Tim, you wanted to jump in, which is great; I'll talk to you in a sec. So there's a conflict between POTUS and the judges, because the judges keep saying, you've got to do this, and the Trump presidency is like, do I? Do I really? Is that absolutely a fact? For sure.

[25:31] So, let's get to your comments and then I'll get to your sort of.

[25:37] Think. Aha, I'm using some AI to write some tests right now. Well, the other thing, sorry, just very briefly to touch on this. Thank you. This reminds me of this. So.

[25:51] The other thing, too, is that AI is being used extensively by young people to cheat, right? I mean, you can see the ChatGPT usage drop off right after school ends. So it's being used a lot to cheat. And what that means is that marks are going to be progressively more uncertain in the future. What economic impact is that going to have if you can't differentiate the smart? You're going to have to find some other way to differentiate smart people from not-so-smart people other than marks. Hopefully the IQ tests will come back, but that would take, in America and a lot of places, a big legal challenge.

[26:29] The summaries are hit and miss. Yes, that's right. Usually decent, but inconsistent: first person versus plural versus third person. Yeah, so that's when my shows get summarized. Sim says, there's likely a bubble to some extent. It'll shake out all the business models that aren't going to work and open the market up for the businesses that will work, if a bubble does pop. Bubbles are just caused by more money than sense. James says, the tags it generates for the shows, AI, are usually not good, right? Aren't the transcripts more convenient for inputting the shows into your own AIs, or do you have a different process for that? On a personal note, my wife enjoys reading through them on occasion. Sure. Now listen, this is sort of my old business brain. So it's true that AI-generated transcripts certainly do help inform the AIs that we run for the shows as a whole. But that still doesn't mean that they're economically valuable. Because, as you say, my wife enjoys reading through them on occasion. And it is nice. But what you really need is "I'll pay for it," right?

[27:41] Sorry, just one sec here. We got a wee bit of a hiccup. Enable audio playback. Yeah, sorry about that. A little bit of a hiccup. The browser closed. Why? Why? Why? I don't know. And yeah, sorry about that. We're back. Back. Just let me know. I think we're good. So thank you, Varison, for the tip. I appreciate that. freedomain.com/donate if you'd like to help out the show. So, sorry, if you had comments, I lost them. I don't know what happened. The browser closed and I just reopened it. So, sorry, you need to redo your comments again. If you don't mind, I appreciate that. Why did it close? Why did the browser close? I do not know. I do not know. Anyway. So, yeah, it's nice. People say, well, I like reading through your transcripts, But the question is, will you pay for it, right? Nice to have versus have to have. So, yeah, so I think the political uncertainty of POTUS versus the judges and was AI an investment or a malinvestment and all of that. And if it was not a malinvestment, what happens to all the people who lose their jobs because of AI or?

[29:03] Listener Comments on Bitcoin

Stefan

[29:04] Anthony Pompliano, a very positive voice, wrote today: years ago, we dreamed of the headline "Bitcoin is crashing." People lose perspective quickly. Yeah, of course, of course, right? Oh, Bitcoin is crashing to $100,000. We dreamed of the headline "Bitcoin is crashing to $100,000." That's kind of funny, right? Yeah, try not to lose this perspective. Luke Broyles wrote: Bitcoin is pain. When the price goes up, you wish you had more. When the price goes down, you hate being underwater. When the price goes sideways, you're insanely bored. Embrace the pain. That's actually quite true. That is quite true. I try not to have that approach. Peter B wrote: Scared investor: "Bro, I'd throw everything I had at Bitcoin if it ever hit 100K again, I swear." Bitcoin hits 100K. Scared investor: "Garbage, it's over. You should sell. I knew it. It's finished." Bitcoin goes to $150K. Scared investor: "Bro, I wish I'd bought at $125K." Right. Very true. Very true. Yeah, the buying power of a dollar over time. So yeah, you've got to compare Bitcoin to fiat. And of course, fiat is total ass when it comes to maintaining value. All right, hang on a sec here. Let me just...

[30:25] Simon Dixon wrote: 14 years ago this month, I spoke at the first Bitcoin conference in Prague. Bitcoin had just crashed from $30 to $3. None of us cared. It was us versus the banks. My talk was "Disrupting Banking," not "Help the Banks Own Bitcoin for You." Who's still with me? Right. And that's also a very good point. All right. Let's see here. Oh, did they come back? I've only used the transcripts once or twice myself. Yeah, for sure. So Alex says, about his wife, it's more of a nice to have as opposed to willing to pay for it, right? So yeah, I mean, let's say that it costs $2 to produce a transcript for a show. The question is, would you pay $2.25 to read a transcript for the show? And the answer is probably not.

[31:09] It's there if you'll use it, but would you pay for it to be there? That is a sort of foundational question of entrepreneurship, right? The market will flush out the true value of AI to the world, whether it's a gimmick or of actual value. Well, yes, of course. Okay. Somebody says, oh, a slightly different question. In the past, you've said that love is a natural response to virtue. Natural? I think my phrasing has been: love is our involuntary response to virtue, if we're virtuous. You chose to stop having a relationship with your parents because they abused you and didn't heal the relationship. I'm curious to know, would you apply that same standard to your relationship with Izzy? If, God forbid, she were to go through a period of self-destruction and as a result was abusive towards you, would you stop having a relationship with her? Or is your love for Izzy unconditional? Well, that's a nasty question, boy. That is a nasty, ugly question. I wonder, why would you ask that?

[32:06] Parenting and Responsibility

Stefan

[32:07] It's interesting. So this is from my guardian angel. What a, yeah, that's a nasty, ugly question. And I'm trying to think now, why would you ask that? I think what you would be trying to do is figure out if I would just be a rank hypocrite.

[32:26] Love is a natural response to virtue. My parents abused me and didn't heal the relationship. If Izzy suddenly became abusive towards me, would you stop having a relationship with her?

[32:41] That, I mean, I know I'm repeating myself, but that is a nasty, ugly question. To compare the child that I raised with the parents who raised me very badly, or, as my mother always said, like, you did it all yourself, right? So it's a nasty question because I had no control over the variables in my relationship with my parents. Somebody says, it wasn't meant to be nasty. I'm genuinely curious to know if your love for Izzy is unconditional in a way that your love for your parents wasn't. No, it is a nasty question. I mean, you need to own that, right? Because it just takes a moment's thought, right? So.

[33:35] If somebody hits me with their car, I'm completely innocent, right? They run a red light or they T-bone me or whatever it is, right? Then I have the right to be angry. If I T-bone somebody else, obviously, that's a completely different situation, right? So if somebody injects me with drugs while I'm unconscious, or roofies my drink, or whatever it is, then clearly that's inflicted upon me. If I do it to myself, that's an entirely different situation. So I was in control of zero variables with regards to my parents, right? I was just born into the family, didn't choose the relationship, had no control over the relationship, no say, no choice. I didn't choose where we lived. I didn't choose to go to boarding school. I didn't choose to come to Canada. I didn't choose any of it. I didn't choose for my mother to go crazy. I didn't choose for my mother to be institutionalized. I didn't choose to have to get a job at the age of 10. I didn't choose to have three jobs in high school. I didn't choose any of that. I was just trying to survive in a situation in which I had zero control and never chose any of it, right? So that's the one example. And this is so obvious. This is why I'm saying the question is nasty, because this is completely obvious, right? This is not some big philosophical challenge to mull over or comprehend.

[34:55] So, on the other hand, and of course, I chose neither my mother nor my father. They chose each other. And then they chose to have me. And then they made all the choices that controlled my behavior as a child, or controlled all of the environmental circumstances of my childhood.

[35:12] So, they were responsible as parents, right? Now, when it comes to Izzy, I chose my wife, we chose to have a child, and I chose to stay home, and I chose to raise her. And when I say I, of course, my wife is 50-50, right? But my wife and I controlled almost all the variables in her early life. And so it's the difference between being hit by a car and driving well, right? So it's sort of like saying, if you got hit by a car because some drunken idiot who was texting ran a red, would you be angry at that person? Yes. Well, what if you were a drunken texter and drove into a pole, would you be equally angry with yourself? And it's like, I guess, yeah, but I mean, I wouldn't be angry at someone else. I wouldn't be angry at the pole. And so I'm not quite sure that I understand why it would be at all hard to understand that.

[36:26] The issues that I have with my parents would be completely different from what goes on with Izzy. I had no responsibility for my parents' choices, and no capacity to control their choices, because they didn't listen. Whereas I made the primary choices, and stayed home, and educated Izzy, right? Raised her from... I mean, I was reading to her when she was in the womb, right? I would sit while my wife sat on the couch having some herbal tea, and I would read stories to Izzy, because I could feel her wriggling in the womb. I could sort of feel her, you know, and I know that deeper voices tend to penetrate the amniotic sac. And I also know that deeper voices are recognized by babies, right? And of course, my wife practiced psychology for like a quarter century, and she is an expert in child development and so on. So the idea that I would judge Izzy according to the same standards that I would judge my parents by is so incomprehensible. And again, this isn't complicated. So that's why I think that there's an unpleasant motive to your question. And look, I'm not wildly offended or anything like that. I'm just saying that it's such a weird question, and it's so obvious that the two situations are completely different. If Izzy were to go through something, and again, I know you say God forbid, right? But if Izzy were to just become self-destructive and become abusive towards me, that would be on me.

[37:54] Because when my parents were abusive towards me, that was on them, because they're the parents and they're responsible for the relationship. And so, especially as a stay-at-home dad and, you know, somebody who considers himself an expert on parenting, if Izzy were to be negative or hostile or abusive towards me, that would be on me, because I raised her. So again, I don't really understand why that's hard to understand. That's pretty obvious. So it's a nasty question. He says, I think I understand. What I was trying to understand is, if you had a problem in your relationship with Izzy, you would take part of the responsibility. No, no, no. I take all the responsibility. I raised her. Ah, yes, well, she has outside influences. Like, yes, but I know she's going to have outside influences. It's my job to make sure that she has, you know, good, solid values. So, if Izzy did something negative.

[39:01] Or, quote, abusive, then that would be on me as her teacher. I controlled all the variables, most of the variables, when she was little. So, yeah, again, it's such an odd question, because that's blindingly obvious, isn't it? I hold my parents responsible because they were the parents. So naturally, I would hold myself responsible as the parent, right? It's exactly the same principle.

[39:30] So I'm confused as to why that's an issue. All right. And of course, it is an ugly thing to talk about, my daughter becoming abusive. That's just an ugly thing to talk about. Oh, I'm just asking questions. It's like, but you have to look at your own motive as to why you're asking the question. All right. Chris says, might be a bubble in AI, but I think it's just beginning. The AI used in Tesla vehicles for autonomous driving is remarkable. Maybe in the short term it will be merely a productivity enhancer. All right. Graham says, I commented previously that AI-heavy code bases are realistically write-only. You cannot read them. AI does not abstract, so the code is incomprehensible. It's like outsourcing all over again. I don't quite understand that. I mean, I assume that the AI is also writing comments and documenting the code. I assume it would, because it would be good at that. Somebody says, I don't think it's obvious. I've seen relationships where parents ostracize their kids for behaving in ways they don't like. It is obvious, though. It is obvious.

[40:41] AI in Coding and Development

Stefan

[40:41] I mean, if you've been around for a long time, right, you know that I'm a big one for taking 150% responsibility. Whatever's going on in your life, take 150% responsibility. So if my child is going off the rails, I take 150% responsibility. I don't know why this is hard. Again, I don't know. So, I've seen relationships where parents ostracize their kids. Okay, but they're shitty parents.

[41:04] Right? So it's like, well, Stef, what would you do if your wife strangled you to death overnight? And I'm like, what the hell are you talking about? Well, I read in the paper that some wife strangled her husband overnight. What does that have to do with...? Again, he says, I thought it would be a good question. Sorry, it came across nasty. No, it is a nasty question. What if your child became evil? What if your child became abusive? It's a nasty fucking question, bro. It's a nasty question. And you have to look at your own motive, because it's easy to reason through. It's easy to reason through, right? So it is a nasty question. Well, Stef, what if your beloved child just turned evil? Would you still love her? That's a nasty question, and you have to ask yourself why you'd want to ask that. And again, the answer is very clear and very simple. And saying that the same standards would apply to my parents as to my child is crazy. Like, honestly, that's incomprehensible. You know: well, your father was supposed to get a job and pay the bills, so why isn't your five-year-old getting a job to pay your bills? It's like, because he's my kid, right? There's a completely different standard. As if a relationship where you're in control of the variables is analogous to a relationship where you have no control over the variables.

[42:27] It's like if some woman chooses a bad guy, dates a bad guy, gets married to a bad guy, has three kids with a bad guy, then she has some causality in the matter. If she's in some horrible culture where she's forced to marry some guy, and he basically rapes her and produces children, would you say, well, that's the same, the same standards would apply to both situations? One is voluntary and the other one is coerced, right? My relationship with my parents was not voluntary, and I controlled none of the variables. I chose to have a child. I chose the lovely woman to have a child with. And we raised my daughter well. But what if she just became evil? Then what? It's a weird question. It's one thing if you were to say, well, what if she just went weird, right? But what you're actually asking is: would you ostracize your daughter, when you controlled all the variables to do with raising her, in the same way that you ostracized your parents, as if somehow the relationships are analogous? And I'm just saying, you need to ask yourself why you would ask that kind of question.

[43:39] Right? Why would you ask that question? And also, um, why wouldn't you say, this is an ugly question, you know? Like, prepare me a little bit, because it's a really nasty, nasty question. All right. Uh, anyway. He says, I didn't say evil, I said went through a period of self-destruction. No, you said became abusive, right? "And as a result was abusive towards you," right? So that's being nasty, right? All right. Any other questions, comments, issues, challenges, problems? On X, if you would like to raise your... And maybe you just don't love people enough. Like, it'd be sort of like saying, well, what if your wife just turned evil tomorrow? Like, she's not going to turn evil tomorrow. What if your wife just became abusive tomorrow? It's like, but she's not going to become abusive tomorrow. Right? All right. Oh, James, did you mean to?

Caller

[44:33] I did mean to.

Stefan

[44:34] Yes.

Caller

[44:34] Hey, how are you doing?

Stefan

[44:35] Good, how are you doing?

Caller

[44:36] Oh, not too shabby. So, just a brief little note. It looks like we tried something new with setting up, with me setting up the space, and it looks like I don't have the option to give you video.

Stefan

[44:47] Ah, okay. Good to know.

Caller

[44:49] Yeah, yeah. So we know that now. And as far as the AI coding stuff, I mean, I've always been basically skeptical of it myself. And I know there were big, big, big claims made around it. But I saw, oh, AI is doing some auto-completion, that's kind of nice, it's not that groundbreaking, but people were, like, really excited. But then whenever I came to doing my own coding, it's like I just can't split my attention to set up the time to up and get everything involved and other people.

Stefan

[45:24] Sorry you're cutting in and out.

Caller

[45:25] I was... oh, I am? Yeah, you are.

Stefan

[45:30] Are you in a bad spot? You have no bars.

Caller

[45:35] You know, my wifi looks like... I don't control... is it still bad? Um, yeah. Thank you.

[45:57] Tech and Self-Driving Vehicles

Stefan

[45:58] All right, I think we've lost James. All right, he is gone. All right, time flowing like a river. If you want to unmute time.

Caller

[46:12] Hey, how's it going?

Stefan

[46:13] Good, how are you doing?

Caller

[46:14] Good. I was just going to throw my opinion on the AI a little bit. As far as coding, I find it remarkable. I'm a full-time developer, and I'm using AI all day, pretty much every day. So I think for coding, it's kind of on a different level than sort of these other mundane tasks. And I think there's a lot of people that are thinking it's going to take over complete workflows and businesses. But I just see the hallucination problem being a major issue, especially in healthcare industries and anything where sensitive data and one mistake can cost a life and things like that. I just don't quite see it crossing that moat. But I do think for coding, it's remarkable.

Stefan

[46:56] So tell me, what kind of coding do you do, and how do you use the AI, and what sort of results are you getting?

Caller

[47:03] Yeah, so I started a company about five, six years ago, a software-as-a-service company. I'm the lead developer, so I'm just adding new features every day, getting new data loaded in. But really, I mean, a lot of times, if it's a pretty straightforward task, I can have it done in one prompt: in five minutes, what would have taken me five hours, or maybe even more. So yeah, it's pretty remarkable.

Stefan

[47:32] Is it nicely indented, and does it have all of the right...

Caller

[47:35] Uh, commenting.

Stefan

[47:37] And you can sort of follow it if you need to, right?

Caller

[47:39] Yeah. Not in the comments, but then, like you mentioned earlier, I can say, hey, give me a readme file of everything you just did. And then, to avoid getting the context too big, because you want to keep your conversations relatively focused, I'll take the readme that I just had it generate, and I'll give it to a new AI agent and say, hey, here's this readme that I just made, and now I need you to do this to it. And then I'll kind of, you know, daisy-chain the prompts that way.
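[Editor's note: the daisy-chain workflow the caller describes can be sketched in a few lines of Python. This is a minimal illustration, not the caller's actual code; `call_agent` is a hypothetical stand-in for a real LLM API, stubbed out so the control flow runs offline.]

```python
def call_agent(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call. Here it just turns
    the last line of the prompt (the task) into a fake readme so the
    daisy-chain flow can be demonstrated without any network access."""
    return "README: " + prompt.splitlines()[-1]

def daisy_chain(tasks: list[str]) -> str:
    """Run each task in a fresh 'conversation', carrying forward only the
    readme produced by the previous step instead of the full chat history.
    This keeps each conversation's context small and focused."""
    readme = ""  # no prior context for the first task
    for task in tasks:
        prompt = (
            "Here is the readme from the previous step:\n"
            + readme
            + "\n\nDo this task, then return an updated readme: "
            + task
        )
        readme = call_agent(prompt)  # each call starts a fresh context
    return readme

final = daisy_chain(["add a login feature", "add a billing page"])
print(final)
```

The design choice is that the readme acts as a compact, human-readable memory between otherwise stateless agent sessions, which matches the caller's point that the model "doesn't remember anything it did."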

Stefan

[48:10] Oh, interesting. Okay. And does it matter what language it is? I guess AI doesn't care, right?

Caller

[48:18] Doesn't care. No, I mean, obviously, the more used the language is, the better, because there's more reference material, more training material out there. So some more obscure language... I guess Go would be a little less used. And I code in TypeScript, which is basically like JavaScript. Right.

Stefan

[48:40] Okay. Interesting. So you said five minutes, five hours. That's a massive, of course, massive productivity increase, right?

Caller

[48:50] Right.

Stefan

[48:50] And so, I mean, what is that? That's 30x, right? More. So...

Caller

[48:56] Yeah.

Stefan

[48:57] Is that common? Is that typical? Or how does that work?

Caller

[49:01] I think it's pretty typical. I think the issue for most people is probably that they don't have something to work on at that level of detail. Because today, anyone can go to AI and spit something out; you could make an X clone in about an hour. But as far as making features that people really want and that are difficult, you have to have the product already, you have to have the user base, you have to know what they want and all that. So the problem is always knowing what to build. But it makes it a lot easier, for sure.

Stefan

[49:41] Okay, and you build on it, right? So as the AI has built code for you, when you want to build on that code, the fact that the AI has already built it, I assume, is an advantage as well, because it already understands what it's built, or at least has some reference to that.

Caller

[49:56] No, it doesn't really. That's why I have it spit out all the readme files, because it doesn't remember anything it did.

Stefan

[50:03] But there must be memory-persistent AIs for coding, right? That it remembers what it did.

Caller

[50:10] Yes, but I believe... I think on the back end it's just building a documentation base. But, unless you're feeding it some sort of historical conversation, it's not going to remember that.

Stefan

[50:24] Okay. Uh, all right. James is saying a recent survey across developers says that they feel like they're 20% faster, but when productivity was measured, they were actually 20% slower. Well, but see, 20% slower with regards to coding is not necessarily bad if the code is more readable, more consistent, and better documented with more comments. Because those of us who are coders have every now and then gotten caught up in some quagmire of spaghetti code with lots of GOTO statements, and, you know, variables named a$, b$, c$, and not documented. And so, yeah, if the AI builds better-documented code, that's an advantage.

Caller

[51:05] Definitely. I think one flaw of that study, I did see that, is that it doesn't take into account the fact that I can do multiple things at a time now. So I can put in the prompt, AI starts coding, I can respond to emails, I can start another prompt, I can start another task. So I think the study showed that the time it takes to complete a single task was longer, or maybe about the same. But I think multitasking might change that.

Stefan

[51:36] Yeah, it's like if you have 10 trains going 20% slower, you're still getting a lot more stuff delivered than one train that's going 20% faster.
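[Editor's note: the arithmetic behind the trains analogy is worth making explicit. A back-of-the-envelope sketch, not from the show: throughput is the number of parallel streams times per-stream speed.]

```python
# Throughput check for the trains analogy: ten parallel streams of work,
# each running 20% slower, still deliver far more than one stream
# running 20% faster.

one_fast = 1 * 1.2    # one train (one task at a time), 20% faster
ten_slow = 10 * 0.8   # ten trains (ten parallel AI tasks), each 20% slower

print(one_fast, ten_slow)  # 1.2 vs 8.0: parallelism dominates
```

On these (illustrative) numbers, the parallel setup delivers more than six times the work per unit time, which is the caller's multitasking point in miniature.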

Caller

[51:43] Exactly, exactly.

Stefan

[51:46] Interesting. And have you used AI in any other way in your business?

Caller

[51:52] We're kind of struggling to find uses. Actually, AI is really bad at my niche. It gets everything wrong. If we ask questions that relate to my app and what we present, it gets it all wrong. So we're kind of struggling on that. I mean, I've used it for fun, for images and songs and stuff like that. but yeah.

Stefan

[52:17] Okay, that's interesting. Interesting. And with regards to your question about, say, legal and medical, tough call, man. And the question is all about liability. So it is certainly true that AI makes mistakes when it comes to medical information, but it is also equally true that doctors suck a lot of times.

Caller

[52:41] True.

Stefan

[52:42] Like, medical error is one of the leading causes of death in America and other places. So it's always a question, because you don't want to compare AI to perfection, right? What you want to do is compare AI to human error. Right? So you say, oh, AI hallucinates. It's like, well, yeah, but so do people, right?

Caller

[53:06] So, but let's say you had a company that automated, uh, texting a patient, you know, to say something. I guess the problem would be that the company that implements that would have to be insured for, like, every potential customer, right? Because if something happened, the liability would be on the software company, right?

Stefan

[53:26] Well, there's something to that, but the software company wouldn't sell it that way. The software company would say, you assume all liability. We can tell you that the software is going to work, but we can't tell you that the content... Like, if you use a word processor to write a ransom note, right, then you're on the hook for the ransom note, not the word processor company, right? So they would write it so that: look, we can tell you the technology is going to work and the message is going to get delivered. But the question is, if you go to AI and say, you know, here are my symptoms, and the AI gets it wrong...

[54:06] Okay, that obviously can happen. But the question is, does AI in general get things more right than a human doctor? And not some brilliant House-style doctor, but your average human doctor. Because, I mean, I did a call the other night with a guy who's been in agonizing pain for 20-plus years, and it turned out it could be largely fixed. He just needed the right doctor, which he didn't get. And so the question is, does it do better on average? Like, you know, every time there's a crash... and I think it's been a while, but there were crashes, of course, with AI-driven cars using the Tesla thing. And it's like, okay, that's interesting, and that's a shame, but is it more or less than human error? Because, Lord knows, I drive in a particularly paranoid fashion. I drive like everyone is drunk and texting and playing a video game with their feet, because I think drivers are pretty terrible these days. I know IQ is dropping, so I just have to drive much more carefully. So yeah, is it better, then? So you say, oh, AI, some of the code it produces isn't great. It's like, compared to what? Sorry, go ahead.

Caller

[55:19] Oh, I was just going to say that an interesting thought experiment, I guess, for self-driving is: if these cars are all automated, and let's say the self-driving swerves out of the way of an accident but then hits a pedestrian. I mean, I guess it's kind of like a trolley problem: does it create more of an accident, or does it swerve and hit someone on the sidewalk?

Stefan

[55:43] Maybe it would make a mistake in that split-second moment, but so would people.

Caller

[55:48] Well, I don't even mean it's a mistake. Maybe it makes the correct decision. Maybe it says, I'm not going to hit the car in front of me because there's three kids in there, and I'll hit the person on the sidewalk instead.

Stefan

[55:57] Well, I assume that the AI would make those decisions in conjunction with insurance companies, because insurance companies would have, you know, risk-reward algorithms up the yin-yang and all of that.

Caller

[56:09] Hmm, that's interesting. Yeah, makes sense. I have the self-driving in my Tesla, and I use it every day, so... it's amazing.

Stefan

[56:16] Yeah, I did a test drive in a Tesla. Literally, that's the most futuristic thing I've seen since I first saw a tablet. And it's way cooler than a tablet, in my opinion. So it is really the most futuristic thing. That's just wild.

Caller

[56:33] Absolutely. Cool. Well, good chatting with you as always.

Stefan

[56:36] Thank you, and I appreciate your support. Thank you so much. All right. Any other questions, challenges, issues? I don't want to milk it, but I'm happy to chat if anybody else has anything else that they wanted to add. And again, I do really thank you guys. And listen, to the guy who gave me the ugly question.

[56:39] Closing Thoughts and Thanks

Stefan

[56:53] The coaching that I would give you is just: think about it beforehand. Like, see if you can answer the question yourself, right? Or, if you are going to ask someone a really ugly "what if your child turned malicious and abusive" question, at least warn the person ahead of time, you know, because I was like, oh my God, did he say she turned evil, turned abusive? So yeah, prepare people. The first thing you can do to avoid asking really upsetting questions is to think it through yourself, which I'm sure you can do. And the second thing is to at least prepare people for such an ugly and emotionally difficult question, right? So that would be my suggestion. Or, you know, just be aware of what you're asking, right? It's just a sort of awareness thing. Just be aware that you are stepping on that particular landmine. All right, I don't think we've got anyone else calling in. All right, well, listen, sorry we couldn't get to it yesterday. Thank you guys for dropping by tonight. Thank you for your support. It is massively, humbly, deeply, and gratefully accepted. I should be done my book tomorrow. I'm a little hungry for feedback, so if you're listening to Dissolution, if you could post on your friendly neighborhood FDR site, I would really, really appreciate that.

[58:07] Boo for T.S. Not for you, bro. You're a legend. Congrats on the biz. Okay. I'm not sure what that means; maybe that's for me or someone else. All right. So, yeah. Dissolution, my new novel: I do the last chapter tomorrow, and we'll have it out, and then you can get the whole soup-to-nuts thing, and maybe we can get together to chat about the book. I'd love to know what you think, because, boy, did I take some giant risks with this book, and we'll find out from you guys whether it paid off or not. All right. Thanks, Emil. Have a great night.

Join Stefan Molyneux's Freedomain Community on Locals

Get my new series on the Truth About the French Revolution, access to the audiobook for my new book ‘Peaceful Parenting,’ StefBOT-AI, private livestreams, premium call in shows, the 22 Part History of Philosophers series and more!

Support Stefan Molyneux on freedomain.com
