
0:02 - Introduction to the Podcast
2:00 - The Complexity of Defining Harm
5:07 - Moral Ambiguities in Everyday Decisions
5:52 - The Use of Violence for Greater Good
8:09 - The Problem of Future Moral Claims
11:42 - Intentions and Mind Reading in Morality
14:06 - The He-Said-She-Said Dilemma
15:33 - The Challenge of Moral Adjudication
20:44 - The Limits of Moral Systems
24:30 - The Seen vs. the Unseen in Harm
27:15 - The Flaws of Intentionality in Ethics
35:50 - Confronting Anti-Rational Thoughts
36:45 - Conclusion and Reflection
In this episode, I delve into the complex topic of moral philosophy, focusing on the definition and implications of evil, particularly in the context of intentional harm. I illuminate the inherent contradictions in defining harm purely as the intentional infliction of injury. Through my own experiences and observations, I draw parallels to famous figures like Frank Sinatra to emphasize how instinct and nuanced understanding play crucial roles in moral reasoning. While many classify evil as targeting others with intent to harm, I argue that such definitions oversimplify a multifaceted issue and consequently shroud morality in ambiguity.
I recount a recent conversation where examples of harm were presented, pushing back against the simplistic notion that intentional harm is the litmus test for evil. I present counterexamples, like emergency medical procedures that may cause pain but serve a greater, beneficial purpose. This notion forces us to question conventional wisdom—does the road to moral good necessitate short-term pain? I explore this through examples like parental discipline or the role of coaches, who inflict temporary discomfort to foster growth. Such scenarios exemplify the intricate balance between immediate and long-term outcomes. I propose that establishing universally applicable principles is imperative to navigate the moral gray areas of life.
A significant focus of this discussion revolves around the perceived irony within our moral frameworks, specifically the principle of "do no harm." I argue that this guideline can inadvertently justify harmful actions when couched in the promise of future benefits. Totalitarian regimes, for instance, may inflict grave harm under the guise of collective advancement, thus revealing the danger in accepting harm as a necessary evil for a greater good. I emphasize the importance of strict moral clarity, pointing out that moral justifications requiring future validation cannot achieve certainty in the present, making them vulnerable to manipulation.
As we dissect examples of bullying and theft, I call for rigorous scrutiny of motives and contextualize them within the broader moral landscape. The difficulty of adjudicating moral disputes arising from subjective experiences emphasizes the need for objective and clear moral guidelines. Furthermore, my exploration touches upon deeper societal dilemmas, including the he-said-she-said dynamics in cases of alleged sexual misconduct, which underscore the impact of ambiguity on moral judgment.
The discussion transitions to the broader implications of moral philosophy on everyday decision-making and the need for clear delineations of right and wrong. The examples I provide reinforce my argument that morality must maintain a level of absolute clarity to avoid derailing into subjective interpretations. I emphasize that moral systems dependent on mind reading or unfounded future outcomes fall short of serving humanity's best interests. In ideal moral frameworks like Universal Preferable Behavior (UPB), the absoluteness of the principles cannot mix with ambiguity over intentions or potential future harm.
Finally, I explore the insights from my own experience in navigating moral dilemmas, highlighting the inherent struggle when advising individuals against self-destructive behaviors or beliefs. I reiterate the necessity of truth and the potential benefits of sharing wisdom, which can serve as a vital antidote to the myriad dilemmas confronting people today. The discussion closes on the importance of self-respect in moral dialogue and the social mechanisms available to address those who veer out of line with established moral standards. By solidifying our grasp on the complexities of moral philosophy, I aim to equip listeners with a perspective that fosters critical thinking and deeper understanding for ethical navigation in their own lives.
[0:00] Well, good morning, everybody. Stefan Molyneux from Freedomain.
[0:02] Freedomain.com slash donate. Pinch punch, first day of the month. We are talking about August the 1st, 2025. Hope you're having a great day. I wanted to tie up some loose ends in my brain, and you might as well bear witness. I think it'll do some good for the world. But yesterday I did a show where we talked about the origins of evil, and people were talking about evil being the infliction of intentional harm. The infliction of intentional harm to others. Now, it's funny, you know, because I like to think of myself as a fairly rational guy, but I'm telling you, a lot of it is just gut. Now, the gut doesn't prove anything, but my gut is like, that can't be right. That's not satisfying. And not to compare myself, of course, to illustrious folks, but I always remember the story of Frank Sinatra. And Frank Sinatra would be like singing with a full orchestra. I don't know, like 80 instruments or whatever, right? And Frank Sinatra would be able to
[1:18] pick out one, say, a bassoon that was a little off. Oh, oh, oh, I think we've got a little stranger in there. And I've always loved that story. I mean, any sufficiently advanced technology is indistinguishable from magic, and any sufficiently advanced skill seems indistinguishable from psychic abilities. So, of course, I have been studying and debating and reasoning for over 40 years now. And it gives me good instincts about this stuff. And yesterday, my instincts were going off full tilt boogie.
[2:01] And this argument about the intentional infliction of harm, I pushed back against it by providing some counterexamples, where you have to say harm becomes complicated to define, right? And the reason that it becomes complicated to define is because, obviously, if you need an emergency tracheotomy, you're pretty happy if there's a doctor around to do it. Or the Heimlich maneuver. I actually interviewed the daughter of Dr. Heimlich, Janet Heimlich, many years ago. And if somebody needs to give you the Heimlich maneuver because you're choking, and they break your rib, well, that's the intentional infliction of harm, but it's not sadistic, and it's done with a larger goal and good in mind. So then you have to balance present harms and future harms. It all becomes very complicated. And things that are very complicated become impossible to manage from a moral standpoint. Right?
[3:03] And we, you know, I always go back to this. We expect maybe two, three, four-year-old little kids to be moral, right?
[3:11] We expect them to be moral. And if we expect kids to be moral, it can't be that it's so complicated. And a coach who's pushing you to run faster and harder, or a personal trainer who's telling you to lift more weights or whatever, is definitely causing you harm. Ah, yes, but in the long run, all that kind of stuff, right? Now, what does the long run mean? Where is the balance? All of these things are very complicated. And I think they would have to do with aesthetically preferable behavior. But it's really tough to say, well, you've got to find just the right calibration and balance between short-term pain and long-term gain and so on, right? And of course, these claims don't exist outside of assertion. So for instance, a bully could say, well, the reason I'm bullying this kid is to toughen him up, right? That's why I'm bullying this kid: I'm bullying him so I can toughen him up, right?
[4:16] Well, does it happen sometimes that kids who are bullied toughen up? Yes, it does. So can a bully then say, I'm trying to toughen him up? Sure. Is it true sometimes? Yeah. Does that make bullying okay or moral or good? No. No, we understand that doesn't work, right? Sometimes if kids are really irresponsible with their property, the parents will take it away.
[4:47] And this is to teach the kid to be more careful or to treat his possessions more carefully, things like that, right? Can a thief then say, well, I'm just teaching him to respect his property? I mean, he left his bike out on the front lawn. He doesn't ever put it away. I'm taking it away from him to teach him a lesson. See, these claims become virtually impossible to adjudicate.
[5:08] I mean, obviously, there are clear ones where it's bad, and there are clear ones where it's good, but there's a lot of gray areas. And the reason we need principles is there are a lot of gray areas in life. Most of the decisions that we make are not...
[5:26] world-spanning, Genghis Khan-scale good or evil decisions. They're little decisions: to tell the truth or to hold our tongue, to confront someone or to back away, to do something shady at work or not to do something shady at work. Those are the decisions that we normally have to make. We don't usually have the decision to go to war or not; just to tell the truth in the public square is the big decision that we have to make.
[5:53] So, anyone can claim that what they're doing is for the greater good, right? I mean, the communists do this all the time. Communist philosophy justifies the use of violence in order to secure a happy, productive, peaceful, wealthy world for the proletariat. So, as the famous communist statement goes, you cannot make an omelette without breaking a few eggs. You cannot achieve the good in the world without the use of violence. And of course, the eggs are broken, the omelette never shows up, but the sadists really enjoy breaking the eggs, for sure.
[6:32] So, I'm always suspicious, and this is true within myself, not just of others, I'm always suspicious when the examples are obvious, because morality is about the non-obvious examples. There's no nutrition book that says don't eat arsenic and gravel, because obviously you shouldn't eat arsenic or gravel, so you don't need a book for that. We need a book for the challenging cases, the non-obvious cases. So, when people say, well, evil is when you intentionally inflict harm on someone, I mean, that sounds good, but then, of course, you bring in the, well, you can harm people in the short run for the greater good in the long run. I mean, totalitarian regimes that euthanize people are saying, well, we have to do this for the sake of preserving our scarce medical resources for others and so on, right?
[7:26] COVID was a lot about short-term sacrifices for the greater good in the long term, which, you know, in general, very often did not turn out to be the case. So, anyone can make that claim. And since the proof of the claim resides in the future, how do you deny it now? Right? So, a bully says, I'm bullying this kid to toughen him up, and coaches do it all the time. Coaches push kids, sometimes make them cry, and they say, but it'll toughen them up in the long run. It worked for me, it'll work for them. So the problem is that all moral claims that require the future to be validated can never be proved in the present. That was an awkward way to put it. Let me take another run at that sentence.
[8:10] It is impossible to prove the current validity of moral claims when the proof exists only in the future. The future isn't here yet. So, is this right or is this wrong? Well, if the proof of the rightness or wrongness of the action lies in the future, then you can't ever have moral certainty in the present. Now, UPB, abstract principles, give you moral certainty. The non-initiation of force, respect for property, the bans on rape, theft, assault, and murder: these are all validated universally, and therefore they're not dependent upon time, whereas the intentional infliction of harm
[8:56] requires a couple of things. It requires that you read someone's mind, because everybody will claim, oh, I didn't mean to. That's what you always hear, right? No, I didn't mean to, it was an accident. I mean, it's pretty wild out there. Even people on X will, you know, say that I'm wrong and dumb and things like that, right? Then, when I call them out for their rudeness, it's, I didn't mean to offend you, I was just being blunt, right? So, intentions are very difficult to read. So, all moral systems that rely upon mind reading cannot be validated objectively.
[9:38] So, a kid takes another kid's toy, and the other kid complains and runs to the teacher. The teacher goes to the kid who took the toy and says, why did you take the toy? He says, I thought he was done with it. I thought he wanted me to have it. Okay, how do you adjudicate that? I mean, it's true, of course, that you could make that claim even under UPB, but that is a specific instance that needs to be adjudicated; UPB does not require mind reading, it does not require an analysis of intentions. I mean, as a theory, right? Sometimes, in adjudication, you do have to. The difference between first-degree murder and, I don't know, negligent manslaughter or negligent homicide, some sort of negligence-based killing, is that one is willed and the other is not willed but results from negligence. So there's a certain amount of intention there. Murder is wrong, but first-degree versus a crime of passion versus, you know, you hired a hitman or planned it out a week in advance, those are different things. So, UPB, as a theory, does not require
[10:54] calculating effects in the future. As a theory, it does not require calculating effects in the future, and UPB does not require an analysis of intention. Now, specific adjudications under UPB might. So, if you take something from someone's yard and you thought they wanted you to have it, like maybe there was a sign next to a couch that says take me, and then 15 feet away, there was a bike, and you took the bike thinking that both things were being offered, right? So, there could be something like that. Again, pretty rare stuff, but it could happen. But the theory doesn't require that.
[11:39] The adjudication of a particular instance may require that.
[11:43] The theory doesn't require an analysis of intentionality, and it doesn't require the guessing of future effects. Because if your moral theory requires that which can be lied about, you don't have an objective moral theory. And I think that's what I'm really talking about here. I mean, this is the classic he-said-she-said stuff regarding non-injurious rape. So, I mean, sorry to discuss such an ugly subject, but rape is the unambiguous moral wrong. Stealing could be stealing something back. Assault could be self-defense. Killing could also be self-defense. But with rape there's no ambiguity; it's evil in and of itself. There is no self-defense rape, right? So, with regards to rape, the big challenge societies have always faced is the he-said-she-said dilemma: she voluntarily went to the man's house, she had a couple of drinks but not enough to be incapacitated, they had sexual activity, and there are no injuries of any kind. And she stays over. She leaves the next morning.
[13:06] And then later, she says that the sex was non-consensual. But there are no witnesses, there are no injuries, and there's no evidence.
[13:14] And this is a horrible situation. I mean, honestly, horrible, because there are certainly times where the woman really felt bullied or pressured, or the man made some kind of threat that was not recorded, or something like that, right? So, there are absolutely times where it could be non-consensual, and there are other times where the man had every reason, or reasonable reasons, to believe that it was consensual, but later there's a withdrawal of the consent, and it's just a big, ugly, difficult, really impossible-to-adjudicate kind of mess. So what do you do? Well, of course, historically society handled this by not allowing men and women to be alone in those kinds of situations, by requiring them to be married, and so on. That is really all that society can do, because you can't adjudicate these kinds of things.
[14:07] That's the he-said-she-said dilemma.
[14:10] And so, if you have a moral system that requires mind reading or the guessing of future effects, which is what the intentional infliction of harm requires, then you have a problem, because you have a non-objective, non-universal, non-rational system: it's relying on intentions and future guesswork. The theory, right? The theory. Now, again, UPB does not adjudicate individual instances, right? UPB says that stealing can never be universally preferable behavior. UPB is the respect for property rights; it doesn't adjudicate every complex land dispute or neighbor dispute over a tree. Those things would have to be adjudicated.
[14:56] But adjudication, of course, exists because of ambiguity as a whole; the moral theory itself cannot contain ambiguity, which is why UPB doesn't. It doesn't require mind reading, and the theory does not require the guessing of future consequences, right? I mean, to take an example that sounds extreme, but that's all right: somebody cuts somebody else's throat in a restaurant and then says, oh, I thought he was choking.
[15:33] I heard him cough, I thought he was choking. I mean, that's a challenge, right? Let's say the guy is a doctor or whatever, right? It's tough. You know: but then, as I was trying to give him the tracheotomy, he writhed and the knife went in. That's complicated stuff. So that's somebody who's intentionally inflicting harm with the claim that future harm will be alleviated: you won't choke to death, because I'm giving you a tracheotomy, or something like that, right?
[16:06] Or, you know, some guy breaks a woman's ribs giving her the Heimlich maneuver and then says, well, I thought she was choking, and she says, I had a mild cough, what are you doing? Right? So UPB says, of course, that assault, the initiation of force, is absolutely wrong, without regard to future results, without regard to mind reading, intentionality, or anything like that. These things are wrong. The do-no-harm theory runs into complications even in the theory, not just in the adjudication of individual disputes or questions. If the theory is ambiguous and requires facts not in evidence, then the theory can't work. The theory needs to be absolute, and then the adjudication deals with complex cases. So, the law needs to say that murder is wrong, and every court trial is there to adjudicate individual charges of murder: the law and the court. And the court deals with all the complex requirements for proof beyond a reasonable doubt. So, if the theory requires mind reading and balancing between present and future
[17:22] harms, then the theory itself is ambiguous and requires facts not in evidence. So, that's a problem. So, there's a difference between the theory and the adjudication, between the law and the trial. And the trial deals with ambiguity; the trial deals with, you know, facts not in evidence; the trial rejects hearsay, requires cross-examination, and is not absolute. I mean, if it were absolute, like if there were video of the person actually murdering the guy, and it was unambiguous, there wouldn't be a trial, right? I mean, almost certainly. Or if there were a trial, it'd be very short.
[18:06] So ambiguity is for the adjudication, it's for the trial to unravel all of that. If the theory contains ambiguity, then you can't know what is right or wrong, in principle. In other words, every examination of moral activity is a trial, which is like trying to build a bridge through trial and error, rather than having principles of engineering that are absolute.
[18:33] So you have principles of engineering and physics that are absolute. And then you have the building of a bridge, where you don't want to over-engineer it and make it too strong, because that wastes resources, and you don't want to make it too weak, because then it collapses. So that's the tension and the ambiguity. What's the right amount of time and money and energy to spend on building a bridge? I don't know. It depends. Is it a bicycle bridge, or is it a bridge that has to take a hundred trucks? All of these things are different, right? It's easy to over-engineer and build a bridge that is too strong and wastes resources, and it's easy to under-engineer. So the physical building of a bridge is, you know, complex and ambiguous, and there isn't a final right answer, just a right-ish answer, a good-enough-ish answer. Like, you can't build a bridge and say, well, this bridge is objectively 0.5% over-engineered or under-engineered. You can't say that. You just come up with a thing, right? You come up with a bridge. Now, the laws of physics are universal and absolute; the building of the bridge is, to some degree, subjective and ambiguous. What is the right strength for the bridge, right? In Toronto, there's the CN Tower. The CN Tower has a glass floor, which can take the weight of four hippopotami.
[19:57] Over-engineered or under-engineered? I imagine it's quite over-engineered, to be on the safe side. Could you get away with something that didn't take four hippopotami, but only 3.999 hippopotami? Yeah, probably. But it's a nice round number, so that's what they do. So you need objective, universal rules. And then sometimes the applications of those rules are going to have some subjectivity and some complexity and some ambiguity, and you just kind of make your best choice. But if your theory is an ambiguous, complex, multifaceted, rule-of-thumb, squint-down-the-line bunch of guesswork, then you don't have a moral theory.
[20:45] And that's why feelings-based, consequentialist, and mind-reading, intentionality-based, quote, ethics aren't ethics. It's confusing engineering with physics.
[20:57] So, if there's a medicine for a particular illness, then sometimes people might need more or less of that medicine based upon their height, their weight, their size, or whatever, right? So, this medicine being good for this illness is the absolute, I mean, setting aside allergies and such, just in general. But how much of a dose you give, that is a different matter. So, moral theories need to be absolute, which means self-contained, which means not ambiguous and not consequentialist. Because if you say something is good or bad based upon its results in six months, three months, five years, ten years, or whatever, if you say that something is good based upon consequences in the future, then you cannot say at the moment whether it is good or not. And, of course, morality is about the future. Morality is about making decisions in the present to have integrity and virtue in the future. But if you cannot make decisions about morality in the present because you have to wait for the consequences in the future, then you have no standard by which you can morally judge your actions. You know, as we were talking about in the show last night: if I break up with some woman, then she's sad, she's going to cry, and she could be sad for months. So, am I intentionally inflicting harm?
[22:25] Well, I mean, my purpose is not to inflict harm, but harm is going to be the inevitable consequence of me withdrawing my affections. Of course, the same would happen in reverse if I wanted the relationship to continue and she didn't, and so on, right?
[22:36] So, your moral theory needs to be absolute. It cannot rely on things that people can lie about. I didn't mean to, it wasn't my intention, and it can't rely on facts not in evidence, such as the effects, weeks or months or years or decades or centuries down the road. Also, the issue is the seen versus the unseen.
[22:57] People see, like, if you look at the deportations in the United States or other places, people see crying families being hustled to the border or removed from the country, and that harm is very vivid. However, when you have a bunch of extra people in the country, the price of housing is higher, access to health care is diminished, and so on, right? And traffic is worse, and people speed because they're frustrated, and then there are car crashes. Like, there's a whole bunch of harm that happens. And the problem of the seen versus the unseen is that if you're going to make decisions based on what appears to be inflicting harm, then you are going to be seeing the most obvious harms, but not the subtle, unrecorded, abstract harms, which are very real but can't really be traced.
[23:43] It's the old argument that if the government spends $5 million to create 50 jobs, then the people who get those jobs are very happy. But the 100 people who didn't get jobs because the $5 million was taken out of the economy, they don't even know that they lost their jobs, right? So, I mean, the seen versus the unseen was big under COVID, right? In that there were people who lost their jobs, and that wasn't really recorded. There were people who lost their businesses, not really recorded. There were people who didn't go to the doctor or the hospital, and that wasn't really recorded. And so, that's all sort of scattered and diminished. And then there were people who died of COVID, and those deaths were recorded and vivid. And so, if you have a moral system based upon not doing harm,
[24:30] then obvious harms will be opposed by that moral system, which will often create ripple harms that are not detected by the system and which are actually worse than the initial harms, right? So, yeah, it doesn't work. It doesn't work as a moral system. And a moral system cannot have within it things which cannot be known at the time. A moral system cannot have elements of decision-making that cannot be known at the time, and intentionality cannot be known, because people can lie about it and fake it. And the moment that intentionality becomes a big deal in a moral system, people will just fake intentionality to make it impossible to prove that they had malign intent, right? I mean, if you look at the libel laws in the United States, they say, well, you know, if you're a public figure, people can say whatever they want about you, as long as it's not done with actual malice or a reckless disregard for the truth. And everybody in the media knows that, so they just make sure they never write themselves down as saying, well, I have a reckless disregard for the truth of this, or, I have actual malice towards this person, right? They don't do anything like that at all. And so it becomes virtually impossible to protect yourself in many instances as a public figure under American defamation law.
[25:44] If intentionality becomes important when judging a harm, then people will simply say to all of their friends, oh, I hope to really do good with this. And they write things in their diary saying, oh, I really want to do good with this. And then they send emails saying, oh, I really want to do good with this. And they create a whole trail saying that they really want to do good with this. And then they have protection against the charge that they were intentionally inflicting harm: I've got a paper trail a mile wide and a light-year long about how I wanted to do good, right? So you can just get around it that way. And you can't oppose bullying, because bullying toughens up some kids, some people, right? Somebody could say, oh, I didn't pay my employees because I really want them to become entrepreneurs. And I remember when I wasn't paid for a job when I was younger, I became an entrepreneur, which has been great for me. So I just decided not to pay my employees, because I want them to become entrepreneurs, because it was the best thing that ever happened to me. Yeah, good luck with that, right? So, a moral system cannot contain a requirement for facts impossible to know ahead of time. And of course, the future result or effect of a moral choice by definition cannot be known in the present.
[26:55] So, mind reading: you can't know, you can't do it, and so it can't be part of your moral system. And future effects you can't know, by definition, in the present, so they can't be part of your moral system either. Which is why the do-no-harm stuff does not satisfy any of the requirements of a moral system, and will be subject to enormous amounts of manipulation and corruption.
[27:15] And also, of course, telling the truth causes people harm. An obvious example: a doctor who tells you that you have a disease causes you emotional harm. Okay, now you can get it treated, maybe you can get better, but it does cause harm. And again, you're looking at the future consequences that are positive down the road. And of course, sometimes the doctor tells you that you have a disease that can't be cured, that you can't get better from. And if you tell someone who told a lie, you told a lie, that causes them harm.
[27:48] If a cop catches a criminal, that causes the criminal harm. And if someone believes that they're a great singer and you tell them that they're not, you know, this sort of Simon Cowell stuff, they get very upset and unhappy, and that causes them harm, and it might break their heart for months or maybe even years. If there are a hundred actors up for a role, only one actor gets chosen; the other 99, who aren't as good or as appropriate for the role, experience harm. And I mean, honestly, this can just go on and on. If someone believes that all humanity is a blank slate made out of Silly Putty that society can mold into whatever it wants, and then you prove to them that human beings are not a blank slate, that there are built-in capacities that vary between people and that they cannot surmount, then those people get upset.
[28:54] So sometimes you are upset about something, and it turns out that it was not a bad thing, but in fact a good thing, right? I mean, you're mad because you're late and you miss the plane, and then the plane crashes, right? So you think, oh my gosh, I'm harmed, my interests are harmed, things are bad, things are negative. And then you find out that things are positive. You get fired from a job and you're very unhappy, but it turns out that because you got fired from the job, you end up starting your own business, and that ends up being more satisfying and successful, and you end up feeling better about it, right? I've certainly had a lot of things like this in my life. I work hard, and pretty successfully at this point, not to prematurely judge whether things are good or bad. It was negative for me to be deplatformed, but it opened up a whole bunch of other things that were very positive for me. So am I going to say that's bad? Well, it's kind of hard for me to say that. Every relationship that I had that didn't work out
[30:06] was negative as a whole, because you want your relationships to work out. So it was negative, but I would trade all of those relationships, as I guess I did, for the wonderful marriage I have with my wife. So although it was negative at the time, it turned out to be positive.
[30:24] You know, when I left theater school, I was unhappy because I loved the acting world and so on, but it turned out to put me in a much better, happier, and more productive direction. So this idea of defining evil as the infliction of harm requires all of this mind reading, it requires guessing about what's going to happen in the future, and it is just a form of hedonism. Because if you say that evil is the intentional infliction of harm, well, again, "intentional" is mind reading, and people can simply lie about it: "I didn't mean to." So you don't have an objective moral standard; people can just wriggle out of it. You can't wriggle out of "stealing can never be universally preferable behavior." That is an absolute. So if you're giving people all of these get-out-of-jail-free cards and all of this subjectivity, you can't say that you have a moral standard. It's hedonism. And how does it rope in people who don't agree with you? This is the most fundamental thing, right? How do you deal with people who don't agree?
[31:38] Well, a scientific convention does not invite people who reject science, who are opposed to science, right? That's not what they do. They won't invite you. UPB, by contrast, is airtight logic that a child can understand, and the only
[32:08] way you can reject UPB is to reject logic, reality, and language, and embrace rank hypocrisy, because you can only reject UPB by accepting UPB. It is universally preferable behavior. To reject universally preferable behavior is a ridiculous self-contradiction, so you would be revealed as incredibly emotionally immature, as manipulative, as maybe insane, either epistemologically or morally, and you would simply be ejected from any rational debate, and people would have no problem condemning you for your, well,
[32:58] mental issues, immaturity, hypocrisy, and manipulation. You'd just be kicked out and dumped from all of that, right? And because society, from children upwards, would accept UPB, people would have no problem using ostracism or coercion against you if you acted to violate UPB. This would be fully accepted and fully understood. People who advocate something crazy like, oh, let's bring back slavery, would be ostracized from decent or civil society. They wouldn't be invited to conferences. They would never achieve any particular artistic, social, business, or political success. That's how it's dealt with; that's how it's run. So "do no harm" is just a form of hedonism. Now, I don't like to
[33:58] inflict emotional harm either. It's not like I wake up in the morning and say, ooh, who can I harm today? But the way I view it is that if people have irrational, anti-rational thoughts in their minds, they can't be happy, and they need to be confronted on those anti-rational thoughts so that they can be happy.
[34:22] In the same way, if I'm at the gym and somebody is exercising in a way that is certainly going to injure them, I would feel pretty honor-bound to say, hey, you shouldn't do it that way, because here's what's going to happen. Like when I was doing physical labor, if somebody was lifting not with their knees but with their back, you know, coming up like one of those dipping birds, I would say, you should lift with your knees, not with your back; that's going to hurt your back. That kind of stuff, right? If somebody were to say to me, I've never jogged before, but I'm going to run a marathon this weekend, I would say, that's a bad idea, you're going to hurt yourself and really have a bad time, because you need to work your way up to that. You can't just go and run 26 miles and change without any preparation. I say these things because I want to help people. Particularly, of course, having seen in my family
[35:33] how mysticism wrecks people's lives, and seeing in my own life and the lives of other people I know how rationality has helped save and create great, wonderful, happy lives.
[35:51] I mean, if you have the cure to an illness that afflicts most people, then why wouldn't you want to spread that cure? If people are in chronic pain and unhappiness and anger, discontented, frustrated, tense, and unable to fall in love, and you have a cure for that which is free, where the only thing the truth costs you is your pride, well then you should spread it. Why would I want to hoard it for myself? I want to spread it for both selfless and selfish reasons. So yes, defining evil as the intentional infliction of harm is just a form of hedonism. And let's say somebody does like or prefer inflicting harm on people: how do you prove to them that they're wrong? That's the big question. I think that's pretty good. I think we're 80% of the way there.
[36:45] I'm sure there's a little bit more, but I have a meeting now, so I'm going to stop. Look forward to your feedback. Thank you for this great conversation; hugely appreciate it. FreeDomain.com/donate. Love you guys. Bye.
Support the show, using a variety of donation methods