Audio tapes reveal mass rule-breaking in Milgram's obedience experiments

(www.psypost.org)

To be clear, this doesn't seem like it invalidates anything in the original experiment.

The "rule-breaking" isn't referring to anything the researchers were doing.

It's referring to what the participants were doing. It points out that the compliant subjects who delivered the shocks weren't always following the procedure they were given perfectly. Which is, of course, expected, since people in general don't follow instructions 100% perfectly all the time, and especially not the first time they do something.

> Kaposi and Sumeghy interpret these patterns as a complete breakdown of the supposedly legitimate scientific environment. The subjects were not committing violence for the sake of an orderly memory study. With the scientific elements either forgotten or rushed, the laboratory changed into a setting for unauthorized and senseless violence.

This feels like a huge stretch. Forgetting a step at one point or reading something out loud too early isn't a "complete breakdown of the supposedly legitimate scientific environment" -- a "scientific environment" that is completely fictional to begin with.

reply
> It points out that the compliant subjects who delivered the shocks weren't always following the procedure they were given perfectly. Which is, of course, expected, since people in general don't follow instructions 100% perfectly all the time

The article quantifies the amount of rule-breaking and compares it across participants, noting that those who were better at following the experiment's instructions are the ones who refused to continue to the end.

The article doesn't invalidate the Milgram experiments. It claims that the interpretation from traditional literature is possibly wrong.

reply
Yeah, one of the takeaway interpretations I've always heard is the implication that deferral to an authority figure led people to conscientiously proceed with administering fatal shocks. But this additional detail suggests that conscientiousness is actually negatively correlated with following through to the point of ethical compromise, and that it was, in fact, the less conscientious people who were rushing to just do what was asked of them.

This does suggest that subjects who are bought into and understand the purpose behind what they’re doing, and are attentive to how the specific tasks they’re doing tie into the bigger picture, are more likely to be actively engaging their judgement as they go. And subjects who are just trying to follow the tasks as given to them are sort of washing their hands of the outcomes as long as they’re following the directions (which is, ironically, causing them to fail at following the directions too).

reply
Well, if you're supposed to administer shocks to teach or test someone's memory, asking the question while they're screaming isn't just a protocol issue; it defeats the purpose of the shocks. Saying that participants administered shocks because they trusted the legitimacy of what they thought they were doing doesn't hold up under these circumstances.
reply
No, because you'd have to show that the participants thought there was a breakdown of the procedure and purpose, and that they continued despite that.

If they think the procedure is to read the next question when the previous one has been completed, and they do, even if the other person is screaming, they think they're "following rules". They're not the ones who came up with the procedure.

Which is the whole point: the participants were trying to follow rules, even if they made mistakes in following those rules. The idea that there was a total "breakdown" of the rules doesn't seem supported at all.

reply
Fair point, but there's a logical tension between 'testing someone' and 'following a set of instructions that don't achieve that effect'.

Your point is fair, but the real nuance is that the people who 'stopped' were the best ones at following the rules.

This seems interesting to me - they were conscientious about 'what was happening' - not just blithely following orders.

Maybe the 'rule followers' were conscientiously applying the 'spirit of the test' and quit when they realized it was not reasonable.

The others were 'pressing buttons'.

Even then, it's subject to interpretation. There's a perfectly rational reason why people might submit to 'following the rules' if that's what they've been asked to do and they have a sense of 'dutiful civic conduct' and 'trust in institutions'.

reply
This reevaluation postulates that the participants didn't deviate by mistake, but deliberately. The participant could have waited for the respondent to be in a state in which they could answer. (Reminder: the exercise was officially about answering questions, not enduring shocks).

Instead, most participants rushed through, most likely to end their own negative experience. Which is much more nuanced than "gosh, they told me to do it."

reply
If I'm not mistaken, they were told the point of the experiment was supposed to be about "memory and learning". If a teacher was doing a "commission" as they put it, they aren't really following the purpose of the experiment any longer.
reply
Context is important. Maybe that was told in the first 3 minutes of the briefing, and then came 30 minutes about the shocks. I would not assume the briefing was so thorough.
reply
The “complete breakdown” does not refer to the experiment, but the fictional setting of the experiment.

The article doesn’t claim that the experiment was invalidated, but that some conclusions drawn from it are not well founded.

reply
The interesting bit is that the group that quit the experiment partway through (presumably over ethical objections) was consistently better at following the rules. That indicates the rules may actually have been designed to prevent some of the problems the obedient group experienced, problems that might have kept them from seeing the ethical or moral issues involved in the experiment.

Now the interesting question is _why_ did those people who followed the rules quit at a greater rate? _Why_ did those people follow the rules more closely in the first place? Was there any variation in how the rules were presented? What is the difference in between folks who follow the rules more closely and folks who don't? What can we learn about the human condition from this?

reply
Maybe the disobedient were just a bit smarter and therefore more likely to figure out that they should refuse, but also had more inherent instruction following capabilities.
reply
If anything, this makes the study more revealing and terrifying.

Basically, under the ill guidance of authority, people can become real monsters. That is the conclusion I got from it, and now it's even worse.

reply
> people can become real monsters

It's being consistently verified in real time if you track current events.

reply
I do feel like the conclusion is a bit of a stretch, but there is a slight discrepancy where disobedient participants followed the rules more than the obedient ones, which is an interesting observation. It just feels a bit weak.
reply
> a "scientific environment" that is completely fictional to begin with.

Smooth shiny white walls, beakers and test tubes filled with brightly colored liquids on shiny metal tables… Science!

reply
It wasn't a properly controlled experiment to begin with, nor was it repeated. General conclusions should not be drawn from a single, flawed study. But it makes for good headlines and talking points.
reply
6 of the 7 "replications" mentioned in that Wikipedia section are literally TV shows and performance artists.

...Which is a good metaphor for the "experiment" as a whole.

reply
Actually, all of them are BS. There are no records of the experiment in Australia. I would guess it's just a hoax by the author of "Behind the Shock Machine"; if not, it still certainly doesn't count as a replication.
reply
And some people really WANT to believe it's true. They've built their entire worldview around it and the idea they've been duped would cause a massive narcissistic injury.
reply
Small but important nitpick: I think, most commonly, their worldview was already built and would have been the same regardless. Milgram just provides a veneer of legitimacy, the loss of which would cause problems for them.
reply
If you read Gina Perry's critique, her conclusion is that fewer than half of the participants thought it was real.

These were Yale students, so probably smarter than average, and from what I've read the study didn't do a very convincing job of making it seem believable.

When I took psychology in college I had to submit to random experiments as part of my grade (there were alternatives, but the experiments were easier). Before I'd ever heard of Milgram, if one of those studies had put me in a similar situation I would have smelled a rat immediately.

When I was in middle school the teachers created a fake “government decree” to convince us that there was a new sin tax on products kids use (as a simulation). I immediately knew it was fake as did many other students, but that didn’t stop us from playing along for fun. I talked to a few of my teachers later and they genuinely believed that we fell for it.

reply
That's pretty fun that your teachers did that. I wish teachers attempted to immerse students in the things they're teaching about more often, rather than just reading about it in abstract through a textbook or whatever.
reply
I had a Junior High School teacher who did a variety of immersion lessons. The problem was that even a small deviation from the real-world structure turns the exercise into a pretty simple game. Essentially, the results are too complex and muddy to extract an overall lesson.

And social science/history/economics is about learning the standard lessons of the field (even if those lessons are themselves simplistic compared to the real world, they are a baseline of common knowledge).

reply
Interesting. If we can assume the experimenter's failure to enforce the rules was mere clumsiness or incompetence, rather than an indicator of underlying intentional manipulation of the experimental conditions à la Stanford prison experiment, this can be interpreted in many different ways.

The (eventually) disobedient subjects were better at respecting the experimental process they were given than the "obedient" ones who went all the way to the maximum voltage. Why was that?

Could it be a sign that the disobedient subjects were on average more concentrated on the task at hand (smarter? less stressed? better educated? more conscientious?) than the ultimately obedient ones, and therefore were more likely to realise they were "hurting" the alleged learner and stop?

Or could it be that the obedient subjects were more likely to realise there was something fishy going on, suspecting the "learner" wasn't really being shocked, and thus were paying less attention to the learning rules?

Or was it, as the article suggests, that the obedient ones may have shut down emotionally under pressure to follow through, and their mistakes are the result of that?

Or were the obedient ones more likely to be actual sadists, who were enjoying the shocks so much that they didn't even care if the "learner" didn't hear their question, giving them a greater chance of shocking them again?

Unfortunately I think the Milgram experiment has become so entrenched in popular culture that there's absolutely no way it can be properly repeated to explore these questions.

reply
It really calls into question the conclusions drawn over the last 50 years. Here are the ones I remember being disproven:

* kids grow to be rich because they accept delayed gratification

* alpha males are the leaders of the pack and all other males are useless

* people accept violence if there is a higher authority which justifies it with a reason

How many people suffered or delivered suffering because of their beliefs in the above?

reply
Didn't the Dunedin study also find that childhood self-control and delayed gratification correlated with adult life outcomes?

https://dunedinstudy.otago.ac.nz/files/1571970023782.pdf

reply
Last I checked, delayed gratification was also highly correlated with having wealthy parents.
reply
Source?
reply
On that second point - I can strongly recommend the book Goliath's Curse by Luke Kemp:

https://en.wikipedia.org/wiki/Goliath%27s_Curse

reply
Wikipedia makes it sound questionable at best. I'll wait a decade and see if it comes out looking like milk or wine.
reply
The first point is valid, and I can see it in my own life. Not properly rich by any means, but I've vastly surpassed any expectations and most of my peers from earlier in life (which is rather easy when coming from poor Eastern Europe, but somehow most folks from back home didn't, too deep in their little comfort zones or made-up fears of risk).

It can be reframed roughly as discipline too, a willingness to suffer a bit for later rewards. I can see this as a massive success multiplier in many real-world situations.

reply
>> willingness to suffer a bit for later rewards.

Almost every person I went to college with had this viewpoint. There's also something comforting in knowing you and your friends are all doing the same thing. We were all dirt poor in college, trying to support ourselves with crappy part-time jobs: delivering pizza, working in fast food joints, cleaning offices at night. The idea was that we all believed we were working towards something better than our current situation. The suffering somehow made you a better person, more resilient; it made you understand what it was like to really earn something.

All of my close friends from college went on to do successful things. Engineers, attorneys, stockbrokers, software engineers, pharmacists. We all eventually got to where we wanted to be, but the suffering is what still binds us together to this day. Talking about some of the houses we lived in that should've been condemned. Having to work 60 hours a week and still do well on that exam on Friday.

The willingness to suffer is eased when you have a shared experience with others around you.

reply
The great thing is you can just focus on the one person who "worked hard" or "self disciplined" or "studied well" and got rich while ignoring all the other people who did the same thing and didn't.
reply
I don't think experimental psychology ever validated those extremely simplistic conclusions. I'd say rather that these simplistic conclusions are a "folk summary"/mythical version of a few experiments, and that they come from already-existing cultural tropes, tropes that were simplified and made more cruel and ruthless by various self-marketing consultants.
reply
Making someone think they're an accomplice to torture is itself recognized as a form of psychological torture. Telling someone that they're helping to advance science proves nothing, except that people can be deceived, manipulated, and exploited by bad actors.

Milgram decided to repeat his gross ethical violation 30 times(!), with dozens of test subjects each time. Overall, the majority of people actually disobeyed the orders to continue with higher voltages.

I think the only reason it's become so popular is because it makes for a shocking story, with grandiose implications. The specific "agentic state theory" Milgram invented is not backed up by his data, and personally, I find it philosophically dubious and psychologically concerning that he gravitated to it.

See:

https://www.bps.org.uk/psychologist/why-almost-everything-yo...

https://journals.sagepub.com/doi/abs/10.1177/095935431560539...

reply
A lot of the problem with these "disproven" things is overbroad scope, or abuse in the popular media beyond comprehension.

The delayed gratification thing in particular is correlation vs. causation. It was really more about trust. Forcing kids to delay gratification is meaningless or counterproductive.

reply
Agreed. But according to Gemini [for what it's worth], the final 1990 Marshmallow study [the first versions were cautious] did indeed jump to the conclusion that there was causation toward a better later life. The media might have amplified it, but the wrong (or misleading) conclusion was already present in the _scientific_ paper.
reply
If a scientific paper makes a conclusion, that doesn't mean it's a correct, valid, or properly supported conclusion.

You instead look at the claim and the data and the experiment methodology. It often says something far far less generalizable or significant than the conclusion section of the paper.

reply
The thing about experimental science is that you shouldn't draw strong conclusions from one study or one paper. Those should wait until consensus is reached, until there are many independent studies confirming the same thing under various conditions.
reply
The Milgram experiment also couldn't be repeated today as it was completely unethical. It caused huge psychological distress to participants to the point that some participants had seizures.
reply
Maybe we can do a meta-Milgram. A group of junior researchers are tasked with implementing what they believe to be a Milgram experiment, and while performing it the subjects (actually actors) start faking psychological distress in response to having to shock the completely fake learner subjects.

One of the researchers feels guilty from the apparent panic attack his subject appears to be going through, so he excuses himself from the experimental room and approaches the lead investigator who's watching on CCTV from outside:

“Professor, this subject is really suffering from their belief that they are electrocuting the learner. I believe this is unethical, can we stop please?”

The professor replies:

“The experiment requires that you continue.”

reply
My guess is that it is the pressure to conform working in multiple ways.

Reading the questions while the subject was screaming looks like a performative act of conforming to the pattern, with the pattern's failure blamed on the answerer's failure to conform. That makes the shocks a punishment for failing to conform. The questioner keeps a facade of doing the right thing by going through the motions, even though they are breaking the rules in doing so, because if the other party were compliant that rule wouldn't have been broken. To those with a strong sense that nonconformity should be punished, the painfulness of the shocks would feel appropriate. It is less about following the rules and more about assuming the intent of the rules and permitting abuse because that intent was not their decision. That might make them less willing participants in the abuse and more 'not my problem' active participants.

reply
The reason you have psychology experiments with controls and parameters is that extracting definite conclusions from the simple observation of human behavior is extremely difficult given the wide variety of individuals, groups and cultures.

Once you have an experiment that degenerates into just an event, a situation where the controls have failed, you can come up with many potential conclusions, but you've lost any science-specific conclusion from the observations and you may as well look at any series of events.

That said, I think experimental psychology just generally fails to establish enough controls to merit the scientific quality it aspires to.

reply
> By staying silent and letting the memory study fall apart, the experimenter allowed an atmosphere of illegitimate violence to flourish.

Many people are cruel. Not all people, maybe; not most people, also maybe; but some people enjoy hurting others. We see this everywhere. Isn't it possible that this kind of profile jumped at the chance to inflict pain on people with no fear of repercussions?

In other words, isn't this study just a sort of filter to triage/order students from most cruel to least cruel?

reply
No. I highly encourage people to read his book. What you are describing is a classic example of Fundamental Attribution Error - the assumption that people’s actions are primarily the result of some innate trait, versus that of circumstance.

His study plainly shows that most people, in the right circumstances, will act in unimaginably cruel ways.

reply
People that have been abused are more likely to abuse others.

If we remove this cycle of abuse, what is the natural rate of humans that will hurt others?

An uncomfortable idea: as victims become perpetrators, it may be best to segregate victims to prevent future abuse and victimization.

reply
People that have been treated well are more likely to treat other people well.

If we remove this cycle of decency, what is the natural rate of humans that will hurt others?

The premise is flawed, humans learn from their environment and there's really no way to put a human in a coffin until they're 20 and see what they do then.

reply
> The premise is flawed, humans learn from their environment and there's really no way to put a human in a coffin until they're 20 and see what they do then.

Yeah, but you can also find that rate if you remove the trigger (abuse) from the environment (society) and see how the rate changes.

You don't have to lock someone in a coffin, or something ridiculous like that (and that would be counterproductive anyway). You create a society, or a least a sub-society, where there's no abuse, and see how much abuse is invented by the people raised in that environment.

reply
Right but then you don't need to change anything, simply measure how many people act the opposite way to what they were raised, and then you'll know.
reply
> Right but then you don't need to change anything, simply measure how many people act the opposite way to what they were raised, and then you'll know.

That's presuming the only influence on a child's development are the adults who are raising them, which is not true.

reply
In that case, preventing abuse has the same issue (it only changes the adults).
reply
If a child is sexually abused, perhaps society would benefit from segregating the victims of abuse to prevent the cycle of abuse from continuing?

Let’s put it another way, if a catholic priest touches a choirboy, it’s not a good idea to let the choirboy become a priest and victimize the next generation of choirboys.

Gross, but perhaps a benefit to society.

reply
Nobody can answer that. Abuse can be low-intensity and spread across a long period of time, or an intense one-off event, and the resulting damage can be similar. Spread across whole lifetimes up to the point of the experiment.

Extremely individual reactions, what makes one tougher breaks another completely and permanently, and everything in between.

I'd say everybody has experienced some sort and level of abuse, e.g. typical school bullies (who were usually also bullied themselves somehow, hence the behavior).

reply
Back in the course of human evolution there must at some point have been mammals who were not yet riding on the dysfunctional cycle of violence. That means the natural rate must be non-zero, at least, or else the cycle would have no starting momentum.
reply
> as victims become perpetrators, it may be best to segregate victims to prevent future abuse and victimization

Wonderful idea. Let's not forget to segregate the poors, since they commit violent crimes at higher rates too. We can build a perfect utopia if only we just get rid of all the undesirables!

reply
In practice this just stops victims from coming forward and deepens the cycle.
reply
Without study of the internal motivations, the conclusions of the study are pure conjectures.

You are trapped in an experiment, you have the impression that things have gone too far, and you think you can't escape? You rush it. You hear horrible noises? You just pretend you don't hear them. These are all classic mental patterns. There are a million ways to explain them.

reply
In what way were they trapped?
reply
I assume they meant "If you feel trapped", and followed by imagining what could happen in such a situation.
reply
> With the scientific elements either forgotten or rushed, the laboratory changed into a setting for unauthorized and senseless violence.

> The study authors propose that the experimenter played a major, passive role in establishing this dynamic. When the participants broke the rules and skipped steps, the authority figure rarely intervened to correct them or pause the session. By staying silent and letting the memory study fall apart, the experimenter allowed an atmosphere of illegitimate violence to flourish.

This sounds like looting scenarios to me, i.e. when a situation descends into chaos, some people will just surf/leverage that chaos instead of attempting a return to normalcy, for whatever reason.

reply
This study is so flawed in so many ways that it doesn't prove or disprove anything. The most obvious flaw is the assumption that the test subjects did not realise it was fake. It was not controlled in any way, and many of the subjects (presumably Yale students, and so hardly complete dumb-dumbs) probably thought it was just a lark.
reply
This is well-documented in Humankind: A Hopeful History by Rutger Bregman. It's worth a read, as it also dispels other experiments in human behaviour that have subsequently been difficult to replicate (for a variety of reasons).

https://www.goodreads.com/book/show/52879286-humankind

reply
Yep, great book. It also dispels the Lord of the Flies myth that people have taken as gospel.
reply
I have always been pretty critical about "psychology" as a field, but always kept famous successful experiments (like Milgram and the Stanford prison experiment) as examples that "sometimes it's possible to actually get interesting results".

Turns out those are not valid examples either. So I am genuinely wondering: what remains of the field of psychology, except for a group of people who find it interesting to think about how other people think/behave? Are there examples of actual, useful and valid conclusions coming from that field?

reply
I'd think the conclusion you should draw is not that "even the famous experiments were not valid, so nothing in psychology is" but rather "the validity of an experiment does not correlate with how famous it is".
reply
A direct conclusion. The insight I'll draw from it is that academia gives voice to results that the current zeitgeist finds interesting and believable, without properly verifying the evidence.

See also the replication crisis.

reply
I don't think academia runs fox news and cnn but I'll withhold judgement
reply
s/voice/authority/
reply
Famous experiments are not chosen by academia. They are chosen by non-academics. What you usually find is academics being much more reserved and more critical of these than journalists, bloggers, or random commenters on HN.
reply
I don't know about "much more reserved"... Citation needed. In the absence of evidence otherwise I assume academics are just people.
reply
> Are there examples of actual, useful and valid conclusions coming from that field?

In order for someone to answer this, I think you need to come up with some sort of definition what "actual", "useful" and "valid" actually means here in this context.

Lots of stuff from psychology has been successfully applied to treat people in therapy with various issues, but is that "valid" enough for you? Something tells me you already know some people are being helped in therapy one way or another, yet it seems those might not be "useful" enough, since I don't clearly understand what would be "useful" to you if not those examples.

reply
Psychology "knows" that people don't enter treatment until things are really bad, and then they get better - no matter what treatment is provided. Finding treatments that are better than others is the important part, and they also know they are not very good at that.
reply
> and then they get better - no matter what treatment is provided

I don't know what experience of therapy you've had in the past, but this is typically not how it works. People get better when a treatment is applied that suits them as a person and the context. Not sure where you get the whole "people get better no matter what treatment is applied"; it hasn't been true in my experience.

reply
I'm only reporting what I heard in my intro to psychology class years ago... Still, this is mostly regression to the mean at work. There are for sure treatments that are better than doing nothing, and there are also treatments worse than doing nothing. But in general people tend to get better after a time (they often get worse again in a few months, but this was not covered in class).
reply
The results absolutely are interesting - in fact they’re far stronger for the willingness of many to inflict violence than the original description suggested.

> While every obedient participant reliably pressed the shock lever, they regularly neglected or ruined the other steps required to justify the shock.

Procedural violations here include things like asking the question while the person in the other room was still screaming.

reply
The Hawthorne effect is real. And I don’t think we will ever get a 100% solid grip on what’s happening in others’ minds. Well, until we can actually read, understand, and interpret brain activity at the cellular level.
reply
In Dan Ariely's book, "Predictably Irrational", there's a chapter about how everyone cheats a little.

And based on everyone I've met, and on Dan Ariely's own actions (1), I've concluded this one is true.

We all cheat a little from time to time.

Ex: for me, driving a few km/h above the speed limit is "cheating a little"

1: https://www.businessinsider.com/dan-ariely-duke-fraud-invest...

reply
The ironic part is the recent fabrication controversy with Ariely. He’s recently had to retract fraudulent papers (one of them, most ironically, on the topic of honesty) because of falsified data. It makes one question the validity of all of his work.

His relationship with Jeffrey Epstein isn’t a good look either.

reply
"Irony regards every simple truth as a challenge."

Mason Cooley

reply
Those two experiments are over 50 years old. It's a bit like dismissing physics because Hubble got his constant wrong. Psychology has a lot of issues, but it's also an enormous field. If your frame of reference is half a century out of date, you should probably start with some encyclopedia articles.
reply
It raises the point: if the results are questionable, why not just repeat the experiment?

Here is Derren Brown's attempt at repeating the experiment: https://www.youtube.com/watch?v=Xxq4QtK3j0Y

reply
I wonder what percentage of "obedient" teachers saw through the facade, realized that the learner wasn't a very good actor, and were just having a good time playing along with what must've seemed like some psychology professor's weird pain kink.
reply
I guess evil is even more banal than we thought!
reply
What they teach undergrads about the experiment: People blindly follow orders. If the Nazis ordered you to commit atrocities, you probably would!

What the experiment actually showed: People follow orders when the orders are justified within a persuasive ideological context, e.g. you value science and the scientific researcher is telling you to proceed for the sake of science.

In the first, people who follow the orders of Nazis are not necessarily ideologically aligned with the Nazis; they might just be in a brainless order-following trance. But this isn't real, and in reality the people who were "just following orders" were in fact ideologically committed to the cause and should be judged accordingly.

reply
always thought it seemed flawed
reply
You should have said something
reply
That's an interesting perspective, and it does expand how we can interpret the Milgram experiment

That said, the study has been replicated many times since the original, with researchers adjusting different parameters like participant screening, changing the gender balance, or varying the roles (teacher/student, researcher/technician...). Across these variations, the overall result stays quite consistent: under certain conditions, ordinary people can be led to do harmful things.

Other experiments have also looked at which factors make this more likely, and for example, diffusing responsibility seems to be one of the most effective ones.

reply
> Across these variations, the overall result stays quite consistent: under certain conditions, ordinary people can be led to do harmful things.

The pop culture version of what happened in those experiments is “regular people will administer potentially lethal shocks when told to”, and that claim has been refuted experimentally many times over.

Contrary to most reports, the original experimenters never told participants that the shocks were supposedly lethal or even dangerous. When participants were actually told that there was a health risk, and that they should ignore it, the vast majority of participants refused to administer the shocks in a later recreation.[1]

In other words, the Milgram experiment, as commonly understood, is somewhere between sensationalism and an outright lie.

[1] https://www.mdpi.com/2076-0760/3/2/194

reply
deleted
reply
It should have been rejected from the outset. What Milgram did in his experiments was nothing less than construct an elaborate setup so he could psychologically torture dozens of well-meaning people. The ethical violation was already recognized at the time, and given that, nothing else he claims about method or implications can be trusted.
reply
Is there any information on how many of the participants realized the victim was just acting? Surely it can’t be zero.

https://en.wikipedia.org/wiki/Milgram_experiment

reply
This one is actually interesting: The statistical difference highlights that the people who eventually quit were actually better at following the scientific protocol than those who went to the end.

And also this: The most frequent violation in obedient sessions (those who shocked till the end) involved reading the memory test questions over the simulated screams of the learner. Doing this effectively guaranteed that the learner would fail the test and receive another shock.

Basically, being willing to shock other people without stopping was more about violence itself being permitted than about being an obedient person. Rule followers followed the protocol until they concluded "nope, this is too much" and stopped mistreating the victim.

reply
Appearance of rule-following is of primary importance, not actual rule-following.

The performance, or signal, or whatever we're calling it. That's the important thing.

reply
This isn't an experiment. It's just some idiot running pseudoscience. Predictably the pop science morons have decided this fake 'research' needs more attention than just dismissal.
reply
Milgram gets thrown around as proof that everyone is just a few steps away from being an agent of evil. Finding out that it actually shows that there are psychopaths among us, and most people actually refused (left the experiment), somehow "clicks" and fits with reality a lot better. We see this in historical genocides: not everyone is in on it, and in fact it has to be covered up internally because only the psychopaths are able to stomach it.
reply
I think you are incorrectly guessing the content of the article based on the title.

The article doesn't say that more people refused than was previously known.

It just concludes that most people weren't following instructions in a way that would have supported the validity of the supposed memory experiment.

reply
Indeed. Just knowing that the subjects who followed through with the shocks were less likely to obey the rules could be interpreted in many ways, some invalidating the results of the experiment, some just suggesting a mechanistic explanation, and some making the results even more concerning.

* Did the subjects who went full voltage stop caring about the "learning" protocol because they realised it was all fake? Then the conclusions of Milgram's experiment are invalid.

* Did the subjects who went full voltage make more mistakes because they were more anxious and fearful of the experimenter? Then underlying fear might be a mechanism for blind obedience, and further research would be interesting.

* Did the subjects who went full voltage just enjoy electrocuting the dude so much that they stopped caring about asking the questions correctly? Then blind obedience is the least of our worries, widespread sadism is much more concerning.

reply
It says more than that. The "psychopaths" were NOT following the rules. The rule followers were not cruel.

The act of torturing was not due to the torturer obeying the rules. Instead, torturers broke the rules and created conditions that let them inflict more torture.

reply
[dead]
reply