"It is in the nature of medicine that you are gonna screw up; you are gonna kill someone." – Gregory House, on the TV show House
While the stakes may not be as high in most sciences as they are in medicine (are you really going to be risking lives doing computer science?), the quote seems a fitting opening for this post. And a more general version of it certainly holds: it is in human nature that you’re gonna screw up. So, what can happen, and how do we deal with it?
A few cautionary tales
While it may seem that you can’t really harm someone with bad computer science, bad physics, or bad studies, I’d like to suggest otherwise.
When safety measures fail: killed by a bug
Imagine you’re working on a radiation therapy device. You’d like to make sure it’s as safe as it can be – so you implement error checks to prevent failures. Previous models also had hardware error handling, but your bosses want to cut costs, so they tell you to implement it in software only.
You do your best with the given time and resources, but you miss a very subtle thing: a race condition, as it’s called. The possibility that your program’s behaviour depends on the precise timing of events – say, a safety check reading a value at the very moment another routine is updating it. Something like that could make your software think everything’s going well when in fact something’s broken.
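To make the idea concrete, here is a minimal Python sketch of a check-then-act race. This is not the actual Therac-25 code (that was written in assembly); the class and field names are invented for illustration, and the "concurrent" edit is simulated by a plain assignment between the check and the action.

```python
# Hypothetical sketch of a check-then-act race condition.
# NOT the real Therac-25 software; names are invented for illustration.

class TherapyMachine:
    def __init__(self):
        self.mode = "electron"        # low-power mode: no beam target needed
        self.target_in_place = False

    def is_safe(self):
        # High-power x-ray mode is only safe with the target in the beam path.
        return self.mode != "xray" or self.target_in_place

machine = TherapyMachine()

# The safety task runs its check first...
verdict = machine.is_safe()           # True: electron mode needs no target

# ...but before the beam fires, a concurrent operator edit switches the mode.
machine.mode = "xray"                 # the target is still NOT in place

# The control task then acts on the stale verdict: by the time the beam
# fires, the state the check approved no longer exists.
if verdict:
    action = f"firing {machine.mode} beam"   # unsafe high-power shot
```

The bug is not in the check itself – `is_safe()` is correct – but in the unguarded gap between checking and acting, which is exactly why races are so easy to miss in testing: they only bite when the timing lines up.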
You didn’t have the chance to test the code on actual devices – your company is very cheap, and building radiation therapy devices is expensive. The combination of hardware and software will be tested on installation in the hospitals, they tell you. But you know they’re going to test only whether it works, not how it works, don’t you? So you tested the code on a simulator on your computer, and it worked properly. That was the best testing you could do, as the code was old and wasn’t really suited to automated testing.
You did reuse the old code, so surely that should help you build good safety checks, right? You added additional checks and fixed some bugs in the old ones. But it didn’t occur to you that the old code itself was broken. How could it be? There were no incidents before. Those devices did have hardware error checks – but surely they had good software checks as well?
And then the calls start rolling in. Patients screaming in pain, their flesh burning. They start showing symptoms of radiation poisoning.
And you don’t stop to think it could be your device. You’ve tested it, it was fine! You did a good job, it must be the operator’s fault.
Your overconfidence just killed or injured at least six people in the span of two years.
This is the true story of the Therac-25, a radiation therapy device built by Atomic Energy of Canada Limited. The device now serves as a cautionary tale for all software engineers building safety-critical devices.
Sticking your head into a particle accelerator
Imagine you’re a physicist working on a particle accelerator.
You’re sitting in the control room, doing whatever control room personnel of particle accelerators do, when your colleague, also a physicist, calls to let you know he’ll be going down to the particle accelerator to check up on some malfunctioning equipment. He says he’ll be down there in about five minutes, and asks you to stop the experiments while he’s in there.
“Sure, okay”, you tell him. Of course, you wouldn’t like to harm your colleague. So you look at the clock and remember the time.
But he forgot his watch. He didn’t stop to chat with anyone in the hallway, and he didn’t call you to check whether the experiment had stopped – he just looked at the light that indicated whether the particle accelerator was on. It indicated it was off. And the door, which should have been locked if the experiment was in progress, was unlocked.
It did seem to him – correctly – that less than five minutes had passed. And he did wonder whether he should enter. But the door was unlocked. The light was off. He thought it must be safe.
And so, he leaned over the equipment to do his job. It wasn’t long until he was struck by a 76 GeV proton beam, seeing a flash brighter than a thousand suns as it passed through the back of his head, the occipital and temporal lobes of his brain, the left middle ear and out of the left side of his nose.
Curiously, after being struck by up to 300,000 roentgens, he continued his work, as he felt no pain. He was a physicist; he knew what had happened, and he knew his life was in serious danger. But he went home that day, not telling anyone what had happened.
At home, over the following days, the left side of his face swelled beyond recognition and his skin peeled.
Luckily, he received treatment that allowed him to live. His mental faculties weren’t diminished, it seems – he finished his PhD and continued working as a particle physicist. But his personality changed. He couldn’t move half of his face. He lost hearing in his left ear, tired quickly from thinking, and had seizures every once in a while.
Your confidence in safety protocols and everything going well just changed the course of your colleague’s life.
This is the true story of Anatoli Bugorski, a Russian particle physicist who, depending on your viewpoint, either had the worst luck when he stuck his head into a proton beam, or the best, as he lived to tell the tale.
Professional liar, £150/hr
Imagine you’re a doctor.
A somewhat established, though highly controversial one. You’ve published a few speculative studies here and there, but never an outright lie. You were a university lecturer; you still cared about the truth.
And then you were approached by a lawyer looking for expert witnesses on “vaccine damage”, offering you £150/hour plus expenses to find a connection between vaccines and developmental issues.
So, you found 12 children with developmental issues and published a study describing what you characterized as a new syndrome, claiming a link between autism and gastrointestinal issues. You said it didn’t prove a link between this novel syndrome and the MMR vaccine you’d been investigating – that link was hypothesized by the children’s parents – but you just couldn’t let people not know there could be one until you investigated further. So you gathered the press and held a press conference about it.
The plot twist? It was all a lie. You manipulated the patients’ histories, modified their histopathology results, and just outright lied about everything.
And the public believed it. They still believe it.
You started the antivax movement. But hey, you got half a million pounds for it.
This is the true story of Andrew Wakefield and the Lancet MMR autism fraud.
What’s our duty?
Doctors have the Hippocratic Oath.
As the stories above show, as scientists, we also should have a duty, a moral imperative, to uphold.
And as medicine shows more than any other science, screw-ups happen. They’ll happen to you, whether you like it or not. The consequences vary; the ability to mess up – not so much.
And it seems to me that it’s okay to mess up. Sometimes. When it’s really not your fault – when you’ve done your best and it’s a matter of circumstance, not of fault.
So, how should we behave as scientists? What is our moral imperative? What is there to uphold?
It’s a question I’ve been asking myself lately, and a question that I really don’t have a great response to.
Sure, “do no harm” is a good start, but is that enough? It’s painfully obvious that we should seek the truth and report our findings honestly, unlike Wakefield.
But doing no harm also means, at the very least, maintaining good safety protocols and inspections, unlike Bugorski’s institute.
And it means admitting when you’re wrong and doing your best to minimize the damage, unlike the Therac-25 developers.
It means that we should give our best, even when we aren’t at our best, and stay humble. It means thinking about your work and its consequences.
I’d love to end this with a brilliant conclusion – a good rule of thumb for future scientists and engineers – but I can’t think of one that’d be good enough. So, please join me in this philosophical discussion on the moral imperative for scientists in the comments.