I saw this same article posted over on r/ChatGPT and every single top comment is people saying “so what, why does the doctor need to be skilled if the AI can do it” 🤦♂️
Love to only have access to doctors whose abilities are at the mercy of whether the computer works 😌
“Is there a doctor in the house?”
“Yes. Let me load up ChatGPT. Ah damn, no signal. Sorry guys.”
That would never happen! When have you ever been to a doctor’s appointment where the computer and network didn’t work instantly and seamlessly?! 🤪
Wait, they’re supposed to work?
cries in UK GP surgeries with broken IT infrastructure
A podcast I listened to recently spoke about failure modes of AI. They used the example of a toll bridge in Denmark that was recently impassable because it only took card payments and the payment processing system was down. The sensible failure mode in that scenario would be for the toll barrier to open and for cars to simply be let through if technical problems make it impossible for people to pay the toll. Unfortunately, that wasn't the case, and no one had the ability to manually raise the barrier. Apparently they ended up having to dismantle it while the payment system was down.
This is very silly, and it highlights one of the big dangers of how AI systems are currently being used (even though this particular problem didn't involve AI, as far as I know, just regular tech problems). The point is that tech can be awesome at empowering us, but we need to think about "okay, but what happens when things go wrong?", and we need to be asking that question in a manner that puts humans at the centre.
That was a far more trivial scenario than the situation described in the article. If AI tools help improve detection rates, that's awesome. But we need to actually address what happens if those technologies cease to be available, whether because the tools rely on proprietary models, because of power outages, or for countless other reasons things could go wrong.
I suspect the whole problem could be avoided with some judicious UX that forces doctors to make and log their own assessments first.
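To make that concrete, here's a minimal sketch of what I mean, using a hypothetical review workflow (the `Case` class, field names, and example findings are all invented for illustration): the AI's suggestion exists from the start, but the system refuses to reveal it until the clinician has committed their own read.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Case:
    case_id: str
    ai_finding: str                       # computed up front, but kept hidden
    clinician_finding: str | None = None  # the doctor's independent read
    logged_at: datetime | None = None

    def log_clinician_finding(self, finding: str) -> None:
        """Record the clinician's own assessment before anything is revealed."""
        self.clinician_finding = finding
        self.logged_at = datetime.now(timezone.utc)

    def reveal_ai_finding(self) -> str:
        """Release the AI suggestion only after the clinician has committed."""
        if self.clinician_finding is None:
            raise PermissionError("Log your own assessment before viewing the AI's.")
        return self.ai_finding


case = Case(case_id="colonoscopy-0042", ai_finding="possible polyp at 35 cm")
# case.reveal_ai_finding()  # would raise: no clinician entry logged yet
case.log_clinician_finding("no polyps seen")
print(case.reveal_ai_finding())  # now allowed, and any disagreement is auditable
```

The point is that the ordering is enforced rather than merely encouraged, so the doctor keeps exercising (and logging) their own judgement on every case, and you get a running record of human-vs-AI agreement for free.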