Fans of “Star Trek: Voyager” are perhaps used to the idea of a completely computerized medical professional, the doctor on that program having been an artificial intelligence manifesting as a holographic projection (with actual density, something real-life holograms have yet to attain).
On the show, the crew of the Voyager had little choice but to trust their artificial doctor, all the real medical staff having been killed in the pilot episode.
However, despite the obvious fact that this is not 2371, the year of the Voyager’s fictional launch, it appears that there are already plenty of Americans ready to jump into medicine-by-AI feet first.
That, at least, is what data from UserTesting, a company that evaluates how people interact with websites and assists companies in site design, indicates about current attitudes toward computerized medicine.
The company surveyed 2,000 American consumers, as well as 1,000 each from the U.K. and Australia, and found Americans more willing than the others to trust AI with their health issues.
Much more willing, in fact: UserTesting said that while 44 percent of British and 27 percent of Australian consumers expressed some distrust of AI-related health care, only about 6 percent of Americans agreed with them.
Some of the other results were even more surprising.
While it may seem reasonable — even advisable — to search online (which is free) for information about symptoms before deciding whether to see a doctor (which often isn’t), why anyone would consult social media for literally anything other than entertainment is beyond me.
So if 53 percent of Americans search WebMD or whatever, that seems fair, even if only 44 percent of them go on to see a medical professional.
But 46 percent of Americans going to social media — where they, along with Marvin Stiebler’s 28 followers, can be told that no one really knows what causes cancer — seems ignorant to the point of danger. (Cancer is driven by gene mutations, and we know lots and lots about what causes gene mutations, Marvin’s protests to the contrary notwithstanding.)
By the way, I didn’t have to scour the depths of the dark web to come up with Marvin’s nugget of wisdom here — I literally searched “What causes cancer” on X, and his post was the third response.
It was immediately preceded by a link to a story about oral sex causing cancer and the (presumably rhetorical) question, “What if I told you stress causes cancer?”
Pleeeeassse….they don’t even know how cancer really develops let alone what actually causes it 🙄
— Marvin Stiebler (@MarvinStiebler) December 20, 2023
This is where nearly half of Americans are searching for health information? Please, please tell me that this half isn’t registered to vote — although they probably are, and that probably explains some things.
Slightly more than half of Americans — 52 percent, to be exact — told UserTesting that they had “given a list of their symptoms to a large language model (LLM) like ChatGPT, looking for a diagnosis.”
“Of them, 81% have been given a diagnosis from the LLM,” the company reported, “and when asked for their diagnosis after consulting a doctor, 84% said the diagnosis was accurate.”
Kind of stinks to be in the other 16 percent, I suppose — and that’s part of the problem here.
If you get a bad diagnosis from a doctor, that could be a real problem, but it’s a problem that can at least be mitigated in some cases by holding that doctor accountable. That’s what, in extreme cases, medical malpractice lawsuits are for.
To whom do you turn when ChatGPT recommends an analgesic cream for your skin irritation that turns out to be Lyme disease once you talk to a real doctor about it? Starfleet Medical?