Everyone loves the idea of personalised healthcare, but past trials have often proved too costly, both for the pharmaceutical companies producing the drugs and for patients and healthcare systems.
“That’s why the pharma companies like to develop ‘blockbusters’. It’s just one drug for all,” Dr Alex Ng told Euronews Health.
But what if AI could change all that?
We sat down with Ng, president of Tencent Healthcare, to hear about the game-changing possibilities of AI in healthcare and how the company is living up to its motto of 'tech for good'.
Access to healthcare
Whilst we all know technology can bring great innovation and development to an industry, Ng is very conscious of the problems a lack of access can create as a service becomes more digitised.
“What we have seen with history is that with digitisation, with more and more machines, you are actually inadvertently widening the inequality gap,” Ng explained.
“A lot of that may be due to access, and a lot will be due to reimbursement, payments, [and] ability to pay. Now, we’ve seen that with every single wave of technology development, it gets wider and wider”.
Tencent is a Chinese technology conglomerate, best known for its popular messaging app, WeChat.
By leveraging the app's popularity, Tencent has added extra features to help increase access to healthcare for its users in China. Through China's equivalent of WhatsApp, users can book hospital or clinic appointments, get a tele-consultation and browse health and drug information.
“We used to write a lot of medical articles to do health education, health awareness. But now, people are going to AI, whether it’s actually ChatGPT or chatbots,” Ng said.
“We can make sure that the answer that they get is much more [rigorous] and verified and much more peer-reviewed, versus a lot of the hallucinations that AI can sometimes get”.
This is a particularly important service in China, where, Ng explained, patients receive their test results at the same time as their doctor. And, like most of us would, many patients turn to the internet to interpret the results before their follow-up appointment with their doctor.
“A lot of the time, we provide tools [based on] AI to actually explain to them what these test results mean. Of course, with the usual caveat, that we cannot replace the doctor’s advice,” he explained.
“Even if we didn’t provide a rigorous AI engine to help them with that, they would just do it on any other random AI that is not healthcare specific. If they just go to a random tool and they do it, the hallucination inaccuracy might be even higher.
“And so we come from a vantage point where we are well aware of the seriousness of supporting the patient and the user in interpreting some of the results. So we try to offer a much more ground truth alternative”.
Can we accept AI that makes mistakes in healthcare?
The benefits of using AI for the greater good of public health seem obvious, but the debate gets interesting when it comes to our capacity to accept mistakes. We appear to be much more accepting of error when working with humans, yet we expect AI, a human-created technology, not to make any mistakes.
“AI is never perfect. Just like drugs, each drug is intended for a certain indication, but you also accept that it will have side effects and some mild, some moderate, some serious, for the whole population. And there’s a regulator, there are laws around to protect that, because otherwise no one will be investing in developing new drugs,” Ng said.
“I think the same for AI. If we have one AI being developed for a specific task, it might be better than humans already, but it is not faultless. If there are certain faults within a boundary, that is acceptable, how do you regulate that?
“How do we, as a health system, work with the regulators, work with health systems in some way, work with society on what is acceptable? And I think that line is very different for China, Southeast Asia, the US, Europe, and the UK because I think the expectation is very different”.