Should AI assistants be allowed to provide medical advice?

AI assistants have the potential to provide medical advice, but should they? Why or why not?
Use these indicators to tag your arguments by copying and pasting them from here. Please use proper indentation for nested arguments.

Note that an argument in favor of one position is often an argument against another position. You do not need to duplicate your arguments; just add each one once in the most relevant section.
 * Argument in favor of the position
 * Argument against the position
 * Objection to the argument
 * Objection to the objection

Feel free to add general arguments, like "AI will outperform doctors in working with patients," but try to provide specific arguments where you can, like "A new study published in JAMA Internal Medicine compared written responses from physicians and those from ChatGPT to real-world health questions, and a panel of licensed healthcare professionals preferred ChatGPT’s responses 79% of the time, rating ChatGPT’s responses as higher quality."

Position: Yes, AI assistants should be allowed to provide medical advice
Relevant details, definitions and assumptions regarding the first possibility.
 * AI assistants are already in use answering medical questions.
 * AI assistants can draft high-quality, personalized medical advice for review by clinicians, which can help to solve real-world healthcare delivery problems.
 * AI will outperform doctors in diagnosing patients.
 * AI may be better at identifying the diagnosis, but it cannot provide the human connection required to deliver the diagnosis and treatment plan empathetically.
 * Studies show that patients rate AI diagnoses as more empathetic than human-generated diagnoses.
 * One AI system could see thousands of patients at once, doing the work of a thousand doctors at any given moment.
 * Barriers to physician access caused by a shortage of physicians should not be remedied by technical workarounds; we should train more doctors.
 * AI systems can be trained to screen patients and, in the event that they cannot provide a diagnosis, refer the patient to a human physician.
 * Patients seeking a human diagnosis may game the system, submitting false or contradictory information in order to prevent a successful AI diagnosis.
 * AI can deliver medical advice instantly and is accessible 24/7, providing a solution for people who may not have immediate access to healthcare services. This is particularly beneficial for individuals in rural areas, developing countries, or during off-hours when medical professionals might not be readily available.
 * Patients can already receive timely medical advice from human physicians using telemedicine technology.
 * AI can help to prioritize cases based on the urgency of symptoms, ensuring serious conditions receive immediate attention. It can also help to reduce unnecessary hospital visits by providing advice for managing minor conditions.
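The triage idea in the last bullet can be sketched in code. This is an illustrative toy, not a clinical tool: the symptom weights and the escalation threshold below are invented for the example, not real clinical values.

```python
# Hypothetical sketch of an AI triage layer that prioritizes cases by
# symptom urgency. Weights and threshold are illustrative only.

URGENCY_WEIGHTS = {
    "chest pain": 10,
    "difficulty breathing": 10,
    "high fever": 6,
    "persistent cough": 3,
    "mild headache": 1,
}

ESCALATE_THRESHOLD = 8  # scores at or above this go straight to a clinician


def urgency_score(symptoms):
    """Sum the weights of reported symptoms (unrecognized symptoms score 0)."""
    return sum(URGENCY_WEIGHTS.get(s, 0) for s in symptoms)


def triage(cases):
    """Split (patient, symptoms) pairs into (escalate, self_care).

    Escalations are ordered most-urgent first; the rest get
    self-management advice instead of a hospital visit.
    """
    scored = [(urgency_score(syms), patient, syms) for patient, syms in cases]
    escalate = sorted((c for c in scored if c[0] >= ESCALATE_THRESHOLD),
                      reverse=True)
    self_care = [c for c in scored if c[0] < ESCALATE_THRESHOLD]
    return escalate, self_care
```

In this sketch, serious presentations jump the queue while minor ones are routed to self-care advice, which is the "reduce unnecessary hospital visits" half of the argument.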

Position: No, AI assistants should not be allowed to provide medical advice

 * AI assistants may not be properly trained to provide accurate medical advice, which could lead to negative consequences for patients.
 * AI assistants have the potential to be no less accurate than human experts.
 * AI hallucinates, raising the potential for confidently delivered misdiagnoses.
 * AI cannot perform a proper clinical diagnosis the way doctors can. Two patients can present with similar symptoms but have different underlying diseases; that is why doctors, who complete a full clinical workup before recommending drugs or treatment, are needed.
 * While AI can analyze data rapidly, its advice is only as good as the data it's trained on. There's a risk that the AI could provide incorrect advice if it has been trained on flawed or biased data.
 * AI systems are always improving and will get better when exposed to real-time information.
 * In the medical field, there is no acceptable room for error in diagnosis.
 * AI would need access to sensitive personal health information to give advice, which may present significant data privacy and security issues.
 * Even if AI is able to accurately and securely provide medical advice, AI is unable to do so in a way that is culturally and situationally appropriate. Do you want a robot telling you that you have terminal cancer?
 * Human experts are more flexible, enabling them to better respond to unpredictable patient reactions to diagnoses.
 * Diagnoses are more than facts; they're the start of a medical journey. Patients will feel more comfortable and empowered sharing that journey with a human doctor than they would feel with a machine
 * In the case of misdiagnosis by an AI assistant, it is difficult to assign accountability in medical malpractice suits, which weakens patient protection.

Position: AI should only be allowed to provide medical advice if...

 * We are certain that we can reduce bias, promote privacy and transparency when using AI systems for healthcare.
 * If an AI system can pass a medical licensing exam, then it should be allowed to provide medical advice.
 * In cases like blood test reports, pregnancy monitoring, etc., AI can be very useful. For example, if a blood test shows a vitamin B-12 or vitamin D deficiency, AI can identify it and recommend supplements to help compensate.
 * AI advice should be validated or supervised by medical professionals to ensure accuracy and address the lack of human judgment issue.
 * A form of statistical triage can be used to focus scarce human oversight on the AI recommendations that have weaker supporting data and/or worse consequences of error.
 * There should be clear regulations and standards for AI in healthcare to prevent misuse and ensure the system is built on accurate and unbiased data.
 * It should be clear to users that they are receiving advice from an AI, what data the AI is using, and how it's coming to its conclusions. Users should also be able to opt-in or out.
 * AI should be used as a supplementary tool for healthcare professionals and the public rather than a replacement for traditional healthcare services, reducing the risk of over-reliance.
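The "statistical triage" condition above can be sketched as a simple ranking rule. This is a minimal illustration under invented assumptions: it supposes each AI recommendation carries a model confidence in [0, 1] and a rough severity-of-error estimate, and ranks cases for human review by expected harm.

```python
# Sketch of statistical triage for human oversight: review the AI
# recommendations where uncertainty times stakes is highest first.
# Confidence and severity values are illustrative, not clinical.

def review_priority(confidence, error_severity):
    """Expected-harm score: low confidence and high stakes float to the top."""
    return (1.0 - confidence) * error_severity


def rank_for_review(recommendations):
    """recommendations: list of (case_id, confidence, error_severity).

    Returns case ids ordered so scarce human reviewers see the
    riskiest AI recommendations first.
    """
    return [case_id for case_id, conf, sev in
            sorted(recommendations,
                   key=lambda r: review_priority(r[1], r[2]),
                   reverse=True)]
```

Under this rule, a confident recommendation about a high-stakes condition can still outrank an uncertain one about a minor condition, which is the point of weighting both data quality and consequences of error.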

Notes and references