Review article.
Majed Chamsi-Pasha
MBBS, SBIM
Consultant Physician
European Medical Center, Jeddah
Hassan Chamsi-Pasha
MD, FRCP (Lond), FRCP (Glasg), FRCP (Ire), FACC
Consultant Cardiologist
Medical Reference Center, Jeddah
Address for correspondence:
Dr. Hassan Chamsi-Pasha FRCP, FACC
Medical Reference Center, Jeddah
Al Malek Rd., Jeddah, Saudi Arabia
Tel: 00966 9200 01476
Abstract
Integrating Artificial Intelligence (AI) and robotics in healthcare is a transformative development with enormous promise for revolutionizing patient care, diagnostics, and treatment modalities. These technologies enhance the precision and efficiency of medical practices, improve patient outcomes, and alleviate the burden on healthcare professionals. While AI offers tremendous potential in healthcare processes, it presents multifaceted ethical challenges that demand meticulous consideration. The primary concerns include bias, privacy, trust, responsibility, transparency, cybersecurity, and data quality.
Keywords: artificial intelligence, ChatGPT, chatbot, ethics, healthcare.
Introduction
The twenty-first century is often recognized as the era of Artificial Intelligence (AI), which raises many questions regarding its impact on human beings. AI is a type of computer system that simulates human intelligence and is used to achieve various tasks by mimicking human cognition, learning, and decision-making processes. It incorporates various technologies and algorithms, including machine learning, deep learning, natural language processing, computer vision, and more.1
The use of AI has enhanced clinical diagnosis, predictive medicine, the analysis of patient data, and clinical decision-making.4
With the launch of ChatGPT, OpenAI has taken the academic community by storm, forcing researchers, editors, and publishers of scientific journals to rethink and adjust their publication policies and strategies. While there are promising results for potential applications of ChatGPT in various fields, there are also significant ethical considerations to be addressed before widespread implementation can prevail.5
Artificial intelligence technologies can also be used in criminal and deceptive activities such as cyber intrusions, electronic fraud, media misinformation, and other illegal and unethical activities. The huge amount of data handled by AI systems could be exposed to hacking or illegal exploitation, which constitutes a serious threat to privacy and security.1
Artificial Intelligence in Healthcare
AI represents one of the most profound developments in healthcare in decades, with the potential to create seismic and revolutionary changes in the practice of medicine.4
The integration of AI and robotics in the healthcare sector has ushered in a new era of efficiency and innovation. These technologies offer a multitude of benefits that have the potential to significantly enhance patient care, improve healthcare outcomes, and streamline various healthcare processes.6 They enhance diagnosis and treatment capabilities and increase efficiency and productivity in healthcare processes. AI systems provide healthcare professionals with real-time clinical decision support. Robotic-assisted surgeries provide unequaled skill, precision, and stability, reducing the risk of complications and accelerating patient recovery times. AI accelerates drug discovery by analyzing vast datasets to identify potential drug candidates and predict their efficacy.6
ChatGPT has demonstrated significant potential in various healthcare-related applications, such as medical education, radiologic decision-making, clinical genetics, patient care, and facilitating communication between patients and healthcare professionals.
Artificial Intelligence in research and medical education
ChatGPT has potential applications in research, including improving scientific writing, enhancing research versatility, streamlining workflow, saving time, and improving health literacy.8 Its utilization, however, comes with potential risks and challenges, including ethical, legal, copyright, and transparency concerns, as well as the generation of content that is difficult to distinguish from human-generated content.9,10 The German artist Boris Eldagsen won a photography award but turned it down, explaining that his image submission was AI-generated and was designed to fool the judges and provoke debate.11 This is a small offense compared to the way AI has been used to generate fraudulent images in research publications.12 The utilization of ChatGPT poses several other challenges, including bias, plagiarism, lack of originality, inaccurate content, incorrect citations, dehumanization, false forecasting, and the dangers of blind trust.6 Reliance on large language models (LLMs) such as ChatGPT for scientific thinking may hinder social and scientific progress, as these models are trained on past data and may be unable to think differently from the past.
ChatGPT is prone to generating fake references and citations, a phenomenon referred to as "hallucination." Some have issued an outright call for the complete rejection of any output produced with AI assistance, while others have allowed ChatGPT to appear on the author list.17
Despite its remarkable capabilities, AI has several limitations and ethical implications that need to be considered, particularly in sensitive fields like healthcare and education.13,19,20
Largely missing from this discussion is the ethics of AI in global health, particularly in the context of low- and middle-income countries.21,22
Privacy
AI-based applications have a direct impact on patients' privacy and confidentiality. The loss of control over data access may have a serious psychological impact on patients if their private health information is exposed. Conversely, concerns about the exposure of databases containing genetic sequences and medical histories could hinder data collection and slow advances in medical testing.22
Bias and inequality
A study in 2019 found that only 2.5% of Google's employees were black, while Microsoft and Facebook each had only 4% representation.23 An example of discrimination and racism caused by AI is the "Tay bot," launched by Microsoft via Twitter on March 23, 2016; it drew harsh criticism of Microsoft after the bot started posting racist tweets. The developers did not consider the moral risks to the community, and when the Tay chatbot spread hate in its tweets, Microsoft shut it down within 24 hours.24
Studies have also revealed poorer implementation rates for specific diseases in rural areas, among racial and ethnic minority groups, those without insurance, and individuals with lower education and income.22,25
Transparency and trust
The transparency of the algorithm enables healthcare professionals to understand how ChatGPT formulates its recommendations. Despite existing guidance for transparent reporting, poorly reported medical AI models are still common. Failure to prioritize explainability in clinical decision support systems can jeopardize core ethical values in medicine and may have adverse effects on both individual and public health.26,27
Responsibility and accountability
AI responsibility attribution poses significant questions about who should be held liable for the outcomes of AI actions. The use of AI systems might result in a loss of accountability. Who is responsible for the decisions taken by an AI system, especially when errors are made and harm is done? Are there decisions that AI systems should never make? Should we require algorithmic accountability and transparency? Should we require that the actions of AI systems always be explainable? If an expert medical diagnosis system makes an incorrect diagnosis that kills a patient, who is at fault?
Some papers explore human responsibility concerning AI systems. Others advocate for examining the causal chain of human agency, including interactions with technical components like sensors and software, to determine accountability. Shifting from data ownership to data stewardship is crucial to ensure responsible data management, safeguard patients' privacy, and adhere to regulatory standards. Data stewardship involves governance and protection of data, including determining access and sharing permissions, ensuring regulatory compliance, and facilitating collaborations and data exchange for research and technological advancements.22
Cybersecurity
Cybersecurity is the practice of preventing unauthorized access, theft, damage, or other harmful attacks on computer systems, networks, and digital information. The opacity of AI systems, which resist explanation and interpretation, can conceal security breaches.
Impact on healthcare professionals
The incorporation of AI and robotics into healthcare systems not only transforms patient care but also reshapes the roles and responsibilities of healthcare professionals. Automating specific tasks may raise concerns about job displacement among healthcare professionals. Ethical considerations involve ensuring a smooth transition for affected individuals and providing retraining opportunities.6
Conclusion
The integration of AI into healthcare has introduced unique ethical considerations that demand careful examination and thoughtful resolution. It raises questions about authenticity, accountability, privacy, and security. ChatGPT has shown significant potential in revolutionizing various fields, including science, healthcare, and education, by accelerating processes, enhancing personalization, and providing valuable support to professionals and learners alike.
Despite promising applications, ChatGPT faces limitations, including difficulty with critical-thinking tasks and a tendency to generate false references, necessitating stringent cross-verification. For effective and ethical AI deployment, collaboration among AI developers, researchers, educators, and policymakers is vital. These tools should augment, not supplant, human expertise. The fusion of technology and healthcare holds vast promise, but only if we navigate its intricacies with conscientiousness and diligence.28 By developing regulatory frameworks and comprehensive guidelines, AI can transform healthcare processes and improve patient outcomes while respecting ethical principles.
References