ChatGPT is an artificial intelligence tool that automatically generates articles from sentences or keywords entered by the user. While ChatGPT has gained recognition in some fields, its suitability for writing articles in the medical field remains unclear.
1) Information accuracy: The medical field is highly specialized, complex, and technical, and any misrepresentation may have serious consequences for patients. If ChatGPT is used to write medical articles, the accuracy and credibility of the content must be ensured. However, ChatGPT is not a professional medical tool, and its output may contain errors that could cause irreparable harm to patients.
2) Public relations: One of the most crucial aspects of public relations is safeguarding the client’s image at all levels and avoiding any circumstances that could lead to a “public relations disaster.” This entails adopting a tone and response appropriate to diverse situations and audience groups. Given the sensitivity and specialization of the medical field, there is a risk that ChatGPT, when gathering information, may inadvertently use terms and phrases suited to other contexts; if not carefully checked, this could itself trigger a “public relations disaster.” Additionally, ChatGPT may not fully understand the ethical risks in medicine, which poses a further degree of risk.
3) Professional Code of Conduct for Doctors: In the medical field, each profession, such as doctors, Chinese medicine practitioners, nurses, and dietitians, has its own code of conduct. For example, in Western medicine, doctors are required to adhere to the Code of Professional Conduct for the Guidance of Registered Medical Practitioners in Hong Kong, as set forth by the Medical Council of Hong Kong (MCHK). This code outlines the standards of conduct expected of medical professionals in the publication of medical articles, ensuring the maintenance of professionalism and the protection of patient interests. Among other things, doctors are not allowed to include advertisements in their articles; the information provided must be accurate, well-founded, objectively verified, and presented in a balanced manner; and doctors must not list fees, insurance coverage, or payment plans on social media.
It is important to note that when ChatGPT collects information and compiles articles, it may be unable to filter out or revise content that violates the code. This presents a risk that cannot be ignored.
4) Risk of bias: The medical field is characterized by a wide variety of often sharply differing views and opinions. ChatGPT’s model learns from a large amount of text data, and if this data contains bias or prejudice, patients reading the resulting article may receive unbalanced information.
5) Manual checking: Even once ChatGPT has completed a medical article, it still requires manual proofreading and editing, which adds time and labour costs.
When composing medical articles, prioritizing professionalism, accuracy, and objectivity is crucial, as is adherence to the relevant ethical standards and legal requirements. Although ChatGPT has not yet achieved this degree of sophistication, its evolution remains worth monitoring.
Feel free to contact us for more information.