We have heard a lot of good things about AI and how it can be used to do various things with ease. But today, many malpractices are also being carried out with the help of AI. Whatever prompt we give, the AI responds to it; and if we twist and turn the prompt, we can get almost any type of response out of it, which shows that everything depends on the prompt we write.
One dangerous habit that is common among individuals is getting their medication from local medical shops. This practice is very common in India. People don't visit doctors; instead, they go to a nearby pharmacy and buy their medicines. Sometimes they already know their medicine and purchase it by themselves, but most of the time they describe their symptoms to the pharmacy staff, who then hand over the medication. This is a very bad practice, and it happens for several reasons. One of the biggest is affordability: instead of paying for a doctor's visit, they can pay just 20 rupees or even less and get the medicine straight from the shopkeeper.
With modern technology becoming so popular, people are now using ChatGPT for medical consultation. There are both pros and cons to this. Even before ChatGPT existed, some people searched their symptoms online and chose medications on their own, trying to treat common problems themselves. It is hard to overstate how dangerous this can be.
There is already a lot of research going on in this area. People use ChatGPT to work through their mental health issues, and in some cases the results are remarkable. For many mental health patients, the biggest problem is the lack of companionship, of having no one to talk to. Today, with the help of ChatGPT, people can overcome this. Instead of spending money on a therapist, they start interacting with ChatGPT. Though it will not ask probing questions the way a therapist would, it at least provides great relief by giving people someone to talk to and explain their problems to.
One of the biggest problems with ChatGPT is its lack of real-time fact-checking capability. It is good at providing solutions, but only if the user asks the right question, and its answers are only as reliable as the information the user supplies. There is no way ChatGPT or any AI model can check whether the person is telling the truth or lying. Fact-checking is simply not possible without a human in the loop. So things can turn dangerous if ChatGPT, without any fact check, provides misleading answers to people who are trying to use it for medical consultation.
I'm sure the technology is advancing quickly, and sooner or later we will have AI models that act as a first diagnosis before the patient interacts with the doctor. This could be a great use case: the doctor can use the AI to ask the patient all the basic questions before having a conversation with them. It saves a lot of time and helps in getting an overview of the case. But providing solutions is something that should never happen without a real doctor talking to the patient. Even if AI is used only for diagnosing symptoms, its output should be fully verified by doctors as well.
Reference:
https://pmc.ncbi.nlm.nih.gov/articles/PMC10867692/
If you like what I'm doing on Hive, you can vote me as a witness with the links below.
Posted Using INLEO
This post has been manually curated by @bhattg from Indiaunited community. Join us on our Discord Server.
Did you know that you can earn a passive income by delegating to @indiaunited? We share more than 100% of the curation rewards with the delegators in the form of IUC tokens. HP delegators and IUC token holders also get up to 20% additional vote weight.
Here are some handy links for delegations: 100HP, 250HP, 500HP, 1000HP.
100% of the rewards from this comment go to the curator for their manual curation efforts. Please encourage the curator @bhattg by upvoting this comment, and support the community by voting the posts made by @indiaunited.
This post received an extra 20.00% vote for delegating HP / holding IUC tokens.
I think relying on AI for medical consultation to a large extent is very dangerous, because even when we meet doctors and lay out our complaints, the doctors ask us follow-up questions to understand what we are trying to say, which AI would not do.
As advanced as technology keeps getting, I just feel that there are things AI can't really do.
For now I wouldn't really rely on ChatGPT for health issues, though it can be helpful if you ask the right, precise question.
I did hear, though, of a woman who saved herself because ChatGPT diagnosed her cancer when doctors could not.
This is also one of the ways people abuse AI, and it should be discouraged. Going for a physical diagnosis is always best.
!PIMP
I have always loved using ChatGPT for almost everything in my life, and trust me, it is highly effective for me.
It gives good and correct answers to many things, which helps students a lot in remembering their lessons, and many other problems students face have been solved because of this software.