ChatGPT is great, but it's not foolproof. One example of this is with respect to disease treatments.
One of the things I point out to my students with respect to machine learning is that, because it is merely aggregating and prioritizing human-generated content, it can be extremely unreliable with edge cases.
I use radiology as an example. ML algorithms applied to reading X-rays will do extremely well in those cases where 90% of all radiologists would reach the same conclusion. In those instances where radiologists cannot agree, the ML will fail, and may do so in a horrible and dangerous fashion.
This is why machines will never replace radiologists. ML algorithms will handle the easy cases, leaving the hard cases to the human judgment of expert radiologists. Ultimately this should raise the overall expertise of radiologists: freed from unproductive time spent on mundane cases, more and more of them will be studying the edge cases.
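Just to make the workflow I'm describing concrete, here's a minimal sketch of that triage idea in Python. Everything in it (the `Finding` type, the 0.95 cutoff) is hypothetical and only illustrates the pattern: accept the model's read when it is near-consensus confident, and escalate everything else to a radiologist.

```python
# Minimal sketch of confidence-based triage (hypothetical model output
# and threshold; not any specific radiology product or algorithm).
from dataclasses import dataclass

@dataclass
class Finding:
    label: str         # e.g. "no fracture" / "fracture"
    confidence: float  # model's probability for that label

def triage(finding: Finding, threshold: float = 0.95) -> str:
    """Auto-report only near-certain reads; route the rest to a human."""
    if finding.confidence >= threshold:
        return f"auto-report: {finding.label}"
    return "escalate to radiologist (edge case)"

# A clear-cut study is auto-reported; an ambiguous one is escalated.
print(triage(Finding("no fracture", 0.99)))
print(triage(Finding("fracture", 0.62)))
```

The interesting policy question is where you set that threshold, since it decides how much of the mundane work the machine absorbs and how much lands on the expert.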
I think the biggest risk in the use of ChatGPT will be in general education up through undergraduate studies in college. Anything above that and ChatGPT can't be used to plagiarize.
As an engineer in Radiation Protection, I get asked some far-out questions and requests to solve odd problems. It sometimes makes me think of Scotty on Star Trek. Examples include:
These have been the craziest (and most serious) questions by far.
ChatGPT would only give general answers without delving into the details. Even if the AI were developed enough to do the research, the analysis required would be too cutting-edge to be programmed.
Well, it can’t replace radiologists yet. All the bot needs is data. However, like you said, radiologists would be freed up to study the odder cases out there, and that is a great thing.