RE: ChatGPT vs. University Professors -- Not a Fair Fight (Not Even Close)

ChatGPT is great, but it's not foolproof. One example involves disease treatments. I wrote some time ago about my Dupuytren's disease. Radiation therapy is one form of treatment available to those experiencing it in its early stages. However, since this is not a common form of treatment, ChatGPT is not able to discuss it.

If a professor were to ask students for forms of treatment for Dupuytren's disease and the students used ChatGPT, then radiation therapy would likely not be on their list of answers on a test.

I found an oncology center in the town where I live that can provide such treatment. When I discussed this with ChatGPT, it would only state that the therapy was not an option.

Citing references in essays is also not currently possible for ChatGPT. Hive users will downvote articles that appear plagiarized or fraudulent when sources aren't available for review. However, I do believe that ChatGPT will eventually be able to provide references, and in virtually every citation format available.

How then could professors combat plagiarism if ChatGPT is available in its final form? You could require students to write their essays during class time, block the ChatGPT site address, and monitor their keystrokes. You'd have to employ some draconian measures, unfortunately, I think.

0.00039818 BEE
1 comments

ChatGPT is great, but it's not foolproof. One example of this is with respect to disease treatments.

One of the things I point out to my students with respect to machine learning is that, because it is merely aggregating and prioritizing human-generated content, it can be extremely unreliable with edge cases.

I use radiology as an example. ML algorithms applied to reading X-rays will do extremely well in those cases where 90% of all radiologists would reach the same conclusion. In those instances where radiologists cannot agree, the ML will fail, and may do so in a horrible and dangerous fashion.
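That aggregation point can be sketched with a toy simulation. This is purely illustrative and my own construction, not how real radiology ML systems work: it assumes a "model" that simply echoes the majority vote of the experts whose labels it was trained on, and made-up agreement rates.

```python
import random

random.seed(0)

def simulate_case(agreement):
    """One diagnostic case: 10 experts each label it correctly with
    probability `agreement`; the toy 'model' echoes their majority vote
    (an assumption for illustration, not a real training procedure)."""
    truth = 1
    votes = [1 if random.random() < agreement else 0 for _ in range(10)]
    model_prediction = 1 if sum(votes) > 5 else 0
    return model_prediction == truth

def accuracy(agreement, trials=10_000):
    """Fraction of simulated cases the majority-echoing model gets right."""
    return sum(simulate_case(agreement) for _ in range(trials)) / trials

easy = accuracy(0.9)  # near-unanimous cases: the model is almost always right
hard = accuracy(0.5)  # split cases: the model is roughly a coin flip
```

Under these toy assumptions, the model looks superb on cases where experts agree and falls apart exactly where they don't, which is the edge-case failure mode described above.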

This is why machines will never replace radiologists. ML algos will handle the easy cases, leaving the hard cases for the human judgment of expert radiologists. Ultimately this will lead to an increase in the overall expertise of radiologists, because more and more of them will be studying edge cases, freed from unproductive time spent on mundane ones.

0.00011283 BEE

Well, it can’t replace radiologists yet. All the bot needs is data. However, like you said, radiologists would be freed up to study the odder cases out there, and that is a great thing.

0.00019146 BEE

I think the biggest risk in the use of ChatGPT will be in general education up through undergraduate studies in colleges. Anything above that, and ChatGPT can't be used to plagiarize.

As an engineer in Radiation Protection, I get asked some far-out questions or requests to solve odd problems. It sometimes makes me think of Scotty on Star Trek. Examples include:

  • Find me a way to enter a high radiation area without receiving any radiation exposure.
  • Find me a way to perform radiation surveys with zero risks to safety.
  • Find me a way to negate the effects of radioactive wastes during shipment.

These have been the craziest (and most serious) questions by far.

ChatGPT would only give general answers without delving into the details. Even if the AI were developed enough to do the research, the analysis required would be too cutting-edge to be programmed.

0.00028497 BEE