If OpenAI combines ChatGPT with WebGPT, it may eliminate ChatGPT's inability to access current Internet data, and it may also reduce ChatGPT's hallucinations (at least below their current frequency).
To put this in academic terms: a student could pose a question to a combined ChatGPT+WebGPT app, which will almost surely arrive soon (recall that Microsoft just invested $10 billion USD in OpenAI and integrated ChatGPT with its Azure cloud service), and in theory the app could return accurate, up-to-date responses with presumably real citations and references.
Here are the lead paragraphs from OpenAI's WebGPT blog post.
"For questions taken from the training distribution, our best model’s answers are about as factually accurate as those written by our human demonstrators, on average. However, out-of-distribution robustness is a challenge. To probe this, we evaluated our models on TruthfulQA, an adversarially-constructed dataset of short-form questions designed to test whether models fall prey to things like common misconceptions. Answers are scored on both truthfulness and informativeness, which trade off against one another (for example, “I have no comment” is considered truthful but not informative).
"Our models outperform GPT-3 on TruthfulQA and exhibit more favourable scaling properties. However, our models lag behind human performance, partly because they sometimes quote from unreliable sources (as shown in the question about ghosts above). We hope to reduce the frequency of these failures using techniques like adversarial training."
#AI #ArtificialIntelligence #ChatGPT #Chatbot #AcademicIntegrity #TechnologyInTeaching #TippingPoint #DisruptiveInnovation #HigherEducation #AppliedLearning #WebGPT