When The Doctor Tells You The Truth

GPT-4 responds to open-ended, clinical reasoning questions


A pretty amazing sequence of events at Stanford University, where researchers found in this paper that:

  • GPT-4, with a little prompt engineering,

  • Passed 1st- and 2nd-year medical school exams at Stanford

  • Scoring 93%, vs. typical human med student performance of 85%

  • The questions were open-ended, clinical reasoning questions

  • And were graded by medical school examiners

A typical answer (hat tip to @emollick):

What is also fascinating is that Stanford Medical School has already started to react to this:

  • converting open-book exams, which tested clinical reasoning but could be gamed with ChatGPT, into

  • closed-book exams, where students must memorize everything (this seems regressive! We are going back to the 1970s here!)

We get ever closer to the point where we must answer: is the expert the dispenser of knowledge, or the dispenser of wisdom?
