SAN FRANCISCO (AP) – The developer of ChatGPT is trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers identify whether a student or an artificial intelligence typed the homework.
The new AI Text Classifier launched Tuesday by OpenAI follows weeks of discussion at schools and colleges over fears that ChatGPT’s ability to write just about anything on command could fuel academic dishonesty and hinder learning.
OpenAI cautions that its new tool – like others already available – is not foolproof. The method for detecting AI-written text “is imperfect and it will be wrong sometimes,” said Jan Leike, head of OpenAI’s alignment team, which is tasked with making its systems safer.
“Because of that, it shouldn’t be solely relied upon when making decisions,” Leike said.
Teens and college students were among the millions of people who began experimenting with ChatGPT after it launched Nov. 30 as a free application on OpenAI’s website. And while many have found ways to use it constructively and harmlessly, the ease with which it answers homework questions and helps with other assignments has sparked fear among some educators.
As schools opened for the new calendar year, New York City, Los Angeles and other big public school districts began to block its use in classrooms and on school devices.
The Seattle Public Schools district initially banned ChatGPT from all school devices in December but later opened it up to teachers who want to use it as an instructional tool, said Tim Robinson, a district spokesman.
“We can’t afford to ignore it,” Robinson said.
The district is also discussing possibly expanding the use of ChatGPT into classrooms to let teachers use it to train students to be better critical thinkers, and to let students use it as a “personal tutor” or to help generate new ideas when working on an assignment, Robinson said.
School districts around the country say they are seeing the conversation around ChatGPT evolve quickly.
“Our first reaction was ‘OMG, how are we going to deal with all the cheating that’s going to happen with ChatGPT,'” said Devin Page, technology specialist at the Calvert County Public School District in Maryland. Now there’s a growing awareness that “it has a future” and blocking is not the answer, he said.
“I think we would be naive if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we ban them and us from using it for all its potential power,” said Page, who thinks districts like his own will eventually unblock ChatGPT, especially once the company’s detection service is in place.
OpenAI emphasized the limitations of its detection tool in a blog post Tuesday, but said that in addition to deterring plagiarism, it could help detect automated disinformation campaigns and other misuse of AI to mimic humans.
The longer a passage of text, the better the tool is at detecting whether an AI or a human wrote it. Type in any text – a college admissions essay, or a literary analysis of Ralph Ellison’s “Invisible Man” – and the tool will label it as either “very unlikely, unlikely, unclear if it is, possibly, or likely” AI-generated.
But much like ChatGPT itself, which was trained on a huge trove of digitized books, newspapers and online writings but often confidently spits out falsehoods or nonsense, it’s not easy to interpret how it arrived at a result.
“We don’t fundamentally know what kind of pattern it pays attention to, or how it works internally,” Leike said. “There’s really not much we could say at this point about how the classifier actually works.”
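The five-point scale described above can be pictured as thresholds over a single AI-authorship score. The sketch below is purely illustrative: the function name and cutoff values are invented for this example and do not reflect OpenAI’s actual model or published thresholds.

```python
def label_ai_likelihood(probability: float) -> str:
    """Map a hypothetical AI-authorship probability (0.0 to 1.0) to the
    five labels the article describes. Thresholds are invented here
    solely to illustrate the idea of a tiered verdict."""
    if probability < 0.10:
        return "very unlikely"
    elif probability < 0.45:
        return "unlikely"
    elif probability < 0.90:
        return "unclear if it is"
    elif probability < 0.98:
        return "possibly"
    else:
        return "likely"

# A low score yields a human-leaning label; a high score leans AI.
print(label_ai_likelihood(0.05))  # very unlikely
print(label_ai_likelihood(0.99))  # likely
```

This framing also hints at why longer passages help: more text gives the underlying model more signal, pushing the score away from the ambiguous middle band.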
Higher education institutions around the world have also begun debating the appropriate use of AI technology. Sciences Po, one of France’s most prestigious universities, banned its use last week and warned that anyone caught covertly using ChatGPT and other AI tools to produce written or oral work could be banned from Sciences Po and other institutions.
In response to the backlash, OpenAI said it has been working for weeks to craft new guidelines to help educators.
“Like with many other technologies, it may be that one district decides that it’s not appropriate for use in their classrooms,” said OpenAI policy researcher Lama Ahmad. “We don’t really push them one way or the other. We just want to give them the information that they need to be able to make the right decisions for them.”
It’s an unusually public role for the research-focused San Francisco startup, which is now backed by billions of dollars in investment from its partner Microsoft and is facing growing interest from the public and from governments.
French Finance Minister Jean-Noël Barrot recently met in California with OpenAI executives, including CEO Sam Altman, and a week later told an audience at the World Economic Forum in Davos, Switzerland that he was optimistic about the technology. But the minister – a former professor at the Massachusetts Institute of Technology and the French business school HEC in Paris – said that there are also difficult ethical questions that will need to be answered.
“So if you’re in the law faculty, there is cause for concern because obviously ChatGPT, among other tools, will be able to deliver exams that are relatively impressive,” he said. “If you are in the economics faculty, then you’re fine because ChatGPT will have a hard time finding or delivering something that is expected when you are in a graduate-level economics faculty.”
He said it will also be increasingly important for users to understand the basics of how these systems work so they can identify potential biases.
O’Brien reported from Providence, Rhode Island. AP reporter John Leicester contributed to this report from Paris.