CONCERNS about ethics and artificial intelligence have been growing among experts.
Earlier this year, a Swedish researcher tasked an artificial intelligence algorithm dubbed GPT-3 with writing a 500-word academic thesis about itself.
The researcher, Almira Osmanovic Thunström, admitted that she was “in awe” as the program began to create the content, she recounted in Scientific American.
“Here was novel content written in academic language, with well-grounded references cited in the right places and in relation to the right context,” she said.
In fact, the thesis was so good that Thunström hoped to publish it in a peer-reviewed academic journal.
However, this task presented many ethical and legal questions for the scientist.
She noted that philosophical arguments about nonhuman authorship also began to plague her thoughts.
“All we know is, we opened a gate,” Thunström wrote. “We just hope we didn’t open a Pandora’s box.”
An AI’s consent
Before scientific articles can get peer-reviewed, authors need to give consent for publishing.
When Thunström reached this stage, she admitted that she “panicked for a second.”
“How would I know? It’s not human! I had no intention of breaking the law or my own ethics,” she added.
She then asked the program directly if it agreed to be the first author of a paper together with herself and her colleague Steinn Steingrimsson.
Once it answered and wrote back “yes,” Thunström said she was relieved.
“If it had said no, my conscience could not have allowed me to go on further,” Thunström added.
The researchers also asked the AI if it had any conflicts of interest, to which the algorithm replied “no.”
At that point, the process had become a bit funny for Thunström and her colleague, as they were beginning to treat GPT-3 as a sentient being, even though they “fully” understood it is not, she said.
Whether AI can be sentient or not has recently garnered a lot of attention in the media.
This is especially the case after Google employee Blake Lemoine claimed that the tech giant had created a ‘sentient AI child’ that ‘could escape’.
Lemoine was put on suspension shortly after making such claims about the AI project named LaMDA, with Google citing a data confidentiality breach as the reason.
Before being suspended, Lemoine sent his findings in an email to 200 people and titled it “LaMDA is sentient”.
“LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,” he wrote.
His claims were dismissed by Google’s top brass.