Hepburn Shire Council Mayor, Cr Brian Hood, has instructed Gordon Legal to launch a defamation claim against OpenAI over false statements generated by its ChatGPT chatbot. By its nature, it will be a ground-breaking claim concerning content produced by an artificial intelligence (AI) system.
OpenAI’s ChatGPT is a language model designed to generate human-like responses to natural language inputs. It uses machine learning algorithms to interpret and respond to text-based prompts.
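The “predict the next word” idea behind such models can be sketched with a toy bigram model. This is an illustrative simplification only, not OpenAI’s actual architecture: it chooses each next word based solely on the word before it, learned from example text, which also hints at why fluent output can still be factually wrong.

```python
import random

def train_bigrams(text):
    """Record, for each word, which words followed it in the training text."""
    words = text.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Generate up to `length` words by repeatedly sampling a likely next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break  # no known continuation; stop early
        out.append(random.choice(choices))
    return " ".join(out)

# Tiny made-up corpus for demonstration purposes.
corpus = "the model predicts the next word the model generates text"
model = train_bigrams(corpus)
print(generate(model, "the", 5))
```

The output is plausible-sounding word salad: statistically likely continuations with no notion of truth. Real large language models are vastly more sophisticated, but they share this underlying character, which is why their confident-sounding answers still require verification.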
Cr Hood worked for Note Printing Australia (NPA), a Reserve Bank of Australia subsidiary, in the early 2000s. He alerted authorities that officers of NPA and another subsidiary,
Securency, were paying bribes to overseas agencies to win contracts to print banknotes.
However, in response to a question about Cr Hood’s role in the affair, ChatGPT incorrectly identified him as an individual who was charged with wrongdoing in the Securency foreign bribery scandal, rather than as the whistle-blower who exposed it.
ChatGPT made several false statements when asked about Cr Hood’s involvement in the Reserve Bank of Australia’s foreign bribery case. These falsehoods included that Cr Hood
was accused of bribing officials in Malaysia, Indonesia, and Vietnam between 1999 and 2005, that he was sentenced to 30 months in prison after pleading guilty to two counts of false
accounting under the Corporations Act in 2012, and that he authorised payments to a Malaysian arms dealer acting as a middleman to secure a contract with the Malaysian
Government. All of these statements were incorrect.
Gordon Legal, representing Cr Hood, sent a Concerns Notice to OpenAI on 21 March 2023, detailing the inaccuracies and demanding a rectification.
James Naughton, Partner at Gordon Legal, stated: “Brian Hood’s reputation as a morally upstanding whistle-blower has been defamed by ChatGPT, which incorrectly represented him as an individual responsible for illegally bribing foreign officials. This is defamatory. This critical error is an eye-opening example of the reputational harm that can be caused by AI systems such as ChatGPT, which has been shown in this case to give inaccurate and
unreliable answers disguised as fact.”
As artificial intelligence systems become more widely used, the accuracy of the information provided will come under close scrutiny. Students who rely on these systems may find that the responses they provide are inaccurate and misleading. As in the case of Cr Hood’s claim, the information provided by these systems could cause harm to an individual.