Restoring Trust and Relevance in Accounting Research

Prof. Janek Ratnatunga, CEO, CMA ANZ

Australian Academics Caught in Generative AI Scandal

A group of Australian academics has “unreservedly apologised” after a submission they made to an Australian Senate inquiry was found to contain factually false claims about the Big 4 consulting firms.

Emeritus Professor James Guthrie AM, a professor in Macquarie University’s Department of Accounting and Corporate Governance, acknowledged using Google Bard AI to gather data for a submission to a parliamentary investigation into the practices of the Big 4 consulting firms. In a letter to the Senate, he conceded that several of the claims in the submission were false (Belot, 2023).

His co-authors in the group that made the factually incorrect submission were Professor John Dumay (Macquarie University), Dr. Erin Twyford (University of Wollongong), and Associate Professor James Hazelton (Macquarie University). However, they were quick to distance themselves from the scandal, emphasising in their amended submission that Professor Guthrie was solely to blame (Sadler, 2023).

The submission itself is a commendable attempt to address an important issue in accounting practice with practical recommendations. Unfortunately, the validity of its conclusions and recommendations has been put into doubt by the false claims.

Two issues about accounting research arise from this debacle: ‘Integrity’ and ‘Relevance’.

False Allegations

In their Senate submission, the professors presented case studies about suspected misconduct by big consulting firms that were completely fictional, having been created by Bard. Some of the false allegations made were (Croft, 2023):

  • That KPMG was implicated in two cases of misconduct: that the firm had audited the Commonwealth Bank during a financial scandal, when it had never conducted such an audit; and that it was complicit in a 7-Eleven wage theft scandal that resulted in the resignation of multiple partners, which was also a false allegation.
  • That the liquidators of the insolvent Australian construction company Probuild were suing Deloitte over an improper audit of the company’s accounts, when in reality no such audit was ever undertaken.
  • That Deloitte was involved in a “NAB financial planning scandal” and had audited Westpac during a scandal, both of which were false allegations.

In a letter to the Australian Senate, Professor Guthrie stated:

“I am solely responsible for the part of the submissions pointed to in these letters, in which I used the AI program Google Bard Language model generator to assist with the submission preparation. There has been much talk about the use of AI, particularly in academia, with much promise of what it holds for the future and its current capabilities. This was my first-time using Google Bard in research, as it had only been released that week. I now realise that AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased.” (Guthrie, 2023).

In normal circumstances, such an apology by an academic would indicate sloppy research practices and would seriously damage that academic’s reputation for integrity.

Trust in Academic Research

Professor Guthrie is, however, no ordinary academic. He is a co-editor of Accounting, Auditing & Accountability Journal (AAAJ), Australia’s highest-ranked (A*-rated) academic accounting journal (ABDC, 2023). His co-authors are also highly respected accounting researchers.

This takes the excuse of flawed research and lack of verification to an entirely new level.

One would expect that every submission to a journal of the stature of AAAJ would be thoroughly checked for accuracy and veracity. Yet, as gatekeepers of a highly ranked accounting journal, neither Professor Guthrie nor his co-authors checked the accuracy of the serious claims made in their submission to the Senate. This brings into question the integrity of the ‘double-blind’ review process of academic accounting research that they are responsible for maintaining.

Did his co-authors not verify the claims made in their submission because they originated from a known and trusted individual?

‘Double-blind Study’ vs. ‘Double-blind Review’

In medical and psychological research, a double-blind study is one in which neither the participants nor the researchers administering the treatment know which participants are receiving the treatment and which are not. Both the participants and the experimenters are kept in the dark. This is done to eliminate biases (e.g., the placebo effect) from the outcome of the research. A double-blind study has the added advantage of being replicable by other researchers; under similar controlled conditions, the outcomes should be the same.

Most social science research journals (including accounting journals) do not insist on a double-blind study. Instead, they rely only on a “double-blind review”, which means that the authors’ identities are concealed from the reviewers, and the reviewers’ identities from the authors, throughout the review process. To facilitate this, authors are supposed to ensure that their manuscripts are prepared in a way that does not give away their identity.

As such, double-blind refereeing supposedly allows the merits of a particular paper to be assessed without regard to characteristics of its author(s) such as rank, gender, institution, and seniority.

However, the identity of the author can often be guessed from the subject matter or the citations in the paper (self-citation is very common). Further, if an editor submits an article to his or her own journal (even if the paper is ‘managed’ by another co-editor), or to a journal on whose editorial board he or she serves, it can be argued that the paper will be subject to less scrutiny and verification, a significant source of bias.

For example, let us review Professor Guthrie’s publications in 2018-2022 and his collaborations with Professor John Dumay, another full professor who was a co-author of the factually incorrect Senate submission (Table 1).

Year    Publications   In AAAJ   In Meditari   % in both journals   With John Dumay   % with Dumay
2018         4             1           1              50%                  1               25%
2019         7             2           1              43%                  2               29%
2020         7             1           2              43%                  2               29%
2021         6             2           2              67%                  3               50%
2022         4             1           1              50%                  0                0%
Total       28             7           7              50%                  8               29%
Source: Macquarie University (2023)

Professor Guthrie is a co-editor of the AAAJ. He is also on the Editorial Advisory Board of Meditari Accountancy Research, another highly ranked journal. Professor Dumay is an Associate Editor of both the AAAJ and Meditari.

Table 1 shows that 50% of Guthrie’s published articles in 2018-2022 appeared in one of those two journals, and that 29% of his articles in those years were co-authored with Dumay. This is not to say that there was any impropriety in the double-blind refereeing processes of these two journals. Clearly, however, more transparency is required to ensure a level playing field for researchers who lack the editorial-board connections that can see papers refereed in a more favourable environment, and arguably with less scrutiny.
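For completeness, the 50% and 29% figures quoted above can be reproduced from the yearly counts in Table 1. The following minimal sketch (in Python, with the counts hard-coded from the table rather than pulled from any publication database) simply illustrates the arithmetic:

    # Yearly counts hard-coded from Table 1 (source: Macquarie University, 2023).
    # Tuple order: (total publications, in AAAJ, in Meditari, co-authored with Dumay).
    counts = {
        2018: (4, 1, 1, 1),
        2019: (7, 2, 1, 2),
        2020: (7, 1, 2, 2),
        2021: (6, 2, 2, 3),
        2022: (4, 1, 1, 0),
    }

    total = sum(t for t, _, _, _ in counts.values())
    in_two_journals = sum(a + m for _, a, m, _ in counts.values())
    with_dumay = sum(d for _, _, _, d in counts.values())

    print(f"Share in AAAJ or Meditari: {in_two_journals / total:.0%}")   # -> 50%
    print(f"Share co-authored with Dumay: {with_dumay / total:.0%}")     # -> 29%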

Professors Guthrie and Dumay are used as examples only because they were co-authors of the ill-fated Senate submission, which showed the dangers of misplaced trust and the bias that can arise when author identity is known.

There are numerous examples of the ‘club effect’ on journal editorial boards, where the same members appear in different roles across highly ranked journals and are well aware of the identity of the authors of articles submitted to them (Dada et al., 2022).

Restoring Integrity

The question of whether editors-in-chief (EICs) should publish in their own journals has been hotly debated in academic circles. The concern is that journal editors submitting papers to their own journals may receive preferential treatment.

Helgesson et al. (2022) conducted a systematic review of editors publishing in their own journals, using the Medline, PsycInfo, Scopus and Web of Science databases. They found large variability in self-publishing across fields, journals, and editors, ranging from editors who never published in their own journal to those who published in it extensively.

Their results show that, across journals, editors published between 3.3% and 43.6% of their articles in their own journal, with an overall average of 10.1%.

Helgesson et al. (2022) note that editors-in-chief and associate editors hold considerable power in their journals and therefore recommend that they refrain from publishing research articles in them. At the very least, they state, an editor-in-chief should strive to avoid publishing research papers in his or her own journal, and journals should have clear processes in place for the treatment of articles submitted by editorial board members.

Table 1 shows that 25% of Professor Guthrie’s 2018-2022 publications appeared in AAAJ, the journal of which he is a co-editor, well above the 10.1% average found in the above study; counting together the two journals in which he holds editorial roles, the 50% figure exceeds even the 43.6% maximum.

My personal opinion is in line with the recommendations of Helgesson et al. (2022): the editor-in-chief of a highly ranked (A* or A) journal should not be able to publish in their own journal, and articles submitted by members of a journal’s editorial board should be subject to clear, transparent processes.

Author Accountability

Returning to the Senate submission by Guthrie et al. (2023), can the other listed co-authors absolve themselves of responsibility for the factual inaccuracies? Professor Dumay and the other co-authors were quick to distance themselves, stating that Professor Guthrie was solely to blame (Sadler, 2023).

However, in social science research articles, all listed authors are collectively accountable for the whole research output. In other words, if they take the credit and reap the rewards collectively, they must also bear the risks collectively. Authors should have confidence in the accuracy and integrity of their co-authors’ contributions, and the lead author is ultimately responsible for ensuring that all other authors meet the requirements for authorship, as well as for the integrity of the work itself (Tarkang et al., 2017).

Whilst a Senate submission is different from a refereed research article, it should be held to at least the same (if not a higher) standard of veracity, as it has a potential impact on government policy. It is beyond comprehension that such senior researchers did not check the serious allegations being made about the involvement of Deloitte, KPMG, EY, and PwC in financial scandals and audit activities.

Legal Ramifications

The general counsel for Deloitte, Tala Bennett, stated:

“Deloitte supports academic freedom and constructive discourse in relation to those matters currently before the committee; however, it considers that it is important to have factually incorrect information corrected. It is disappointing that this has occurred, and we look forward to understanding the committee’s approach to correcting this information.”

It is thought to be the first time a Parliamentary Committee has been compelled to consider the use of generative AI in researching and composing submissions to inquiries, which are exempt from defamation lawsuits and protected by Parliamentary Privilege (Sadler, 2023).

However, there are already cases in which such AI-generated falsehoods enjoyed no such protection.

In May 2023, a US attorney acknowledged that he had used ChatGPT for research, which caused him to cite several “bogus” cases in a court filing. In a routine personal injury suit, the attorney had used ChatGPT to prepare the filing, but the artificial intelligence bot delivered fake cases that the attorney then presented to the court, prompting a judge to weigh sanctions as the legal community grappled with one of the first cases of AI “hallucinations” making it to court (Bohannon, 2023).

Pure Fiction Presented as Fact

It is now well known that generative AI can produce output that sounds authoritative but may be biased, inaccurate, or completely fictional.

Since the US case, it has become widely understood that these AI platforms and their algorithms are trained on existing databases of images or text and taught to generate answers based on that data, but that they often conflate information or fabricate answers to questions.

I experienced this firsthand when I used ChatGPT to find relevant cases for an issue I needed to take to the Victorian Civil and Administrative Tribunal (VCAT). I was delighted when the platform found four very relevant cases, complete with full VCAT case citations, i.e., the names of the parties, the case numbers, etc. However, on cross-checking the case numbers against the VCAT case database, I found that three of the four cases were completely fictional and did not exist.

It would therefore not have taken long for seasoned researchers such as Professors Guthrie and Dumay to do a Google search to verify whether the allegations they were making were true.
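The verification step itself is trivial. As a purely hypothetical illustration (the case identifiers and the “confirmed” list below are invented; in practice the check against the VCAT register was done by hand), a few lines of Python are all it takes to flag AI-suggested citations that cannot be confirmed against an authoritative source:

    # Hypothetical illustration only: placeholder identifiers, not real VCAT citations.
    ai_suggested_cases = [
        "Case A [2021] VCAT 001",
        "Case B [2020] VCAT 002",
        "Case C [2019] VCAT 003",
        "Case D [2022] VCAT 004",
    ]

    # Stand-in for a manual search of the tribunal's own register:
    # only citations actually found there go into this set.
    confirmed_in_register = {"Case B [2020] VCAT 002"}

    for case in ai_suggested_cases:
        status = "confirmed" if case in confirmed_in_register else "NOT FOUND - do not cite"
        print(f"{case}: {status}")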

Senator Richard Colbeck, the Australian Liberal Party member who chairs the Senate inquiry, described the episode as a “salient reminder to everyone to check your sources”.

Globally, organisations and institutions in every industry are facing challenges because of the extensive implementation and use of generative AI technologies like Bard and ChatGPT. The public sector, the legal profession, and education, among other fields, are already using these technologies extensively.

For example, it is well known that the use of generative AI in universities has been a source of concern for months, with more students using it to cheat their way through assignments (Miles, 2023). However, in an interesting twist, because the time academics are given to mark student assignments is being cut, some staff are saying that they may be forced to consider using ChatGPT to generate student assignment feedback (Moran, 2023). This raises the very real possibility of assignments full of fiction written by AI being marked as fact, also by AI.

As such, the Australian federal government has issued a warning stating that the use of generative AI tools in the public sector should be limited to “low-risk situations”, and that using them to write application code or make decisions carries an “unacceptable risk” (Sadler, 2023).

Relevance of Academic Accounting Research

Whilst the factually flawed Senate submission brought into question the entire “double-blind review” process and the role of editors and editorial boards of academic journals, the message the submission was trying to impart was actually very relevant to the accounting and auditing profession.

In fact, Professor Guthrie, along with Professor Allan Fels, the former chairman of the Australian Competition and Consumer Commission (ACCC), wrote an important opinion piece in the Australian Financial Review arguing that the PwC global tax leaks scandal (see Ratnatunga, 2023b) makes it essential for parliament to urgently look at regulation to resolve conflicts of interest between auditors, accountants, and consultants (Fels and Guthrie, 2023). They said that the new terms of reference should include a critical overview of the nature and culture of the big seven consulting and accounting organisations in Australia, including the issues of conflicts of interest and organisational culture.

The main recommendation of the (revised) Guthrie et al. (2023) submission was that a new, independent, single statutory regulator for the accounting profession be established, with the power to set technical and ethical standards, evaluate compliance, and impose meaningful sanctions on transgressing firms and individuals. This statutory regulator would also oversee the selection of appropriate legal structures for the delivery of accounting and consulting services and have the ability to compel relevant reporting.

Some of the other recommendations of the submission were that:

  • The Big 4 accounting partnerships in Australia should undergo a structural split between the audit and consulting parts of each firm at the start of 2025.
  • Companies must rotate their audit firm every five years.
  • Auditors should be prohibited from providing non-audit services to their audit clients.
  • The requirements for auditor independence must be strengthened.

These are all issues that the author of this article has previously identified in the following articles:

  • Auditing Opinions for Sale? (Ratnatunga, 2018a)
  • The Silence of the Auditors (Ratnatunga, 2018b)
  • Why Audit Opinions are ‘Untrue’ and ‘Unfair’ (Ratnatunga, 2019)
  • The Impotence of Australia’s Accounting Regulators (Ratnatunga, 2021)
  • Consulting Firms: Big Bucks but Little Value for Governments (Ratnatunga, 2023a)
  • PwC Tax Scandal’s Aftermath: It’s Time to Seriously Regulate the Big 4 (Ratnatunga, 2023b)

The above practice-oriented articles all appeared in applied research journals and would not be considered for publication in the more esoteric A*- and A-ranked academic accounting journals. This mismatch between what gets published in academic accounting journals and what accounting professionals read has created a significant gap between accounting research and practice. Ratnatunga (2012) explored this perceived gap and showed that current academic accounting research:

  • Has failed to lead practice in contrast to medical research.
  • Lacks innovation.
  • Has failed to arrive at solutions to the fundamental issues in accounting practice.
  • Has no demand outside of the university context.

In other words, the excellent and very relevant recommendations of the (revised) Senate submission by Guthrie et al. (2023) would not have been published in their own research journals!

Restoring Relevance

Ratnatunga (2012) presented the results of five interrelated studies that support the overall finding of an ever-growing gap, especially in financial accounting and auditing. A selected sample of 16,000 accountants in professional practice (including 1,200 in Big 4 and middle-tier firms) across 16 countries was asked if they read, or had even heard of, 19 leading accounting research journals. No journal scored a ‘yes’ response above 0.05% (0.0005); i.e., 99.95% of accounting practitioners had not heard of any academic accounting journal!

This is in stark contrast to the healthy relationship between academia and practice found in the medical profession. Of the 16 ‘academic’ journals publishing medical research that General Practitioners were asked about, eight had a 100% “heard of” response, and an overwhelming majority of these had also been read by the practitioners (Ratnatunga, 2012).

One cannot expect professional practitioners, who are safe within the legal powers granted to the professions of financial accounting and auditing, to climb those ivory towers and seek out academic researchers to help them with practice issues. The first move must be made by the accounting academics if they are to regain the relevance they had with the accounting profession fifty years ago.

For example, when the world was grappling with inflation in the 1960s, one of Australia’s leading academics, Professor Ray Chambers, proposed a market-based system of continuously contemporary accounting (CoCoA) as an alternative to the conventional historical cost accounting methods used at the time. His ‘prescriptive’ research was published in the leading academic accounting research journals of the day, but it would not find a place in today’s journals, which only consider ‘descriptive’ research studies.

In 1977, the American Accounting Association described Professor Chambers as a leading ‘golden age’ theoretician, recognising his influence in promoting “decision usefulness” as a major purpose of accounting (AAA, 1977). It is time we went ‘Back to the Future’ and restored ‘relevance’ to accounting practitioners.

The steps suggested by the academics themselves in the Ratnatunga (2012) study indicate that accounting academics should: (1) be rewarded for writing case studies, as happens in some leading universities; (2) be recognised and rewarded for publishing in professional journals (like the Harvard Business Review) in the same way as for publishing in academic journals; (3) be encouraged by universities to do more consulting-based research; and (4) be given opportunities to engage more with practitioners by undertaking joint research, establishing ‘in-residence’ programs in which academics spend some time in practice and vice versa, and organising seminars on topics of interest to both ‘town’ and ‘gown’.

In addition, practitioners with valuable experience should be appointed alongside PhD holders to senior academic positions. This is one of the keys to success in the medical profession, in which medical practitioners with significant practical experience are given adjunct professorial appointments so that they can spend some time at the university teaching the next generation of doctors and surgeons.

Impact of Academic Accounting Research

Currently, the principal way that academic accounting research is measured for ‘impact’ is the number of ‘citations’ a paper receives in other refereed academic articles. No measurement is made of the impact of the research on accounting practice via articles in professional journals, news reports, media interviews, and the like.

Wilhite and Fong (2012) found that, despite their shortcomings, ‘citation impact counts’ continue to be a primary means by which academics “quantify the quality” of research. One side effect of impact factors is the incentive they create for editors to coerce authors into adding citations to the journal they wish to publish in. The message is clear: add citations or risk rejection. This can cause ‘citation inflation’, which undermines the accuracy and comparability of citation counts by inflating the numbers without reflecting the quality or significance of the research.
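To see why the incentive exists, it helps to recall how the widely used two-year journal impact factor is calculated: citations received in a given year to articles the journal published in the previous two years, divided by the number of citable articles it published in those two years. (This standard formula is background, not something set out in Wilhite and Fong, 2012.) A minimal sketch with purely illustrative numbers shows how coerced citations inflate the metric:

    # Two-year impact factor: citations in year Y to items published in Y-1 and Y-2,
    # divided by the number of citable items published in Y-1 and Y-2.
    # All numbers below are invented for illustration.
    def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
        return citations_to_prev_two_years / citable_items_prev_two_years

    baseline = impact_factor(300, 150)       # 2.0 without coerced citations
    coerced = impact_factor(300 + 60, 150)   # 2.4 after 60 coerced self-citations
    print(f"Baseline: {baseline:.1f}, after coercion: {coerced:.1f}")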

Ratnatunga (2012) undertook an examination of references to external publications in two ‘Handbooks’ regularly consulted by accounting and auditing practitioners, as a proxy for ‘impact’ in practice. The Accounting Handbook contained 4,863 external references, of which 2,550 were technical references, 2,313 were legal references, and none were academic accounting references. The Auditing, Assurance and Ethics Handbook contained 3,590 external references, of which 2,274 were legal references, 1,316 were technical references, and none were academic references.

Across both handbooks there were 8,453 references in total: 4,587 legal, 3,866 technical, and none academic. This is a very clear indication that academic accounting research has zero impact on accounting practice.

Duff et al. (2020) also found that academic research has little influence on professional accounting education, which is largely shaped by professional accounting associations and employers. Parker et al. (2011) acknowledged that whilst academic accounting research is important to the higher education system, academic careers and publishers, its impact on teaching, professional practice, the professions and society is hotly debated.

Summary

This paper has considered the very important issue of restoring trust and relevance in accounting research, especially in light of the advent of generative AI language models like ChatGPT and Bard. The legal ramifications of relying on AI-generated research were also considered.

The paper also considered important issues such as author accountability, bias in the ‘double-blind review’ process, and the role of editors in ensuring a level playing field for less-connected academic researchers.

Finally, the relevance and impact of academic accounting research were considered. Unfortunately, it was shown that current academic accounting research, as it stands today, has very little relevance or impact on the accounting profession outside the university.

The opinions in this article reflect those of the author and not necessarily those of the organisation or its executive.

References:

AAA (1977), Statement on Accounting Theory and Theory Acceptance, American Accounting Association, Sarasota, FL.

ABDC (2023), Journal Quality List, Australian Business Deans Council. https://abdc.edu.au/abdc-journal-quality-list/

Belot, Henry (2023), “Australian academics apologise for false AI-generated allegations against big four consultancy firms” The Guardian, 2 November. https://www.theguardian.com/business/2023/nov/02/australian-academics-apologise-for-false-ai-generated-allegations-against-big-four-consultancy-firms

Bohannon, Molly (2023), “Lawyer Used ChatGPT in Court—And Cited Fake Cases. A Judge Is Considering Sanctions”, Forbes, Jun 8. https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/?sh=51f7d0387c7f

Croft, Daniel (2023), “Researchers apologise to big 4 consultancy firms for false AI-based accusations”, Cyber Daily, 3 November, https://www.cyberdaily.au/digital-transformation/9779-researchers-apologies-to-big-4-consultancy-firms-for-false-ai-based-accusations

Dada S, et al. (2022) Challenging the “old boys club” in academia: Gender and geographic representation in editorial boards of journals publishing in environmental sciences and public health. PLOS Glob Public Health 2(6): e0000541. https://doi.org/10.1371/journal.pgph.0000541

Fels, Allan and Guthrie, James. (2023), “PwC scandal makes a case for breaking up the big four”, Financial Review, Opinion, May 28, 2023, https://www.afr.com/companies/professional-services/pwc-scandal-makes-a-case-for-breaking-up-the-big-four-20230522-p5dadi

Guthrie, James (2023), “Letter from Emeritus Professor James Guthrie on Submission 32: Ethics and Professional Accountability: Structural Challenges in the Audit, Assurance and Consultancy Industry”, The Parliamentary Joint Committee on Corporations and Financial Services, Australian Senate, https://www.aph.gov.au/Parliamentary_Business/Committees/Joint/Corporations_and_Financial_Services/ConsultancyFirms/Submissions

Guthrie, James; Dumay, John; Twyford, Erin; and Hazelton, James (2023), “Ethics and Professional Accountability: Structural Challenges in the Audit, Assurance and Consultancy Industry”, Submission 33, The Parliamentary Joint Committee on Corporations and Financial Services, https://www.aph.gov.au/Parliamentary_Business/Committees/Joint/Corporations_and_Financial_Services/ConsultancyFirms/Submissions

Helgesson, Gert; Radun, Igor; Radun, Jenni and Nilsonne, Gustav (2022), “Editors publishing in their own journals: A systematic review of prevalence and a discussion of normative aspects”, Learned Publishing, Wiley Online Library, 23 February, pp. 229-240.

Macquarie University (2023), James Guthrie Publications, Research Output per Year, Research Profiles, https://researchers.mq.edu.au/en/persons/james-guthrie/publications/

Miles, Janelle (2023), “What is ChatGPT and why are schools and universities so worried about students using AI to cheat?” ABC, 24 Jan, https://www.abc.net.au/news/2023-01-24/what-is-chatgpt-how-can-it-be-detected-by-school-university/101884388

Moran, Jessica (2023), “Academics consider using ChatGPT to generate feedback, with marking time at University of Tasmania college slashed”, ABC, Nov 21. https://www.abc.net.au/news/2023-11-21/tas-utas-marking-time-cuts-chatgpt-assignments-students/103125634

Parker, Lee. D., Guthrie, James, and Linacre, Simon (2011), “The relationship between academic accounting research and professional practice”, Accounting, Auditing & Accountability Journal, Vol 24: 5–14.

Ratnatunga, Janek (2012), “Ivory Towers and Legal Powers: Attitudes and Behaviour of Town and Gown to the Accounting Research-Practice Gap”, Journal of Applied Management Accounting Research, 10(2): 1-20.

Ratnatunga, Janek (2018a) “Auditing Opinions for Sale?”, Journal of Applied Management Accounting Research, 16 (1): 17-19.

Ratnatunga, Janek (2018b) “The Silence of the Auditors”, Journal of Applied Management Accounting Research, 16(1): 21-26.

Ratnatunga, Janek (2019) “Why Audit Opinions are ‘Untrue’ and ‘Unfair’”, Journal of Applied Management Accounting Research, 17(2): 23-30.

Ratnatunga, Janek (2021), “The Impotence of Australia’s Accounting Regulators”, Journal of Applied Management Accounting Research, 19(2), pp. 19-26.

Ratnatunga, Janek (2023a), “Consulting Firms: Big Bucks but Little Value for Governments”, Journal of Applied Management Accounting Research, 21(1), pp. 10-16.

Ratnatunga, Janek (2023b), “PwC Tax Scandal’s Aftermath: It’s Time to Seriously Regulate the Big 4”, Journal of Applied Management Accounting Research, 21(1), pp. 17-28.

Sadler, Denham (2023), “Australian academics caught in generative AI scandal”, Information Age, Australian Computer Society, Nov 06. https://ia.acs.org.au/content/ia/article/2023/australian-academics-caught-in-generative-ai-scandal.html

Tarkang, Elvis E.; Kweku, Margaret and Zotor, Francis B. (2017), “Publication Practices and Responsible Authorship: A Review Article”, Journal of Public Health Africa, June 8(1): 723.

Wilhite, Allen W. and Fong, Eric A. (2012) “Coercive Citation in Academic Publishing”, Science, Vol. 335: 542-543.
