Navigating AI in Academic Publishing: Balancing Efficiency, Expertise and Ethics

AI Oct 23, 2024

by Adèle Kreager

Artificial intelligence (AI) is rapidly transforming many sectors, from its role in breakthrough research on protein structure prediction, which recently earned a Nobel Prize, to more controversial uses in film and entertainment. As AI infiltrates our digital world, internet users are increasingly exposed to what has been evocatively termed ‘AI slop’—from seemingly innocuous AI-generated meme trends, such as ‘Shrimp Jesus’, to more demonstrably dangerous outputs, such as AI-generated mushroom-foraging books that contain bogus advice. At the same time, AI now powers everyday tools like Microsoft Word’s spelling and grammar checks or Gmail’s email filters, often without us even noticing.

Amid this surge in AI capabilities and applications, many industries, including academic publishing, are recognising the opportunities and challenges posed by these tools. AI offers a way to enhance efficiency, streamlining the more time-consuming, repetitive and mundane tasks. Yet these advancements come with ethical and practical considerations that demand careful thought. As a small, scholar-led, non-profit publisher, we are experimenting with how AI can support, rather than replace, the human expertise and creativity that underpin high-quality academic research and its dissemination.

How We Use AI in Our Editorial Processes

We are adopting a cautious but practical approach to integrating AI, using it as an ancillary tool in various stages of the editorial process:

1. Assisting with index topic lists

Developing a list of index topics is a laborious task often shouldered by our authors. We’ve found that AI tools, like ChatGPT, can suggest preliminary lists of topics or place names, which serve as useful starting points. However, these AI-generated lists often focus on broad, main topics, lacking the nuance needed for a comprehensive index. Therefore, human oversight is essential, and we rely on our authors and editors to review and refine the suggestions to ensure that the final index is accurate and usable.
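For illustration, the kind of request we type into a chat window can also be scripted. The sketch below is a minimal, hypothetical example using the OpenAI Python SDK; the model name, prompt wording and helper function are assumptions made for this post, not a description of a fixed workflow.

```python
# Minimal sketch: ask a language model for a preliminary index topic list.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def suggest_index_topics(chapter_text: str, max_topics: int = 30) -> list[str]:
    """Return a rough, machine-suggested list of index topics for one chapter."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of model
        messages=[
            {"role": "system",
             "content": "You are helping an academic publisher draft a back-of-book index."},
            {"role": "user",
             "content": (
                 f"Suggest up to {max_topics} candidate index topics "
                 "(concepts, place names, key terms) for the chapter below. "
                 "Return one topic per line.\n\n" + chapter_text
             )},
        ],
    )
    text = response.choices[0].message.content
    return [line.strip() for line in text.splitlines() if line.strip()]

# The output is only a starting point: authors and editors still merge duplicates,
# add subentries and remove overly broad topics before the index is usable.
```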

2. Crafting first drafts for book blurbs

We have also experimented with using AI to generate first drafts of book blurbs. By inputting key information about the book, ChatGPT can produce a structured summary that provides a useful point of departure. However, the critical insight needed to highlight a book’s key contributions is something that AI cannot replicate, since the responses are based on algorithmic combinations of text rather than a deeper understanding of the content. This is why these drafts are always reworked by our editors and authors.

3. Creating alt-text for accessibility

Alt-text (alternative text) provides a textual description of images, making content more accessible to people with visual impairments. Assistive technologies, like screen readers, can then translate the alt-text into speech or braille. Alt-text can also be helpful for those with unreliable internet connections, serving as a stand-in for visual content when an image fails to load.

Creating alt-text for images is essential for improving the accessibility of our books, but it can also be a labour-intensive task. Using ChatGPT’s alt-text assistant reduces the time involved in generating alt-text descriptions, and even allows for multi-language output. Still, AI-generated alt-text isn’t flawless: it can struggle to identify the most relevant elements of an image, and can overlook important context. Again, human input is necessary: all AI-generated alt-text is reviewed by our team and authors.
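For readers curious how this can be scripted rather than done through the chat-based assistant mentioned above, the sketch below is a minimal, hypothetical example that sends an image to a vision-capable model via the OpenAI API; the model name, prompt and function name are assumptions for illustration only.

```python
# Minimal sketch: draft alt-text for an image with a vision-capable model.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def draft_alt_text(image_url: str, context: str = "") -> str:
    """Return a first-draft alt-text description for a single image."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice of vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Write concise alt-text (one or two sentences) for this image. "
                          "Describe what a reader needs to know; avoid phrases like 'image of'. "
                          f"Context from the book: {context}")},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()

# Drafts are still checked by editors and authors, who know which details
# of a figure actually matter in the surrounding argument.
```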

4. Expanding access with AI-generated audiobooks

AI also has the potential to make academic content accessible to a broader audience by converting texts into audiobooks through Text-to-Speech (TTS) systems, at a fraction of the cost associated with professional audiobook production. Audiobooks can provide a new way to engage with scholarly content, especially for those who prefer listening over reading. The audiobook conversion process isn’t entirely automated, and adjustments like excluding bibliographies and non-essential footnotes are necessary to ensure the listenability of the end product.
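As a rough sketch of the conversion step, the example below turns one prepared chapter into an audio file using the OpenAI text-to-speech endpoint; the model, voice and file handling are illustrative assumptions rather than a description of our production pipeline.

```python
# Minimal sketch: convert one chapter of prepared text into an audio file.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def chapter_to_audio(chapter_text: str, out_path: Path) -> None:
    """Synthesise speech for a chapter that has already been cleaned for
    listening (bibliography and non-essential footnotes removed)."""
    # Hosted TTS endpoints cap the input length, so long chapters
    # would need to be split into smaller chunks first.
    response = client.audio.speech.create(
        model="tts-1",   # hypothetical model choice
        voice="alloy",   # hypothetical voice choice
        input=chapter_text,
    )
    out_path.write_bytes(response.content)  # MP3 audio by default

chapter_to_audio("Chapter 1. Introduction. ...", Path("chapter_01.mp3"))
```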

However, while we don’t have the resources to employ professional narrators, we are mindful of the ethical implications of using AI narrators, which risk displacing human voice actors. Striking a balance between efficiency, accessibility and ethical responsibility is a challenge to take seriously, and we welcome reader and listener feedback on the few AI-generated audiobooks we’ve made available so far. If there is demand for audiobooks, producing them in-house once is also more environmentally friendly than leaving each reader to run their own conversion.

How We Use AI in Our Marketing Processes

AI has proved helpful in compiling lists of relevant journals and societies for marketing purposes. However, we’ve noticed that AI chatbots often prioritise Anglo-American journals. To ensure broader international representation, we adjust our prompts to include foreign-language journals, allowing us to reach a more diverse audience. The same strengths that make chatbots effective for drafting preliminary blurbs also make them handy for condensing our policy statements into more succinct, audience-friendly summaries for marketing materials.

How Our Developers Use AI

Our developers integrate AI into their coding environments to streamline specific tasks, such as explaining unfamiliar code, generating code snippets and automating test writing (a tedious activity that AI can handle efficiently).
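As a hypothetical illustration of the test-writing case, given a small utility function an assistant will typically draft pytest cases along these lines (the function and the tests here are invented for this post):

```python
# Hypothetical example: a small utility function and the kind of pytest
# cases an AI coding assistant will typically draft for it.
def slugify(title: str) -> str:
    """Turn a book title into a lowercase, hyphen-separated URL slug."""
    cleaned = "".join(c.lower() if c.isalnum() else " " for c in title)
    return "-".join(cleaned.split())

def test_slugify_basic():
    assert slugify("Open Book Publishers") == "open-book-publishers"

def test_slugify_strips_punctuation():
    assert slugify("AI, Ethics & Publishing!") == "ai-ethics-publishing"

def test_slugify_empty_string():
    assert slugify("") == ""
```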

How Our Authors Are Using AI

Since the summer of 2024, we have asked our authors to disclose any use of AI tools in their research and writing, with the aim of understanding how AI is being integrated into academic work and ensuring transparency in the research process. Although uptake has been relatively limited so far, some authors have reported using AI for tasks such as translating texts, clarifying complex ideas, improving language accuracy, and providing feedback on grammar, vocabulary and style.

Zooming Out: Ethical Considerations for AI in Publishing and Beyond

As AI becomes more embedded in academic publishing, it raises a host of ethical questions, with implications within and beyond the industry itself.

1. Data, accountability and algorithmic bias

AI’s outputs are shaped by the datasets used to train it, many of which are harvested without the consent of creators. These datasets can also be biased or incomplete (often underrepresenting marginalised groups), leading to baked-in algorithmic biases that can perpetuate social inequalities. For this reason, we avoid using AI for editorial decision-making, especially in evaluating research: a practice that could effectively institutionalise past prejudices through new technologies.

2. The environmental cost of AI

AI is an extractive industry at multiple levels: not only does it exploit human labour,[1] but it is also highly resource-intensive, with a single ChatGPT request consuming nearly ten times the electricity of a typical Google search. Data centres consume significant electricity, produce harmful e-waste, and rely on the extraction of critical minerals, which are often mined unsustainably and traded in areas of conflict (as discussed in one of our recent publications). AI may appear to be a kind of disembodied computation, but its material, environmental impacts are very real.

3. AI and job displacement in creative fields

Depending on how it is implemented, widespread use of AI in creative industries risks crowding out human expertise and creativity. A revealing comment made by OpenAI’s former CTO, Mira Murati, earlier this year—that ‘Some creative jobs maybe will go away, but maybe they shouldn’t have been there in the first place’—is darkly suggestive of the company’s priorities and attitudes towards AI’s role in the profit-driven workplace.

AI: A Complement to Human Expertise

At OBP, AI currently plays a useful but limited role in our workflows. We use it to streamline repetitive or time-consuming tasks, enabling our staff and authors to allocate more time and energy to the critical and creative work involved in high-quality academic publishing. As we explore AI’s potential, we remain committed to responsible use, ensuring that human creativity, transparency and fairness remain central to our work.



[1] OpenAI, the creator of ChatGPT, not only faces accusations of intellectual property theft for training its systems on copyrighted works without consent; it also employed contractors in Kenya, paid less than $2 per hour, as content moderators to label horrific and harmful content.