Guidance on AI use in research and innovation
Introduction
This guidance is designed to support researchers in using artificial intelligence (AI) responsibly and effectively in their research. Additional guidance is available for PhD students on the Responsible Use of Generative AI in Doctoral Research.
This guidance will be updated by the University working group on AI in Research and Innovation as required, to reflect developments in AI tools and sector norms.
What is AI?
AI is the capability of computational systems to perform cognitive functions that we normally associate with human minds. AI models underpin generative AI (GenAI) tools, an umbrella term for AI tools that create new content. Such tools include those built on large language models (LLMs), such as Microsoft Copilot or ChatGPT, that can generate text, sound, images and video.
Related terms are machine learning, a subfield of AI that enables systems to learn patterns from data without explicit programming, and deep learning, a subset of machine learning that uses multi-layer “neural networks”, loosely inspired by the human brain, to perform data analysis. When machine learning or deep learning models are developed solely from data (without the use of physical principles), they are broadly termed data-driven models. These models are typically employed for tasks such as regression, classification, or prediction to assist with decision making. A minimal example of a data-driven model follows.
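For illustration, the sketch below (assuming Python with scikit-learn, which this guidance does not prescribe) fits a data-driven regression model purely from synthetic example data, with no physical principles built in:

    # Minimal sketch of a data-driven model: a regressor learned purely
    # from (synthetic) example data, with no physical principles built in.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(500, 2))                # two input features
    y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0, 0.1, 500)  # noisy target

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
    print("Held-out R^2:", model.score(X_test, y_test))  # fit quality on unseen data

The model learns the input-output relationship from the data alone; how far that relationship can be trusted beyond the training data is a separate question, discussed under ethics and integrity below.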
Types of AI usage in research and innovation
There are several ways that AI can be used in research and
innovation. Two broad classes are as follows:
1. AI use to support research: examples include (i) discovery/understanding of the research landscape through distillation and summarisation of journal articles and other material; (ii) refining content (text, images, sound), including adapting text to different audiences such as for teaching or the general public, and improving clarity and accessibility; (iii) data cleansing and other data processing activities; (iv) supporting code writing and debugging; and (v) brainstorming and planning tasks. These tasks typically use GenAI tools with the aim of improving efficiency.
2. AI use as a fundamental component of the research or as a research tool: examples include (i) the use of data-driven tools such as neural networks to analyse research data; (ii) verification of the outputs of machine learning tools; (iii) monitoring and execution of research and experimental plans; (iv) sociological research that explores how people use AI tools; (v) explainable AI (XAI) research that aims to improve understanding of, and trust in, the outcomes of AI models; (vi) computer vision algorithms that analyse images and videos for object detection and face recognition; and (vii) development of new AI tools or models, including ones that can be used to narrow down options before starting in-depth research or to sift data to find patterns that highlight promising research paths.
Sustainability and environmental considerations
The training and operation of AI models, including the LLMs used in GenAI tools, are typically carried out in vast data centres that require significant energy and use water for cooling. Rare earth minerals, which carry their own environmental costs, are also required to produce the computer chips and other hardware. While it is difficult to collate accurate data on environmental impacts, one estimate is that the most common type of ChatGPT query may use 0.0029 kWh of energy and 30 ml of water, and lead to the emission of 0.69 g of CO2; as a comparison, the same article states that streaming Netflix in HD for one hour uses more than 20 times as much energy (around 0.077 kWh) [https://nationalcentreforai.jiscinvolve.org/wp/2025/05/02/artificial-intelligence-and-the-environment-putting-the-numbers-into-perspective/]. Another estimate describes a single ChatGPT query as requiring the same amount of energy as running an oven for a little over one second, and about one-fifteenth of a teaspoon of water [https://www.businessinsider.com/how-much-energy-does-chatgpt-use-average-query-watts-altman-2025-6].
Online AI carbon footprint calculators
also exist.
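As a quick sanity check on these figures (a minimal Python sketch; the values are the articles' estimates, not measurements):

    # Compare the quoted per-query energy estimate with an hour of HD streaming.
    query_kwh = 0.0029    # estimated energy per typical ChatGPT query (kWh)
    netflix_kwh = 0.077   # estimated energy for one hour of HD Netflix (kWh)

    ratio = netflix_kwh / query_kwh
    print(f"One streaming hour is roughly {ratio:.0f} such queries")  # ~27x

The ratio of roughly 27 is consistent with the quoted "more than 20 times" comparison.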
Researchers should consider the University of Reading’s strategic focus on
sustainability when using AI tools.
Ways to limit the environmental cost of GenAI tool use include writing prompts carefully, requesting text rather than image outputs, avoiding repeated similar requests for small gains (e.g., stylistic improvements), limiting the length of conversations, and limiting usage of large GenAI models, e.g., by considering whether a GenAI tool is needed at all and not using one when a non-GenAI search engine will provide the same information. Related guidance applies to the use of other (non-GenAI) AI tools: consider whether AI use is necessary, use the smallest possible AI model, and use transfer learning to fine-tune existing models rather than training new models from scratch (a minimal sketch follows below). Open-source sharing of models and code can also reduce environmental costs. Some vendors have also made sustainability commitments, e.g., Microsoft.
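As an illustration of the transfer-learning suggestion, the sketch below (assuming PyTorch and torchvision, neither of which is prescribed by this guidance) adapts a pre-trained image model by training only a small new output layer:

    # Illustrative sketch: fine-tune a pre-trained network instead of
    # training from scratch, which greatly reduces compute and energy use.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Load a small model pre-trained on ImageNet.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pre-trained layers so they are not updated during training.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final layer with a new, trainable head for a hypothetical
    # five-class downstream task (num_classes is illustrative).
    num_classes = 5
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    # Only the new head's parameters are optimised.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

Because only a small fraction of the parameters is trained, the energy cost is far lower than training the full network from random initialisation.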
The environmental costs of the use of AI tools should be balanced against their potential benefits, including more efficient ways of working, the rapid analysis of large datasets, and enhanced pattern recognition and modelling of complex systems. The environmental cost of using pre-trained data-driven models for inference can be substantially less than that of alternatives, e.g., physics-based models. AI tools can also contribute to research that has socio-economic benefits.
Ethics and integrity considerations
The use of AI tools raises new ethical considerations. These tools should support the judgement of researchers, rather than replace it, with mandatory human oversight. As with all research, research using AI tools must be traceable and reproducible and, where possible, consistent with open research principles. Consider using explainable AI approaches to interpret AI-derived outcomes and to quantify uncertainty (a minimal example is sketched below). Researchers should consider possible bias in AI training sets, and AI use should be disclosed and acknowledged in research outputs (following guidance from the relevant publisher where available).

Projects developing AI models and applications for innovation and commercialisation should also support open research principles, with traceability and reproducibility where appropriate. At the same time, researchers should consider the protection of intellectual property in AI innovation components, including datasets, trained models, and software, to enable effective translation, innovation commercialisation, and impact. Some research contracts may include constraints on how AI tools can be used or developed for the associated research projects.
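As one simple, model-agnostic illustration (a sketch using scikit-learn's permutation importance; many other XAI and uncertainty-quantification techniques exist), the spread across repeated permutations gives a rough indication of the uncertainty in each importance estimate:

    # Illustrative sketch: permutation feature importance as a simple,
    # model-agnostic interpretability check on a fitted model.
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure the drop in test performance;
    # the standard deviation over repeats indicates estimation uncertainty.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for name, mean, std in zip(X.columns, result.importances_mean,
                               result.importances_std):
        print(f"{name}: {mean:.3f} +/- {std:.3f}")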
Researchers must follow policies on intellectual property rights and data protection. Intellectual property rights considerations include the sources of training data: was it obtained under licence, taken from open sources, or scraped in breach of copyright? Data protection considerations include ensuring that data is only shared with commercial GenAI tools where appropriate and that those tools are set up to prevent them from using the data for training where required.
Researchers should seek ethical approval if personal data of
any kind is to be used or there are other ethical concerns.
Ways to use GenAI tools to support your research
Used appropriately, GenAI tools can improve research efficiency and aid creativity. GenAI tools can be used to:
1. Plan: help generate ideas and frameworks, and summarise relevant information, e.g., from a set of published articles.
2. Improve: refine and polish initial drafts, including tailoring for specific audiences, restructuring, and improving conciseness; support translation between languages; comment on or optimise model code; and simplify and summarise technical material.
3. Co-develop: generate first draft text or model code, help
debug model code, and contrast viewpoints or data.
The Research Funders Policy Group statement on the use of Generative AI tools in funding applications and assessment states that AI tools must be used responsibly when developing funding proposals and should be acknowledged in any outputs. This policy group includes many of our major funders, e.g., UKRI, Wellcome and the Royal Society; see also the UKRI-specific guidance.
These principles should also be adopted in the development of applications for
internal university funding calls.
Many journals and publishers also have their own policies on the use of AI, e.g., Springer and Wiley. While the specific policy should be checked for the intended publisher of an output, a common theme is that GenAI tools do not currently satisfy authorship criteria and that human accountability is essential for the final version of the text. The use of GenAI images is typically prohibited, with a few exceptions.
While it is typically not a requirement to report the use of AI tools to assist with copy editing, funders and publishers (and their journals) may require some types of AI tool usage in research outputs and proposals to be reported.
Doctoral research students should also refer to the Guidance
on the Responsible Use of Generative AI in Doctoral Research.
Use of GenAI tools in research assessment
The policies of funders and publishers (and their associated journals) also address the use of AI by peer reviewers. Peer reviewers are accountable for the accuracy of, and views expressed in, their reports; AI tools have known limitations; and proposals or outputs/papers may contain sensitive or proprietary information that should not be shared with remote cloud-based AI tools. Consequently, reviewers are typically instructed not to use AI tools for research assessment. Peer reviewers should also consider whether the material that they review contains errors or inconsistencies that suggest AI tools may have been inappropriately used in content generation.
The use of AI tools in the preparation of the University's submission to the Research Excellence Framework exercise is addressed in the University of Reading REF2029 code of practice (further information will be added when available).
Risks and limitations of GenAI tools
While there are potential benefits from the use of GenAI tools, users must also consider their limitations and the associated risks. Well-known risks related to their use in research include the hallucination or fabrication of material (plausible but incorrect information, such as fake citations, may be generated), biases created by the choice of data on which the model is trained, sensitivity of the outputs to the details of the prompt entered, and the risk of violating copyright or other policies by entering material into remote cloud-based AI tools. As stated above, these tools should support the judgement of researchers, rather than replace it, with mandatory human oversight. See also the section on Ethics and integrity considerations.
University-approved GenAI tools
The University recommends the use of Microsoft Copilot Chat (as part of our M365 licence) for day-to-day queries, productivity enhancement and creativity, because its Terms and Conditions of use meet the necessary requirements of data protection laws and have therefore been approved by the University's Legal, IMPS and Digital teams. However, you should not use it for any dataset containing personal or sensitive personal data.
Many other tools exist, and some will likely be better suited to particular tasks than others. A list of AI tools, with a brief description and whether their use is permitted, can be found at https://www.reading.ac.uk/cqsd/artificial-intelligence/ai-guidance-for-staff.
Responsible and ethical use of AI:
https://ukrio.org/ukrio-resources/embracing-ai-with-integrity/
https://www.geoethics.org/ai-ethics-recommendations
Jisc training course on Ethics and AI: https://www.jisc.ac.uk/training/artificial-intelligence-and-ethics
Jisc advice and guidance on “An introduction to copyright law and practice in education, and the concerns arising in the context of Generative AI”: https://nationalcentreforai.jiscinvolve.org/wp/2024/03/11/copyright-and-concerns-arising-around-generative-ai
Responsible AI UK: https://rai.ac.uk/
European Commission guidance on the responsible use of generative AI in research: https://research-and-innovation.ec.europa.eu/document/2b6cf7e5-36ac-41cb-aab5-0d32050143dc_en
Risks of data bias: https://www.ibm.com/think/topics/data-bias
GenAI tools:
Jisc have produced a directory of AI tools: https://nationalcentreforai.jiscinvolve.org/wp/2025/08/12/ai-tools-blog-home-page/
AI and Intellectual Property:
https://www.wipo.int/en/web/frontier-technologies/artificial-intelligence/index
Other University of Reading guidance on AI use:
https://www.reading.ac.uk/digital-technology-services/ai-guidance
https://www.reading.ac.uk/cqsd/artificial-intelligence