The surprising ways that AI can enrich our lives

Meet the researchers offering real insights on artificial intelligence.

Our researchers have been considering how AI can be used in a multitude of ways.

Artificial intelligence is big business. The AI market in the UK is valued at more than £72 billion, and almost 70% of people in the UK use AI in various forms – whether it’s at work or at home. But how can these tools act as a helping hand, as well as a revenue driver or engaging education tool?

“We like to say: imagine a Chattable Avatar as being a bit like the paintings that come to life in Harry Potter,” says Dr Deborah Brewis. “You can talk to them in real time and get an adaptive response.”

Deborah, a member of the Future of Work research centre, is one of a large number of academics within the School of Management exploring the possibilities of how artificial intelligence can be deployed in novel ways. One of her current projects is examining how AI could power digital visitor guides – Chattable Avatars – for cultural sites such as museums, art galleries and libraries.

Bringing culture to life

Chattable Avatars could elevate the museum experience.

The research team, formed as part of South West England’s GW4 network, believe that these avatars could help to boost visitor engagement by interacting with users and answering questions.

The avatars use a large language model to mimic the tone of predefined characters – think, for instance, of being able to hold a conversation with a steerage passenger aboard the SS Great Britain or of speaking to a long-dead artist about their inspiration for a famous painting.
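In practice, this kind of character mimicry is often achieved by fixing a persona in the language model's system prompt before each visitor message reaches it. The sketch below illustrates that general idea only; the names (`PERSONA`, `build_prompt`) and wording are invented for illustration and are not taken from the team's implementation.

```python
# Illustrative sketch: giving a language model a predefined character voice.
# All names here are hypothetical, not from the Chattable Avatars project.

PERSONA = (
    "You are Mary, a steerage passenger aboard the SS Great Britain in 1852. "
    "Answer visitors' questions in the first person, in period-appropriate "
    "language, and admit when you do not know something."
)

def build_prompt(visitor_question: str) -> list[dict]:
    """Combine the fixed persona with the visitor's question, using the
    chat-message format most LLM APIs accept."""
    return [
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": visitor_question},
    ]

messages = build_prompt("What was the food like on board?")
```

Because the persona is resupplied with every exchange, the character stays in role however the conversation wanders, which is what makes the "painting that talks back" effect possible.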

So far, Deborah and her colleagues have created a prototype Chattable Avatar and conducted stakeholder workshops with local museum representatives to identify potential benefits and obstacles to adoption.

“There is real enthusiasm and appetite in the cultural sector for tools that can boost visitor engagement with exhibits,” Deborah explains.

“We found that among some infrastructural challenges, the biggest concern is – naturally – the trustworthiness of AI. How can we guarantee that the AI will give accurate information, can’t be easily ‘broken’ and will respond ethically? Solving this will be essential.”

They identified that these interactive additions could empower visitors by bringing history to life, allowing them to shape the visitor experience themselves, as well as driving interest on social media. They could also enable organisations to provide an outstanding visitor experience even when short-staffed. However, there were concerns around cost, maintenance, and the risk of avatars falling into the ‘uncanny valley’ of being almost-but-not-quite lifelike.

The team believe that some of these challenges to buy-in could be addressed by how the avatars are framed: rather than authoritative sources of fact, they could be positioned as narrative extras. What’s more, they could be used to offer visitors the experience that best suits them and their needs, rather than a one-size-fits-all approach. For instance, the information given could be adapted to varied age groups, or language tailored to differing needs.

“Along with being an engaging experience for visitors of museums and heritage sites, one of the exciting things about this is that the experience could be personalised to diverse groups with different skills and interests,” Deborah explains. “The hope is to improve accessibility to culturally significant sites and add to the impact of the visitors’ experience once there.”

Tailored training on demand

AI could be used to help train warehouse operatives.

The ability of AI to tailor information in incredibly short timespans is also of interest to Professor Melih Celik from the Smart Warehousing and Logistics Systems research centre (SWALOS). He is part of an Innovate UK-funded team – also comprising SWALOS Director Professor Vaggelis Giannikas and PhD student Sina Khodaee – that partnered with Internet of Things company Logidot to investigate the potential of AI as a training tool for warehouse operatives.

“Their question to us was: with the growth of AI, how can we best implement such tools on the warehouse floor?” Melih explains.

The researchers worked with stakeholders in industry to identify their needs and consider how AI could meet them; they hope to build on this to produce a prototype in future projects. The team focused on how an AI chatbot could be used to train new warehousing staff, as well as to answer questions and troubleshoot problems for existing employees.

“The traditional way of doing this [sort of training] is you sit in a classroom and you’ve got a bunch of people who work in the warehouse all [being shown] a bunch of PowerPoint slides, whereas with this approach you’ve got personal trainers for everybody,” he says.

“Plus, it’s on demand as well. You don’t have to be present in a classroom where you’re looking at the teacher, and the students don’t need to be available at the same time.”

It’s also, as Melih points out, incredibly versatile. It’s rare to have multiple people working through identical workflows during the day, so training could be implemented to address even subtle differences in their needs. “I also don’t think there’s anything in our methodology that prevents another industry from taking an approach like this,” he asserts.

By limiting the data that the model behind the chatbot can draw upon for its answers, the risk of hallucinations (plausible-sounding but false answers) is greatly reduced. However, Melih doesn’t believe that AI will entirely replace human presence in the classroom:

“I don’t think [AI tools] are going to replace us. I think we will need that human touch and, at the end of the day, that’s what people are signing up for when they register for our courses and come to our lectures – but it’s going to make that human contact much, much better.”

“The most important thing is to go to people as early as possible and just ask: what would help you and what do you want?”
Dr James Fletcher, Assistant Professor of Digital Futures

Efficiency in the classroom

Our researchers are looking into AI and its uses in the classroom.

Dr Teslim Bukoye, Associate Professor in the School’s Information, Decisions & Operations division, agrees that AI can have beneficial applications in education. Research he carried out with colleagues from the Universities of Bristol and Huddersfield has demonstrated that AI can aid in the development of critical thinking skills.

“It started with a simple observation: AI text generators moved from ‘interesting novelty’ to ‘everyday study habit’ almost overnight,” he says.

“In business schools especially, students were using tools like ChatGPT to speed up reading, drafting and sensemaking, but the big unanswered question was: what does that do to critical thinking?”

He continues: “The speed of adoption made it feel urgent to study what was actually happening, not what we feared or hoped was happening.”

The team set out to answer this question by sampling a group of over 100 postgraduate students, split into a control group and an intervention group.

Both groups completed two sets of critical thinking questions, with the intervention group using ChatGPT to aid their preparation for the second test.

The results indicated a statistically significant improvement in performance among the group that had used AI, particularly when it came to remembering, understanding and applying knowledge. The effect was less pronounced for analysis and evaluation, and was not apparent for creativity.

“Practically, this suggests a shift in how we teach and assess,” says Teslim. “If AI can take some of the friction out of early-stage learning – such as summarising and proofreading – we can deliberately re-invest class time and assessment weight into higher-order thinking: justification, critique, triangulation of sources and defensible decision-making.”

Teslim asserts that the responsibility lies with educators to develop their teaching in a fashion that steers students efficiently towards deeper thinking by freeing up headspace through AI, while ensuring that students also continue to challenge their assumptions. Essentially, he views AI as a thinking partner rather than an answer machine.

“It is not ‘plug in AI, get better thinking’,” he cautions. “The net outcome depends on guardrails: assessment design, AI literacy, expectations around verification and a culture that rewards reasoning over polish.”

Support in the workplace

It might be tempting to assume that rapidly developing technology such as artificial intelligence is a young person’s game, when in fact AI tools can offer valuable support to older adults. Dr James Fletcher, Assistant Professor in Digital Futures, warns against this narrow viewpoint: “We often call it ‘digital ageism’, this assumption that older people just don’t engage with new technologies.”

James’ recent work – carried out with Institute for Digital Security & Behaviour colleague Dr Olivia Brown – has identified ways in which technologies such as AI can support people with dementia in the workplace, helping them to remain employed for longer.

“One of the things that people with dementia can struggle with – outside of the memory problems that most people traditionally associate with the condition – is particular speech problems such as finding the next word in a sequence,” he explains.

“It just so happens that we have now invented what are basically super text predictors. Generative AI tools are incredibly good at predicting the next word. This can potentially fill in that exact gap for many, especially administrative, workers with dementia.”

James suggests that digital workspaces incorporating generative AI into applications such as e-mail and memo programs can enable workers with dementia to focus on other skills, such as networking – “often older workers are a little bit better at [soft skills] simply by virtue of having been in the market for a long time”. In order to be helpful and avoid confusion, however, James feels that their integration into user interfaces must be seamless.

One of the crucial facets of implementing tech in this manner, James emphasises, is co-creation rather than simply making and acting upon assumptions about what people will want or find helpful. He explains: “The problem is that we have fantastic ideas and we go out and develop them, thinking we’re going to help all of these older people to live better lives in the world. Then we meet a user for the first time and it turns out that our assumptions about their lives were entirely wrong.”

The solution? “The most important thing is to go to people as early as possible and just ask: what would help you and what do you want?”

Find out more about the researchers

Visit our academics' profile pages.

Learn more about our Research4Good


This article appeared in issue 3 of the Research4Good magazine, published March 2026. All information correct at time of printing.