Students at the UKRI CDT in ART-AI
PhD students at the UKRI Centre for Doctoral Training in Accountable, Responsible and Transparent Artificial Intelligence (ART-AI).
We work across three university faculties in a unique integrated programme of interdisciplinary research and training.
To what extent can artificial intelligence ensure that the rights of children and young people are protected in the digital environment? Ronny will explore AI's potential as a digital guardian online. His project speaks to the present debate about both the means and the desirability of classifying individuals as ‘children’ under the current age-based system, and about how to remain consistent with the best interests of the child in the digital world. It asks how AI can be used to evaluate the individual developmental maturity of children independently of age, and how AI may be employed to protect and ensure children’s rights to development, information, education, participation and privacy in a safe and secure digital environment. His research aims to ascertain whether AI acting as a digital guardian could redefine the position and power of children in relation to the competing protective interests of parents and policy interests of the State, using the United Nations Convention on the Rights of the Child (UNCRC) as a regulatory framework.
AI can solve engineering problems that were previously considered too complex, or impractical, for a generalised solution. A major area of development is using AI to increase the autonomy and capabilities of robots. Autonomy is especially important for underwater robotics, where direct communication is very limited without a physical tether. The marine environment plays a significant role in society in terms of transport, food resources and ecological diversity. The impact of increasing underwater robotic capabilities is therefore far-reaching, and it is important to consider the accountability of autonomous robots and the institutions that operate them. It is crucial for future technology leaders to develop a perspective that encompasses both the technological and social impacts of AI systems in order to promote transparency and trust.
George will investigate the use of AI to enable rapid character and asset creation, whether for personalised, highly stylised avatars or for high-quality rapid character creation in, for example, video games. These systems will have semantically meaningful controls for properties such as shape, size, style and appearance, and will allow users to guide the design of a character with additional data, such as uploaded photographs, concept art or even voice audio, to personalise or rapidly accelerate character creation.
Catriona has a multidisciplinary background in law, sociology and forced migration studies. Prior to starting her PhD, she worked for a human rights organisation on projects aiming to build the capacity of civil society actors to engage in digital policy debates. Before that, she worked in the European Parliament as a researcher on gender equality. Her project will critically examine the design and deployment of machine learning in the governance of migration.
A fundamental problem in AI is how to endow artificial agents with the ability to autonomously form useful high-level behaviours (for example, grasping) from available behavioural units (for example, primitive sensory and motor actions available to a robot). This ability allows a developmental process during which an agent can learn to display behaviours of increasing complexity through continuously building on its existing set of skills to acquire new ones. Akshil aims to develop algorithms that enable an artificial agent to autonomously go through such a developmental process.
A cultured neuronal network is a cell culture of biological neurons that can be directly connected to input/output devices, allowing bidirectional communication between the network and the device. One of the main challenges associated with these networks is how to get the neurons to grow in a controlled pattern, as this dictates the network geometry. Mafalda will explore different mechanisms for directing the growth of cultured neurons by modifying the properties of the substrate surface. The potential implications of this work are widespread, with the prospect both of directly improving artificial neural networks and of deepening our understanding of how biological neurons grow and develop.
Unmanned and autonomous aircraft are being tasked with increasingly complex missions, in greater proximity to other air users and to densely populated regions. To ensure the safety of the aircraft and of people and property on the ground, aircraft must be imbued with a level of situational awareness equal to or greater than that of a trained pilot. As aircraft are operated ever more remotely, with longer endurance and larger intervals between human inspections, it becomes increasingly important to monitor the health of the aircraft systems themselves as part of this situational awareness. Elena will apply machine learning techniques to the problem of aircraft health monitoring, to detect performance anomalies and to identify categories of faults.
Jack E. Saunders
Visual Simultaneous Localisation and Mapping (SLAM) is a core problem in spatial AI: incrementally estimating the geometry and semantic content of the scene around a mobile robot while recovering the robot's pose, ideally in real time. A typical visual SLAM algorithm assumes a slowly moving robot within a static scene, but these assumptions fail to capture real-world scenarios such as drones or autonomous vehicles, where the environment is highly dynamic, unpredictable and large. With advances in deep learning and electronics, additional sensing modalities and learning-based sensor fusion offer a promising avenue for improvement. Such work has significant implications for public policy, e.g. licensing and road traffic regulations, revenue and security.
Jack R. Saunders
Computer Algebra is the art/science of using computer systems to do algebra: generally calculations too great for humans, and often close to the limit of feasibility for machines. However, these computations stretch even the limits of current systems, and require substantial human expertise to formulate correctly. Such expertise is rare, and also fallible. Can Artificial Intelligence techniques be used to “package” such expertise, and make it available to more users?
Artificial intelligence is widely used for decision making in fields ranging from politics and business to daily life, raising issues such as bias (including gender and racial bias), security and privacy. Making AI trustworthy is becoming key to the development of the AI industry. If we are to build trustworthy AI, what values do humans want to embed in robots? How do we embed the desired attributes in AI design? What algorithms should be applied to construct moral AI, and how can a balance be struck between utilitarianism and deontology in its moral decision making? Huixin is attempting to answer these questions using theories and research methods from psychology and human-computer interaction, with the aim of making AI trustworthy throughout the whole process from design and construction to application.
Damian will use concepts from computer science and the social sciences to analyse digital innovation in city governance. He will evaluate the use of AI by city and municipal governments in Europe, explore how AI may successfully be integrated into policy-making processes, and propose algorithms able to support decision-making within these processes by harnessing the synergies of human-AI collaboration.