Research Strategy - London, United Kingdom - Department for Science, Innovation & Technology

Posted by:

Tom O'Connor

beBee Recruiter


Description

Details:

Reference number:

Salary:

• £42,495 - £46,750
• A Civil Service Pension with an average employer contribution of 27%

Job grade:

• Senior Executive Officer

Contract type:

• Fixed term

Length of employment:

• 18 months

Business area:

• DSIT
• Digital, Technologies and Telecoms

Type of role:

• Project Delivery

Working pattern:

• Flexible working, Full-time

Number of jobs available:

• 2

Location

• London

About the job

Job summary:


AI Safety Institute
Today, harnessing AI is an opportunity that could be transformational for the UK and the rest of the world.

Advanced AI systems have the potential to drive economic growth and productivity, boost health and wellbeing, improve public services, and increase security.

The UK government is determined to seize these opportunities.

In September, we announced Isambard AI as the UK AI Research Resource, which will be one of Europe's most powerful supercomputers purpose-built for AI.

The National Health Service (NHS) is running trials to help clinicians identify breast cancer sooner by using AI, and there are opportunities in other areas of public service, including education and policing too.


But advanced AI systems also pose significant risks, as detailed in the government's paper on Capabilities and Risks from Frontier AI.

AI can be misused - this could include using AI to generate disinformation, conduct sophisticated cyberattacks or help develop chemical weapons.

AI can cause societal harms - there have been examples of AI chatbots encouraging harmful actions, promoting skewed or radical views, and providing biased advice.

AI-generated content that is highly realistic but false could reduce public trust in information.

Some experts are concerned that humanity could lose control of advanced systems, with potentially catastrophic and permanent consequences.

We will only unlock the benefits of AI if we can manage these risks. At present, our ability to develop powerful systems outpaces our ability to make them safe.

The first step is to better understand the capabilities and risks of these advanced AI systems.

The UK is taking a leading role in driving this conversation forward internationally. We hosted the world's first major AI Safety Summit.

We have launched the AI Safety Institute, which is advancing the world's knowledge of AI safety by carefully examining, evaluating, and testing new types of AI, so that we understand what each new model is capable of.

The Institute is conducting fundamental research on how to keep people safe in the face of fast and unpredictable progress in AI.


Job description:

The risks arising from AI are uncertain and evolving.

It is therefore vital that the AI Safety Institute is proactive in identifying the right priorities - often within a quickly changing context - and delivering efficiently to ensure the organisation is focused on the activities that fulfil our mission: to minimise surprise to the UK and humanity from rapid and unexpected advances in AI.

To do this, technical evaluations of, and research into, AI systems are critical.


As a Research Strategy & Delivery Adviser you will work as part of a high-performing, friendly team - the AISI Research Unit - to drive forward the design and delivery of high-quality evaluations of advanced AI systems and novel research.

We are looking for an experienced operator who is comfortable working at pace and with ambiguity, who can translate complex research for policymakers and operate within Government and the Whitehall machine.


You will be excellent at building strong, trusting relationships, solving problems, and co-ordinating complex projects to help the organisation deliver tangible benefits for AI safety research and evaluations.

This is a unique opportunity to work at the cutting edge of issues affecting the global community.


Applicants are encouraged to indicate if they have a preference to join either the Cyber Misuse team or the Safeguards Analysis team, both of which are arms of AISI's Research Unit.


The Cyber Misuse team develops evaluations that measure the uplift advanced AI systems provide to a range of threat actors, including autonomous AI systems.

Applicants with experience in cybersecurity are particularly encouraged to apply, but this is not a prerequisite for the role.

The Safeguards Analysis team evaluates and discovers attacks that enable AI systems to be misused.

Applicants with previous experience relating to adversarial machine learning, offensive cyber, information security, and technology privacy are especially encouraged to apply.


The role will include, but not be limited to:

  • Generating and maintaining delivery momentum within AISI's Research Unit by creating a culture of velocity, coordination, and tight execution cycles, all directed at the goal of building and deploying evaluations for advanced AI systems.
