Cyber Misuse Workstream Lead - London, United Kingdom - Department for Science, Innovation & Technology

Posted by: Tom O'Connor, beBee Recruiter


Details:


Reference number:


Salary:

  • £105,000 - £135,000
  • Base salary of between £55,805 (L5) and £68,770 (L6), supplemented with an allowance of between £49,195 and £66,230
  • A Civil Service Pension with an average employer contribution of 27%

Job grade:

  • Other
  • L5/L6

Contract type:

  • Temporary (not fair and open)
  • Loan
  • Secondment

Length of employment:

  • 12 to 18 months

Business area:

  • AI Safety Institute

Type of role:

  • Information Technology

Working pattern:

  • Flexible working, Full-time, Part-time

Number of jobs available:
  • 1

Location

  • London

About the job

Job summary:

The AI Safety Institute is the first state-backed organisation focused on advanced AI safety for the public interest.

We launched at the AI Safety Summit because we believe taking responsible action on this extraordinary technology requires a capable and empowered group of technical experts within government.


Our staff includes senior alumni from OpenAI, Google DeepMind, start-ups and the UK government, and ML professors from Oxford and Cambridge.

We are now calling on the world's top technical talent to build the institute from the ground up. This is a truly unique opportunity to help shape AI safety at an international level.

We have ambitious goals and need to move fast.


Develop and conduct evaluations on advanced AI systems
We will characterise safety-relevant capabilities, understand the safety and security of systems, and assess their societal impacts.


Drive foundational AI safety research
We will launch moonshot research projects and convene world-class external researchers.


Facilitate information exchange
We will establish clear information-sharing channels between the Institute and other national and international actors. These include stakeholders such as policymakers and international partners.


Job description:


About the Role
As AI capabilities evolve rapidly, we expect a significant shift in the global cybersecurity paradigm.

Of particular concern is the potential for AI systems to grant novice actors dangerous capabilities, expanding the number of actors who could carry out serious cyber-attacks.

To improve our ability to understand this threat, AISI plans to conduct research across a broad spectrum of cyber activities, including sophisticated threat modelling and the development of cyber ranges, with a view to developing and deploying evaluations of the cyber-attack uplift (e.g. vulnerability discovery) provided by next-generation advanced AI systems.

This work is pivotal to an informed governmental and societal response to a potentially significant emerging threat.


Even more importantly, current evaluation methodologies in this field are predominantly ad-hoc and qualitative; our ambition is to reshape the cyber evaluations ecosystem in a more rigorous and holistic direction.


To do this, the Cyber Misuse Evaluations Team will collaborate with an array of critical partners, including AI labs, academia and the national security community.


We are seeking a Head of Cyber Misuse Evaluations to build and lead a team that researches and delivers crucial misuse threat evaluations in a critical and emerging field.

Note that this team will work closely with the Safeguard Analysis team, which examines vulnerabilities of AI systems themselves, as opposed to the ability of cyber-attackers to use AI systems as tools.


Responsibilities

This role will involve:
- Leading the overall strategy for the Cyber Misuse workstream, focused on designing and conducting high-quality evaluations of the risks posed by advanced AI in the cyber domain
- Developing evaluations aimed at assessing cyber misuse risks from frontier systems, both through in-house research and external commissions
- Building and leading the team: identifying key talent gaps that are blocking delivery, finding and pitching talent to address those gaps, and making final hiring decisions for members of technical staff on the workstream
- Ensuring evaluations are scientifically robust, actionable, communicable and reflect the current state of research
- Delivering impactful research by overseeing a diverse portfolio of research projects and collaborating with leading national security experts and external partners; your work will not only advance the field but also inform critical policy decisions
- Communicating research outputs in core AISI products through government communications, research publications, and high-profile events such as AI summits and conferences
- Collaborating with AISI teams covering specific risk areas to combine cyber evaluations with evaluations of specific dangerous capabilities, and with the AISI platform team to build tooling to perform such evaluations
- Working closely with partners in the national security community, as well as other partners in industry and academia
