Safeguard Analysis Workstream Lead - London, United Kingdom - Department for Science, Innovation & Technology
Description
Details:
Reference number:
Salary:
- £105,000 - £135,000
- Base salary between £55,805 (L5) and £68,770 (L6), supplemented with an allowance of between £49,195 and £66,230
- A Civil Service Pension with an average employer contribution of 27%
Job grade: - L5/L6 (Other)
Contract type: - Temporary (not fair and open)
- Loan
- Secondment
Length of employment: - 12 to 18 months
Business area: - AI Safety Institute
Type of role: - Information Technology
Working pattern: - Flexible working, Full-time, Part-time
Number of jobs available: - 1
Contents
About the job
Benefits:
Things you need to know
Location
- London
About the job
Job summary:
The AI Safety Institute is the first state-backed organisation focused on advanced AI safety for the public interest.
We launched at the AI Safety Summit because we believe taking responsible action on this extraordinary technology requires a capable and empowered group of technical experts within government.
Our staff includes senior alumni from OpenAI, Google DeepMind, start-ups and the UK government, and ML professors from Oxford and Cambridge.
We have ambitious goals and need to move fast.
Develop and conduct evaluations on advanced AI systems
We will characterise safety-relevant capabilities, understand the safety and security of systems, and assess their societal impacts.
Drive foundational AI safety research
We will launch moonshot research projects and convene world-class external researchers.
Facilitate information exchange
We will establish clear information-sharing channels between the Institute and other national and international actors. These include stakeholders such as policymakers and international partners.
Job description:
About the Role
We are hiring for a Head of Safeguard Analysis to lead and build a team to develop and conduct analyses of how well the safety and security components ("safeguards") of advanced AI systems stand up to a variety of threats.
This role will involve:
- Leading the strategy for the Safeguards team to design high-quality analyses of safeguards. You will be responsible for delivering AISI's safeguards portfolio through in-house research and external commissions
- Developing evaluations aimed at assessing how safeguards stand up against a range of attack vectors, drawing on techniques from ML evaluations and a range of security expertise
- Ensuring evaluations are scientifically robust, actionable, communicable, and informed by the current state of research
- Building and leading a team by identifying talent gaps, leading recruitment to address those gaps, and being responsible for the final hiring decision for members of technical staff on the workstream
- Collaborating with AISI teams on specific risk areas to combine safeguard evaluations with evaluations of specific dangerous capabilities, and with the AISI platform team to build tooling to perform such evaluations
- Working closely with partners in the national security community, as well as other partners in industry, academia, and civil society
- Communicating research outputs in core AISI products through government communications, research publications, and high-profile events such as AI summits and conferences
- Leading longer-term research efforts aimed at better understanding the efficacy of system safeguards
Person specification:
You would be a great fit if you:
- Are motivated by a strong desire to ensure positive outcomes for all of humanity from the creation of AI systems
- Have substantial experience at the intersection of ML and information/cyber security, such as participating in ML red teams or conducting research in adversarial ML or differential privacy
- Can set and steer research agendas towards the most impactful avenues of research and development
- Have leadership experience; you can manage a team of dedicated engineers and researchers, and are able to inspire, guide and maximize the potential of your team