Cyber Security Researcher - London, United Kingdom - Department for Science, Innovation & Technology
Description
Details:
Reference number:
Salary:
- £65,000 - £135,000
- Base salary of between £35,720 (L3) and £68,770 (L6), supplemented with an allowance of between £29,280 and £66,230
Job grade:
- Other
- L3, L4, L5, L6
Contract type:
- Fixed term
- Secondment
Length of employment:
months
Business area:
- DSIT
- Science, Innovation and Growth
- AISI
Type of role:
- Digital
Working pattern:
- Flexible working, Full-time, Part-time
Number of jobs available:
- 2
Location:
- London
About the job
Job summary:
About the AI Safety Institute
The AI Safety Institute is the first state-backed organisation focused on advancing AI safety for the public interest.
We launched at the Bletchley Park AI Safety Summit in 2023 because we believe taking responsible action on this extraordinary technology requires a capable and empowered group of technical experts within government.
We have ambitious goals and need to move fast.
- Develop and conduct evaluations on advanced AI systems. We will characterise safety-relevant capabilities, understand the safety and security of systems, and assess their societal impacts.
- Develop novel tools for AI governance. We will create practical frameworks and novel methods to evaluate the safety and societal impacts of advanced AI systems, and anticipate how future technical safety research will feed into AI governance.
- Facilitate information exchange. We will establish clear information-sharing channels between the Institute and other national and international actors. These include stakeholders such as policymakers and international partners.
Our staff includes senior alumni from OpenAI, Google DeepMind, start-ups and the UK government, and ML professors from leading universities.
As more powerful models are expected to hit the market over the course of 2024, AISI's mission to push for safe and responsible development and deployment of AI is more important than ever.
What we value:
- Diverse Perspectives: We believe that a range of experiences and backgrounds is essential to our success. We welcome individuals from underrepresented groups to join us in this crucial mission.
- Collaborative Spirit: We thrive on teamwork and open collaboration, valuing every contribution, big or small.
- Innovation and Impact: We are dedicated to making a real-world difference in the field of frontier AI safety and capability, and we encourage innovative thinking and bold ideas.
- Our Inclusive Environment: We are building an inclusive culture to make the Department a brilliant place to work where our people feel valued, have a voice and can be their authentic selves. We value difference and diversity, not only because we believe it is the right thing to do, but because it will help us be more innovative and make better decisions.
Job description:
As AI systems become more advanced, the potential for misuse of their cyber capabilities may pose a threat to the security of organisations and individuals.
Cyber capabilities are also a common bottleneck in scenarios across other AI risk areas, such as harmful outcomes from biological and chemical capabilities and from autonomous systems.
One approach to better understanding these risks is to conduct robust empirical tests of AI systems, so we can measure how capable they currently are at performing cyber security tasks.
The AI Safety Institute's Cyber Evaluations Team is developing first-of-its-kind government-run infrastructure to benchmark the progress of advanced AI capabilities in the domain of cyber security.
Our goal is to carry out and publish scientific research supporting a global effort to understand the risks and improve the safety of advanced AI systems.
Our current focus is on doing this by building difficult cyber tasks that we can measure the performance of AI agents against.
We are building a cross-functional team of cybersecurity researchers, machine learning researchers, research engineers and infrastructure engineers to help us create new kinds of capability and safety evaluations, and to scale up our capacity to evaluate frontier AI systems as they are released.
We are also open to hiring technical generalists with a background spanning many of these areas, as well as threat intelligence experts with a focus on researching novel cyber security risks from advanced AI systems.
RESPONSIBILITIES
As a Cyber Security Researcher at AISI, your role will range from helping design our overall research strategy and threat model to working with research and infrastructure engineers to build environments and challenges against which to benchmark the capabilities of AI systems.