Call for submissions: Cyber Autonomy Gym for Experimentation (CAGE) Challenge 2
The Defence Science and Technology Group (DSTG) has called for submissions on Artificial Intelligence-enabled Autonomous Cyber Operations (ACO), offering the potential for distributed, adaptive defensive measures at machine speed and scale.
The cyber domain is particularly challenging for autonomous AI, says DSTG, and further research is required before ACO can become an operational capability. To facilitate this AI research, The Technical Cooperation Program (TTCP) CAGE working group is releasing CybORG, an experimental platform using the OpenAI Gym interface, together with a cyber security scenario and a challenge to which researchers are invited to respond.
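Because CybORG exposes the OpenAI Gym interface, interacting with it follows the standard reset/step loop. The sketch below illustrates that loop against a toy stand-in environment; the class, observation fields, and action names here are illustrative assumptions, not part of the real CybORG API.

```python
import random

class ToyCyberEnv:
    """Toy stand-in for a Gym-style cyber environment (NOT the real
    CybORG API): it exposes the same reset()/step() contract."""

    def reset(self):
        self.t = 0
        return {"compromised_hosts": 0}  # initial observation

    def step(self, action):
        self.t += 1
        obs = {"compromised_hosts": random.randint(0, 3)}
        reward = -obs["compromised_hosts"]  # penalise compromised hosts
        done = self.t >= 5                  # short fixed-length episode
        return obs, reward, done, {}        # Gym's (obs, reward, done, info)

env = ToyCyberEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = "monitor"  # a real agent would choose based on obs
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

The same loop shape applies whether the agent is a hand-coded policy or a trained reinforcement-learning model, which is what makes the Gym interface a convenient bridge between the AI and cyber security communities.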
“Our aim is to support the development of AI tactics, techniques and procedures with CybORG and a series of CAGE scenarios with associated challenge problems in order to support practical demonstrations of ACO,” says the call for submissions.
“We wish to engage the AI and cyber security research communities, especially to leverage domain experts outside of the cyber field, and by encapsulating the cyber elements in environments such as CybORG along with the CAGE scenarios and challenge problems, we hope that the cyber problem set becomes accessible to a wider audience. The first CAGE scenario and associated challenge problem were released at the IJCAI-21 1st International Workshop on Adaptive Cyber Defense (ACD 2021). The second CAGE scenario and challenge problem were announced at the AAAI-22 Workshop on Artificial Intelligence for Cyber Security (AICS) and have now been released.”
The CAGE challenge environment, CybORG, and second challenge are available here. Any enquiries can be directed to CAGEACOChallenge@dst.defence.gov.au
The CAGE challenge is written in Python, and its dependencies can be installed using pip. Further instructions are included on the GitHub page. The challenge includes red agents to test against and an example blue agent. Submissions should implement the same methods as the example blue agent.
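A submission therefore takes the shape of an agent class mirroring the example blue agent. The skeleton below is a minimal sketch under the assumption that the agent exposes methods along the lines of `get_action` and `end_episode`; consult the example blue agent on the GitHub page for the exact method names and signatures the challenge requires.

```python
class SimpleBlueAgent:
    """Sketch of a defensive (blue) agent submission. The method names
    here are assumptions modelled on a typical agent interface, not the
    confirmed CAGE API -- match the example blue agent exactly."""

    def __init__(self):
        self.step_count = 0  # per-episode bookkeeping

    def get_action(self, observation, action_space):
        # Trivial placeholder policy: always take the first available
        # action. A competitive agent would select defensive actions
        # based on the observation instead.
        self.step_count += 1
        return action_space[0]

    def end_episode(self):
        # Reset any per-episode state between evaluation runs.
        self.step_count = 0
```

Keeping the agent's public interface identical to the example blue agent lets the challenge's evaluation harness drive any submission interchangeably.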
Important dates
1 April 2022: Challenge 2 released and open for submissions.
11 July 2022 (any time zone): Final day for challenge submissions.
23 July 2022: Final results announced.