About the Project
The larger AISEC project, of which this PhD forms part, explores, on the one hand, whether formal representations of applicable laws can be built to ensure legal compliance by AI systems even when under malicious attack. On the other hand, it will develop a critical appraisal of current laws as they apply to safety-critical AI. While based in the School of Law, the candidate will be able to set their own emphasis within this spectrum, reflecting their background (law or computer science), interests and future career aspirations. The project can involve some or all of the following:
• Analyse the legal regime governing attacks against (selected applications of) AIs and evaluate its suitability for current technological developments
• Research how to add explicit legal knowledge through appropriate symbolic representations that ensure legal compliance by design
• Analyse the appropriateness of current allocations of legal responsibility within the complex AI development ecosystem
AISEC has significant international reach, with 12 partners from academia and industry in Europe and the US. PhD students joining the project will have opportunities to travel to international conferences, help organise academic events, spend time with industrial partners, collaborate with academic leaders in the field, and develop their own research profiles. They will also gain interdisciplinary experience working at the intersection of law and various AI and computer science disciplines, within a large team of senior academics, postdoctoral and doctoral researchers at the two partner institutions, the University of Edinburgh and Heriot-Watt University.
The successful applicant will have:
• A UK 2:1 honours degree and a UK Masters degree with at least 60% in the taught section and 65% or more in the dissertation (or their international equivalents) in law and/or computer science.
• Applicants with a background in law will have experience in IT law, especially civil and criminal liability issues, and/or experience with the theory and practice of the regulation of technology and/or experience with legal AI and law as software code. They will ideally have demonstrable experience working with or building software applications.
• Applicants with a background in computer science will have experience in numerical and symbolic AI methods (e.g. statistical machine learning, NLP, knowledge representation, argumentation, dialogue systems), or in building user-facing software applications, ideally with some experience in HCI/UX design and evaluation and human factors methods. In addition, they will have strong demonstrable experience with legal and regulatory issues of AI, e.g. through prior project work.
• Proficiency in English (both oral and written). For candidates whose first language is not English, we require an overall IELTS score of 7.0 with at least 6.5 in each component (or another acceptable English language qualification at an equivalent standard).
• Excellent communication and teamwork skills, and the ability to communicate their subject matter to non-experts.
• A strong interest in interdisciplinary work and engaging with domain experts to conduct user research and develop their own knowledge of the legal domain.