
Partnering with the University of Manchester on a Responsible AI Framework

Responsible AI is AI we can trust. It’s the difference between a future in which we fear automated decisions and one where we embrace them. Today, the legitimacy of unfettered, unregulated automated decision-making is being challenged by legislators and end users alike. Developing AI solutions responsibly requires a multidisciplinary approach from planning through deployment and operations, with academics, social scientists and legally trained professionals working alongside data scientists and engineers.

New Regulations Spurring the Need for Action

Responsible AI is increasingly relevant to business success. As an example, the legal framework introduced by the European Union (EU) is designed to make the EU a world-class hub of AI innovation, providing the legal certainty needed to underpin strategic AI leadership.

When enacted, the EU’s AI harmonization legislation (originally proposed in 2021) will require all AI system providers to assess and classify the risk of their AI systems against strict criteria. Companies that fail to comply face maximum fines of either 30,000,000 EUR or 6% of total worldwide annual turnover for the preceding financial year, whichever is higher.
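To make the “whichever is higher” penalty arithmetic concrete, here is a minimal Python sketch. The function name and the example turnover figure are illustrative, and the amounts reflect the 2021 proposal rather than any final enacted text:

# Illustrative only: the penalty cap described above is the higher of a
# fixed amount and a percentage of worldwide annual turnover.
FIXED_CAP_EUR = 30_000_000
TURNOVER_SHARE = 0.06  # 6% of total worldwide annual turnover

def max_fine_eur(annual_turnover_eur: float) -> float:
    """Return whichever cap is higher under the 2021 proposal's figures."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)

# Example: at 1 billion EUR turnover, 6% (60,000,000 EUR) exceeds the fixed cap.
print(f"{max_fine_eur(1_000_000_000):,.0f} EUR")  # 60,000,000 EUR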

Regardless of where they are headquartered, all businesses deploying and developing AI technologies that affect EU citizens in a high-risk context (e.g., determining mortgage approvals or making hiring decisions) will be required to register with the European Commission. ‘Prohibited’ practices, defined as AI that poses an unacceptable risk to safety and livelihood or that distorts physical or psychological well-being, will become unlawful except for defined legitimate purposes.

Developing a Framework to Address Responsible AI

To harness the transformative potential of AI-empowered technologies, companies must act responsibly. We created an AI framework that provides measurable, transparent and extensible guidelines — aligned with a client’s legal or regulatory interpretation — to direct AI product development. Our approach evaluates AI on an array of testable standard dimensions, cross-referenced with stakeholder intent guided by legislation. The goal of the framework is to provide a simple, traceable, high-level view that encompasses the multisystemic nature of modern AI systems, while also providing the necessary specificity to inform AI planning, management and remediation strategies.
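The article does not publish the framework’s internal criteria, but to illustrate the idea of scoring an AI system on standard, testable dimensions with a traceable rationale, here is a minimal Python sketch. Every dimension name, scale and aggregation rule below is a hypothetical assumption, not EPAM’s actual framework:

# Hypothetical sketch of dimension-based scoring with traceable rationales.
from dataclasses import dataclass

@dataclass
class DimensionScore:
    dimension: str   # e.g., "fairness" or "transparency" (assumed names)
    score: int       # assessed score on an assumed 0-5 scale
    rationale: str   # traceable justification linking the score to evidence

def overall_view(scores: list[DimensionScore]) -> float:
    """Roll per-dimension scores up into a single high-level rating."""
    return sum(s.score for s in scores) / len(scores)

assessment = [
    DimensionScore("fairness", 4, "bias audit passed on held-out cohorts"),
    DimensionScore("transparency", 3, "model cards published, no end-user explanations"),
    DimensionScore("accountability", 5, "human review required for high-risk decisions"),
]
print(f"Overall rating: {overall_view(assessment):.1f}/5")  # Overall rating: 4.0/5

A structure like this keeps the high-level view simple and traceable while preserving per-dimension detail for planning, management and remediation.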

Partnering with the University of Manchester

To validate our framework, we partnered with the University of Manchester in a pilot program. Working groups of EPAM liaisons were paired with computer science and politics, philosophy and economics (PPE) students to critically and independently evaluate two AI-powered use cases used at EPAM. This independent pilot helped refine and select coherent criteria within EPAM’s Responsible AI framework.

Duncan Hull, Assistant Professor in Computer Science at the University of Manchester, shares: “AI presents many challenges, which require a multidisciplinary approach to developing applications responsibly. It’s been great to see EPAM developing the Responsible AI framework by building collaborations between humanities students and computer science students studying at the University of Manchester.”

Through this pilot, we achieved three key results:

  • The AI framework was ruggedized, demonstrating value when applied to real-world AI in production
  • EPAM’s own selected internal AI systems were reviewed and scored
  • A strong partnership between EPAM and the University of Manchester was formed

Jackie Carter, Professor of Statistical Literacy at the University of Manchester, explains: “Partnering with EPAM through the Responsible AI project was a win-win. Four University of Manchester undergrads worked together in a professional environment on a project that has already led to a hugely successful outcome. Through this proof-of-concept internship model, the University of Manchester and EPAM have evidenced the value of social science students working with computer science students in a professional, commercial environment. Paid internships help students gain valuable analytical and professional skills that make them highly employable. We look forward to seeing this partnership flourish.”

Thanks to the partnership between EPAM and the University of Manchester, we’re better equipped to help companies prepare for the new AI world, deriving the value that advanced AI systems offer while remaining compliant. Stay tuned for more exciting updates as we continue to extend our framework!
