ER18: On the EU AI Act Grand Challenge
Or, twelve teams, ten days, and $100k.
Welcome back to the Ethical Reckoner. Today, I’ve got something slightly different for you. Last week, I competed in the First EU AI Act Grand Challenge, hosted by the University of St. Gallen in Switzerland. It was an incredible experience and I’m delighted that my team, the LegalAIzers, came away with a win, along with the Conformity Mavericks. This is going to be part travelogue, part recapping the Challenge, and part reflection on the experience and the AI Act as a whole.
So, the Grand Challenge. Twelve teams faced off with one goal:
~~be the biggest legal nerds~~ put the forthcoming EU AI Act into practice by assessing the conformity of different uses of AI with the Act.
The experience started on the 12th with the Boot Camp, which was, in fact, in an actual military camp. We spent the day looking at robots at a Swiss army disaster relief training facility and assessing their compliance with the new EU AI Act. (Also, should I mention that we slept in a Swiss army bunker barracks?) Here are a few things I learned from the Boot Camp:
Robots can do a lot of cool things in construction, delivery, and search and rescue—I was especially impressed by the autonomous excavator.
Yes, the robot dogs are coming.
Mandatory military service in Switzerland is fairly cushy—lots of time for spikeball. Plus you get free train travel, and a scholarship for education when you’re done.
Also, in a country with three major languages and cultures (German, French, and Italian), it’s a way to create some national unity.
The Swiss army does, however, hate outlets.
The Final didn’t start until the 17th, which gave us time to start our report and refine our approach. It wasn’t all work, though—we found time to explore Geneva and, after moving to St. Gallen, check out the old town and go to the Rorschach beach! (I also stumbled on an antelope preserve in the hills, which was unexpected but not the strangest part of the week.)
On Day 1 of the final, which was held in the beautiful Square building nestled in the hills of St. Gallen, we heard from AI providers across healthcare, telecoms, and manufacturing. Each gave a presentation, and then each team had 15 minutes to ask questions. After the presentations wrapped up in the afternoon, we had until 8 AM the next morning to write a report assessing the conformity of four AI applications—some high risk according to the AI Act (which subjects them to additional requirements), some not. We had prepared our assessment framework in advance, but we had to dig through a lot of other laws to determine exactly how to classify the applications. The AI Act references laws about machinery and medical devices that we had to interpret—does this robot count as machinery? Is this an in vitro medical device or not?—and that complicated the process. Though we had been hoping for a little more rest, a late night finishing the report and an early wake-up to proofread left me operating on about four hours of sleep. I was still wired after we submitted, though, so after trying and failing to go back to sleep, I went for a run up Solitüdenweg, which did in fact offer some solitude (and lots of uphill, like every run I went on there).
We were back in Square in the late morning for the announcement of the finalists. Bleary-eyed teams clustered around the coffee machines (until we blew a fuse). Everyone was chatting, commiserating, and sneaking peeks at the jury’s room to try and divine what was going on, but we didn’t have to consult an oracle, because around 12:30 word came that the jury had come to a decision. And, well:
Surprised and delighted though we were, we had to pull it together for the final round. We heard a presentation from the final AI provider and then we and the other finalist team, the Conformity Mavericks, had separate Q&A sessions. After that, we had two hours to prepare a 20-minute oral presentation for the jury. I was tired but absolutely wired from caffeine, sugar (they provided some excellent Swiss candy bars), and adrenaline, and the two hours flew by. Kholo, our brilliant trade lawyer, and I had been delegated to give the presentation. I didn’t feel as prepared as I would’ve liked—not surprising given the time frame—but watching Kholo crush her part of the presentation gave me confidence, and I talked through our recommendations for the provider’s low-risk AI applications.
Most of the AI Act applies only to high-risk AI applications, but transparency and some other requirements apply to non-high-risk applications, and all AI applications are supposed to uphold EU values like privacy, human agency, non-discrimination, diversity, and environmental well-being. These were what I focused on in the second part of the presentation. One of the applications was intended to “nudge” the user into making better decisions, and while we thought maybe it was prohibited under the AI Act, we ultimately decided that it was permitted. (It’s a little ironic that an application can be “almost prohibited” but then ultimately classified as low risk, which made the values-based assessment all the more important.) I talked about the importance of pro-ethical design, which is a design philosophy that focuses on empowering the person by shaping the information available to them rather than their choices, nudging them into better choices but respecting whatever choice they do make. One of the main weaknesses of the AI Act is that there is no way to operationalize the principles it espouses, like agency. An official code of conduct will be released eventually, but companies need to start figuring out how they will design AI for compliance now, making initiatives like the Grand Challenge and ideas like pro-ethical design even more important.
After another nerve-wracking wait, the judges came out with their decision and declared it a tie! Our team and the Conformity Mavericks will split the prize pot. The Conformity Mavericks did a great job and it’s an honor to be winners alongside them.
So, what did I learn from this? Besides the fact that “almost prohibited” applications are probably actually low risk, what surprised me most about the law was where most high-risk applications will come from. The majority probably won’t be the ones explicitly listed in Annex III (which covers AI used in biometric systems, critical infrastructure, education, employment, essential services, law enforcement, migration, and democratic processes) and that have gotten the most media attention, but a whole other category: products and components covered under existing EU laws, like machinery and medical devices, which are deemed “high-risk” if the relevant law requires them to undergo a conformity assessment. Also, the AI Act takes some measures to decrease the compliance impact on small & medium enterprises (SMEs), but some of them (like saying that their technical documentation can be “equivalent documentation meeting the same objectives” as what’s laid out for everyone else) don’t seem like they’ll be that helpful. What the SMEs we talked to seemed to need most was guidance, because the AI Act is dense with requirements and references to other regulations. I’d love to see more information for SMEs and start-ups, but part of it is also an upstream issue. From my experience in computer science and software engineering, most engineering education and work focuses on what you can do, not what you should do. Integrating material about ethical and legal obligations would help build some of the values the AI Act wants to promote into the engineering chain itself, rather than trying to layer them on after the fact.
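For my fellow engineers, the two routes to high-risk classification described above can be sketched as a simple decision rule. This is an illustrative simplification, not legal advice: the area names, function, and parameters are my own shorthand, and the real Act's classification logic has many more conditions and exceptions.

```python
from typing import Optional

# Simplified list of the Annex III areas mentioned above.
ANNEX_III_AREAS = {
    "biometric systems", "critical infrastructure", "education", "employment",
    "essential services", "law enforcement", "migration", "democratic processes",
}

def is_high_risk(annex_iii_area: Optional[str],
                 covered_by_existing_eu_product_law: bool,
                 that_law_requires_conformity_assessment: bool) -> bool:
    """Return True if either (simplified) route to high-risk status applies."""
    # Route 1: the application falls into an area explicitly listed in Annex III.
    if annex_iii_area in ANNEX_III_AREAS:
        return True
    # Route 2: the AI is a product or component covered by an existing EU law
    # (e.g. machinery, medical devices) that mandates a conformity assessment.
    return (covered_by_existing_eu_product_law
            and that_law_requires_conformity_assessment)
```

It's Route 2, the less-publicized one, that we expect will sweep in the majority of high-risk systems.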
What’s next for the AI Act? It’s going through the final phase of negotiations now, and it’ll enter into force 24 months after it’s approved, giving companies till late 2025 at the earliest to comply. However, our assessments showed that most companies have a long way to go to achieve full compliance. Beyond just ticking boxes, going through the AI Act conformity process will hopefully encourage companies to reflect on how they use AI, what their goals are, and how they can foster a better society that respects individuals, community, and the environment.
This was an amazing experience, and I’m so grateful to the organizers (especially Thomas Burri and Viktoriya Zakrevskaya) and my amazing team for their hard work (shout-out to Dirk, Kholo, Parisa, Pier Giorgio, and Yasaman). I wouldn’t have wanted to travel to Switzerland, sleep in a bunker, or pull an all-nighter with anyone else.
Photo credits: Boot Camp and Switzerland pictures were taken by me. All other pictures were taken by the extremely talented Darya Shramko. Thumbnail generated by DALL-E 2 with the prompt “An abstract painting of a Grand Challenge.”
If you’ve made it this far, thank you—you get some bonus pictures of Switzerland: