Get in Touch
Have a question or want to collaborate? Reach out through the form or email us directly.
Prefer email? Write to us at info@themidasproject.com.
Frequently Asked Questions
What people often ask about our mission, work, and how to get involved.
We engage in a combination of research, outreach, and public advocacy to ensure that AI companies are meeting public expectations, and living up to their past promises, when it comes to responsible AI development and deployment.
The most important component of our work is helping to identify and disseminate industry best practices for AI development. We review technical literature, regulatory guidance, and case studies to distill concrete measures—such as frontier-model risk assessments, red-teaming requirements, audit regimes, and whistle-blower protections—and advocate for the most important voluntary steps that companies can take today to ensure they are acting responsibly.
We also monitor whether companies follow their stated policies and industry norms. When evidence shows back-tracking or inadequate controls, we document these gaps and publicly press for corrective action—mobilizing employees, customers, and civil-society allies until the company adopts the necessary safeguards.
Finally, we publicize our research to help ensure the public is aware of how AI developers stack up on safety and responsibility. We release concise scorecards, incident analyses, and memos so that regulators, investors, and the wider public can see how individual developers perform.
Various AI experts, including Nick Bostrom and Stuart Russell, have compared the development of advanced AI to the myth of King Midas.
According to the legend, King Midas once asked the god Dionysus to make it so that whatever he touched instantly turned into gold. At first, he was thrilled with his new power. But the King soon discovered that he couldn’t touch food, water, or even his family without instantly turning them to metal. In other words, the sudden attainment of an incredible power, with insufficiently well-specified goals and safeguards, led to a terrible tragedy.
Much like King Midas, tech companies are now eagerly pursuing incredible wealth and power by developing artificial intelligence, a technology that will change our world forever. But how will we know that it is designed in alignment with our collective human values? If we misspecify even a single goal or safeguard for these systems, how will we prevent them from causing an incredible catastrophe?
In the words of Stuart Russell, “If you continue on the current path, the better AI gets, the worse things get for us. For any given incorrectly stated objective, the better a system achieves that objective, the worse it is.”
The Midas Project is a nonprofit initiative founded in early 2024. To this day, the majority of our campaign participants are unpaid volunteers who contribute in their free time. We are a tax-exempt 501(c)(3) organization that relies on donations from the public to continue our work.
No. One of our central values is a pro-technology attitude.
Progress in technology has improved lives for millions of people around the globe (after all, without it, we wouldn’t have penicillin, air conditioning, or the internet). Artificial intelligence is already being used by millions to help improve medicine, education, and overall living standards. We believe this progress should continue, and we hope AI will be a positive force in the world.
However, we are also realists — and skeptical realists at that. We believe advanced AI may be a “dual-use” technology, one that can be used to cause harm as well as good. In order to avert social inequality, concentration of power, or AI-driven catastrophes, everybody needs a seat at the table when decisions about development and deployment are being made.
Currently, the vast majority of these decisions about the future of AI are being made in shadowy corporate boardrooms with little oversight or accountability. That’s why The Midas Project is committed to raising awareness about the risks of AI and ensuring that global citizens have a chance to make their voices heard.
If you’d like to get involved, consider signing up for our newsletter, joining as an official volunteer, or making a charitable donation today.
You can email us at info@themidasproject.com, or reach out via the form on our contact page.