AI Ph.D.s are flocking to Big Tech. Here’s why that could be bad news for open innovation

The current debate over whether open or closed AI models are safer or better misses the point. Rather than fixating on one business model over another, we need to embrace the fuller meaning of what it is to make AI open. That means shifting the conversation to the importance of open science, transparency, and fairness if we want to build AI that works for, and benefits, the public.

Open science is the foundation of technological progress. We need more ideas, and more diverse ideas, that are shared more openly. The organization I lead, the Partnership on AI, is itself a mission-driven experiment in open innovation, bringing together academics, government agencies, industry partners, and policymakers to tackle one of the most pressing challenges of our time: ensuring that technology benefits the many, not the few.

In the debate over open models, we cannot lose sight of the importance of public funding for science and the open publication of academic research.

National science and innovation policy is essential to an open environment. In her book The Entrepreneurial State, economist Mariana Mazzucato argues that government funding for research planted the seeds of the intellectual property that grew into US technology companies. From the internet to the iPhone and the Google AdWords algorithm, today's AI technologies trace back to early government funding for foundational research and applications.

Likewise, the open publication of research, subject to peer and ethical review, is essential to scientific progress. ChatGPT, for example, would not have been possible without access to openly published research on transformer models. It is sobering to read, as documented in Stanford's AI Index, that the number of AI Ph.D. graduates taking jobs in academia has declined over the past decade while the number going into industry has risen, more than doubling by 2021.

It is also important to remember that openness does not equal transparency. And while transparency may not be an end in itself, it is a means to accountability.

Transparency requires timely disclosure, clear communication to the appropriate audiences, and explicit standards of documentation. As PAI's Guidance for Safe Foundation Model Deployment shows, steps taken throughout the lifecycle of a model allow for greater external scrutiny and accountability while protecting competitiveness. These include transparency about training data, testing and evaluation, incident reporting, labor sourcing, human rights due diligence, and environmental impacts. Developing standards for documentation and disclosure is essential to ensuring the safety and accountability of advanced AI.

Finally, as our research has shown, it is easy to recognize the importance of openness and of creating space for diverse ideas to define the future of AI, and hard to do. It is true that with lower barriers to entry, open ecosystems include actors from regions beyond Silicon Valley. It is also true that, rather than concentrating power and wealth, open ecosystems enable more players to share in the economic benefits of AI.

But we need to do more than just lay the groundwork.

We must ensure that communities disproportionately affected by algorithmic harms, and those from historically marginalized groups, can participate in developing and deploying the AI that works for them while protecting their privacy. This means focusing on skills and education, but it also means reshaping who develops AI systems and how they are evaluated. Today, citizen-led AI innovations are being explored through private and public sandboxes and labs around the world.

Ensuring safety in AI does not require choosing between open and closed models. Rather, it requires establishing national research and innovation ecosystems that strengthen the integrity and sustainability of scientific innovation. It is about creating a competitive marketplace of ideas that spurs progress. It is about ensuring that policymakers and the public have visibility into the development of these new technologies so they can better weigh their potential and risks. It is about acknowledging that clear rules of the road allow all of us to move faster and more safely. Most importantly, if AI is to fulfill its promise, we must find sustainable, respectful, and effective ways to listen to new and diverse voices in the AI debate.

Rebecca Finlay is the CEO of the Partnership on AI.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
