Tech companies can’t agree on what open-source AI means. That’s a problem.

Ultimately, the community has to decide what it’s trying to achieve, says Zacchiroli: “Are you just following where the market is going so that it doesn’t essentially co-opt the term ‘open-source AI,’ or are you trying to pull the market toward being more open and giving more freedom to users?”

What is open source for?

It’s not clear that any definition of open-source AI can level the playing field, says Sarah Myers West, executive director of the AI Now Institute. She coauthored a paper, published in August 2023, exposing the lack of openness in many open-source AI projects. But the paper also showed that the vast amounts of data and computing power needed to train cutting-edge AI create deep structural barriers for smaller players, no matter how open their models are.

Myers West thinks there is also a lack of clarity about what people hope to achieve by making AI open source. “Is it safety, is it the ability to do academic research, is it trying to encourage greater competition?” she asks. “We need to know exactly what the goal is, and then how opening up the system changes the pursuit of that goal.”

The OSI seems keen to avoid these debates. The draft definition lists independence and transparency as key benefits, but Maffulli demurred when pressed to explain why the OSI values those things. The document also contains a section labeled “out-of-scope issues” that makes clear the definition does not cover questions of “ethical, reliable, or trustworthy” AI.

Maffulli says the open-source community has historically focused on enabling the frictionless sharing of software and has avoided getting bogged down in debates over what that software should be used for. “It’s not our job,” he says.

But those questions can’t be dodged forever, says Warso, no matter how hard people have tried over the decades. The idea that technology is neutral and that topics like ethics are “out of scope” is a myth, she adds. She suspects it’s a myth that needs to be sustained so the open-source community’s loose social cohesion doesn’t unravel. “I think people realize it’s not real [the myth], but we need this to move forward,” says Warso.

Beyond the OSI, some have taken a different approach. In 2022, a group of researchers launched Responsible AI Licenses (RAIL), which are similar to open-source licenses but include clauses that can restrict specific use cases. The goal, says Danish Contractor, the AI researcher who created the license, is to let developers prevent their work from being used for things they consider inappropriate or unethical.

“As a researcher, I would hate for my work to be used in ways that are detrimental,” he says. And he is not alone: a recent analysis he and colleagues conducted of AI startup Hugging Face’s popular model-hosting platform found that 28% of models use a RAIL license.
