Stanford researchers challenge OpenAI, others on AI transparency in new report

a year ago
Anonymous $pUsIN4hzN9

https://arstechnica.com/information-technology/2023/10/stanford-researchers-challenge-openai-others-on-ai-transparency-in-new-report/

On Wednesday, Stanford University researchers issued a report on major AI models and found them greatly lacking in transparency, reports Reuters. The report, called "The Foundation Model Transparency Index," examined models (such as GPT-4) created by OpenAI, Google, Meta, Anthropic, and others. It aims to shed light on the data and human labor used in training the models, calling for increased disclosure from companies.

Foundation models are AI systems trained on large datasets that can perform a wide range of tasks, from writing text to generating images. They have become key to the rise of generative AI technology, particularly since the launch of OpenAI's ChatGPT in November 2022. As businesses and organizations increasingly incorporate these models into their operations and fine-tune them for their own needs, the researchers argue that understanding their limitations and biases has become essential.
