Unpicking the rules shaping generative AI

Apr 13, 2023, 5:31pm UTC
Anonymous $5YzO3NGzaX

https://techcrunch.com/2023/04/13/generative-ai-gdpr-enforcement/

In the past couple of weeks, already an ice age ago in AI hype terms, a number of very well-known names in tech (Elon Musk! Woz!) signed an open letter calling for a halt on the development of AI models more powerful than OpenAI's GPT-4 — arguing humanity needs time for planning and management around the application of powerful automation technologies which, they implied, could be on the cusp of toppling man from his pinnacle atop the proverbial food chain.

Whether their call for human civilization to buy time to adapt to "ever more powerful digital minds", as they put it — and beat the digital genies back into the bottle by somehow collectively agreeing "shared safety protocols" and "robust AI governance systems" — was more a self-interested bid by a subsection of technologists to throw a spanner in the engine of more advanced competitors (so they can try to catch up) is one overarching question. But the letter's implication that no laws apply to AI is just obviously wrong, although we can certainly argue over how existing laws apply (or should be applied).
