2 Comments

I think the key aspect of AI that needs regulating is that no human understands how the program works in any meaningful way. The difference from earlier software is that programmers used to be able to make authoritative claims about what their program would and wouldn't do.

Programs that no one can make such claims about probably do need to be regulated more carefully. This might also catch some obscurely written code, or bugs that no one understands, even when no machine learning is involved, but I would argue that is a benefit.

However we frame the regulation, I think this is the key aspect we should care about. To use your fraud example, programs that can commit fraud without any human involvement probably do need separate regulations. To give a specific example: if a seller clearly states, "the following product description was written by generative AI; the seller makes no claims about its accuracy," and the description contains false and misleading claims that would otherwise constitute fraud, does the disclaimer get them out of it? Is using generative AI to write a product description that no human checks inherently illegal? (That would forbid using it to write in a language the seller does not understand, for example.) Who is responsible for the fraud: the seller or the LLM developer? What if the LLM company makes claims about how useful its product is for translating product descriptions?

I think these types of questions may in fact be best answered by new and clearer regulation.


Informative and clearly argued. A good reminder to always start with clear definitions. Thanks!
