The discussion around the ethics and legality of AI has been constant over the last year — and it’s culminating in some important trials coming up.

I won’t go into the entire debate here — I just want to focus on a specific argument I often hear about the way these large models are trained. It often goes something like: «But how is this different from how humans have always learned and iterated on previous knowledge?» or «The information was available on the open web, so it can be used for anything!»

I think these are terrible arguments.

Humans are allowed into shopping malls.

However, that’s simply not an argument that cars should be allowed there as well — whether they’re driven by a human or autonomously.

But while it would be «very bad» if someone drove through a mall with their Hummer, it would only be «annoying» if someone did it with their RC car. I’m not saying the opposite (that if humans can do something, machines can’t) — I’m just saying that it’s not an argument, and that we have to evaluate every «machine» for what it is.

Because scale and context matters.

At some point, we decided that cars were a large enough departure from things like bikes or horse and carriage that they required their own rules. And today you need a license to drive a car, but not a bike. And while bikes don’t really need speed limits, cars absolutely do, because they’re capable of much more. They’re just… different.

I view AI the same way, and that’s why I’m a bit annoyed that so much of the discussion is about «ancient» copyright laws written for a different time. I wish we’d just jump straight to what we think is right, and make laws accordingly. 1

Now, I absolutely have issues with copyright laws and how they’re enforced and who they benefit. But as someone who plays in a small band, I wouldn’t like it if someone took one of our songs and uploaded it to YouTube as their own. At the same time, I wouldn’t mind if a performer learned the song and played it for money on the street — because scale and context matters.

Could we please evaluate it for what it is?

I just want to get past «But technically, according to these laws written for something completely different, it might be fair use» and «But humans have done something similar, so these machines should be allowed to do the same at a global scale».

AI ≠ Napster

AI ≠ Humans

Could we please talk more about what we think is fair and what we actually want?

Imagine if someone, at the beginning of the last century, said: «You can’t put the genie back in the bottle — these cars are driving all over the place, we can’t start regulating them now»?

A black and white image of the first Benz.

ChatGPT 1.0 (or perhaps I’m losing track of my own metaphors).

PS: Let me add that I think AI tools can be immensely useful. I think too many believe you have to choose between

  • being skeptical of the ethics, and thinking that they’re useless, hallucinating bullshit machines, or
  • thinking that they’re useful, and that they thus must be ethical and right.

But you can mix and match here, people! 👆🏻

I’m not calling for the removal of all AI tools. 2 But we, as a society, get to choose what kind of society we want! And if we think these tools are more unethical than useful, we can make them illegal in their current form. And I wish those who argue for why they should be legal would say why, instead of just «Because humans…».

  1. But sadly, even though the problem is global, most of it is decided in a country with about 4% of the world’s population, which doesn’t really seem capable of passing laws at the moment. ↩︎

  2. At least not in this post! I’m honestly not sure… ↩︎