When robots collude

By Betty on May 16, 2017

Algorithms can learn to collude. 

Two law professors, Ariel Ezrachi of Oxford and Maurice E. Stucke of the University of Tennessee, have a working paper arguing that when computers set prices for goods and services (say, at Amazon or Uber), the potential for collusion is even greater than when humans set the prices.

Computers can't have a back-room conversation to fix prices, but they can predict the way that other computers are going to behave. And with that information, they can effectively cooperate with each other in advancing their own profit-maximizing interests.
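To make that mechanism concrete, here is a minimal toy sketch (our illustration, not a model from the Ezrachi and Stucke paper; all names and numbers are hypothetical): two pricing bots never communicate, but each observes its rival's last posted price and applies the same simple rule, and prices drift above the competitive level anyway.

```python
# Toy simulation of tacit algorithmic collusion. Two bots never exchange
# a word; each just matches its rival's last price and probes slightly
# higher, up to a cap. The numbers below are illustrative assumptions.

COST = 1.00          # marginal cost: the competitive price floor
MONOPOLY_CAP = 5.00  # price neither bot will post above
PROBE = 0.10         # how far each bot nudges above the rival's price

def next_price(rival_last_price: float) -> float:
    """Match the rival's last observed price, then probe a bit higher."""
    return min(rival_last_price + PROBE, MONOPOLY_CAP)

price_a, price_b = COST, COST  # both start at the competitive price
for round_num in range(1, 51):
    # Each bot reacts only to the price its rival posted last round.
    price_a, price_b = next_price(price_b), next_price(price_a)
    if round_num % 10 == 0:
        print(f"round {round_num:2d}: A = {price_a:.2f}  B = {price_b:.2f}")

# Prices climb from 1.00 to the 5.00 cap: a supra-competitive outcome
# emerges from two independent profit-maximizing rules, with no agreement.
```

In this sketch, no line of code says "collude"; coordination is an emergent property of each bot's response to the other, which is exactly what makes the legal question below so hard.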

Sometimes a computer is just a tool that helps humans collude, conduct that can, at least in theory, be prosecuted. But sometimes, the authors find, the computer learns to collude on its own. Can a machine be prosecuted?

How does antitrust law punish a computer? If an algorithm isn't programmed to collude, but ends up doing so independently through machine learning, it isn't clear that the law can stop it.

Discussion
According to the article, what are the legal problems caused by self-learning technology?
Can you think of any other examples where the law has failed to keep up with advances in technology?
How can governments and legal authorities keep up with technological advances?