Algorithms can learn to collude.
Two law professors, Ariel Ezrachi of Oxford and Maurice E. Stucke of the University of Tennessee, have a working paper arguing that when computers set prices for goods and services (say, at Amazon or Uber), the potential for collusion is even greater than when humans set them.
Computers can't hold a back-room conversation to fix prices, but they can predict how other computers will behave. And with that prediction, they can effectively cooperate with one another in advancing their own profit-maximizing interests.
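To see how that can happen without any agreement, here is a minimal sketch, assuming a made-up market: the prices, the firm names, and the matching rule are all hypothetical, invented for illustration rather than taken from the paper. Two pricing bots never exchange a message; each simply matches whatever the rival charges and probes upward when prices are tied. That rule alone lifts both prices to the high level and keeps them there, because any undercut is matched immediately and so earns nothing durable.

```python
# Hypothetical "collusive" and "competitive" price levels (invented numbers).
HIGH, LOW = 10.0, 6.0

def respond(my_price: float, rival_price: float) -> float:
    """One firm's pricing rule, reacting only to the rival's posted price."""
    if rival_price != my_price:
        # Match whatever the rival did: this punishes undercutting
        # and follows price increases.
        return rival_price
    # Prices are tied, so probe upward toward the high price.
    return HIGH

a = b = LOW                        # both firms start at the competitive price
for t in range(3):                 # firms reprice in turn, observing the rival
    a = respond(a, b)
    b = respond(b, a)
    print(f"round {t}: A={a:.0f}  B={b:.0f}")   # locks in at A=10, B=10

# A one-off undercut is matched immediately, making it unprofitable,
# and prices climb straight back to the high level:
b = LOW
for t in range(3, 6):
    a = respond(a, b)
    b = respond(b, a)
    print(f"round {t}: A={a:.0f}  B={b:.0f}")
```

Neither bot is programmed to "collude"; each just reacts to a predictable rival. That predictability is what does the work.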
Sometimes, a computer is just a tool that helps humans collude, and that collusion can, in theory, be prosecuted. But sometimes, the authors find, the computer learns to collude on its own. Can a machine be prosecuted?
How does antitrust law punish a computer? If an algorithm isn't programmed to collude, but ends up doing so independently through machine learning, it isn't clear that the law can stop it.