ARE ROBOTS ENTITLED TO 'FREEDOM OF SPEECH'?
The thing about robots is that nobody is quite sure what the next step will be.
For nearly half a century, the US legal system has led a double life of sorts. On the one hand, the Supreme Court has stated that journalists have no greater or fewer rights than other citizens. On the other, the lower courts have generally upheld, or let stand, numerous laws and privileges that give journalists special protection.
Most of these laws and privileges were devised before the Web was publicly available, and the case law is inconsistent about whom these protections apply to online. Over the years, state and federal courts have tried a variety of approaches to define journalism and to determine who counts as a journalist.
In recent years, citizen publishers have often invoked shield laws, which vary significantly in their definitions of whom they protect. The lower courts have done the hard work in this area, and the Supreme Court has stayed out of it.
Now, however, a new question arises: where do robots come in?
Digital technologies have already challenged journalists to distinguish their work from the countless other types of information that flood virtual spaces. Now non-human entities have the potential to muddy the waters even more.
Courts will soon have to explore whether AI communicators have rights as publishers and whether a bot can be entitled to some form of 'journalistic protection'. If courts focus only on the publisher, it will be difficult for AI communicators to receive journalistic protection. If they focus instead on what was published, AI communicators have a better chance of succeeding, particularly if their content can be seen as a public good.
While giving robots some form of free speech may sound shocking, a court decision in favour of an AI entity could benefit news organisations, some of which, like AP and Reuters, have published AI-constructed stories for years.
A good example is the daily stock-market roundup: many such stories could be understood as a public good (think news alerts) and thus receive journalistic legal protection, but only if the courts focused on what was published rather than on how it was published.
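To make the idea concrete, here is a minimal sketch of template-driven story generation, the basic technique behind such automated roundups. It is not AP's or Reuters' actual system; the thresholds, phrasing and market figures are invented for illustration.

```python
# A minimal, hypothetical sketch of template-driven automated journalism.
# Not AP's or Reuters' actual pipeline; thresholds and data are invented.

def describe_move(change_pct: float) -> str:
    """Map a raw percentage change to a human-readable verb phrase."""
    if change_pct > 1.0:
        return f"surged {change_pct:.1f}%"
    if change_pct > 0:
        return f"edged up {change_pct:.1f}%"
    if change_pct < -1.0:
        return f"tumbled {abs(change_pct):.1f}%"
    if change_pct < 0:
        return f"slipped {abs(change_pct):.1f}%"
    return "closed flat"

def write_roundup(index_name: str, close: float, change_pct: float) -> str:
    """Fill a fixed sentence template with the day's market figures."""
    return (f"The {index_name} {describe_move(change_pct)} on the day, "
            f"closing at {close:,.2f}.")

# Hypothetical closing figures, for illustration only.
print(write_roundup("S&P 500", 4567.89, 0.6))
# -> The S&P 500 edged up 0.6% on the day, closing at 4,567.89.
```

Nothing in this pipeline involves human judgment once the template is written, which is precisely what makes the question of who, or what, published the story so awkward for courts.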
These issues become more complex in the context of fake news and clickbait. Recent elections in the US, the United Kingdom and France saw bots flood social media with false or intentionally misleading content. That can hardly be seen as a public good.
This requires us to identify what is human about journalism, and what is fundamental to it. Could a bot's programmer invoke a journalistic shield law to protect the program's code, including the sources it used to construct a report, from compelled disclosure? And what if a bot requests a file under FOIA: should it be exempt from fees because it intends to scrape the data and publish it in tweets or on a blog?
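That FOIA hypothetical is easy to picture in code. The sketch below fetches a released CSV file and condenses its rows into tweet-sized posts; the URL and column names are invented for illustration, and actually posting the results would require a platform's API.

```python
# A hypothetical sketch of the FOIA-scraping bot described above.
# The URL and column names are invented; no real agency release is implied.
import csv
import io

import requests  # third-party: pip install requests

FOIA_CSV_URL = "https://example.gov/foia/released/spending_2024.csv"  # hypothetical

def fetch_rows(url: str) -> list[dict]:
    """Download a released CSV file and parse it into dictionaries."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return list(csv.DictReader(io.StringIO(resp.text)))

def to_post(row: dict) -> str:
    """Condense one record into a tweet-sized summary (<= 280 chars)."""
    text = (f"{row['agency']} spent ${float(row['amount']):,.0f} "
            f"on {row['description']} ({row['date']}).")
    return text[:280]

if __name__ == "__main__":
    for row in fetch_rows(FOIA_CSV_URL)[:5]:  # publish the first few records
        print(to_post(row))
```

Whether the entity running such a script is a journalist, a publisher or merely a program is exactly the question the courts will have to settle.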
These are some of the questions the courts will have to take up, and they are critical for us to confront as we move into the fourth wave of networked communication: an era of complex relationships between humans and artificially intelligent communicators.