There is unrest afoot in the AI research community. Some of it calls into question whether the artificial intelligence being developed in the years to come will affect legal rights to freedom of speech, while other concerns center on what government institutions might do with the technology.
Following the debut of new technology by companies like Google and Amazon featuring voice bots that can schedule appointments over the phone (while sounding eerily human), California Senator Robert M. Hertzberg introduced a bill to restrict such activities by requiring bots to identify themselves as such.
According to the proposed bill:
“This bill would make it unlawful for any person to use a bot, as defined, to communicate or interact with natural persons in California online with the intention of misleading and would provide that a person using a bot is presumed to not act with the intent to mislead if the person discloses that the bot is not a natural person. The bill would require an online platform to enable users to report violations of this prohibition, and would require the online platform to respond to the reports and, upon request, provide the Attorney General with specified related information.”
The bill drew ire among some in futurist circles, who responded that such legislation could infringe on free speech. According to the Electronic Frontier Foundation (EFF), described by Futurism.com as “a non-profit designed to protect civil liberties in the digital age,” it’s not the bots whose right to free speech may be at stake:
“Bots are used for all sorts of ordinary and protected speech activities, including poetry, political speech, and even satire, such as poking fun at people who cannot resist arguing — even with bots. Disclosure mandates would restrict and chill the speech of artists whose projects may necessitate not disclosing that a bot is a bot.”
As this exchange of ideologies shows, the presence of artificial intelligence is already fomenting interesting debate around privacy, transparency, and, yes, freedom of speech. However, there may be more at stake than how AI, present or future, may complicate the way we coexist.
Wired reports that the Pentagon “is planning a new Joint Artificial Intelligence Center to serve all US military and intelligence agencies,” an effort that may be related to the Pentagon’s existing Project Maven.
What, you may be wondering, is Project Maven? In a memorandum dated April 26, 2017, titled “Establishment of an Algorithmic Warfare Cross-Functional Team (Project Maven),” Deputy Secretary of Defense Bob Work stated: “As numerous studies have made clear, the Department of Defense (DoD) must integrate artificial intelligence and machine learning more effectively across operations to maintain advantages over increasingly capable adversaries and competitors.”
With that memorandum, Work established the Pentagon’s Project Maven; he recently told Wired that the project is “exceeding my expectations.”
Wired further reports that “Google’s precise role in Project Maven is unclear—neither the search company nor the Department of Defense will say.” However, Google’s role in the program is believed to involve software supporting the operation of drones used overseas.
However, not all is well between the DoD and the California tech giant: as many as 4,000 Google employees are now protesting the Pentagon’s expansion of the program. Many details of Google’s involvement in Project Maven remain off the record, but the thousands of employees protesting its continuation should tell us something.
In an era when concerns over privacy and transparency have become paramount in Western society, apprehension at the proliferation of advanced AI and its use in warfare is only reasonable, especially when that use is aided by industry leaders like Google.
It is understandable that lawmakers want to protect the public’s right to know whether the communications they receive come from artificially intelligent bots. However, perhaps there should be equal concern among elected officials about what our government is doing with advanced AI, especially when the proverbial canary in the coal mine is thousands of worried Silicon Valley employees who think there’s something just a bit rotten in the state of Denmark.