The Moral Obligations of Social and Data Companies?
Facebook absolutely has a moral component to its business model; if any company doesn't, it should. Even though Facebook is not the first tech company to acknowledge these responsibilities, it needs to lead the charge and set an example. Using machine learning to seek out patterns of abuse and hate is critical. Training this technology, moving forward, to understand the nuances of human interaction better than perhaps even humans can: that's the challenge. Data sets fed to AI are flawed and biased because humans are flawed and biased. However, machines can be trained to recognize their own bias and work beyond it.
I would posit that where humans struggle with objectivity, machines would not. The greatest effort is conditioning the machine to understand the difference between negative behavior and the free expression of ideas, and people need to be prepared to watch the machines make mistakes as they undergo this complex process of learning. While machines will be new to this, humans are not, and we must recognize that we have not been doing such a stellar job ourselves of balancing free speech against abusive behavior. My opinion: companies like Facebook, Microsoft, and Google not only have a responsibility to integrate ethical machine learning into their business models, but a moral imperative to teach the machines we create an ethical model and, hopefully, to implement it better than even humans themselves have. --Bill Ahern