The revelation of the role that Cambridge Analytica, a data analytics company originating from Britain, played in the election of Donald Trump as United States president caused Facebook users to begin questioning the ethics of the platform and the responsibilities Facebook has in keeping their information and data private.
So, what responsibilities do platform providers like Facebook have to ensure "fake" or "misleading" news and information is not shared on their platforms? Should Facebook monitor people's posts to make sure that they are factually accurate? Facebook does currently have an option where users can report an advertisement as a fake news story. That doesn't necessarily mean that Facebook users will be able to tell whether it is fake news, though, and the company itself doesn't review ads before they are posted. I think with any social media platform, users must acknowledge that they are willingly putting themselves at risk of being exposed to fake media. Avid First Amendment enthusiasts would never be comfortable with having Facebook control what they post online, even if it is fake. The United States has "freedom of speech" as the First Amendment in its Constitution for a reason. Ironically enough, Russia is actually working on passing legislation that would make it illegal to post fake news online, with companies facing fines of up to $800,000 for not addressing it. Although I do not think the US would ever implement such a law due to our emphasis on freedom of speech, I do think Facebook should be held liable for regulating some of the fake content on its website. The company is massive and very capable of beginning to hone in on "fake news" and protecting its users.
Another question that has come to light about the ethics behind what content can be promoted concerns racism. Facebook intervened in advertising in 2017, when anti-Semitic ads were run to target people who expressed these particular political views, as reported by The New York Times. Facebook apologized for the oversight and has hired more people to try to manage this type of advertising. Although I believe that ethically Facebook should be responsible for these types of occurrences, I also want to ask where the line should be drawn. Should Facebook completely eliminate any type of group promoting any hate speech? I think Facebook should, and that it has the capability to do so, just as it does to control fake news. Just as with reporting inappropriate nudity, Facebook should give users the option to report this type of hate speech.
Social media sites should be more heavily regulated than they currently are. Facebook was the first platform of its kind and started a revolution; regulations couldn't have been made in advance because the government wasn't really sure what it would become. Even though legislation hasn't caught up with the social media giant, Facebook should set its own privacy boundaries. Not only would this help the network in terms of public relations, it could also drive its stock price back up. Facebook's stock plummeted after Mark Zuckerberg's testimony before Congress. Having opt-in and opt-out privacy settings is beneficial to users because they get to choose what is shared, and Facebook releases itself from liability. However, some users may agree to sharing their data without even realizing what they're doing or what it's actually being used for.
The role of Russia in the elections has caused Facebook to be heavily criticized as the link that allowed so much data to be gathered on US citizens and advertising to be manipulated to target them. Should Facebook have disclosed to the purchasers of these types of advertisements that they may be in opposition to national interests or have more nefarious implications? I think that whoever promotes this type of advertising is responsible for the regulatory effects that may come with it. If Facebook chooses to allow this type of advertising as a US company, then it should face consequences from the US government. Imposing greater regulation of this type of advertising is critical, as it will help prevent future manipulation of elections. It will also set the standard for other social media platforms, ensuring that they use advertising revenue in accordance with the laws of the country they operate from. Risks of imposing greater regulation of advertising include a decline in advertising revenues, a lack of trust between businesses and the network, and limits on free speech.
Sourced from The Washington Post
When Russia purchased advertisements from Facebook to support Trump's campaign, Facebook should have reported it not only to Congress but also to the public. Especially in trying to combat fake news, it would be beneficial for the public to know that this type of targeting exists so that they can be on the lookout for it. Most people who were targeted had absolutely no idea and didn't know the extent to which their data was being shared and collected, as reported by Vice.
This discussion ultimately comes down to whose responsibility it is to regulate and police this type of activity. Although it would be nice if Facebook genuinely had its users' best interests at heart, at the end of the day it is a public company looking to increase its value. A company of that size cannot be left to police itself; that would be like asking Wall Street to report banking fraud. The government needs to create laws to protect the privacy of its citizens and shield them from some of the corruption that exists on social media platforms.