
Q U I C K   L I N K S

Believing that Aristotelian rhetoric offered an unsurpassed guide to knowledge of human nature and the art of controlling/inflaming “the passions,” Hobbes made a free translation of Aristotle’s Rhetoric, dictated to William Cavendish (later 3rd earl of Devonshire) while Hobbes was his tutor.
  Learn more about Thomas Hobbes’ textbook of rhetorized psychology, A Briefe of the Art of Rhetorique (1st edn., 1637) in the Editor’s Introduction for Lib. Cat. No. THOB1637.

She-philosopher.com’s detailed study of California’s flawed “Good Neighbor Fence Act of 2013” (California Assembly Bill 1404) critiques our postmodern resort to militant ignorance and a demagogic politics of certainty.
  I believe that a classical agonistic politics of persuasion — not the polling and data-driven demagoguery (calculated appeals designed to manipulate us) which controls policy-making today — best serves the type of pluralist democratic society to which many of us aspire.

An IN BRIEF topic promoting the more ethical rhetoric of critical pluralism — an art of engagement & confrontation, born of respect for discomfiting difference.
  Among other useful aphorisms you’ll find there:
  “Difference must be not merely tolerated, but seen as a fund of necessary polarities between which our creativity can spark like a dialectic. Only then does the necessity for interdependence become unthreatening....” —Audre Lorde, Sister Outsider (1984)

The PBS NewsHour has produced an excellent series on the ways in which data aggregators and brokers like Facebook weaponize metadata (e.g., for psychographics, psychographic filtering, and other high-tech forms of psychological warfare) in order to manipulate our behavior: getting users addicted to social media, and encouraging us to promote, buy, consume, vote (or not vote), mobilize, fear, hate, believe and spread lies, lean into “group think,” etc.
1. Part 1 of 4 (“How Facebook’s News Feed Can Be Fooled into Spreading Misinformation”) in Miles O’Brien’s reporting for his weekly segment on the Leading Edge of Technology (first aired on 4/25/2018).
  SUMMARY: “Facebook’s news feed algorithm learns in great detail what we like, and then strives to give us more of the same — and it’s that technology that can be taken advantage of to spread junk news like a virus. Science correspondent Miles O’Brien begins a four-part series on Facebook’s battle against misinformation that began after the 2016 presidential election.”
2. Part 2 of 4 (“Online Anger Is Gold to this Junk-News Pioneer”) in Miles O’Brien’s reporting for his weekly segment on the Leading Edge of Technology (first aired on 5/2/2018).
  SUMMARY: “Meet one of the Internet’s most prolific distributors of hyper-partisan fare. From California, Cyrus Massoumi caters to both liberals and conservatives, serving up political grist through various Facebook pages. Science correspondent Miles O’Brien profiles a leading purveyor of junk news who has hit the jackpot exploiting the trend toward tribalism.”
3. Part 3 of 4 (“Why We Love to Like Junk News that Reaffirms our Beliefs”) in Miles O’Brien’s reporting for his weekly segment on the Leading Edge of Technology (first aired on 5/9/2018).
  SUMMARY: “Facebook is exquisitely designed to feed our addiction to hyper-partisan content. In this world, fringe players who are apt to be more strident end up at the top of our news feeds, burying the middle ground. Science correspondent Miles O’Brien reports on the ways junk news feeds into our own beliefs about politics, institutions and government.”
  Most disturbing about this episode: the ways in which we have refashioned independent thought in terms of “confirmation bias” (for Betty Manlove, anti-establishment news which she perceives as edgy, ergo true):
  “[BETTY MANLOVE:] I believe what I want to believe. I’m too much of an independent thinker to allow emotions to take over. And news is news and opinion is opinion, and so I just go for the true news.
  “[MILES O’BRIEN:] But finding what is true in her news feed is not so easy. She has been convinced Barack Obama was born in Kenya… [...] and Parkland school shooting survivor David Hogg is a fraud.  ¶   So, where do these ideas come from? From the filter bubble created on Facebook by liking a post or clicking on a targeted ad that unwittingly makes users followers of a hyperpartisan page.  ¶   In the mix, some misinformation from Russia. Her grandson helped her find that out by going to a site on Facebook for users to see if they have liked any pages linked to Russia’s Internet Research Agency.” (n. pag.)
  As commentators were quick to point out, Manlove’s beliefs didn’t originate with her Facebook news feed, which O’Brien elsewhere confirms: “Neither Betty nor Gabe say their opinions have been swayed on Facebook, just hardened.”
4. Part 4 of 4 (“Inside Facebook’s Race to Separate News from Junk”) in Miles O’Brien’s reporting for his weekly segment on the Leading Edge of Technology (first aired on 5/16/2018).
  SUMMARY: “At Facebook, there are two competing goals: keep the platform free and open to a broad spectrum of ideas and opinions, while reducing the spread of misinformation. The company says it’s not in the business of making editorial judgments, so they use fact-checkers, artificial intelligence and their users. Can they stop junk news from adapting? Science correspondent Miles O’Brien reports.”
5. A supplementary episode in Paul Solman’s Making Sen$e series, “Why We Should Be More Like Cats than Dogs When It Comes to Social Media” (first aired 5/17/2018).
  SUMMARY: “Computer scientist and virtual reality pioneer Jaron Lanier doesn’t mince words when it comes to social media. In his latest book, Ten Arguments for Deleting Your Social Media Accounts Right Now, [he] says the economic model is based on ‘sneaky manipulation.’ Economics correspondent Paul Solman sits down with Lanier to discuss how the medium is designed to [engage] us and how it could hurt us.” (n. pag.)
  One exchange of note:
  “[JARON LANIER:] ... There’s sort of the cognitive extortion racket now, where the idea is that, you know what, nobody’s going to know about your book, nobody’s going to know about your store, nobody’s going to know about your candidacy unless you’re putting money into these social network things.
  “[PAUL SOLMAN:] Right.  ¶   All that information we share about ourselves online, Lanier argues, is not only used to sell us stuff, but to manipulate our civic behavior in uncivilly destabilizing ways.  ¶   Just look at the spread of fake news and the Cambridge Analytica scandal.
  “[JARON LANIER:] In the last presidential election in the U.S., what we saw was targeted nihilism or cynicism, conspiracy theories, paranoia, negativity at voter groups that parties were trying to suppress.  ¶   The thing about negativity is, it comes up faster, it’s cheaper to generate, and it lingers longer. So, for instance, it takes a long time to build trust, but you can lose trust very quickly.
  “[PAUL SOLMAN:] Right, always easier to destroy than to build.
  “[JARON LANIER:] So, the thing is, since these systems are built on really quick feedback, negativity is more efficient, cheaper, more effective. So if you want to turn an election, for instance, you don’t do it with positivity about your candidate. You do it with negativity about the other candidate.” (n. pag.)
  And from another exchange of note:
  “[JARON LANIER:] So, we’re dealing with statistical effect.  ¶   So let’s say I take a million people, and for each of them, I have this online dossier that’s been created by observing them in detail for years through their phones. And then I send out messages that are calculated to, for instance, make them a little cynical on Election Day if they were tending to vote for a candidate I don’t like.  ¶   I can say, without knowing exactly which people I influenced — let’s say 10 percent became 3 percent less likely to vote because I got them confused and bummed out and cynical. It’s a slight thing, but here’s something about slight changes.  ¶   When you have slight changes that you can predict well, and you can use them methodically, you can actually make big changes.” (n. pag.)
  And again:
  “[JARON LANIER:] Well, it’s even a little sneakier than that, because, for instance, they might be sending you notifications about singles services because, statistically, people who are in the same grouping with you get a little annoyed about that, and that engages them a little bit more.” (n. pag.)
  And finally:
  “[PAUL SOLMAN:] So, how to become a cat? Lanier has long argued that we have to force the social media business model to change, insisting companies should be paid by users, instead of third-party advertisers, subscription, instead of supposedly free TV.” (n. pag.)
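  Lanier’s “statistical effect” rewards a quick back-of-the-envelope check. The minimal Python sketch below is mine, not Lanier’s; it uses only his hypothetical numbers (a million profiled users, 10 percent of whom become 3 percent less likely to vote, read here as a drop of 3 percentage points, a simplifying assumption) to compute the expected aggregate effect:

      # Illustration of Lanier's "statistical effect": small, predictable
      # shifts in individual behavior, applied at scale, add up.
      # All figures are Lanier's hypotheticals, not real data.

      targeted_users = 1_000_000   # people with detailed online dossiers
      share_affected = 0.10        # "let's say 10 percent ..."
      turnout_drop = 0.03          # "... became 3 percent less likely to vote"

      votes_suppressed = targeted_users * share_affected * turnout_drop
      print(f"Expected votes suppressed: {votes_suppressed:,.0f}")
      # prints: Expected votes suppressed: 3,000

  An expected shift of roughly 3,000 votes is invisible at the level of any individual, yet it exceeds the margin of victory in many close contests; that is why “slight changes that you can predict well,” applied methodically, can make big changes.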

For more on the growing dispute over Russian trolls using data-driven demagoguery in the digital agon to foment division and subvert pluralist democratic societies, see:
1. The PBS NewsHour interview with Kathleen Hall Jamieson, author of Cyberwar: How Russian Hackers and Trolls Helped Elect a President (first aired 11/1/2018).
  SUMMARY: “Did the involvement of Russian trolls and hackers swing the 2016 presidential election? Kathleen Hall Jamieson, author of Cyberwar, believes it is ‘highly probable’ that they did. She joins Judy Woodruff to discuss her research on how the Russians found the right messages and delivered them to key audiences using social media — as well as how we can manage foreign election meddling in the future.”
  Of note, Jamieson also blames the mainstream media — not just the new social media — for propagating “Russian stolen content hacked from Democratic accounts illegally.”
  “[KATHLEEN HALL JAMIESON:] The social media platforms have made many changes to try to minimize the likelihood that they will be able to replicate 2016. They have increased the likelihood that they’re going to catch anybody trying to illegally buy ads as a foreign national, for example.  ¶   The place that we haven’t seen big changes is with the press. We haven’t heard from our major media outlets. If tomorrow, somebody hacked our candidates and released the content into the media stream, how would you cover it? Would you cover it the same? And would you assume its accuracy, instead of questioning it and finding additional sourcing for it, before you release it into the body politic?  ¶   I would like to know what the press is going to do confronted with the same situation again.  ¶   I do have some sense of what the social media platforms will do.” (n. pag.)
2. Aaron Maté’s article, “New Studies Show Pundits Are Wrong about Russian Social-Media Involvement in US Politics: Far from being a sophisticated propaganda campaign, it was small, amateurish, and mostly unrelated to the 2016 election” (posted to The Nation website on 12/28/2018).
  According to Maté, Russian trolls “were actually engaging in clickbait capitalism: targeting unique demographics like African Americans or evangelicals in a bid to attract large audiences for commercial purposes. Reporters who have profiled the IRA have commonly described it as ‘a social media marketing campaign.’ Mueller’s indictment of the IRA disclosed that it sold ‘promotions and advertisements’ on its pages that generally sold in the $25-$50 range. ‘This strategy,’ Oxford observes, ‘is not an invention for politics and foreign intrigue, it is consistent with techniques used in digital marketing.’ New Knowledge notes that the IRA even sold merchandise that ‘perhaps provided the IRA with a source of revenue,’ hawking goods such as T-shirts, ‘LGBT-positive sex toys and many variants of triptych and 5-panel artwork featuring traditionally conservative, patriotic themes.’” (n. pag.)

In his book Zucked: Waking Up to the Facebook Catastrophe (Penguin Press, 2019), the tech venture capitalist, early mentor to Mark Zuckerberg, and Facebook investor Roger McNamee “also proposes [like many critics of social media before him, including Jaron Lanier] that digital platforms ditch advertising for subscription-based models (think Netflix). This, he hopes, would tame political microtargeting and end the click race among digital platforms. Funded by subscriptions, the platforms would not need to worry about selling their users’ ‘headspace’ to advertisers.” (Evgeny Morozov’s book review of Zucked, “A Former Social Media Evangelist Unfriends Facebook,” posted to The Washington Post website, 2/14/2019, n. pag.)
  Evgeny Morozov is rightly skeptical of evangelizing subscriber funding as a panacea: “But would McNamee’s subscription-based models reduce addiction? Probably not. As long as user choices (and the data they leave behind) help ‘curate’ digital platforms, subscription-based alternatives will still have incentives to extract user data, deploying it to personalize their offerings and ensure that users do not leave the site. Companies with inferior curation systems would simply be eaten away by their competitors.” (E. Morozov, n. pag.)
  The subscription-based publishing model was developed in 17th-century Britain as an alternative to the traditional patronage model, which for centuries funded the arts & sciences. For details, see She-philosopher.com’s IN BRIEF topic on the early-modern practitioners of subscription.

+

In a PBS NewsHour webessay, “The Dangers of Our ‘New Data Economy,’ and How to Avoid Them” (first aired 3/14/2019), Roger McNamee offers his humble opinion on why, as consumers, we need to stop being passive and take control of how we share our personal information.

If you aren’t worried enough yet about how data brokers are manipulating us at all levels of society, see the feature story, “Big Brother Is Watching, and Wants Your Vote: Data brokers are using phones and other devices to track users and selling the info to political campaigns” by Evan Halper (Los Angeles Times, 2/24/2019, pp. A1 and A12), retitled “Your Phone and TV Are Tracking You, and Political Campaigns Are Listening In” for online posting.
  And it is not just political campaigns that are able to track your movements “with unnerving accuracy”: “Antiabortion groups, for example, used the technology to track women who entered waiting rooms of abortion clinics in more than half a dozen cities. RealOptions, a California-based network of so-called pregnancy crisis centers, along with a partner organization, had hired a firm to track cell phones in and around clinic lobbies and push ads touting alternatives to abortion. Even after the women left the clinics, the ads continued for a month.” (E. Halper, A12)
  Advocacy groups lobbying politicians are also “building ‘geo-fences’ ... around the homes, workplaces and hangouts of legislators and their families, enabling a campaign to bombard their devices with a message and leave the impression that a group’s campaign is much bigger in scope than it actually is.” (E. Halper, A12)
  Most insidious, as things stand now, “Which political campaigns and other clients receive all that tracking information can’t be traced.” (E. Halper, A12)
  At least one individual commenting on this story has called for vigorous regulation of Big Data brokers and their clients: “It is time to outlaw such invasion of privacy in the USA. Make it a felony. Also specify a monetary fine per offense that is high, with a right of private enforcement and the right to class action and nullification of mandatory arbitration as contrary to the public good.” (comment posted by “independentshold thebalance,” n. pag.)

For more on how the filter bubbles constructed for us by Big Data (especially e-commerce analytics firms) and Big Search companies determine what we see and don’t see, what we know and don’t know, see the sidebar entry about Eli Pariser’s tech criticism at She-philosopher.com’s “A Note on Site Design” Web page.

Search engine shenanigans are drawing scrutiny from a number of different groups these days, from socially-conscious Google employees in revolt against business as usual ... to irate conservatives complaining about search results they believe are biased in favor of the liberal establishment ... to me!
  See my discussion of this website’s contrarian business model and design philosophy — including our reliance on ethical search tools — at She-philosopher.com’s “A Note on Site Design” Web page.
  See also the sidebar entry on that same page for an intriguing suggestion re. a Big Search utopian alternative: a public search engine.

While I generally think that a public search engine is a good idea, I’m not sure it would — or could, given First Amendment protections — do much about the spread of junk news, misinformation, tribalism, & hate via social media, which “relies on users to supply the content.”
  Some of the difficulties faced even by private companies trying to stop the rapid-fire spread of junk news online are raised in the Los Angeles Times editorial, “Pinterest Takes On Fake News” (2/24/2019, p. A15), retitled “Pinterest Strikes Back at Online Disinformation. Are You Paying Attention, Facebook?” for online posting.
  The editorial applauds Pinterest’s trial run restricting the spread of “fake health news” by “disabling searches related to these topics. Now, searching on Pinterest for ‘vaccine harms’ will return a blank page with the explanation, ‘Pins about this topic often violate our community guidelines, so we’re currently unable to show search results.’ The same happens on a search for ‘diabetes cures,’ for example.” (A15)
  However, Pinterest’s latest attempt at controlling “what gets found and shared” is a work-in-progress, as the platform experiments with redirects for filtered searches which push vetted, higher-quality health information at users (here taking on an editorial role that other purveyors of junk news, such as Facebook, have refused to assume). “Anti-vaxxers may bristle at the censorship Pinterest is imposing and complain that their speech rights are being infringed. But as a private company, Pinterest has the right to enforce its own rules for what gets shared on its site, and to define the line between idle chatter and harmful misinformation. We welcome its efforts on the health front, and hope it blazes a trail for other social networks to follow.” (Editorial Board for The Los Angeles Times, A15)

As numerous position papers from the Electronic Frontier Foundation (EFF) have warned — e.g., “Platform Censorship: Lessons From the Copyright Wars” by Corynne McSherry (posted 9/26/2018) — most current schemes for preventing the spread of “fake health news” on social-media platforms have had unintended consequences, sometimes censoring the very voices and “good” content we wish to promote.
  Even the best models for “content moderation” on the Web involve trade-offs, to which we need to give a lot more thought.
  E.g., EFF has argued that “Corporate Speech Police Are Not the Answer to Online Hate” (posted 10/25/2018). EFF acknowledges that a range of players “are trying to articulate better content moderation practices, and we appreciate that goal. But we are also deeply skeptical that even the social media platforms can get this right, much less the broad range of other services that fall within the rubric proposed here. We have no reason to trust that they will, and every reason to expect that their efforts to do so will cause far too much collateral damage.” (Corynne McSherry, n. pag.)
  Seemingly benign calls to treat social-media platforms created and run by corporations as “public forums” actually threaten “the free speech rights of Internet users and the platforms they use.” See the Electronic Frontier Foundation position paper, “EFF To U.S. Supreme Court: Rule Carefully in Free Speech Case about Private Operators, State Actors, and the First Amendment,” by Karen Gullo and David Greene (posted 12/12/2018).
  And laws like FOSTA (Allow States and Victims to Fight Online Sex Trafficking Act) — making it illegal to post content on the Internet that “facilitates” prostitution or “contributes” to sex trafficking — have led Internet websites and forums “to censor speech with adult content on their platforms to avoid running afoul of the new anti-sex trafficking law FOSTA. The measure’s vague, ambiguous language and stiff criminal and civil penalties are driving constitutionally protected content off the Internet.  ¶   The consequences of this censorship are devastating for marginalized communities and groups that serve them, especially organizations that provide support and services to victims of trafficking and child abuse, sex workers, and groups and individuals promoting sexual freedom. Fearing that comments, posts, or ads that are sexual in nature will be ensnared by FOSTA, many vulnerable people have gone offline and back to the streets, where they’ve been sexually abused and physically harmed.” See EFF’s position paper, “With FOSTA Already Leading to Censorship, Plaintiffs Are Seeking Reinstatement of their Lawsuit Challenging the Law’s Constitutionality” by Karen Gullo and David Greene (posted 3/1/2019).

Amendment I to the United States Constitution reads in full: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” (adopted in 1791)
  It is customary to think of the freedoms of speech and press enshrined in the Bill of Rights at the end of the 18th century as representing the founding principles of these United States.
  But our 17th-century founders saw things somewhat differently. At a time when deep political divisions, and growing Stuart authoritarianism in England, threatened to derail popular government in New Jersey, the colonists’ representative assembly chose to limit freedom of the press in the new proprietary colony, making it a criminal offense to “wittingly and willingly forge or publish any false news.”
  Hence, government regulation of mendacious speech and communication in this country dates back to 1675, when the first law against the spread of fake news (called “false news” in the statute) — “whereby the minds of people are frequently disquieted or exasperated in relation to publick affairs” — was enacted by the Third General Assembly in the rebellious proprietorship of New Jersey. Propagators of false news were to be fined ten shillings, which was also the penalty for a first offense of slander (a second offense cost twenty shillings). A subsequent law enacted by the Sixth General Assembly of New Jersey imposed even harsher penalties for spreading fake news and slander.
  These initial attempts to promote populist truth-telling in and about “any part of America” are documented in a new write-up on “prudential” law-making, to be added to the section of She-philosopher.com’s study page re. California’s flawed “Good Neighbor Fence Act of 2013” entitled, “Legislative process in the most ‘rebellious’ of the founding Thirteen American Colonies (East New Jersey)” (still forthcoming, as of March 2019).
  See also the sidebar entry (this page) on academic definitions of “fake news.”

The Electronic Frontier Foundation (EFF) continues to oppose most government mandates responding to the “fake news” phenomenon.
  In the position paper, “EFF to the Inter-American System: If You Want to Tackle ‘Fake News,’ Consider Free Expression First” (posted 2/28/2019), Veridiana Alimonti notes that “Disinformation flows are not a new issue, neither is the use of ‘fake news’ as a label to attack all criticism as baseless propaganda. The lack of a set definition for this term magnifies the problem, rendering its use susceptible to multiple and inconsistent meanings. Time and again legitimate concerns about misinformation and manipulation were misconstrued or distorted to entrench the power of established voices and stifle dissent. To combat these pitfalls, EFF’s submission presented recommendations — and stressed that the human rights standards on which the Inter-American System builds its work, already provide substantial guidelines and methods to address disinformation without undermining free expression and other fundamental rights.  ¶   The Americas’ human rights standards — which include the American Convention on Human Rights — declare that restrictions to free expression must be (1) clearly and precisely defined by law, (2) serve compelling objectives authorized by the American Convention, and (3) be necessary and appropriate in a democratic society to accomplish the objectives pursued as well as strictly proportionate to the intended objective. New prohibitions on the online dissemination of information based on vague ideas, such as ‘false news,’ for example, fail to comply with this three-part test. Restrictions on free speech that vaguely claim to protect the ‘public order’ also fall short of meeting these requirements.” (V. Alimonti, n. pag.)
  In another position paper, “Victory! Dangerous Elements Removed from California’s Bot-Labeling Bill” (posted 10/5/2018), Jamie Williams and Jeremy Gillula describe how California Senate Bill 1001 — “a new law requiring all ‘bots’ used for purposes of influencing a commercial transaction or a vote in an election to be labeled” — “originally included a provision that would have been abused as a censorship tool, and would have threatened online anonymity and resulted in the takedown of lawful human speech.” (J. Williams and J. Gillula, n. pag.)
  Also see “EFF to Court: Remedy for Bad Content Moderation Isn’t to Give Government More Power to Control Speech” by David Greene (posted 11/26/2018), which documents EFF’s ongoing struggle for a voluntary, not government-mandated, “human rights framing for removing or downgrading content and accounts” from social-media sites: “We’ve taken Internet service companies and platforms like Facebook, Twitter, and YouTube to task for bad content moderation practices that remove speech and silence voices that deserve to be heard. We’ve catalogued their awful decisions. We’ve written about their ambiguous policies, inconsistent enforcement, and failure to appreciate the human rights implications of their actions. We’re part of an effort to devise a human rights framing for removing or downgrading content and accounts from their sites, and are urging all platforms to adopt them as part of their voluntary internal governance. Just last week, we joined more than 80 international human rights groups in demanding that Facebook clearly explain how much content it removes, both rightly and wrongly, and provide all users with a fair and timely method to appeal removals and get their content back up.  ¶   These efforts have thus far been directed at urging the platforms to adopt voluntary practices rather than calling for them to be imposed by governments through law. Given the long history of governments using their power to regulate speech to promote their own propaganda, manipulate the public discourse, and censor disfavored speech, we are very reluctant to hand the U.S. government a role in controlling the speech that appears on the Internet via private platforms. This is already a problem in other countries.” (D. Greene, n. pag.)
  And EFF’s alternative approach to reform has generated some real wins: “Facebook Responds to Global Coalition’s Demand That Users Get a Say in Content Removal Decisions” by Karen Gullo and Jillian C. York (posted 12/20/2018).
  Click/tap here (direct link to PDF file) to read the text of EFF’s recommended Santa Clara Principles on Transparency and Accountability in Content Moderation — a set of minimum content moderation standards with a human rights framing created by EFF and its partners.

For modern political communities long accustomed to self-government (which our 17th-century founders were not), an alternate community-based model of quality control — for promoting a popular culture of truth, and restricting the spread of fake news — is being pioneered at Wikipedia, with its “large, diverse editor base” of amateurs, which “makes it very difficult for any person or group to censor and impose bias” over time.

Another alternative proposal for building digital democracy from the grassroots: “The Rise of a Cooperatively Owned Internet: Platform cooperativism gets a boost” by Nathan Schneider (The Nation, vol. 303, no. 18, 31 Oct. 2016, p. 4).
  “Platform cooperatives weren’t something one could even call for until December 2014, when New School professor Trebor Scholz posted an online essay about ‘platform cooperativism,’ putting the term on the map. A year later, he and I organized a packed conference on the subject in New York City. We’re about to publish Ours to Hack and to Own [OR Books, 2017], a collective manifesto with contributions from more than 60 authors that Scholz and I edited. The authors include leading tech critics like Yochai Benkler, Douglas Rushkoff, and Astra Taylor, as well as entrepreneurs, labor organizers, workers, and others. The theory and practice of platform cooperativism are spreading.” (N. Schneider, 4)

As for Facebook’s latest solution to the growing problems of a digital agon shaped by data-driven demagoguery: see “Mark Zuckerberg Says He’ll Reorient Facebook toward Privacy and Encryption,” by Elizabeth Dwoskin of The Washington Post (posted to the Los Angeles Times website, 3/6/2019).
  Of note, “Zuckerberg described the changes using the metaphor of transforming Facebook from a town square into a living room. ‘As I think about the future of the internet,’ he wrote, ‘I believe a privacy-focused communications platform will become even more important than today’s open platforms. Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.’” (E. Dwoskin, n. pag.)
  “The moves — which Zuckerberg, in a blog post, outlined in broad strokes rather than as a set of specific product changes — would shift the company’s focus from a social network in which people widely broadcast information to one in which people communicate with smaller groups, with their content disappearing after a short period of time, Zuckerberg said. Facebook’s core social network is structured around public conversation, but it also owns private messaging services WhatsApp and Messenger, which are closed networks. Instagram, Facebook’s photo-sharing platform, has also seen huge growth thanks to ephemeral messaging.” (E. Dwoskin, n. pag.)
  I fail to see how any of this addresses the real issues with Facebook’s profitable practice of building detailed psychographic profiles of users, which can be sold to unidentified parties, as well as used by Facebook and advertisers to “target” us as website visitors, consumers, citizens, and voters.
  I’m also not sure that social media denizens are going to be all that eager to vacate the town square and keep to their living rooms! I expect liberated grandmothers the world over have come to enjoy digital life beyond the domestic sphere, encountering the unknown, and communicating with hundreds of voices beyond their immediate circle of family and friends.
  Fortunately, Zuckerberg’s vision of “the future of the internet” — a more intimate digital public sphere, colonized by private capital — is not our only option.

It is not news to designers (in any field, including tech) that brands operate as emotional triggers.
  But a study published in the Proceedings of the National Academy of Sciences suggesting that we “limit the presentations” of brands and other graphic symbols (by which we sort ourselves into tribes) online in order “to eliminate echo chambers and partisan rancor on social media” is news worth debating.
  As reported by Nsikan Akpan in “How Seeing a Political Logo Can Impair Your Understanding of Facts” (posted to the PBS NewsHour website, 9/3/2018):
  “Merely seeing these political and social labels can cause you to reject facts that you would otherwise support,” according to researchers.
  “There’s a name for the behavioral pattern observed among participants who saw the logos — priming — and it is common among political and social discourse. Research shows that priming with small partisan cues, whether they involve politics, race or economics, can sway the opinions of people. These knee-jerk decisions happen even if you’re encountering an issue for the first time or have little time to react.  ¶   ‘When people are reminded of their partisan identity — when that is brought to the front of their minds — they think as partisans first and as “thoughtful people” second,’ Dannagal Young, a communications psychologist at the University of Delaware, told the PBS NewsHour via email.  ¶   Young said this happens because logos prompt our minds to change how they process our goals. When reminded of our partisan identity, we promote ideas that [are] consistent with our attitudes and social beliefs in a bid to seem like good members of the group — in this case, Democrat or Republican.  ¶   Like teenagers at a digital school lunch table, we emphasize our most extreme opinions and place less weight on facts. When partisan cues are stripped away, people made considerations based on objective accuracy, rather than choosing goals by beliefs or peer pressure.  ¶   Young said any online social network — including the ones in the study — are conceptually distinct from the way humans exist in the world, but Centola’s experiments offer insights into how partisan cues can affect people’s attitudes and opinions in digital spaces.  ¶   The study also offers clues for journalists reporting on partisan issues as well as for the designers of social networks like Facebook, which has pledged to reduce the damage caused by the spread of political news and propaganda on its platform.  ¶   ‘The biggest takeaway for me is that individuals and journalists seeking to overcome partisan biases need to drop the “Republicans say this and Democrats say that” language from their discussion of policy,’ Young said. ‘These findings encourage journalists to cover policy in ways that are more about the substance of the issues rather than in terms of the personalized, dramatized political fights.’” (Nsikan Akpan, n. pag.)

As noted above, EFF has argued that our “lack of a set definition for” the term “fake news” presents numerous problems for would-be regulators. Trying to reconcile “multiple and inconsistent meanings” across the ages further complicates things.
  E.g., post-2016, Donald Trump has popularized a narrow and narcissistic definition of “fake news” as anything that is critical of him or his presidency.
  In contrast, some of our 17th-century founders, rebelling against the government’s sedition laws (which deem the truth of the matter irrelevant), took a much broader and radically-democratic view in 1670s New Jersey, defining “false news” as “wittingly or willingly mak[ing] and publish[ing] any lye ... with an intent to deceive people with false reports,” “whereof no certain authority or authentick letters out of any part of America, can be produced” in evidence of truthfulness, and “whereby the minds of people are frequently disquieted or exasperated in relation to publick affairs.” And the penalty for disrupting the public sphere in this manner was severe: any perpetrator of false reports was “to be stockt or whipt” if they lacked the means to pay the escalating fines which “shall be levied upon his or their estate, for the use of the publick.” (Were we still subject to our founders’ original laws & values, pathological liars in high places, such as Donald Trump, would be in big trouble! ;-)
  Given our evolving cultural anxieties over “fake news” in a post-First Amendment digital age — when anyone with an Internet connection can easily “deceive people with false reports” — it is to be expected that academic researchers would also look into the concept, in hopes of better understanding what it is and how it propagates.
  Axel Gelfert’s article, “Fake News: A Definition,” published in the academic journal, Informal Logic: Reasoning and Argumentation in Theory and Practice, stresses the importance of distinguishing fake news from related, but distinct, types of public disinformation, false or misleading claims, and propaganda. “Fake news, I argue, is best defined as the deliberate presentation of (typically) false or misleading claims as news, where the claims are misleading by design.” (A. Gelfert, 85–6)
  Noting that “Fake news is not itself a new phenomenon,” Gelfert emphasizes the novel effects of it “when combined with online social media that enable the targeted, audience-specific manipulation of cognitive biases and heuristics”: this “forms a potent — and, as the events of 2016 show, politically explosive — mix.... [O]nline social media, which, as a Psychology Today article puts it, work on cognitive biases ‘like steroids’ ... has opened up new systemic ways of presenting consumers with news-like claims that are misleading by design. As a result, given the increasing permeability between online and offline news sources, and with traditional news media often reporting on fake news in order to debunk it (a worthy goal that is rendered ineffective by further cognitive biases such as source confusion, belief perseverance, and the backfire effect), we find ourselves increasingly confronted with publicly disseminated disinformation that masquerades as news, yet whose main purpose it is to feed off our cognitive biases in order to ensure its own continued production and reproduction.” (A. Gelfert, 113)
  For more on the first American laws against the spread of fake news, see sidebar entry (this page).

To learn more about the engraver of the 17th-century head-piece pictured to the left, see the IN BRIEF biography for Wenceslaus Hollar.

N O T E

Can’t find something you’re sure you learned about here?
  Try using our customized search tool (search box at the top of the right-hand sidebar on this page), which is updated every time new content is added to the public areas of the website, thus ensuring the most comprehensive and reliable searches of She-philosopher.com.
  Learn more about our ethical, customized search tool here.
  To ensure that you’re viewing She-philosopher.com’s most recently-updated content (both here and elsewhere at the website), don’t forget to use your browser’s Reload current page button — typically, an icon featuring a broken circle, with arrowhead on one end. On some computers, the keyboard shortcuts Ctrl+R and F5 (or Command-R) will also work; or you can right-click for a context-sensitive menu with the Reload this page button/command.
  Refreshing a page is especially important if you find yourself visiting the same Web page more than once within a relatively short time frame. I may have made modifications to the page in the interim, and you won’t always know this unless you force your browser to access the server (rather than your computer’s cache) to retrieve the requested Web page.



First Published:  23 February 2019
Revised (substantive):  25 March 2019


“…So easie are men to be drawn to believe any thing, from such men as have gotten credit with them; and can with gentlenesse, and dexterity, take hold of their fear, and ignorance.”

—THOMAS HOBBES (1588–1679), Leviathan, or the Matter, Forme, & Power of a Common-Wealth Ecclesiasticall and Civill, 1st edn., 1651, p. 56


S O R R Y,  but this Web page on data-driven demagoguery — harnessing the power of Big Data and psychographics in the service of rhetorical trickery — is still under construction.

printer's decorative block

^ 17th-century head-piece, showing six boys with farm tools, engraved by Wenceslaus Hollar (1607–1677).

We apologize for the inconvenience, and hope that you will return to check on its progress another time.

If you have specific questions relating to She-philosopher.com’s ongoing research projects, contact the website editor.

