
Thread: The Futures of Earth Prime and Earth II

    Emil El Zapato


    The Future of Truth and Misinformation Online

    Experts are evenly split on whether the coming decade will see a reduction in false and misleading narratives online. Those forecasting improvement place their hopes in technological fixes and in societal solutions. Others think the dark side of human nature is aided more than stifled by technology.
    BY JANNA ANDERSON AND LEE RAINIE

    In late 2016, Oxford Dictionaries selected “post-truth” as the word of the year, defining it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”

    The 2016 Brexit vote in the United Kingdom and the tumultuous U.S. presidential election highlighted how the digital age has affected news and cultural narratives. New information platforms feed the ancient instinct people have to find information that syncs with their perspectives: A 2016 study that analyzed 376 million Facebook users’ interactions with over 900 news outlets found that people tend to seek information that aligns with their views.

    This makes many vulnerable to accepting and acting on misinformation. For instance, after fake news stories in June 2017 reported that Ethereum’s founder Vitalik Buterin had died in a car crash, Ethereum’s market value was reported to have dropped by $4 billion.

    Misinformation is not like a plumbing problem you fix. It is a social condition, like crime, that you must constantly monitor and adjust to.
    TOM ROSENSTIEL

    When BBC Future Now interviewed a panel of 50 experts in early 2017 about the “grand challenges we face in the 21st century,” many named the breakdown of trusted information sources. “The major new challenge in reporting news is the new shape of truth,” said Kevin Kelly, co-founder of Wired magazine. “Truth is no longer dictated by authorities, but is networked by peers. For every fact there is a counterfact, and all these counterfacts and facts look identical online, which is confusing to most people.”

    Americans worry about that: A Pew Research Center study conducted just after the 2016 election found 64% of adults believe fake news stories cause a great deal of confusion and 23% said they had shared fabricated political stories themselves – sometimes by mistake and sometimes intentionally.

    The question arises, then: What will happen to the online information environment in the coming decade? In summer 2017, Pew Research Center and Elon University’s Imagining the Internet Center conducted a large canvassing of technologists, scholars, practitioners, strategic thinkers and others, asking them to react to this framing of the issue:

    The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation.

    The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas?

    Respondents were then asked to choose one of the following answer options:

    The information environment will improve – In the next 10 years, on balance, the information environment will be IMPROVED by changes that reduce the spread of lies and other misinformation online.

    The information environment will NOT improve – In the next 10 years, on balance, the information environment will NOT BE improved by changes designed to reduce the spread of lies and other misinformation online.

    Some 1,116 responded to this nonscientific canvassing: 51% chose the option that the information environment will not improve, and 49% said the information environment will improve. (See “About this canvassing of experts” for details about this sample.) Participants were next asked to explain their answers. This report concentrates on these follow-up responses.

    Their reasoning revealed a wide range of opinions about the nature of these threats and the most likely solutions required to resolve them. But the overarching and competing themes were clear: Those who do not think things will improve felt that humans mostly shape technology advances to their own, not-fully-noble purposes and that bad actors with bad motives will thwart the best efforts of technology innovators to remedy today’s problems.

    And those who are most hopeful believed that technological fixes can be implemented to bring out the better angels guiding human nature.

    More specifically, the 51% of these experts who expect things will not improve generally cited two reasons:

    The fake news ecosystem preys on some of our deepest human instincts: Respondents said humans’ primal quest for success and power – their “survival” instinct – will continue to degrade the online information environment in the next decade. They predicted that manipulative actors will use new digital tools to take advantage of humans’ inbred preference for comfort and convenience and their craving for the answers they find in reinforcing echo chambers.

    Our brains are not wired to contend with the pace of technological change: These respondents said the rising speed, reach and efficiencies of the internet and emerging online applications will magnify these human tendencies and that technology-based solutions will not be able to overcome them. They predicted a future information landscape in which fake information crowds out reliable information. Some even foresaw a world in which widespread information scams and mass manipulation cause broad swathes of the public to simply give up on being informed participants in civic life.

    The 49% of these experts who expect things to improve generally inverted that reasoning:

    Technology can help fix these problems: These more hopeful experts said the rising speed, reach and efficiencies of the internet, apps and platforms can be harnessed to rein in fake news and misinformation campaigns. Some predicted better methods will arise to create and promote trusted, fact-based news sources.

    It is also human nature to come together and fix problems: The hopeful experts in this canvassing took the view that people have always adapted to change and that this current wave of challenges will also be overcome. They noted that misinformation and bad actors have always existed but have eventually been marginalized by smart people and processes. They expect well-meaning actors will work together to find ways to enhance the information environment. They also believe better information literacy among citizens will enable people to judge the veracity of material content and eventually raise the tone of discourse.

    The majority of participants in this canvassing wrote detailed elaborations on their views. Some chose to have their names connected to their answers; others opted to respond anonymously. These findings do not represent all possible points of view, but they do reveal a wide range of striking observations.

    Respondents collectively articulated several major themes tied to those insights, which are explained in the sections below. Several longer additional sets of responses tied to these themes follow that summary.

    The following section presents an overview of the themes found among the written responses, including a small selection of representative quotes supporting each point. Some comments are lightly edited for style or length.


    Theme 1: The information environment will not improve: The problem is human nature
    Most respondents who expect the environment to worsen said human nature is at fault. For instance, Christian H. Huitema, former president of the Internet Architecture Board, commented, “The quality of information will not improve in the coming years, because technology can’t improve human nature all that much.”

    These experts predicted that the problem of misinformation will be amplified because the worst side of human nature is magnified by bad actors using advanced online tools at internet speed on a vast scale.

    The quality of information will not improve in the coming years, because technology can’t improve human nature all that much.
    CHRISTIAN H. HUITEMA

    Tom Rosenstiel, author, director of the American Press Institute and senior fellow at the Brookings Institution, commented, “Whatever changes platform companies make, and whatever innovations fact checkers and other journalists put in place, those who want to deceive will adapt to them. Misinformation is not like a plumbing problem you fix. It is a social condition, like crime, that you must constantly monitor and adjust to. Since as far back as the era of radio and before, as Winston Churchill said, ‘A lie can go around the world before the truth gets its pants on.’”

    Michael J. Oghia, an author, editor and journalist based in Europe, said he expects a worsening of the information environment due to five things: “1) The spread of misinformation and hate; 2) inflammation, sociocultural conflict and violence; 3) the breakdown of socially accepted/agreed-upon knowledge and what constitutes ‘fact’; 4) a new digital divide between those subscribed to (and ultimately controlled by) misinformation and those who are ‘enlightened’ by information based on reason, logic, scientific inquiry and critical thinking; 5) further divides between communities, so that as we are more connected we are farther apart. And many others.”

    Leah Lievrouw, professor in the department of information studies at the University of California, Los Angeles, observed, “So many players and interests see online information as a uniquely powerful shaper of individual action and public opinion in ways that serve their economic or political interests (marketing, politics, education, scientific controversies, community identity and solidarity, behavioral ‘nudging,’ etc.). These very diverse players would likely oppose (or try to subvert) technological or policy interventions or other attempts to ensure the quality, and especially the disinterestedness, of information.”

    Subtheme: More people = more problems. The internet’s continuous growth and accelerating innovation allow more people and artificial intelligence (AI) to create and instantly spread manipulative narratives
    While propaganda and the manipulation of the public via falsehoods are tactics as old as the human race, many of these experts predicted that the speed, reach and low cost of online communication plus continuously emerging innovations will magnify the threat level significantly. A professor at a Washington, D.C.-area university said, “It is nearly impossible to implement solutions at scale – the attack surface is too large to be defended successfully.”

    Jerry Michalski, futurist and founder of REX, replied, “The trustworthiness of our information environment will decrease over the next decade because: 1) It is inexpensive and easy for bad actors to act badly; 2) Potential technical solutions based on strong ID and public voting (for example) won’t quite solve the problem; and 3) real solutions based on actual trusted relationships will take time to evolve – likely more than a decade.”

    It is nearly impossible to implement solutions at scale – the attack surface is too large to be defended successfully.
    ANONYMOUS PROFESSOR

    An institute director and university professor said, “The internet is the 21st century’s threat of a ‘nuclear winter,’ and there’s no equivalent international framework for nonproliferation or disarmament. The public can grasp the destructive power of nuclear weapons in a way they will never understand the utterly corrosive power of the internet to civilized society, when there is no reliable mechanism for sorting out what people can believe to be true or false.”

    Bob Frankston, internet pioneer and software innovator, said, “I always thought that ‘Mein Kampf’ could be countered with enough information. Now I feel that people will tend to look for confirmation of their biases and the radical transparency will not shine a cleansing light.”

    David Harries, associate executive director for Foresight Canada, replied, “More and more, history is being written, rewritten and corrected, because more and more people have the ways and means to do so. Therefore there is ever more information that competes for attention, for credibility and for influence. The competition will complicate and intensify the search for veracity. Of course, many are less interested in veracity than in winning the competition.”

    Glenn Edens, CTO for technology reserve at PARC, a Xerox company, commented, “Misinformation is a two-way street. Producers have an easy publishing platform to reach wide audiences and those audiences are flocking to the sources. The audiences typically are looking for information that fits their belief systems, so it is a really tough problem.”

    Subtheme: Humans are by nature selfish, tribal, gullible convenience seekers who put the most trust in that which seems familiar
    The respondents who supported this view noted that people’s actions – from consciously malevolent and power-seeking behaviors to seemingly more benign acts undertaken for comfort or convenience – will work to undermine a healthy information environment.

    People on systems like Facebook are increasingly forming into ‘echo chambers’ of those who think alike. They will keep unfriending those who don’t, and passing on rumors and fake news that agrees with their point of view.
    STARR ROXANNE HILTZ

    An executive consultant based in North America wrote, “It comes down to motivation: There is no market for the truth. The public isn’t motivated to seek out verified, vetted information. They are happy hearing what confirms their views. And people can gain more – both monetarily and in notoriety – by creating fake information than by keeping it from occurring.”

    Serge Marelli, an IT professional who works on and with the Net, wrote, “As a group, humans are ‘stupid.’ It is ‘group mind’ or a ‘group phenomenon’ or, as George Carlin said, ‘Never underestimate the power of stupid people in large groups.’ Then, you have Kierkegaard, who said, ‘People demand freedom of speech as a compensation for the freedom of thought which they seldom use.’ And finally, Euripides said, ‘Talk sense to a fool and he calls you foolish.’”

    Starr Roxanne Hiltz, distinguished professor of information systems and co-author of the visionary 1970s book “The Network Nation,” replied, “People on systems like Facebook are increasingly forming into ‘echo chambers’ of those who think alike. They will keep unfriending those who don’t, and passing on rumors and fake news that agrees with their point of view. When the president of the U.S. frequently attacks the traditional media and anybody who does not agree with his ‘alternative facts,’ it is not good news for an uptick in reliable and trustworthy facts circulating in social media.”

    Nigel Cameron, a technology and futures editor and president of the Center for Policy on Emerging Technologies, said, “Human nature is not EVER going to change (though it may, of course, be manipulated). And the political environment is bad.”

    Ian O’Byrne, assistant professor at the College of Charleston, replied, “Human nature will take over as the salacious is often sexier than facts. There are multiple information streams, public and private, that spread this information online. We can also not trust the businesses and industries that develop and facilitate these digital texts and tools to make changes that will significantly improve the situation.”

    Greg Swanson, media consultant with ITZonTarget, noted, “The sorting of reliable versus fake news requires a trusted referee. It seems unlikely that government can play a meaningful role as this referee. We are too polarized. And we have come to see the television news teams as representing divergent points of view, and, depending on your politics, the network that does not represent your views is guilty of ‘fake news.’ It is hard to imagine a fair referee that would be universally trusted.”

    Richard Lachmann, professor of sociology at the State University of New York at Albany, replied, “Even though systems [that] flag unreliable information can and will be developed, internet users have to be willing to take advantage of those warnings. Too many Americans will live in political and social subcultures that will champion false information and encourage use of sites that present such false information.”

    There were also those among these expert respondents who said inequities, perceived and real, are at the root of much of the misinformation being produced.

    A professor at MIT observed, “I see this as a problem with a socioeconomic cure: Greater equity and justice will achieve much more than a bot war over facts. Controlling ‘noise’ is less a technological problem than a human problem, a problem of belief, of ideology. Profound levels of ungrounded beliefs about things both sacred and profane existed before the branding of ‘fake news.’ Belief systems – not ‘truths’ – help to cement identities, forge relationships, explain the unexplainable.”

    Julian Sefton-Green, professor of new media education at Deakin University in Australia, said, “The information environment is an extension of social and political tensions. It is impossible to make the information environment a rational, disinterested space; it will always be susceptible to pressure.”

    A respondent affiliated with Harvard University’s Berkman Klein Center for Internet & Society wrote, “The democratization of publication and consumption that the networked sphere represents is too expansive for there to be any meaningful improvement possible in terms of controlling or labeling information. People will continue to cosset their own cognitive biases.”

    Subtheme: In existing economic, political and social systems, the powerful corporate and government leaders most able to improve the information environment profit most when it is in turmoil
    A large number of respondents said the most highly motivated actors, including those in the worlds of business and politics, generally have little interest in “fixing” the proliferation of misinformation. They predicted that those players will be a key driver of the worsening information environment in the coming years and/or of the lack of any serious attempts to effectively mitigate the problem.

    Scott Shamp, a dean at Florida State University, commented, “Too many groups gain power through the proliferation of inaccurate or misleading information. When there is value in misinformation, it will rule.”

    Big political players have just learned how to play this game. I don’t think they will put much effort into eliminating it.
    ZBIGNIEW ŁUKASIAK

    Alex “Sandy” Pentland, member of the U.S. National Academy of Engineering and the World Economic Forum, commented, “We know how to dramatically improve the situation, based on studies of political and similar predictions. What we don’t know is how to make it a thriving business. The current [information] models are driven by clickbait, and that is not the foundation of a sustainable economic model.”

    Stephen Downes, researcher with the National Research Council of Canada, wrote, “Things will not improve. There is too much incentive to spread disinformation, fake news, malware and the rest. Governments and organizations are major actors in this space.”

    An anonymous respondent said, “Actors can benefit socially, economically, politically by manipulating the information environment. As long as these incentives exist, actors will find a way to exploit them. These benefits are not amenable to technological resolution as they are social, political and cultural in nature. Solving this problem will require larger changes in society.”

    A number of respondents mentioned market capitalism as a primary obstacle to improving the information environment. A professor based in North America said, “[This] is a capitalist system. The information that will be disseminated will be biased, based on monetary interests.”

    Seth Finkelstein, consulting programmer and winner of the Electronic Frontier Foundation’s Pioneer Award, commented, “Virtually all the structural incentives to spread misinformation seem to be getting worse.”

    A data scientist based in Europe wrote, “The information environment is built on the top of telecommunication infrastructures and services developed following the free-market ideology, where ‘truth’ or ‘fact’ are only useful as long as they can be commodified as market products.”

    Zbigniew Łukasiak, a business leader based in Europe, wrote, “Big political players have just learned how to play this game. I don’t think they will put much effort into eliminating it.”

    A vice president for public policy at one of the world’s foremost entertainment and media companies commented, “The small number of dominant online platforms do not have the skills or ethical center in place to build responsible systems, technical or procedural. They eschew accountability for the impact of their inventions on society and have not developed any of the principles or practices that can deal with the complex issues. They are like biomedical or nuclear technology firms absent any ethics rules or ethics training or philosophy. Worse, their active philosophy is that assessing and responding to likely or potential negative impacts of their inventions is both not theirs to do and even shouldn’t be done.”

    Patricia Aufderheide, professor of communications and founder of the Center for Media and Social Impact at American University, said, “Major interests are not invested enough in reliability to create new business models and political and regulatory standards needed for the shift. … Overall there are powerful forces, including corporate investment in surveillance-based business models, that create many incentives for unreliability, ‘invisible handshake’ agreements with governments that militate against changing surveillance models, international espionage at a governmental and corporate level in conjunction with mediocre cryptography and poor use of white hat hackers, poor educational standards in major industrial countries such as the U.S., and fundamental weaknesses in the U.S. political/electoral system that encourage exploitation of unreliability. It would be wonderful to believe otherwise, and I hope that other commentators will be able to convince me otherwise.”

    James Schlaffer, an assistant professor of economics, commented, “Information is curated by people who have taken a step away from the objectivity that was the watchword of journalism. Conflict sells, especially to the opposition party, therefore the opposition news agency will be incentivized to push a narrative and agenda. Any safeguards will appear as a way to further control narrative and propagandize the population.”

    Subtheme: Human tendencies and infoglut drive people apart and make it harder for them to agree on “common knowledge.” That makes healthy debate difficult and destabilizes trust. The fading of news media contributes to the problem
    Many respondents expressed concerns about how people’s struggles to find and apply accurate information contribute to a larger social and political problem: There is a growing deficit in commonly accepted facts or some sort of cultural “common ground.” Why has this happened? They cited several reasons:

    Online echo chambers or silos divide people into separate camps, at times even inciting them to express anger and hatred at a volume not seen in previous communications forms.
    Information overload crushes people’s attention spans. Their coping mechanism is to turn to entertainment or other lighter fare.
    High-quality journalism has been decimated due to changes in the attention economy.
    They said these factors and others make it difficult for many people in the digital age to create and come to share the type of “common knowledge” that undergirds better and more-responsive public policy. A share of respondents said a lack of commonly shared knowledge leads many in society to doubt the reliability of everything, causing them to simply drop out of civic participation, depleting the number of active and informed citizens.

    Jamais Cascio, distinguished fellow at the Institute for the Future, noted, “The power and diversity of very low-cost technologies allowing unsophisticated users to create believable ‘alternative facts’ is increasing rapidly. It’s important to note that the goal of these tools is not necessarily to create consistent and believable alternative facts, but to create plausible levels of doubt in actual facts. The crisis we face about ‘truth’ and reliable facts is predicated less on the ability to get people to believe the *wrong* thing as it is on the ability to get people to *doubt* the right thing. The success of Donald Trump will be a flaming signal that this strategy works, alongside the variety of technologies now in development (and early deployment) that can exacerbate this problem. In short, it’s a successful strategy, made simpler by more powerful information technologies.”

    Philip J. Nickel, lecturer at Eindhoven University of Technology in the Netherlands, said, “The decline of traditional news media and the persistence of closed social networks will not change in the next 10 years. These are the main causes of the deterioration of a public domain of shared facts as the basis for discourse and political debate.”

    Kenneth Sherrill, professor emeritus of political science at Hunter College, City University of New York, predicted, “Disseminating false rumors and reports will become easier. The proliferation of sources will increase the number of people who don’t know who or what they trust. These people will drop out of the normal flow of information. Participation will decline as more and more citizens become unwilling/unable to figure out which information sources are reliable.”

    The crisis we face about ‘truth’ and reliable facts is predicated less on the ability to get people to believe the *wrong* thing as it is on the ability to get people to *doubt* the right thing.
    JAMAIS CASCIO

    What is truth? What is a fact? Who gets to decide? And can most people agree to trust anything as “common knowledge”? A number of respondents challenged the idea that any individuals, groups or technology systems could or should “rate” information as credible, factual, true or not.

    An anonymous respondent observed, “Whatever is devised will not be seen as impartial; some things are not black and white; for other situations, facts brought up to come to a conclusion are different than other facts used by others in a situation. Each can have real facts, but it is the facts that are gathered that matter in coming to a conclusion; who will determine what facts will be considered or what is even considered a fact.”

    A research assistant at MIT noted, “‘Fake’ and ‘true’ are not as binary as we would like, and – combined with an increasingly connected and complex digital society – it’s a challenge to manage the complexity of social media without prescribing a narrative as ‘truth.’”

    An internet pioneer and longtime leader at ICANN said, “There is little prospect that a forcing factor will emerge to improve the ‘truthfulness’ of information on the internet.”

    A vice president for stakeholder engagement said, “Trust networks are best established with physical and unstructured interaction, discussion and observation. Technology is reducing opportunities for such interactions and disrupting human discourse, while giving the ‘feeling’ that we are communicating more than ever.”

    Subtheme: A small segment of society will find, use and perhaps pay a premium for information from reliable sources. Outside of this group “chaos will reign” and a worsening digital divide will develop
    Some respondents predicted that a larger digital divide will form. Those who pursue more-accurate information and rely on better-informed sources will separate from those who are not selective enough or who do not invest either the time or the money in doing so.

    There will be a sort of ‘gold standard’ set of sources, and there will be the fringe.
    ANONYMOUS RESPONDENT

    Alejandro Pisanty, a professor at UNAM, the National University of Mexico, and longtime internet policy leader, observed, “Overall, at least a part of society will value trusted information and find ways to keep a set of curated, quality information resources. This will use a combination of organizational and technological tools but above all, will require a sharpened sense of good judgment and access to diverse, including rivalrous, sources. Outside this, chaos will reign.”

    Alexander Halavais, associate professor of social technologies at Arizona State University, said, “As there is value in accurate information, the availability of such information will continue to grow. However, when consumers are not directly paying for such accuracy, it will certainly mean a greater degree of misinformation in the public sphere. That means the continuing bifurcation of haves and have-nots, when it comes to trusted news and information.”

    An anonymous editor and publisher commented, “Sadly, many Americans will not pay attention to ANY content from existing or evolving sources. It’ll be the continuing dumbing down of the masses, although the ‘upper’ cadres (educated/thoughtful) will read/see/know, and continue to battle.”

    An anonymous respondent said, “There will be a sort of ‘gold standard’ set of sources, and there will be the fringe.”

    Theme 2: The information environment will not improve because technology will create new challenges that can’t or won’t be countered effectively and at scale
    Many who see little hope for improvement of the information environment said technology will not save society from distortions, half-truths, lies and weaponized narratives. An anonymous business leader argued, “It is too easy to create fake facts, too labor-intensive to check and too easy to fool checking algorithms.” And this response from an anonymous research scientist based in North America echoed the view of many participants in this canvassing: “We will develop technologies to help identify false and distorted information, BUT they won’t be good enough.”

    In the arms race between those who want to falsify information and those who want to produce accurate information, the former will always have an advantage.
    DAVID CONRAD

    Paul N. Edwards, Perry Fellow in International Security at Stanford University, commented, “Many excellent methods will be developed to improve the information environment, but the history of online systems shows that bad actors can and will always find ways around them.”

    Vian Bakir, professor in political communication and journalism at Bangor University in Wales, commented, “It won’t improve because of 1) the evolving nature of technology – emergent media always catches out those who wish to control it, at least in the initial phase of emergence; 2) online social media and search engine business models favour misinformation spreading; 3) well-resourced propagandists exploit this mix.”

    Many who expect things will not improve in the next decade said that “white hat” efforts will never keep up with “black hat” advances in information wars. A user-experience and interaction designer said, “As existing channels become more regulated, new unregulated channels will continue to emerge.”

    Subtheme: Those generally acting for themselves and not the public good have the advantage, and they are likely to stay ahead in the information wars
    Many of those who expect no improvement of the information environment said those who wish to spread misinformation are highly motivated to use innovative tricks to stay ahead of the methods meant to stop them. They said certain actors in government and business, along with other individuals with propaganda agendas, are highly driven to make technology work in their favor in spreading misinformation, and there will continue to be more of them.

    There are a lot of rich and unethical people, politicians, non-state actors and state actors who are strongly incentivized to get fake information out there to serve their selfish purposes.
    JASON HONG

    A number of respondents referred to this as an “arms race.” David Sarokin of Sarokin Consulting and author of “Missed Information,” said, “There will be an arms race between reliable and unreliable information.” And David Conrad, a chief technology officer, replied, “In the arms race between those who want to falsify information and those who want to produce accurate information, the former will always have an advantage.”

    Jim Hendler, professor of computing sciences at Rensselaer Polytechnic Institute, commented, “The information environment will continue to change, but the pressures of politics, advertising and stock-return-based capitalism reward those who find ways to manipulate the system, so it will be a constant battle between those aiming for ‘objectiveness’ and those trying to manipulate the system.”

    John Markoff, retired journalist and former technology reporter for The New York Times, said, “I am extremely skeptical about improvements related to verification without a solution to the challenge of anonymity on the internet. I also don’t believe there will be a solution to the anonymity problem in the near future.”

    Scott Spangler, principal data scientist at IBM Watson Health, said technologies now exist that make fake information almost impossible to discern and flag, filter or block. He wrote, “Machine learning and sophisticated statistical techniques will be used to accurately simulate real information content and make fake information almost indistinguishable from the real thing.”

    Jason Hong, associate professor at the School of Computer Science at Carnegie Mellon University, said, “Some fake information will be detectable and blockable, but the vast majority won’t. The problem is that it’s *still* very hard for computer systems to analyze text, find assertions made in the text and crosscheck them. There’s also the issue of subtle nuances or differences of opinion or interpretation. Lastly, the incentives are all wrong. There are a lot of rich and unethical people, politicians, non-state actors and state actors who are strongly incentivized to get fake information out there to serve their selfish purposes.”

    A research professor of robotics at Carnegie Mellon University observed, “Defensive innovation is always behind offensive innovation. Those wanting to spread misinformation will always be able to find ways to circumvent whatever controls are put in place.”

    A research scientist for the Computer Science and Artificial Intelligence Laboratory at MIT said, “Problems will get worse faster than solutions can address, but that only means solutions are more needed than ever.”

    Subtheme: Weaponized narratives and other false content will be magnified by social media, online filter bubbles and AI
    Some respondents expect a dramatic rise in the manipulation of the information environment by nation-states, by individual political actors and by groups wishing to spread propaganda. Their purpose is to raise fears that serve their agendas, create or deepen silos and echo chambers, divide people and set them upon each other, and paralyze or confuse public understanding of the political, social and economic landscape.

    We live in an era where most people get their ‘news’ via social media and it is very easy to spread fake news. … Given that there is freedom of speech, I wonder how the situation can ever improve.
    ANONYMOUS PROJECT LEADER FOR A SCIENCE INSTITUTE

    This has been referred to as the weaponization of public narratives. Social media platforms such as Facebook, Reddit and Twitter appear to be prime battlegrounds. Bots are often employed, and AI is expected to be implemented heavily in the information wars to magnify the speed and impact of messaging.

    A leading internet pioneer who has worked with the FCC, the UN’s International Telecommunication Union (ITU), the General Electric Co. (GE) and other major technology organizations commented, “The ‘internet-as-weapon’ paradigm has emerged.”

    Dean Willis, consultant for Softarmor Systems, commented, “Governments and political groups have now discovered the power of targeted misinformation coupled to personalized understanding of the targets. Messages can now be tailored with devastating accuracy. We’re doomed to living in targeted information bubbles.”

    An anonymous survey participant noted, “Misinformation will play a major role in conflicts between nations and within competing parties within nation states.”

    danah boyd, principal researcher at Microsoft Research and founder of Data & Society, wrote, “What’s at stake right now around information is epistemological in nature. Furthermore, information is a source of power and thus a source of contemporary warfare.”

    Peter Lunenfeld, a professor at UCLA, commented, “For the foreseeable future, the economics of networks and the networks of economics are going to privilege the dissemination of unvetted, unverified and often weaponized information. Where there is a capitalistic incentive to provide content to consumers, and those networks of distribution originate in a huge variety of transnational and even extra-national economies and political systems, the ability to ‘control’ veracity will be far outstripped by the capability and willingness to supply any kind of content to any kind of user.”

    These experts noted that the public has turned to social media – especially Facebook – to get its “news.” They said the public’s craving for quick reads and tabloid-style sensationalism is what makes social media the field of choice for manipulative narratives, which are often packaged to appear like news headlines. They noted that the public’s move away from more-traditional mainstream news outlets, which had some ethical standards, to consumption of social newsfeeds has weakened mainstream media organizations, making them lower-budget operations that have been forced to compete for attention by offering up clickbait headlines of their own.

    An emeritus professor of communication for a U.S. Ivy League university noted, “We have lost an important social function in the press. It is being replaced by social media, where there are few if any moral or ethical guidelines or constraints on the performance of informational roles.”

    A project leader for a science institute commented, “We live in an era where most people get their ‘news’ via social media and it is very easy to spread fake news. The existence of clickbait sites makes it easy for conspiracy theories to be rapidly spread by people who do not bother to read entire articles, nor look for trusted sources. Given that there is freedom of speech, I wonder how the situation can ever improve. Most users just read the headline, comment and share without digesting the entire article or thinking critically about its content (if they read it at all).”

    Subtheme: The most-effective tech solutions to misinformation will endanger people’s dwindling privacy options, and they are likely to limit free speech and remove the ability for people to be anonymous online
    The rise of new and highly varied voices with differing agendas and motivations might generally be considered to be a good thing. But some of these experts said the recent major successes by misinformation manipulators have created a threatening environment in which many in the public are encouraging platform providers and governments to expand surveillance. Among the technological solutions for “cleaning up” the information environment are those that work to clearly identify entities operating online and employ algorithms to detect misinformation. Some of these experts expect that such systems will act to identify perceived misbehaviors and label, block, filter or remove some online content and even ban some posters from further posting.

    Increased censorship and mass surveillance will tend to create official ‘truths’ in various parts of the world.
    RETIRED PROFESSOR

    An educator commented, “Creating ‘a reliable, trusted, unhackable verification system’ would produce a system for filtering and hence structuring of content. This will end up being a censored information reality.”

    An eLearning specialist observed, “Any system deeming itself to have the ability to ‘judge’ information as valid or invalid is inherently biased.” And a professor and researcher noted, “In an open society, there is no prior determination of what information is genuine or fake.”

    In fact, a share of the respondents predicted that the online information environment will not improve in the next decade because any requirement for authenticated identities would take away the public’s highly valued free-speech rights and allow major powers to control the information environment.

    A distinguished professor emeritus of political science at a U.S. university wrote, “Misinformation will continue to thrive because of the long (and valuable) tradition of freedom of expression. Censorship will be rejected.” An anonymous respondent wrote, “There is always a fight between ‘truth’ and free speech. But because the internet cannot be regulated, free speech will continue to dominate, meaning the information environment will not improve.”

    But another share of respondents said that is precisely why authenticated identities – which are already operating in some places, including China – will become a larger part of information systems. A professor at a major U.S. university replied, “Surveillance technologies and financial incentives will generate greater surveillance.” A retired university professor predicted, “Increased censorship and mass surveillance will tend to create official ‘truths’ in various parts of the world. In the United States, corporate filtering of information will impose the views of the economic elite.”

    The executive director of a major global privacy advocacy organization argued removing civil liberties in order to stop misinformation will not be effective, saying, “‘Problematic’ actors will be able to game the devised systems while others will be over-regulated.”

    Several other respondents also cited this as a major flaw of this potential remedy. They argued against it for several reasons, including the fact that it enables even broader government and corporate surveillance and control over more of the public.

    Emmanuel Edet, head of legal services at the National Information Technology Development Agency of Nigeria, observed, “The information environment will improve but at a cost to privacy.”

    Bill Woodcock, executive director of the Packet Clearing House, wrote, “There’s a fundamental conflict between anonymity and control of public speech, and the countries that don’t value anonymous speech domestically are still free to weaponize it internationally, whereas the countries that do value anonymous speech must make it available to all, [or] else fail to uphold their own principle.”

    James LaRue, director of the Office for Intellectual Freedom of the American Library Association, commented, “Information systems incentivize getting attention. Lying is a powerful way to do that. To stop that requires high surveillance – which means government oversight which has its own incentives not to tell the truth.”

    Tom Valovic, contributor to The Technoskeptic magazine and author of “Digital Mythologies,” said encouraging platforms to exercise algorithmic controls is not optimal. He wrote: “Artificial intelligence that will supplant human judgment is being pursued aggressively by entities in Silicon Valley and elsewhere. Algorithmic solutions to replacing human judgment are subject to hidden bias and will ultimately fail to accomplish this goal. They will only continue the centralization of power in a small number of companies that control the flow of information.”

    Theme 3: The information environment will improve because technology will help label, filter or ban misinformation and thus upgrade the public’s ability to judge the quality and veracity of content
    Most of the respondents who gave hopeful answers about the future of truth online said they believe technology will be implemented to improve the information environment. They noted their faith was grounded in history, arguing that humans have always found ways to innovate to overcome problems. Most of these experts do not expect there will be a perfect system – but they expect advances. A number said information platform corporations such as Google and Facebook will begin to efficiently police the environment and embed moral and ethical thinking in the structure of their platforms. They hope this will simultaneously enable the screening of content while still protecting rights such as free speech.

    If there is a great amount of pressure from the industry to solve this problem (which there is), then methodologies will be developed and progress will be made … In other words, if there’s a will, there’s a way.
    ADAM LELLA

    Larry Diamond, senior fellow at the Hoover Institution and the Freeman Spogli Institute (FSI) at Stanford University, said, “I am hopeful that the principal digital information platforms will take creative initiatives to privilege more authoritative and credible sources and to call out and demote information sources that appear to be propaganda and manipulation engines, whether human or robotic. In fact, the companies are already beginning to take steps in this direction.”

    An associate professor at a U.S. university wrote, “I do not see us giving up on seeking truth.” And a researcher based in Europe said, “Technologies will appear that solve the trust issues and reward logic.”

    Adam Lella, senior analyst for marketing insights at comScore Inc., replied, “There have been numerous other industry-related issues in the past (e.g., viewability, invalid traffic detection, cross-platform measurement) that were seemingly impossible to solve, and yet major progress was made in the past few years. If there is a great amount of pressure from the industry to solve this problem (which there is), then methodologies will be developed and progress will be made to help mitigate this issue in the long run. In other words, if there’s a will, there’s a way.”

    Subtheme: Likely tech-based solutions include adjustments to algorithmic filters, browsers, apps and plug-ins and the implementation of “trust ratings”
    Many respondents who hope for improvement in the information environment mentioned ways in which new technological solutions might be implemented.

    Bart Knijnenburg, researcher on decision-making and recommender systems and assistant professor of computer science at Clemson University, said, “Two developments will help improve the information environment: 1) News will move to a subscription model (like music, movies, etc.) and subscription providers will have a vested interest in culling down false narratives; 2) Algorithms that filter news will learn to discern the quality of a news item and not just tailor to ‘virality’ or political leaning.”

    In order to reduce the spread of fake news, we must deincentivize it financially.
    AMBER CASE

    Laurel Felt, lecturer at the University of Southern California, said, “There will be mechanisms for flagging suspicious content and providers and then apps and plugins for people to see the ‘trust rating’ for a piece of content, an outlet or even an IP address. Perhaps people can even install filters so that, when they’re doing searches, hits that don’t meet a certain trust threshold will not appear on the list.”
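
    Felt’s description implies a simple pipeline: look up a trust rating for each hit’s source, then suppress anything below a user-chosen threshold. Below is a minimal sketch of that idea in Python; the ratings table, domain names and threshold value are invented for illustration and are not part of any respondent’s actual proposal.

    [CODE]
    # Hypothetical sketch of a "trust rating" filter over search hits.
    # TRUST_RATINGS stands in for a real reputation service; all names
    # and scores here are invented for illustration.

    TRUST_RATINGS = {
        "example-wire-service.com": 0.9,
        "established-daily.com": 0.8,
        "anonymous-blog.net": 0.3,
        "known-hoax-site.biz": 0.1,
    }

    def trust_rating(source: str) -> float:
        """Look up a source's trust rating; unknown sources get a neutral 0.5."""
        return TRUST_RATINGS.get(source, 0.5)

    def filter_hits(hits: list[dict], trust_threshold: float = 0.6) -> list[dict]:
        """Drop search hits whose source falls below the user's trust threshold."""
        return [hit for hit in hits if trust_rating(hit["source"]) >= trust_threshold]

    hits = [
        {"title": "Budget bill passes", "source": "example-wire-service.com"},
        {"title": "Shocking secret cure!", "source": "known-hoax-site.biz"},
    ]
    print(filter_hits(hits))  # only the wire-service story clears the 0.6 threshold
    [/CODE]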

    A longtime U.S. government researcher and administrator in communications and technology sciences said, “The intelligence, defense and related U.S. agencies are very actively working on this problem and results are promising.”

    Amber Case, research fellow at Harvard University’s Berkman Klein Center for Internet & Society, suggested withholding ad revenue until veracity has been established. She wrote, “Right now, there is an incentive to spread fake news. It is profitable to do so, profit made by creating an article that causes enough outrage that advertising money will follow. … In order to reduce the spread of fake news, we must deincentivize it financially. If an article bursts into collective consciousness and is later proven to be fake, the sites that control or host that content could refuse to distribute advertising revenue to the entity that created or published it. This would require a system of delayed advertising revenue distribution where ad funds are held until the article is proven as accurate or not. A lot of fake news is created by a few people, and removing their incentive could stop much of the news postings.”
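
    Case’s idea amounts to an escrow scheme: ad funds accrue against an article and are released to the publisher only after verification, or withheld if the piece proves fake. Here is a minimal sketch of that delayed-distribution logic; the class and every name in it are hypothetical, offered only to make the mechanism concrete.

    [CODE]
    # Hypothetical sketch of delayed ad-revenue distribution: revenue is
    # escrowed per article and released only if the article checks out.

    class AdRevenueEscrow:
        def __init__(self):
            self.held = {}     # article_id -> escrowed ad funds
            self.payouts = {}  # publisher -> released funds

        def record_ad_revenue(self, article_id: str, amount: float) -> None:
            """Accrue ad money against the article instead of paying out immediately."""
            self.held[article_id] = self.held.get(article_id, 0.0) + amount

        def settle(self, article_id: str, publisher: str, verified_accurate: bool) -> float:
            """After fact-checking, release the funds to the publisher or forfeit them."""
            amount = self.held.pop(article_id, 0.0)
            if verified_accurate:
                self.payouts[publisher] = self.payouts.get(publisher, 0.0) + amount
                return amount
            return 0.0  # article proved fake: the escrowed revenue is never distributed

    escrow = AdRevenueEscrow()
    escrow.record_ad_revenue("story-123", 250.0)
    print(escrow.settle("story-123", "outrage-site.example", verified_accurate=False))  # 0.0
    [/CODE]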

    Andrea Matwyshyn, a professor of law at Northeastern University who researches innovation and law, particularly information security, observed, “Software liability law will finally begin to evolve. Market makers will increasingly incorporate security quality as a factor relevant to corporate valuation. The legal climate for security research will continue to improve, as its connection to national security becomes increasingly obvious. These changes will drive significant corporate and public sector improvements in security during the next decade.”

    Larry Keeley, founder of innovation consultancy Doblin, predicted technology will be improved but people will remain the same, writing, “Capabilities adapted from both bibliometric analytics and good auditing practices will make this a solvable problem. However, non-certified, compelling-but-untrue information will also proliferate. So the new divide will be between the people who want their information to be real vs. those who simply want it to feel important. Remember that quote from Roger Ailes: ‘People don’t want to BE informed, they want to FEEL informed.’ Sigh.”

    Anonymous survey participants also responded:

    “Filters and algorithms will improve to both verify raw data, separate ‘overlays’ and to correct for a feedback loop.”
    “Semantic technologies will be able to cross-verify statements, much like meta-analysis.”
    “The credibility history of each individual will be used to filter incoming information.”
    “The veracity of information will be linked to how much the source is perceived as trustworthy – we may, for instance, develop a trust index and trust will become more easily verified using artificial-intelligence-driven technologies.”
    “The work being done on things like verifiable identity and information sharing through loose federation will improve things somewhat (but not completely). That is to say, things will become better but not necessarily good.”
    “AI, blockchain, crowdsourcing and other technologies will further enhance our ability to filter and qualify the veracity of information.”
    “There will be new visual cues developed to help news consumers distinguish between trusted news sources and others.”
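
    The “credibility history” and “trust index” ideas in the list above both imply a running per-source score that is updated as claims are verified or debunked. One simple way to picture that is an exponential moving average over fact-check outcomes; the sketch below uses invented starting values and parameters and is only one of many possible designs, not any respondent’s specification.

    [CODE]
    # Hypothetical "credibility history": each source carries a running trust
    # index, nudged up or down as its claims are fact-checked. The 0.5 neutral
    # starting point and the 0.2 update weight are invented illustration values.

    class CredibilityHistory:
        def __init__(self, alpha: float = 0.2):
            self.alpha = alpha   # weight given to each new fact-check outcome
            self.scores = {}     # source -> trust index in [0, 1]

        def update(self, source: str, claim_was_true: bool) -> float:
            """Exponential moving average over the source's fact-check outcomes."""
            prior = self.scores.get(source, 0.5)  # unknown sources start neutral
            outcome = 1.0 if claim_was_true else 0.0
            self.scores[source] = (1 - self.alpha) * prior + self.alpha * outcome
            return self.scores[source]

    history = CredibilityHistory()
    for verdict in [False, False, True, False]:
        history.update("rumor-mill.example", verdict)
    print(round(history.scores["rumor-mill.example"], 3))  # 0.365, well below neutral
    [/CODE]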

    Subtheme: Regulatory remedies could include software liability law, required identities, unbundling of social networks like Facebook
    A number of respondents believe there will be policy remedies that move beyond whatever technical innovations emerge in the next decade. They offered a range of suggestions, from regulatory reforms applied to the platforms that aid misinformation merchants to legal penalties applied to wrongdoers. Some think the threat of regulatory reform via government agencies may force the issue of required identities and the abolition of anonymity protections for platform users.

    Sonia Livingstone, professor of social psychology at the London School of Economics and Political Science, replied, “The ‘wild west’ state of the internet will not be permitted to continue by those with power, as we are already seeing with increased national pressure on providers/companies by a range of means from law and regulation to moral and consumer pressures.”

    Willie Currie, a longtime expert in global communications diffusion, wrote, “The apparent success of fake news on platforms like Facebook will have to be dealt with on a regulatory basis as it is clear that technically minded people will only look for technical fixes and may have incentives not to look very hard, so self-regulation is unlikely to succeed. The excuse that the scale of posts on social media platforms makes human intervention impossible will not be a defense. Regulatory options may include unbundling social networks like Facebook into smaller entities. Legal options include reversing the notion that providers of content services over the internet are mere conduits without responsibility for the content. These regulatory and legal options may not be politically possible to effect within the U.S., but they are certainly possible in Europe and elsewhere, especially if fake news is shown to have an impact on European elections.”

    Sally Wentworth, vice president of global policy development at the Internet Society, warned against too much dependence upon information platform providers in shaping solutions to improve the information environment. She wrote: “It’s encouraging to see some of the big platforms beginning to deploy internet solutions to some of the issues around online extremism, violence and fake news. And yet, it feels like as a society, we are outsourcing this function to private entities that exist, ultimately, to make a profit and not necessarily for a social good. How much power are we turning over to them to govern our social discourse? Do we know where that might eventually lead? On the one hand, it’s good that the big players are finally stepping up and taking responsibility. But governments, users and society are being too quick to turn all of the responsibility over to internet platforms. Who holds them accountable for the decisions they make on behalf of all of us? Do we even know what those decisions are?”

    A professor and chair in a department of educational theory, policy and administration commented, “Some of this work can be done in private markets. Being banned from social media is one obvious one. In terms of criminal law, I think the important thing is to have penalties/regulations be domain-specific. Speech can be regulated in certain venues, but obviously not in all. Federal (and perhaps even international) guidelines would be useful. Without a framework for regulation, I can’t imagine penalties.”

    Theme 4: The information environment will improve, because people will adjust and make things better
    Many of those who expect the information environment to improve anticipate that information literacy training and other forms of assistance will help people become more sophisticated consumers. They expect that users will gravitate toward more reliable information – and that knowledge providers will respond in kind.

    When the television became popular, people also believed everything on TV was true. It’s how people choose to react to and access information and news that’s important, not the mechanisms that distribute them.
    IRENE WU

    Frank Kaufmann, founder and director of several international projects for peace activism and media and information, commented, “The quality of news will improve, because things always improve.” And Barry Wellman, virtual communities expert and co-director of the NetLab Network, said, “Software and people are becoming more sophisticated.”

    One hopeful respondent said a change in economic incentives can bring about desired change. Tom Wolzien, chairman of The Video Call Center and Wolzien LLC, said, “The market will not clean up the bad material, but will shift focus and economic rewards toward the reliable. Information consumers, fed up with false narratives, will increasingly shift toward more-trusted sources, resulting in revenue flowing toward those more trusted sources and away from the junk. This does not mean that all people will subscribe to either scientific or journalistic method (or both), but they will gravitate toward material from the sources and institutions they find trustworthy, and those institutions will, themselves, demand methods of verification beyond those they use today.”

    A retired public official and internet pioneer predicted, “1) Education for veracity will become an indispensable element of secondary school. 2) Information providers will become legally responsible for their content. 3) A few trusted sources will continue to dominate the internet.”

    Irene Wu, adjunct professor of communications, culture and technology at Georgetown University, said, “Information will improve because people will learn better how to deal with masses of digital information. Right now, many people naively believe what they read on social media. When the television became popular, people also believed everything on TV was true. It’s how people choose to react and access to information and news that’s important, not the mechanisms that distribute them.”

    Charlie Firestone, executive director at the Aspen Institute Communications and Society Program, commented, “In the future, tagging, labeling, peer recommendations, new literacies (media, digital) and similar methods will enable people to sift through information better to find and rely on factual information. In addition, there will be a reaction to the prevalence of false information so that people are more willing to act to assure their information will be accurate.”

    Howard Rheingold, pioneer researcher of virtual communities, longtime professor and author of “Net Smart: How to Thrive Online,” noted, “As I wrote in ‘Net Smart’ in 2012, some combination of education, algorithmic and social systems can help improve the signal-to-noise ratio online – with the caveat that misinformation/disinformation versus verified information is likely to be a continuing arms race. In 2012, Facebook, Google and others had no incentive to pay attention to the problem. After the 2016 election, the issue of fake information has been spotlighted.”

    Subtheme: Misinformation has always been with us and people have found ways to lessen its impact. The problems will become more manageable as people become more adept at sorting through material
    Many respondents agree that misinformation will persist as the online realm expands and more people are connected in more ways. Still, the more hopeful among these experts argue that progress is inevitable as people and organizations find coping mechanisms. They say history validates this. Furthermore, they said technologists will play an important role in helping filter out misinformation and modeling new digital literacy practices for users.

    We were in this position before, when printing presses broke the existing system of information management. A new system emerged and I believe we have the motivation and capability to do it again.
    JONATHAN GRUDIN

    Mark Bunting, visiting academic at Oxford Internet Institute, a senior digital strategy and public policy advisor with 16 years of experience at the BBC and as a digital consultant, wrote, “Our information environment has been immeasurably improved by the democratisation of the means of publication since the creation of the web nearly 25 years ago. We are now seeing the downsides of that transformation, with bad actors manipulating the new freedoms for antisocial purposes, but techniques for managing and mitigating those harms will improve, creating potential for freer, but well-governed, information environments in the 2020s.”

    Jonathan Grudin, principal design researcher at Microsoft, said, “We were in this position before, when printing presses broke the existing system of information management. A new system emerged and I believe we have the motivation and capability to do it again. It will again involve information channeling more than misinformation suppression; contradictory claims have always existed in print, but have been manageable and often healthy.”

    Judith Donath, fellow at Harvard University’s Berkman Klein Center for Internet & Society and founder of the Sociable Media Group at the MIT Media Lab, wrote, “‘Fake news’ is not new. The Weekly World News had a circulation of over a million for its mostly fictional news stories that were printed and sold in a format closely resembling a newspaper. Many readers recognized it as entertainment, but not all. More subtly, its presence on the newsstand reminded everyone that anything can be printed.”

    Joshua Hatch, president of the Online News Association, noted, “I’m slightly optimistic because there are more people who care about doing the right thing than there are people who are trying to ruin the system. Things will improve because people – individually and collectively – will make it so.”

    Many of these respondents said the leaders and engineers of the major information platform companies will play a significant role. Others said they expect broader systemic and social changes will also alter things.

    John Wilbanks, chief commons officer at Sage Bionetworks, replied, “I’m an optimist, so take this with a grain of salt, but I think as people born into the internet age move into positions of authority they’ll be better able to distill and discern fake news than those of us who remember an age of trusted gatekeepers. They’ll be part of the immune system. It’s not that the environment will get better, it’s that those younger will be better fitted to survive it.”

    Danny Rogers, founder and CEO of Terbium Labs, replied, “Things always improve. Not monotonically, and not without effort, but fundamentally, I still believe that the efforts to improve the information environment will ultimately outweigh efforts to devolve it.”

    Bryan Alexander, futurist and president of Bryan Alexander Consulting, replied, “Growing digital literacy and the use of automated systems will tip the balance towards a better information environment.”

    A number of these respondents said information platform corporations such as Google and Facebook will begin to efficiently police the environment through various technological enhancements. They expressed faith in the inventiveness of these organizations and suggested that the people at these companies will implement technology to embed moral and ethical thinking in the structure and business practices of their platforms, enabling the screening of content while still protecting rights such as free speech.

    Patrick Lambe, principal consultant at Straits Knowledge, commented, “All largescale human systems are adaptive. When faced with novel predatory phenomena, counter-forces emerge to balance or defeat them. We are at the beginning of a largescale negative impact from the undermining of a social sense of reliable fact. Counter-forces are already emerging. The presence of largescale ‘landlords’ controlling significant sections of the ecosystem (e.g., Google, Facebook) aids in this counter-response.”

    A professor in technology law at a West-Coast-based U.S. university said, “Intermediaries such as Facebook and Google will develop more-robust systems to reward legitimate producers and punish purveyors of fake news.”

    A longtime director for Google commented, “Companies like Google and Facebook are investing heavily in coming up with usable solutions. Like email spam, this problem can never entirely be eliminated, but it can be managed.”
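
    The spam analogy suggests a concrete mechanism. As a minimal sketch – not any platform’s actual system, and with every name and training example hypothetical – misinformation triage could borrow the naive Bayes text filter long used against email spam:

```python
# Minimal sketch of the email-spam analogy: a naive Bayes text filter
# scoring items as junk vs. legitimate from labeled examples.
# Illustrative only; assumes equal priors and whitespace tokenization.
from collections import Counter
import math

def train(labeled_docs):
    """labeled_docs: iterable of (text, is_junk) pairs."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_junk in labeled_docs:
        for word in text.lower().split():
            counts[is_junk][word] += 1
            totals[is_junk] += 1
    return counts, totals

def junk_log_odds(text, counts, totals):
    """Positive means more junk-like; add-one smoothing throughout."""
    vocab = len(set(counts[True]) | set(counts[False]))
    score = 0.0
    for word in text.lower().split():
        p_junk = (counts[True][word] + 1) / (totals[True] + vocab)
        p_ok = (counts[False][word] + 1) / (totals[False] + vocab)
        score += math.log(p_junk / p_ok)
    return score

counts, totals = train([
    ("miracle cure doctors hate this shocking secret", True),
    ("city council passes annual budget after debate", False),
])
print(junk_log_odds("shocking miracle budget", counts, totals))
```

    As with spam, a filter like this only manages the problem: it needs continual retraining as purveyors adapt their wording, which is the arms race several respondents describe.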

    Sandro Hawke, technical staff at the World Wide Web Consortium, predicted, “Things are going to get worse before they get better, but humans have the basic tools to solve this problem, so chances are good that we will. The biggest risk, as with many things, is that narrow self-interest stops people from effectively collaborating.”

    Anonymous respondents shared these remarks:

    “Accurate facts are essential, particularly within a democracy, so this will be a high, shared value worthy of investment and government support, as well as private-sector initiatives.”
    “We are only at the beginning of drastic technological and societal changes. We will learn and develop strategies to deal with problems like fake news.”
    “There is a long record of innovation taking place to solve problems. Yes, sometimes innovation leads to abuses, but further innovation tends to solve those problems.”
    “Consumers have risen up in the past to block the bullshit, fake ads, fake investment scams, etc., and they will again with regard to fake news.”
    “As we understand more about digital misinformation we will design better tools, policies and opportunities for collective action.”
    “Now that it is on the agenda, smart researchers and technologists will develop solutions.”
    “The increased awareness of the issue will lead to/force new solutions and regulation that will improve the situation in the long-term even if there are bound to be missteps such as flawed regulation and solutions along the way.”

    Subtheme: Crowdsourcing will work to highlight verified facts and block those who propagate lies and propaganda. Some also have hopes for distributed ledgers (blockchain)
    A number of these experts said solutions such as tagging, flagging or other labeling of questionable content will continue to expand and prove increasingly useful in tackling the propagation of misinformation.

    The future will attach credibility to the source of any information. The more a given source is attributed to ‘fake news,’ the lower it will sit in the credibility tree.
    ANONYMOUS ENGINEER

    J. Nathan Matias, a postdoctoral researcher at Princeton University and previously a visiting scholar at MIT’s Center for Civic Media, wrote, “Through ethnography and largescale social experiments, I have been encouraged to see volunteer communities with tens of millions of people work together to successfully manage the risks from inaccurate news.”

    A researcher of online harassment working for a major internet information platform commented, “If there are nonprofits keeping technology in line, such as an ACLU-esque initiative, to monitor misinformation and then partner with spaces like Facebook to deal with this kind of news spam, then yes, the information environment will improve. We also need to move away from clickbaity-like articles, and not algorithmically rely on popularity but on information.”

    An engineer based in North America replied, “The future will attach credibility to the source of any information. The more a given source is attributed to ‘fake news,’ the lower it will sit in the credibility tree.”
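
    One way to read this “credibility tree” is as a running per-source score that drops sharply with each fake-news attribution and recovers only slowly with verified work. A toy sketch under that assumption – the function, constants and update rule are illustrative, not a deployed system:

```python
# Toy model of a per-source credibility score in [0, 1]: each item
# judged fake cuts the score multiplicatively; each verified item
# nudges it back toward 1. Constants are arbitrary illustrations.
def update_credibility(score, judged_fake, penalty=0.5, recovery=0.02):
    if judged_fake:
        return score * penalty
    return min(1.0, score + recovery * (1.0 - score))

score = 0.9
for judged_fake in [False, True, True, False, False]:
    score = update_credibility(score, judged_fake)
    print(round(score, 3))  # falls fast on fakes, recovers slowly
```

    The asymmetry is the point: under a rule like this, a source attributed to fake news twice needs a long run of verified items to climb back up the tree.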

    Micah Altman, director of research for the Program on Information Science at MIT, commented, “Technological advances are creating forces pulling in two directions: It is increasingly easy to create real-looking fake information; and it is increasingly easy to crowdsource the collection and verification of information. In the longer term, I’m optimistic that the second force will dominate – as transaction cost-reduction appears to be relatively in favor of crowds versus concentrated institutions.”

    A past chairman of a major U.S. scientific think tank and former CEO replied, “[The information environment] should improve because there are many techniques that can be brought to bear both human-mediated – such as collective intelligence via user voting and rating – and technological responses that are either very early in their evolution or not deployed at all. See spam as an analog.”
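
    The voting-and-rating idea has at least one well-understood building block: rank items by the lower bound of a confidence interval on the share of “accurate” votes, so that a handful of votes cannot outrank a large, consistent sample. A sketch, assuming binary accurate/inaccurate votes (the Wilson score interval; the names are illustrative):

```python
# Sketch of collective-intelligence rating: score an item by the
# lower bound of the 95% Wilson confidence interval on its share of
# "accurate" votes, which discounts small samples automatically.
import math

def wilson_lower_bound(accurate_votes, total_votes, z=1.96):
    if total_votes == 0:
        return 0.0
    p = accurate_votes / total_votes
    center = p + z**2 / (2 * total_votes)
    spread = z * math.sqrt((p * (1 - p) + z**2 / (4 * total_votes)) / total_votes)
    return (center - spread) / (1 + z**2 / total_votes)

print(wilson_lower_bound(10, 10))    # ~0.72: unanimous but few votes
print(wilson_lower_bound(95, 100))   # ~0.89: outranks the small sample
```

    Note the standing caveat: this assumes honest, independent voters. Coordinated sockpuppet voting defeats it, which is why respondents here pair crowds with monitoring.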

    Some predicted that digital distributed ledger technologies, known as blockchain, may provide some answers. A longtime technology editor and columnist based in Europe commented, “The blockchain approach used for Bitcoin, etc., could be used to distribute content. DECENT is an early example.” And an anonymous respondent from Harvard University’s Berkman Klein Center for Internet & Society said, “They will be cryptographically verified, with concepts.”
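
    Neither respondent spells out a design, but the core mechanism behind most ledger-based content proposals is compact: each published item commits to a hash of its content and of the previous entry, so any retroactive edit is detectable. A minimal illustration – not Bitcoin’s or DECENT’s actual protocol:

```python
# Minimal append-only hash chain for published content. Each entry's
# hash commits to the previous entry, so tampering with history is
# detectable by re-verification. Illustrative, not a real ledger.
import hashlib
import json

def entry_hash(prev_hash, content):
    payload = json.dumps({"prev": prev_hash, "content": content},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(ledger, content):
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"content": content, "hash": entry_hash(prev, content)})

def verify(ledger):
    prev = "0" * 64
    for entry in ledger:
        if entry["hash"] != entry_hash(prev, entry["content"]):
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, "Article v1: council passes budget")
append(ledger, "Correction: vote postponed to Tuesday")
print(verify(ledger))                    # True
ledger[0]["content"] = "doctored text"   # rewrite history
print(verify(ledger))                    # False
```

    Note what this does and does not buy: it proves an item has not been silently altered since publication, not that the item was true in the first place – the gap the skeptics below point to.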

    But others were less confident that blockchain will work. A leading researcher studying the spread of misinformation observed, “I know systems like blockchain are a start, but in some ways analog systems (e.g., scanned voting ballots) can be more resilient to outside influence than digital solutions such as increased encryption. There are always potential compromises when our communication networks are based on human-coded technology and hardware; this [is] less the case with analog-first, digital-second systems.”

    A professor of media and communication based in Europe said, “Right now, reliable and trusted verification systems are not yet available; they may become technically available in the future but the arms race between corporations and hackers is never ending. Blockchain technology may be an option, but every technological system needs to be built on trust, and as long as there is no globally governed trust system that is open and transparent, there will be no reliable verification systems.”

    Theme 5: Tech can’t win the battle. The public must fund and support the production of objective, accurate information. It must also elevate information literacy to be a primary goal of education
    There was common agreement among many respondents – whether they said they expect to see improvements in the information environment in the next decade or not – that the problem of misinformation requires significant attention. A share of these respondents urged action in two areas: A bolstering of the public-serving press and an expansive, comprehensive, ongoing information literacy education effort for people of all ages.

    We can’t machine-learn our way out of this disaster, which is actually a perfect storm of poor civics knowledge and poor information literacy.
    MIKE DEVITO

    A sociologist doing research on technology and civic engagement at MIT said, “Though likely to get worse before it gets better, the 2016-2017 information ecosystem problems represent a watershed moment and call to action for citizens, policymakers, journalists, designers and philanthropists who must work together to address the issues at the heart of misinformation.”

    Michael Zimmer, associate professor and privacy and information ethics scholar at the University of Wisconsin-Milwaukee, commented, “This is a social problem that cannot be solved via technology.”

    Subtheme: Funding and support must be directed to the restoration of a well-fortified, ethical and trusted public press
    Many respondents noted that while the digital age has amplified countless information sources, it has hurt the reach and influence of traditional news organizations. These are the bedrock institutions much of the public has relied upon for objective, verified, reliable information – information undergirded by ethical standards and a general goal of serving the common good. These respondents said the information environment can’t be improved without more well-staffed, financially stable, independent news organizations. They believe such material can rise above misinformation and create a base of “common knowledge” the public can share and act on.

    This is a wake-up call to the news industry, policy makers and journalists to refine the system of news production.
    RICH LING

    Susan Hares, a pioneer with the National Science Foundation Network (NSFNET) and longtime internet engineering strategist, now a consultant, said, “Society simply needs to decide that the ‘press’ no longer provides unbiased information, and it must pay for unbiased and verified information.”

    Christopher Jencks, a professor emeritus at Harvard University, said, “Reducing ‘fake news’ requires a profession whose members share a commitment to getting it right. That, in turn, requires a source of money to pay such professional journalists. Advertising used to provide newspapers with money to pay such people. That money is drying up, and it seems unlikely to be replaced within the next decade.”

    Rich Ling, professor of media technology at the School of Communication and Information at Nanyang Technological University, said, “We have seen the consequences of fake news in the U.S. presidential election and Brexit. This is a wake-up call to the news industry, policy makers and journalists to refine the system of news production.”

    Maja Vujovic, senior copywriter for the Comtrade Group, predicted, “The information environment will be increasingly perceived as a public good, making its reliability a universal need. Technological advancements and civil-awareness efforts will yield varied ways to continuously purge misinformation from it, to keep it reasonably reliable.”

    An author and journalist based in North America said, “I believe this era could spawn a new one – a flight to quality in which time-starved citizens place high value on verified news sources.”

    A professor of law at a major U.S. state university commented, “Things won’t get better until we realize that accurate news and information are a public good that requires not-for-profit leadership and public subsidy.”

    Marc Rotenberg, president of the Electronic Privacy Information Center, wrote, “The problem with online news is structural: There are too few gatekeepers, and the internet business model does not sustain quality journalism. The reason is simply that advertising revenue has been untethered from news production.”

    With precarious funding and shrinking audiences, healthy journalism that serves the common good is losing its voice. Siva Vaidhyanathan, professor of media studies and director of the Center for Media and Citizenship at the University of Virginia, wrote, “There are no technological solutions that correct for the dominance of Facebook and Google in our lives. These incumbents are locked into monopoly power over our information ecosystem and as they drain advertising money from all other low-cost commercial media they impoverish the public sphere.”

    Subtheme: Elevate information literacy: It must become a primary goal at all levels of education
    Many of these experts said the flaws in human nature and still-undeveloped norms in the digital age are the key problems that make users susceptible to false, misleading and manipulative online narratives. One potential remedy these respondents suggested is a massive compulsory crusade to educate all in digital-age information literacy. Such an effort, some said, might prepare more people to be wise in what they view/read/believe and possibly even serve to upgrade the overall social norms of information sharing.

    Information is only as reliable as the people who are receiving it.
    JULIA KOLLER

    Karen Mossberger, professor and director of the School of Public Affairs at Arizona State University, wrote, “The spread of fake news is not merely a problem of bots, but part of a larger problem of whether or not people exercise critical thinking and information-literacy skills. Perhaps the surge of fake news in the recent past will serve as a wake-up call to address these aspects of online skills in the media and to address these as fundamental educational competencies in our education system. Online information more generally has an almost limitless diversity of sources, with varied credibility. Technology is driving this issue, but the fix isn’t a technical one alone.”

    Mike DeVito, graduate researcher at Northwestern University, wrote, “These are not technical problems; they are human problems that technology has simply helped scale, yet we keep attempting purely technological solutions. We can’t machine-learn our way out of this disaster, which is actually a perfect storm of poor civics knowledge and poor information literacy.”

    Miguel Alcaine, International Telecommunication Union area representative for Central America, commented, “The boundaries between online and offline will continue to blur. We understand online and offline are different modalities of real life. There is and will be a market (public and private providers) for trusted information. There is and will be space for misinformation. The most important action societies can take to protect people is education, information and training.”

    An early internet developer and security consultant commented, “Fake news is not a product of a flaw in the communications channel and cannot be fixed by a fix to the channel. It is due to a flaw in the human consumers of information and can be repaired only by education of those consumers.”

    An anonymous respondent from Harvard University’s Berkman Klein Center for Internet & Society noted, “False information – intentionally or inadvertently so – is neither new nor the result of new technologies. It may now be easier to spread to more people more quickly, but the responsibility for sifting facts from fiction has always sat with the person receiving that information and always will.”

    An internet pioneer and rights activist based in the Asia/Pacific region said, “We as a society are not investing enough in education worldwide. The environment will only improve if both sides of the communication channel are responsible. The reader and the producer of content, both have responsibilities.”

    Deirdre Williams, retired internet activist, replied, “Human beings are losing their capability to question and to refuse. Young people are growing into a world where those skills are not being taught.”

    Julia Koller, a learning solutions lead developer, replied, “Information is only as reliable as the people who are receiving it. If readers do not change or improve their ability to seek out and identify reliable information sources, the information environment will not improve.”

    Ella Taylor-Smith, senior research fellow at the School of Computing at Edinburgh Napier University, noted, “As more people become more educated, especially as digital literacy becomes a popular and respected skill, people will favour (and even produce) better quality information.”

    Constance Kampf, a researcher in computer science and mathematics, said, “The answer depends on socio-technical design – these trends of misinformation versus verifiable information were already present before the internet, and they are currently being amplified. The state and trends in education and place of critical thinking in curricula across the world will be the place to look to see whether or not the information environment will improve – cyberliteracy relies on basic information literacy, social literacy and technological literacy. For the environment to improve, we need substantial improvements in education systems across the world in relation to critical thinking, social literacy, information literacy, and cyberliteracy (see Laura Gurak’s book ‘Cyberliteracy’).”

    Su Sonia Herring, an editor and translator, commented, “Misinformation and fake news will exist as long as humans do; they have existed ever since language was invented. Relying on algorithms and automated measures will result in various unwanted consequences. Unless we equip people with media literacy and critical-thinking skills, the spread of misinformation will prevail.”

    Responses from additional key experts regarding the future of the information environment
    This section features responses by several of the top analysts who participated in this canvassing. Following this wide-ranging set of comments is a much more expansive set of quotations directly tied to the five primary themes identified in this report.

    Ignorance breeds frustration and ‘a growing fraction of the population has neither the skills nor the native intelligence to master growing complexity’
    Mike Roberts, pioneer leader at ICANN and Internet Hall of Fame member, replied, “There are complex forces working both to improve the quality of information on the net, and to corrupt it. I believe the outrage resulting from recent events will, on balance, lead to a net improvement, but viewed with hindsight, the improvement may be viewed as inadequate. The other side of the complexity coin is ignorance. The average man or woman in America today has less knowledge of the underpinnings of his or her daily life than they did 50 or a hundred years ago. There has been a tremendous insertion of complex systems into many aspects of how we live in the decades since World War II, fueled by a tremendous growth in knowledge in general. Even among highly intelligent people, there is a significant growth in personal specialization in order to trim the boundaries of expected expertise to manageable levels. Among educated people, we have learned mechanisms for coping with complexity. We use what we know of statistics and probability to compartment uncertainty. We adopt ‘most likely’ scenarios for events of which we do not have detailed knowledge, and so on. A growing fraction of the population has neither the skills nor the native intelligence to master growing complexity, and in a competitive social environment, obligations to help our fellow humans go unmet. Educated or not, no one wants to be a dummy – all the wrong connotations. So ignorance breeds frustration, which breeds acting out, which breeds antisocial and pathological behavior, such as the disinformation, which was the subject of the survey, and many other undesirable second order effects. Issues of trustable information are certainly important, especially since the technological intelligentsia command a number of tools to combat untrustable info. But the underlying pathology won’t be tamed through technology alone. We need to replace ignorance and frustration with better life opportunities that restore confidence – a tall order and a tough agenda. Is there an immediate nexus between widespread ignorance and corrupted information sources? Yes, of course. In fact, there is a virtuous circle where acquisition of trustable information reduces ignorance, which leads to better use of better information, etc.”

    The truth of news is murky and multifaceted
    Judith Donath, fellow at Harvard University’s Berkman Klein Center for Internet & Society and founder of the Sociable Media Group at the MIT Media Lab, wrote, “Yes, trusted methods will emerge to block false narratives and allow accurate information to prevail, and, yes, the quality and veracity of information online will deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas. Of course, the definition of ‘true’ is sometimes murky. Experimental scientists have many careful protocols in place to assure the veracity of their work, and the questions they ask have well-defined answers – and still there can be controversy about what is true, what work was free from outside influence. The truth of news stories is far murkier and multi-faceted. A story can be distorted, disproportional, meant to mislead – and still, strictly speaking, factually accurate. … But a pernicious harm of fake news is the doubt it sows about the reliability of all news. Donald Trump’s repeated ‘fake news’ smears of The New York Times, Washington Post, etc., are among his most destructive non-truths.”

    “Algorithms weaponize rhetoric,” influencing on a mass scale

    Susan Etlinger, industry analyst at Altimeter Research, said, “There are two main dynamics at play: One is the increasing sophistication and availability of machine learning algorithms and the other is human nature. We’ve known since the ancient Greeks and Romans that people are easily persuaded by rhetoric; that hasn’t changed much in two thousand years. Algorithms weaponize rhetoric, making it easier and faster to influence people on a mass scale. There are many people working on ways to protect the integrity and reliability of information, just as there are cybersecurity experts who are in a constant arms race with cybercriminals, but to put as much emphasis on ‘information’ (a public good) as ‘data’ (a personal asset) will require a pretty big cultural shift. I suspect this will play out differently in different parts of the world.”

    There’s no technical solution for the fact that ‘news’ is a social bargain

    Clay Shirky, vice provost for educational technology at New York University, replied, “‘News’ is not a stable category – it is a social bargain. There’s no technical solution for designing a system that prevents people from asserting that Obama is a Muslim but allows them to assert that Jesus loves you.”

    ‘Strong economic forces are incentivizing the creation and spread of fake news’

    Amy Webb, author and founder of the Future Today Institute, wrote, “In an era of social, democratized media, we’ve adopted a strange attitude. We’re simultaneously skeptics and true believers. If a news story reaffirms what we already believe, it’s credible – but if it rails against our beliefs, it’s fake. We apply that same logic to experts and sources quoted in stories. With our limbic systems continuously engaged, we’re more likely to pay attention to stories that make us want to fight, take flight or fill our social media accounts with links. As a result, there are strong economic forces incentivizing the creation and spread of fake news. In the digital realm, attention is currency. It’s good for democracy to stop the spread of misinformation, but it’s bad for business. Unless significant measures are taken in the present – and unless all the companies in our digital information ecosystem use strategic foresight to map out the future – I don’t see how fake news could possibly be reduced by 2027.”

    Propagandists exploit whatever communications channels are available

    Ian Peter, internet pioneer, historian and activist, observed, “It is not in the interests of either the media or the internet giants who propagate information, nor of governments, to create a climate in which information cannot be manipulated for political, social or economic gain. Propaganda and the desire to distort truth for political and other ends have always been with us and will adapt to any form of new media which allows open communication and information flows.”

    Expanding information outlets erode opportunities for a ‘common narrative’

    Kenneth R. Fleischmann, associate professor at the School of Information at the University of Texas, Austin, wrote, “Over time, the general trend is that a proliferation of information and communications technologies (ICTs) has led to a proliferation of opportunities for different viewpoints and perspectives, which has eroded the degree to which there is a common narrative – indeed, in some ways, this parallels a trend away from monarchy toward more democratic societies that welcome a diversity of perspectives – so I anticipate the range of perspectives to increase, rather than decrease, and for these perspectives to include not only opinions but also facts, which are inherently reductionist and can easily be manipulated to suit the perspective of the author, following the old aphorism about statistics Mark Twain attributed to Benjamin Disraeli [‘There are three kinds of lies: lies, damned lies and statistics.’], which originally referred to experts more generally.”

    ‘Broken as it might be, the internet is still capable of routing around damage’

    Paul Saffo, longtime Silicon-Valley-based technology forecaster, commented, “The information crisis happened in the shadows. Now that the issue is visible as a clear and urgent danger, activists and people who see a business opportunity will begin to focus on it. Broken as it might be, the internet is still capable of routing around damage.”

    It will be impossible to distinguish between fake and real video, audio, photos

    Marina Gorbis, executive director of the Institute for the Future, predicted, “It’s not going to be better or worse but very different. Already we are developing technologies that make it impossible to distinguish between fake and real video, fake and real photographs, etc. We will have to evolve new tools for authentication and verification. We will probably have to evolve both new social norms as well as regulatory mechanisms if we want to maintain online environment as a source of information that many people can rely on.”
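
    Gorbis does not name a specific tool, but one plausible shape for such authentication is publisher-side digital signatures: a newsroom signs each item, and anyone can check it against the newsroom’s public key. A hedged sketch using the widely available third-party `cryptography` package (illustrative of the approach, not any specific product):

```python
# Hedged sketch of content authentication by digital signature, one
# plausible form of "new tools for authentication and verification."
# Requires the third-party `cryptography` package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A newsroom generates a keypair once; the public key is distributed.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"City council passes budget, 7-2 vote."
signature = private_key.sign(article)

def is_authentic(content, sig, pub):
    """True only if `content` is exactly what the keyholder signed."""
    try:
        pub.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(is_authentic(article, signature, public_key))                   # True
print(is_authentic(article + b" (doctored)", signature, public_key))  # False
```

    Signatures authenticate provenance and integrity, not truth: they establish who published an item and that it was not altered afterward, leaving the social norms and regulatory mechanisms Gorbis mentions to do the rest.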

    A ‘Cambrian explosion’ of techniques will arise to monitor the web and non-web sources

    Stowe Boyd, futurist, publisher and editor-in-chief of Work Futures, said, “The rapid rise of AI will lead to a Cambrian explosion of techniques to monitor the web and non-web media sources and social networks and to rapidly identify and tag fake and misleading content.”

    Well, there’s good news and bad news about the information future …

    Jeff Jarvis, professor at the City University of New York’s Graduate School of Journalism, commented, “Reasons for hope: Much attention is being directed at manipulation and disinformation; the platforms may begin to recognize and favor quality; and we are still at the early stage of negotiating norms and mores around responsible civil conversation. Reasons for pessimism: Imploding trust in institutions; institutions that do not recognize the need to radically change to regain trust; and business models that favor volume over value.”

    A fear of the imposition of pervasive censorship

    Jim Warren, an internet pioneer and open-government/open-records/open-meetings advocate, said, “False and misleading information has always been part of all cultures (gossip, tabloids, etc.). Teaching judgment has always been the solution, and it always will be. I (still) trust the longstanding principle of free speech: The best cure for ‘offensive’ speech is MORE speech. The only major fear I have is of massive communications conglomerates imposing pervasive censorship.”

    People have to take responsibility for finding reliable sources

    Steven Miller, vice provost for research at Singapore Management University, wrote, “Even now, if one wants to find reliable sources, one has no problem doing that, so we do not lack reliable sources of news today. It is that there are all these other options, and people can choose to live in worlds where they ignore so-called reliable sources, or ignore a multiplicity of sources that can be compared, and focus on what they want to believe. That type of situation will continue. Five or 10 years from now, I expect there to continue to be many reliable sources of news, and a multiplicity of sources. Those who want to seek out reliable sources will have no problems doing so. Those who want to make sure they are getting a multiplicity of sources to see the range of inputs, and to sort through various types of inputs, will be able to do so, but I also expect that those who want to be in the game of influencing perceptions of reality and changing the perceptions of reality will also have ample means to do so. So the responsibility is with the person who is seeking the news and trying to get information on what is going on. We need more individuals who take responsibility for getting reliable sources.”


    About this canvassing of experts

    The expert predictions reported here about the impact of the internet over the next 10 years came in response to a question asked by Pew Research Center and Elon University’s Imagining the Internet Center in an online canvassing conducted between July 2 and Aug. 7, 2017. This is the eighth “Future of the Internet” study the two organizations have conducted together. For this project, we invited more than 8,000 experts and members of the interested public to share their opinions on the likely future of the internet. Overall, 1,116 people responded and answered this question:

    The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation.

    The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially-destabilizing ideas?

    Respondents were then asked to choose one of the following answers and follow up by answering a series of six questions allowing them to elaborate on their thinking:

    The information environment will improve – In the next 10 years, on balance, the information environment will be IMPROVED by changes that reduce the spread of lies and other misinformation online.

    The information environment will NOT improve – In the next 10 years, on balance, the information environment will NOT BE improved by changes designed to reduce the spread of lies and other misinformation online.

    The web-based instrument was first sent directly to a list of targeted experts identified and accumulated by Pew Research Center and Elon University during the previous seven “Future of the Internet” studies, as well as those identified across 12 years of studying the internet realm during its formative years. Among those invited were people who are active in the global internet policy community and internet research activities, such as the Internet Engineering Task Force (IETF), Internet Corporation for Assigned Names and Numbers (ICANN), Internet Society (ISOC), International Telecommunication Union (ITU), Association of Internet Researchers (AoIR), and Organization for Economic Cooperation and Development (OECD). We also invited a large number of professionals, innovators and policy people from technology businesses; government, including the National Science Foundation, Federal Communications Commission and the European Union; the media and media-watchdog organizations; and think tanks and interest networks (for instance, those that include professionals and academics in anthropology, sociology, psychology, law, political science and communications), as well as globally located people working with communications technologies in government positions; top universities’ engineering/computer science departments, business/entrepreneurship faculties, and graduate students and postgraduate researchers; plus many who are active in civil society organizations such as the Association for Progressive Communications (APC), the Electronic Privacy Information Center (EPIC), the Electronic Frontier Foundation (EFF) and Access Now; and those affiliated with newly emerging nonprofits and other research units examining ethics and the digital age. Invitees were encouraged to share the canvassing questionnaire link with others they believed would have an interest in participating, thus there was a “snowball” effect as the invitees were joined by those they invited to weigh in.

    Since the data are based on a nonrandom sample, the results are not projectable to any population other than the individuals expressing their points of view in this sample.

    The respondents’ remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their background and the locus of their expertise.

    About 74% of respondents identified themselves as being based in North America; the others hail from all corners of the world. When asked about their “primary area of internet interest,” 39% identified themselves as research scientists; 7% as entrepreneurs or business leaders; 10% as authors, editors or journalists; 10% as advocates or activist users; 11% as futurists or consultants; 3% as legislators, politicians or lawyers; and 4% as pioneers or originators. An additional 22% specified their primary area of interest as “other.”

    More than half the expert respondents elected to remain anonymous. Because people’s level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their internet expertise or background, and this was noted where relevant in this report.

    Here are some of the key respondents in this report (note, position titles and organization names were provided by respondents at the time of this canvassing and may not be current):
    Bill Adair, Knight Professor of Journalism and Public Policy at Duke University; Daniel Alpert, managing partner at Westwood Capital; Micah Altman, director of research for the Program on Information Science at MIT; Robert Atkinson, president of the Information Technology and Innovation Foundation; Patricia Aufderheide, professor of communications at American University; Mark Bench, former executive director of World Press Freedom Committee; Walter Bender, senior research scientist with MIT/Sugar Labs; danah boyd, founder of Data & Society; Stowe Boyd, futurist, publisher and editor-in-chief of Work Futures; Tim Bray, senior principal technologist at Amazon; Marcel Bullinga, trend watcher and keynote speaker; Eric Burger, research professor of computer science and director of the Georgetown Center for Secure Communication; Jamais Cascio, distinguished fellow at the Institute for the Future; Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp.; David Conrad, well-known CTO; Larry Diamond, senior fellow at the Hoover Institution and the Freeman Spogli Institute (FSI) at Stanford University; Judith Donath, Harvard University’s Berkman Klein Center for Internet & Society; Stephen Downes, researcher at the National Research Council of Canada; Johanna Drucker, professor of information studies at the University of California, Los Angeles; Andrew Dwyer, expert in cybersecurity and malware at the University of Oxford; Esther Dyson, entrepreneur, former journalist and founding chair at ICANN; Glenn Edens, CTO for Technology Reserve at PARC, a Xerox company; Paul N. Edwards, fellow in international security at Stanford University; Mohamed Elbashir, senior manager for internet regulatory policy at Packet Clearing House; Susan Etlinger, industry analyst at Altimeter Research; Bob Frankston, internet pioneer and software innovator; Oscar Gandy, professor emeritus of communication at the University of Pennsylvania; Mark Glaser, publisher and founder of MediaShift.org; Marina Gorbis, executive director at the Institute for the Future; Jonathan Grudin, principal design researcher at Microsoft; Seth Finkelstein, consulting programmer and EFF Pioneer Award winner; Susan Hares, a pioneer with the NSFNET and longtime internet engineering strategist; Jim Hendler, professor of computing sciences at Rensselaer Polytechnic Institute; Starr Roxanne Hiltz, author of “Network Nation” and distinguished professor of information systems; Helen Holder, distinguished technologist at Hewlett Packard (HP); Jason Hong, associate professor at the School of Computer Science at Carnegie Mellon University; Christian H. 
Huitema, past president of the Internet Architecture Board; Alan Inouye, director of public policy for the American Library Association; Larry Irving, CEO of The Irving Group; Brooks Jackson of FactCheck.org; Jeff Jarvis, a professor at the City University of New York’s Graduate School of Journalism; Christopher Jencks, a professor emeritus at Harvard University; Bart Knijnenburg, researcher on decision-making and recommender systems at Clemson University; James LaRue, director of the Office for Intellectual Freedom of the American Library Association; Jon Lebkowsky, web consultant, developer and activist; Mark Lemley, professor of law at Stanford University; Peter Levine, professor and associate dean for research at Tisch College of Civic Life; Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future; Sonia Livingstone, professor of social psychology at the London School of Economics; Alexios Mantzarlis, director of the International Fact-Checking Network; John Markoff, retired senior technology writer at The New York Times; Andrea Matwyshyn, a professor of law at Northeastern University; Giacomo Mazzone, head of institutional relations for the World Broadcasting Union; Jerry Michalski, founder at REX; Riel Miller, team leader in futures literacy for UNESCO; Andrew Nachison, founder at We Media; Gina Neff, professor at the Oxford Internet Institute; Alex ‘Sandy’ Pentland, member of the U.S. National Academy of Engineering and the World Economic Forum; Ian Peter, internet pioneer, historian and activist; Justin Reich, executive director at the MIT Teaching Systems Lab; Howard Rheingold, pioneer researcher of virtual communities and author of “Net Smart”; Mike Roberts, Internet Hall of Fame member and first president and CEO of ICANN; Michael Rogers, author and futurist at Practical Futurist; Tom Rosenstiel, director of the American Press Institute; Marc Rotenberg, executive director of EPIC; Paul Saffo, longtime Silicon-Valley-based technology forecaster; David Sarokin, author of “Missed Information: Better Information for Building a Wealthier, More Sustainable Future”; Henning Schulzrinne, Internet Hall of Fame member and professor at Columbia University; Jack Schofield, longtime technology editor and now columnist at The Guardian; Clay Shirky, vice provost for educational technology at New York University; Ben Shneiderman, professor of computer science at the University of Maryland; Ludwig Siegele, technology editor at The Economist; Evan Selinger, professor of philosophy at Rochester Institute of Technology; Scott Spangler, principal data scientist at IBM Watson Health; Brad Templeton, chair emeritus for the Electronic Frontier Foundation; Richard D. Titus, CEO for Andronik; Joseph Turow, professor of communication at the University of Pennsylvania; Stuart A. 
Umpleby, professor emeritus at George Washington University; Siva Vaidhyanathan, professor of media studies and director of the Center for Media and Citizenship at the University of Virginia; Tom Valovic, The Technoskeptic magazine; Hal Varian, chief economist for Google; Jim Warren, longtime technology entrepreneur and activist; Amy Webb, futurist and CEO at the Future Today Institute; David Weinberger, senior researcher at Harvard University’s Berkman Klein Center for Internet & Society; Kevin Werbach, professor of legal studies and business ethics at the Wharton School at the University of Pennsylvania; John Wilbanks, chief commons officer at Sage Bionetworks; and Irene Wu, adjunct professor of communications, culture and technology at George Washington University.

    A brief selection of institutions at which respondents work or have affiliations:

    Adroit Technologies, Altimeter Group, Amazon, American Press Institute, Asia-Pacific Network Information Centre (APNIC), AT&T, BrainPOP, Brown University, BuzzFeed, Carnegie Mellon University, Center for Advanced Communications Policy, Center for Civic Design, Center for Democracy/Development/Rule of Law (CDDRL), Center for Media Literacy, Cesidian Root, Cisco, City University of New York’s Graduate School of Journalism, Cloudflare, CNRS, Columbia University, comScore, Comtrade Group, Craigslist, Data & Society, Deloitte, DiploFoundation, Electronic Frontier Foundation, Electronic Privacy Information Center, Farpoint Group, Federal Communications Commission (FCC), Fundación REDES, Future Today Institute, George Washington University, Google, Hackerati, Harvard University’s Berkman Klein Center for Internet & Society, Harvard Business School, Hewlett Packard (HP), Hyperloop, IBM Research, IBM Watson Health, ICANN, Ignite Social Media, Institute for the Future, International Fact-Checking Network, Internet Engineering Task Force, Internet Society, International Telecommunication Union (ITU), Karlsruhe Institute of Technology, Kenya Private Sector Alliance, KMP Global, LearnLaunch, LMU Munich, Massachusetts Institute of Technology (MIT), Mathematica Policy Research, MCNC, MediaShift.org, Meme Media, Microsoft, Mimecast, Nanyang Technological University, National Academies of Sciences/Engineering/Medicine, National Research Council of Canada, National Science Foundation, Netapp, NetLab Network, Network Science Group of Indiana University, Neural Archives Foundation, New York Law School, New York University, OpenMedia, Oxford University, Packet Clearing House, Plugged Research, Princeton University, Privacy International, Qlik, Quinnovation, RAND Corporation, Rensselaer Polytechnic Institute, Rochester Institute of Technology, Rose-Hulman Institute of Technology, Sage Bionetworks, Snopes.com, Social Strategy Network, Softarmor Systems, Stanford University, Straits Knowledge, Syracuse University, Tablerock Network, Telecommunities Canada, Terbium Labs, Tetherless Access, UNESCO, U.S. Department of Defense, University of California (Berkeley, Davis, Irvine and Los Angeles campuses), University of Michigan, University of Milan, University of Pennsylvania, University of Toronto, Way to Wellville, We Media, Wikimedia Foundation, Worcester Polytechnic Institute, World Broadcasting Union, W3C, Xerox’s PARC, Yale Law.

    Complete sets of for-credit and anonymous responses can be found here:

    http://www.elon.edu/e-web/imagining/...t_credit.xhtml


    Theme 1: The information environment will not improve. The problem is human nature

    Misinformation and “fake news” have been around for as long as people have communicated. But today’s instant, low-budget, far-reaching communications capabilities have the potential to make the problem orders of magnitude more dangerous than in the past.

    Mankind has always lied, and always will; which is why the winners of wars get to write the history their way and others have no say, but with the internet, the losers have a say!
    WILLIAM L. SCHRADER

    As Frederic Filloux explains: “‘Misinformation’ – a broader concept that encompasses intentional deception, low-quality information and hyperpartisan news – is seen as a serious threat to democracies. … The Dark Web harbours vast and inexpensive resources to take advantage of the social loudspeaker. For a few hundred bucks, anyone can buy thousands of social media accounts that are old enough to be credible, or millions of email addresses. Also, by using Mechanical Turk or similar cheap crowdsourcing services widely available on the open web, anyone can hire legions of ‘writers’ who will help to propagate any message or ideology on a massive scale. That trade is likely to grow and flourish with the emergence of what experts call the ‘weaponized artificial intelligence propaganda,’ a black magic that leverages microtargeting where fake news stories (or hyperpartisan ones) will be tailored down to the individual level and distributed by a swarm of bots. What we see unfolding right before our eyes is nothing less than Moore’s Law applied to the distribution of misinformation: An exponential growth of available technology coupled with a rapid collapse of costs.”

    Roughly half the experts in this canvassing generally agreed with Filloux’s description of how technologies are emerging to enable misinformation distribution, and they worry about what may come next. Many expressed deep concerns about people’s primal traits, behaviors and cognitive responses and how they play out in new digital spaces. They said digital platforms are often amplifying divisions and contentiousness, driving users to mistrust those not in their “tribe.”

    As William L. Schrader, a former CEO with PSINet, wrote, “Mankind has always lied, and always will; which is why the winners of wars get to write the history their way and others have no say, but with the internet, the losers have a say! So which is better? Both sides, or just the winner? We have both sides today.”

    Respondents discussed the scale of the problem and how difficult it can be to assess and weed out bad information, saying that even sophisticated information consumers are likely to struggle in the coming information environment and credulous consumers may have little chance of working their way to true information. Nathaniel Borenstein, chief scientist at Mimecast, commented, “Internet technologies permit anyone to publish anything. Any attempt to improve the veracity of news must be done by some authority, and people don’t trust the same authorities, so they will ultimately get the news that their preferred authority wants them to have. There is nothing to stop them choosing an insane person as their authority.”

    More people = more problems. The internet’s continuous growth and accelerating innovation allow more people and artificial intelligence (AI) to create and instantly spread manipulative narratives
    Some experts argued that the scale of the problem – too much bad information too easily disseminated – is their major concern. The internet facilitates too many information actors with divergent motives to allow for consistent identification of reliable information and effective strategies to flag false information.

    Andrew Odlyzko, professor of math and former head of the University of Minnesota’s Supercomputing Institute, observed, “‘What is truth’ has almost always been a contentious issue. Technological developments make it possible for more groups to construct their ‘alternate realities,’ and the temptation to do it is likely to be irresistible.”

    Andrew Nachison, author, futurist and founder of WeMedia, noted, “Technology will not overcome malevolence. Systems built to censor communication, even malevolent communication, will be countered by people who circumvent them.”

    Technological developments make it possible for more groups to construct their ‘alternate realities,’ and the temptation to do it is likely to be irresistible.
    ANDREW ODLYZKO

    David Weinberger, writer and senior researcher at Harvard University’s Berkman Klein Center for Internet & Society, noted, “It is an urgent problem, so it will be addressed urgently, and imperfectly.”

    Jan Schaffer, executive director of J-Lab, said, “There are so many people seeking to disseminate fake news and produce fake videos in which officials appear to be talking that it will be impossible to shut them all down. Twitter and Facebook and other social media players could play a stronger role. Only a few national news organizations will be trusted sources – if they can manage to survive.”

    Brian Cute, longtime internet executive and ICANN participant, said, “I am not optimistic that humans will collectively develop the type of rigorous habits that can positively impact the fake news environment. Humans have to become more effective consumers of information for the environment to improve. That means they have to be active and effective ‘editors’ of the information they consume. And that means they have to be active and effective editors of the information they share on the internet, because poorly researched information feeds the fake news cycle.”

    Rajnesh Singh, Asia-Pacific director for a major internet policy and standards organization, observed, “The issue will be how to cope with the volume of information that is generated and the proportion of it that is inaccurate or fake.”

    Steve Axler, a user-experience researcher, replied, “Social media and the web are on too large a scale to control content.”

    A software engineer referred to the human quest for power and authority as the underlying problem, writing, “Automation, control and monopolization of information sources and distribution channels will expand, with a goal to monetize or obfuscate.”

    Allan Shearer, associate professor at the University of Texas, Austin, observed, “The problem is the combination of the proliferation of platforms to post news, an increasing sense of agency in each person that his/her views matter, and the blurring of facts and opinions.”

    A vice president for stakeholder engagement said, “With a deluge of data, people look for shortcuts to determine what they believe, making them susceptible to filter bubbles and manipulation.”

    Jens Ambsdorf, CEO at The Lighthouse Foundation, based in Germany, replied, “The variability of information will increase. The amount of ‘noise’ and retweeted stuff will increase and without skills and tools it will become more difficult for citizens to sort out reliable from unreliable sources.”

    A professor at Harvard Business School wrote, “The vast majority of new users and a majority of existing users are not sophisticated readers of news facts, slants or content, nor should we expect them to be. Meanwhile, the methods for manipulation are getting better.”

    Diana Ascher, information scholar at the University of California, Los Angeles, observed, “Fake news, misinformation, disinformation and propaganda are not new; what’s new is the algorithmic propagation of such information. In my research, I call this the new yellow journalism.”

    Axel Bender, a group leader for Defence Science and Technology (DST) Group of Australia, said, “The veracity of information is unlikely to improve as 1) there will be an increase in the number and heterogeneity of (mis)information sources; and 2) artificially intelligent misinformation detectors will not be smart enough to recognise semantically sophisticated misinformation.”

    Adrian Schofield, an applied research manager based in Africa, commented, “The passive majority remains blissfully unaware of the potential (and real) threats posed by malicious operators in the ICT [information and communications technology] space. As fast as the good guys develop barriers … the bad guys will devise ways to leapfrog the barriers. It’s cheap and it’s borderless.”

    Collette Sosnowy, a respondent who shared no additional personal details, wrote, “The sources of information are so numerous, and they spread so quickly, that I don’t see how they could effectively be curtailed.”

    I am concerned that as artificial intelligences advance, distinguishing between what is written by a human and what is generated by a bot will become more difficult.
    TIFFANY SHLAIN

    Monica Murero, a professor and researcher based in Europe, wrote, “The information environment will not improve easily, in part because of the technical nature of digitalized information and the tendency of anyone able to act as a ‘prosumer’ to rework and share information. For example, fake news (or unreliable information) is easy to produce thanks to the technical nature of digital information (duplicable, easy to modify, free of cost, durable over time, etc.), and programs [software] and tools (pre-designed formats for manipulating images and content) are available to anyone within easy reach (a few words on any search engine). In the next 10 years I foresee disparities among countries in terms of improvement and deterioration of the information environment (depending on the country and its regulation, i.e., China, Europe, North Korea, U.S., etc.).”

    Sebastian Benthall, junior research scientist, New York University Steinhardt, responded, “The information environment is getting more complex. This complexity provides more opportunities for production and consumption of misinformation.”

    Tiffany Shlain, filmmaker and founder of The Webby Awards, wrote, “I am concerned that as artificial intelligences advance, distinguishing between what is written by a human and what is generated by a bot will become more difficult.”

    Matt Moore, a business leader, observed, “The pressures driving the creation of ‘fake news’ – political partisanship, inter-state rivalry – will only increase, and the technologies needed to create and disseminate it will grow in power and fall in cost. New verification tools will emerge, but they will not be sufficient to counter these other forces.”

    Jon Lebkowsky, web consultant/developer, author and activist, commented, “Given the complexity of the evolving ecosystem, it will be hard to get a handle on it. The decentralization of education is another difficult aspect: universal centralized digital literacy education could potentially mitigate the problem, but we could be moving away from universal standard educational systems.”

    The executive director of a major global privacy advocacy organization said, “What’s essentially happening today is basic human behaviour and powerful systems at play. It is only out-of-touch advocates and politicians who believe we can somehow constrain these results.”

    Veronika Valdova, managing partner at Arete-Zoe, noted, “Rogue regimes like Russia will continue exploiting the information environment to gain as much power and influence as possible. Jurisdictional constraints will make intervention less practicable. Also, whilst the overall information environment in English-speaking countries might improve due to the employment of artificial intelligence and easier neutralization of bots, this may not necessarily be the case for small nations in Europe where the environment is compartmented by language.”

    Joel Reidenberg, chair and professor of law at Fordham University, wrote, “The complexity of the information ecosystem and the public’s preference for filter bubbles will make improvements very difficult to achieve at scale.”

    Garrett A. Turner, a vice president for global engineering, predicted, “The information environment will not improve because [promotion of misinformation] has proven to be very effective and it is also extremely time-consuming to validate or police. In the transmission of information online it is difficult to decipher factual news from entertainment.”

    An author and journalist based in North America wrote, “Fragmenting social groups and powerful economic interests have the motive and means to create their own narratives. Who is the status quo that can defeat this in a modern society that likes to define itself as disruptive, countercultural, rebel, radical – choose the term that fits your tribe.”

    Anonymous respondents also commented:

    “There is just too much information and the environment has become so fragmented.”
    “The sheer volume of information and communication is too much.”
    “Many users seem to be indifferent or confused about objectively accurate information, which is difficult to confirm in an environment of information overload.”
    Humans are by nature selfish, tribal, gullible convenience seekers who put the most trust in that which seems familiar
    A share of these respondents supported a view articulated by Peter Eckart, director of information technology at the Illinois Public Health Institute. He argued, “The problem isn’t with the sources of information, but with the hearers of it. If we don’t increase our collective ability to critically analyze the information before us, all of the expert systems in the world won’t help us.” People believe what they want to believe, these experts argued, and now have new ways to disseminate the things they believe to others.

    The information superhighway’s very speed and ease have made people sloppier thinkers, not more discerning.
    AUTHOR AND FORMER JOURNALISM PROFESSOR

    David Sarokin, writer, commented, “People spread the information they want to spread, reliable or not. There’s no technology that will minimize that tendency.”

    Helen Holder, distinguished technologist at Hewlett Packard (HP), said, “People have a strong tendency to believe things that align with their existing understanding or views. Unreliable information will have a substantial advantage wherever it reinforces biases, making it difficult to discredit or correct. Also, people are more inclined to believe information received from more than one source, and the internet makes it trivial to artificially simulate multiple sources and higher levels of popular support or belief.”

    Bill Jones, chairman of Global Village Ltd., predicted, “Trust can be so easily abused. Our collective ability to discern false from true is ultimately the key, but that is fraught with challenges. No one can do it for us.”

    A futurist/consultant based in North America said, “The toxicity of the modern information landscape is as much attributable to vulnerabilities in human neurobiology as it is to anything embedded in software systems. Many of us, including those with the most control over the information environment, badly want things to improve, but it’s unclear to me that purely technical methods can solve these problems.”

    Cliff Cook, planning information manager for the City of Cambridge, Massachusetts, noted, “Fake news and related problems thrive when they have a receptive audience. The underlying problem is not one of fake news – rumors were no doubt a problem in ancient Rome and the court of King Henry VIII – but the presence of a receptive audience. Until a means is found to heal the fundamental breakdown in trust among Americans, I do not see matters improving, no matter what the technical fix.”

    An anonymous respondent wrote, “Google and Facebook are focusing money and attention on the problem of false information. … We have not yet reached a societal tipping point where facts are valued, however.”

    Matt Armstrong, an independent research fellow working with King’s College and former executive director of the U.S. Advisory Commission on Public Diplomacy, replied, “The influence of bad information will not change until people change. At present, there is little indication that people will alter their consumption habits. When ‘I heard it on the internet’ is a mark of authority rather than derision as it was, we are in trouble. This is coupled with the disappointing reality that we are now in a real war of words where many consumers do not check whether the words are/were/will be supported by actions or facts. The words of now are all that matter to too many audiences.”

    An assistant professor of political science wrote, “Improving information environments does little to address demand for misinformation by users.”

    An anonymous research scientist observed, “False narratives are not new to the internet, but authority figures are now also beginning to create them.”

    A former journalism professor and author of a book on the future of news commented, “The information superhighway’s very speed and ease have made people sloppier thinkers, not more discerning.”

    A researcher based in Europe replied, “The problem with fake news is not a technological one, but one related to human nature, fear, ignorance and power. … In addition, as new tools are developed to fight fake news, those interested in spreading them will also become more savvy and sophisticated.”

    Walter Bender, a senior research scientist at MIT, wrote, “I don’t think the problem is technological. It is social, and it is not much different from the American Aurora of 1800 in Philadelphia [a fiercely partisan and largely discredited newspaper of the early American republic]. People want to believe what reinforces their current positions, and there will be ‘publishers’ willing to accommodate them.”

    Many respondents mentioned distrust in authority as a motivating factor behind the uptick in the spread of misinformation, and some said political polarization and the destruction of trust are feeding the emergence of more misinformation.

    Daniel Kreiss, associate professor of communication at University of North Carolina, Chapel Hill, commented, “Misinformation/fake news/ideological/identity media is a political problem. They are the outcome, not the cause, of political polarization.”

    A senior fellow at a center focusing on democracy and the rule of law wrote, “Many people do not care about the veracity of the news they consume and circulate to others, and these people will continue spreading false information; those who do so from within established democracies can be punished/penalized, but many will remain in non-democracies where access to reliable information will deteriorate. My prediction is that in parts of the world things will improve, in others they will deteriorate. On average things will not improve.”

    Anonymous respondents also wrote:

    “To really solve this issue we need to look deeper at what truth means and who cares about it. It will take more than a decade to sort that out and implement solutions.”
    “Collective-action problems require a collective-action response, and I don’t think we’ll manage that in the international environment.”
    “The information environment reflects society at its best or worst; changes in human behavior, not technology, will impact on the information environment.”
    “At best, the definition of ‘lie’ will simply change and official disinformation will be called information anyway.”
    “I have yet to see any evidence that the most-active political media consumers want more facts and less opinion.”
    “There has never been a wholly truthful human environment, and there are too many vested interests in fantasy, fiction and untruths.”
    “I do not think technology can keep up with people’s creativity or appetite for information they find congenial to their pre-existing beliefs.”
    “As long as people want to believe a lie, the lie will spread.”
    “From propaganda to humour, the natural drive to share information will overcome any obstacles that hinder it.”
    “It will be a constant game of whack-a-mole, and polarization has now come to facts. It’s almost like facts are a philosophy class exercise now – what is truth?”
    In existing economic, political and social systems, the powerful corporate and government leaders most able to improve the information environment profit most when it is in turmoil
    A number of these experts predicted that little will change as long as social media platforms favor content that generates lots of clicks – and therefore ad dollars – whether the information is true or not. A typical version of this view came from Jonathan Brewer, consulting engineer for Telco2. He commented, “The incentives for social media providers are at odds with stemming the spread of misinformation. Outrageous claims and hyperbole will always generate more advertising revenue than measured analysis of an issue.”

    Gina Neff, professor at the Oxford Internet Institute, said, “The economic stakes are simply too high to rein in an information ecosystem that allows false information to spread. Without the political commitment of major social media platforms to address the problem, the technical challenges to solving this problem will never be met.”

    The basic incentive structure that promotes untrustworthy information flow won’t change, and the bad guys will improve their approaches faster than the good guys.
    PROFESSOR OF LEGAL ISSUES AND ETHICS

    Ari Ezra Waldman, associate professor of law at the New York Law School, wrote, “The spread of misinformation will only improve if platforms take responsibility for their role in the process. So far, although intermediaries like Facebook have nodded toward doing something about ‘fake news’ and cyberharassment and other forms of misleading or harmful speech, they simultaneously continue to maintain that they are merely neutral conduits and, therefore, uneasy about maintaining any sort of control over information flow. The ‘neutral conduit’ canard is a socio-legal strategy that is little more than a fancy way of absolving themselves of responsibility for their essential role in the spread of misinformation and the decay of discourse.”

    Joseph Turow, professor of communication at the University of Pennsylvania, commented, “The issues of ‘fake’ and ‘weaponized’ news are too complex to be dealt with through automated, quantitative or algorithmic means. These activities have always existed under one label or another, and their rapid distribution by activist groups, companies and governments as a result of new technologies will continue. One reason is that the high ambiguity of these terms makes legislating against them difficult without infringing on speech and the press. Another reason is that the people sending out such materials will be at least as creative as those trying to stop them.”

    A professor of legal issues and ethics at one of the pre-eminent graduate schools of business in the United States said, “The basic incentive structure that promotes untrustworthy information flow won’t change, and the bad guys will improve their approaches faster than the good guys.”

    Dave Burstein, editor of FastNet.news, said, “Speaking of reports on policy and technology, the important, thoroughly misleading information usually comes from the government and especially lobbyists and their shills. All governments lie, I.F. Stone taught us, and I can confirm that’s been true of both Obama’s people and the Republicans I have reported on this century. Corporate advocates with massive budgets – Verizon and AT&T in the hundreds of billions – bamboozle reporters and governments into false claims. The totally outnumbered public-interest advocates sometimes go over the line as well.”

    Johanna Drucker, professor of information studies at the University of California, Los Angeles, commented, “The constructedness (sic) of discourse removes news from the frameworks in which verification can occur. Responsible journalism will continue on the basis of ethical accountability, but nothing will prevent other modes of discourse from proliferating. No controls can effectively legislate for accuracy or verity. It is a structural impossibility to suture language and the lived.”

    Mercy Mutemi, legislative advisor for the Kenya Private Sector Alliance, commented, “Fake news spreads faster than genuine news. It is more attractive and ‘hot.’ We do not see corresponding efforts from genuine news peddlers to give factual information that is timely and interesting. On the contrary, reporters have become lazy, lifting articles off social media and presenting only obvious facts. Fake news peddlers have invested resources (domains and bots) to propagate their agenda. There isn’t a corresponding effort by genuine news reporters. People will get so used to being ‘duped’ that they will treat everything they read with skepticism, even real news. It will no longer be financially viable to invest in real news as the readership may go down. In such an environment, it is likely fake news will continue to thrive.”

    A professor of media and communication based in Europe said, “The online information environment will not improve if its architectural design, operation and control are left to five big companies alone. If they do not open up their algorithms, data governance and business models to allow for democratic and civic participation (in other words, if there is only an economic driver ruling the information environment), the platform ecosystem will not improve its conditions to facilitate an open and democratic online world.”

    A leading researcher studying the spread of misinformation observed, “The payoffs for actors who are able to set the agenda in the emerging information environment are rising quickly. Our collective understanding of and ability to monitor these threats and establish ground rules across disparate communities, geographies and end devices will be challenged.”

    A research scientist at Oxford University commented, “Misinformation and disinformation and motivated reasoning are integral to platform capitalism’s business model.”

    Rick Hasen, professor of law and political science at the University of California, Irvine, said, “By 2027 there will be fewer mediating institutions such as acceptable media to help readers/viewers ferret out truth. And there will be more deliberate disinformation from people in and out of the U.S.”

    Raymond Hogler, professor of management at Colorado State University, replied, “Powerful state actors … will continue to disseminate false, misleading and ideologically driven narratives posing as ‘news.’”

    A member of the Internet Architecture Board said, “The online advertising ecosystem is very resistant to change, and it powers the fake news ‘industry.’ Parties that could do something about it (e.g., makers of browsers) don’t have a strong incentive to do so.”

    A professor of law at a state university replied, “Powerful incentives will continue for irresponsible politicians and others in the political industry (paid or not) to spread false information and for publications to allow it to circulate: attention, clicks, ad revenue, political power. Meanwhile the First Amendment will protect [sharing of all information] powerfully inside the United States as the overall moral and ethical character of the country continues to be debased.”

    An author/editor/journalist wrote, “Confirmation bias, plus corporate manipulation, will not allow an improvement in the information environment.”

    An internet pioneer and principal architect in computing science replied, “Clicks will remain paramount, and whether those clicks are on pages containing disinformation or not will be irrelevant.”

    Edward Kozel, an entrepreneur and investor, predicted, “Although trusted sources (e.g., The New York Times) will remain or new ones will emerge, the urge for mass audience and advertising revenue will encourage widespread use of untrusted information.”

    David Schultz, professor of political science at Hamline University, said, “The social media and political economic forces that are driving the fragmentation of truth will not significantly change in the next 10 years, meaning the forces that drive misinformation will continue.”

    Paul Gardner-Stephen, senior lecturer at the College of Science & Engineering at Flinders University, noted, “Increasing technical capability and automation, combined with the demonstrated dividends of targeted fake news, make an arms race inevitable. Governments and political parties are the major players. This is Propaganda 2.0.”

    Peter Levine, associate dean and professor at the Tisch College of Civic Life at Tufts University, observed, “I don’t think there is a big enough market for the kinds of institutions, such as high-quality newspapers, that can counter fake news, plus fake news pays.”

    A postdoctoral scholar at a major university’s center for science, technology and society predicted, “Some advances will be made in automatically detecting and filtering ‘fake news’ and other misinformation online. However, audience attention and therefore the financial incentives are not aligned to make these benefits widespread. Even if some online services implement robust filtering and detection, others will happily fill the void they leave, pandering to a growing audience willing to go to ‘alternative’ sites to hear what they want to hear.”

    David Brake, a researcher and journalist, pointed out, “The production and distribution of inaccurate information has lower cost and higher incentives than its correction does.”

    Mark Lemley, a professor of law at Stanford University, wrote, “Technology cannot easily distinguish truth from falsehood, and private technology companies don’t necessarily have the incentive to try.”

    Darel Preble, president and executive director at the Space Solar Power Institute, commented, “Even the technical media … is substituting ad hominem attacks (or volume) and repetition for technical accuracy on complex problems. Few people are familiar with these problems or willing to risk their paycheck to see them fixed, so they will continue growing for now.”

    Amali De Silva-Mitchell, a futurist, replied, “There is political and commercial value in misinformation. Absolutely ethical societies have never existed. Disclosures are critical and it will be important to state the source of news as being human or machine, with the legal obligation remaining with the human controller of the data.”

    Some said the information environment is impossible to fully tame due to the human drive to continually innovate, competing to upgrade, monetize and find new ways to assert power.

    Alan D. Mutter, media consultant and faculty at the graduate school of journalism at the University of California, Berkeley, replied, “The internet is, by design, an open and dynamically evolving platform. It’s the Wild West, and no one is in charge.”

    Anonymous respondents commented:

    “‘Fake news’ is just the latest incarnation of propaganda in late capitalism.”
    “The profit motive will be put in front of value. The reliance of corporations on algorithms that allow them to do better targeting leads to greater fragmentation and greater possibility for misinformation.”
    “People have to use platforms for internet communication. The information environment is managed by the owners of these platforms who may not be so interested in ethical issues.”
    “We cannot undo the technology and economics of the current information environment, nor can we force those who are profiting from misinformation to forego their monetary gains.”
    Human tendencies and infoglut drive people apart and make it harder for them to agree on ‘common knowledge.’ That makes healthy debate difficult and destabilizes trust. The fading of news media contributes to the problem
    Many of these experts said one of the most serious problems caused by digital misinformation and the disruption of public support for traditional news media models is the shrinkage of the kind of commonly embraced facts that are the foundation of civil debate – a consensus understanding of the world. An anonymous respondent predicted, “The ongoing fragmentation of communities and the lack of common voice will lead to lower levels of trust.”

    A major issue here is that what one side believes is true, is not the same as what the other side believes. … We are facing an almost existential question here of ‘what is truth?’
    HISTORIAN AND FORMER LEGISLATIVE STAFFER

    A professor of education policy commented, “Since there is no center around which to organize truth claims (fragmented political parties, social groups, identity groups, institutional affiliations, fragmentation of work environments, increasing economic precarity, etc.) … there are likely to be more, not fewer, resources directed at destabilizing truth claims in the next 10 years.”

    An historian and former legislative staff person based in North America observed, “A major issue here is that what one side believes is true, is not the same as what the other side believes. Example: What Yankees and Confederates believed about the Civil War has never been the same, and there are differing social and cultural norms in different ages, times, regions and religions that have different ‘takes’ on what is right and proper behavior. We are facing an almost existential question here of ‘what is truth?’”

    Daniel Wendel, a research associate at MIT, said, “Trust is inherently personal. While central authorities can verify the identity of a particular website or person, consumers are less likely to trust a ‘trusted’ centralized fact checker [than the sources that express the same belief system as they and their friends]. For example, Snopes.com has already been discounted by right-wing pundits as being too ‘liberal.’ Trust must come from networks rather than authorities, but the ideas behind that are nascent and the technologies do not yet exist.”

    Philip Rhoades, retired IT consultant and biomedical researcher with the Neural Archives Foundation, said, “The historical trend is for information to be less reliable and for people to care less.”

    A professor of rhetoric and communication noted, “People can easily stay in their own media universe and never have to encounter ideas that conflict with their own. Also, the meshing of video and images with text creates powerful effects that appeal to the more rudimentary parts of the brain. It will take a long time for people to adapt to the new media environment.”

    A professor of journalism at New York University observed, “The fragmentation of the sources of media – and increasing audience participation – meant that it was no longer just canonical sources that could get their voices amplified.”

    A number of respondents challenged the idea that any individuals, groups or technology systems could or should “rate” information as credible or not.

    A professor of political economy at a U.S. university wrote, “I don’t think there is a clear, categorical distinction between ‘false’ news and the other kind. Some falsehoods have been deliberately fostered by elites for purposes of political management – the scope has widened dramatically in recent years.”

    Greg Shatan, partner at Bortstein Legal Group based in New York, replied, “Unfortunately, the incentives for spreading false information, along with the incentives for destabilizing trust in internet-based information, will continue to drive the spread of ‘fake news.’ Perversely, heightened concerns about privacy and anonymity are counterproductive to efforts to increase trust and validation.”

    A project manager for the U.S. government responded, “It is going to get much worse before it gets better. There is no sign that people are willing to work at what we agree on; most would prefer to be divisive and focus on differences.”

    An anonymous research scientist said, “I do not buy the assumption that information, ‘accurate’ or not, is the basis of political or – in fact – any action. I actually think it never has been. Yes, this is the story we like to tell when justifying actions vis-a-vis everyone else. It helps us present ourselves as rational, educated and considerate human beings. But no, in practice we do and say and write and report whatever seems reasonable in the specific situation for the specific purposes at hand. And that is OK, as long as others have the opportunity to challenge and contest our claims.”

    Some respondents noted that trust has to be in place before people can establish any sort of shared knowledge or begin to debate and decide the facts on which decisions can be based.

    An anonymous internet activist/user based in Europe commented, “Who can determine what is or is not fake news?”

    A principal research scientist based in North America commented, “The trustworthiness of information is a subjective measure as seen by the consumer of that information.”

    An anonymous futurist/consultant said, “Technology and platform design is only one part of the problem. Building trust and spreading information-quality skills takes time and coordination.”

    A director with a digital learning research unit at a major university on the U.S. West Coast said, “As the technology evolves, we will find ways (technologically) and also culturally to become savvier about the way in which we manage and define ‘trustworthiness.’”

    A small segment of society will find, use and perhaps pay a premium for information from reliable, quality sources. Outside of this group ‘chaos will reign’ and a worsening digital divide will develop
    A deeper digital divide was predicted by some respondents who said that 10 years from now those who value accurate information and are willing to spend the time and/or money to get it will separate from those who do not. Alex ‘Sandy’ Pentland, member of the U.S. National Academy of Engineering and the World Economic Forum, predicted of the information environment, “Things will improve, but only for the minority willing to pay subscription prices.”

    An anonymous journalist observed, “One of today’s most glaring class divides is between those who are internet-savvy and so skilled at evaluating different sources and information critically that it’s almost instinctive/automatic, and those who have very limited skills in that department. This divide is usually glaringly obvious in anyone’s Facebook feed now that such a large portion of the population is on Facebook, and the lack of ability to evaluate sources online critically is most common in older persons with limited education and/or limited internet proficiency – and can sometimes also be observed in young people with the same attributes (limited education/internet proficiency).”

    Garland McCoy, president of the Technology Education Institute, predicted, “As most of us know, there is the public internet, which operates as a ‘best effort’ platform, and then there are private internets that command a premium because they offer much more reliable service. So it will be with the ‘news’ and information/content on the internet. Those who have the resources and want fact-checking and vetting will pay for news services, which exist today, that charge a subscription and provide, for the most part, vetted/authenticated ‘news.’ Those who do not have the resources or who don’t see the ‘market value’ will take their chances exploring the world of uncensored, unfiltered and uncontrolled human mental exertion.”

    A professor whose research is focused on this topic wrote, “I can envisage [several] scenarios – trusted networks (where false information is pointed out), or the wild unbounded morass. It may well be that one will have to pay to join such a trusted network because those who can provide trusted information will be paid to do so.”

    Meamya Christie, user-experience designer with Style Maven Linx, replied, “There will be a division in how information is consumed. It will be like a fork in the road. People will have a choice to go through one portal or another based on their own set of core values, beliefs and truths.”

    A strategist for an institute replied, “The trust in 2027 will be only for the elites who can pay, or for the most-educated people.”

    A fellow at a UK-based university said, “I don’t think a technological or top-down solution can ‘fix’ the information environment without addressing a range of root issues relating to democratic disenfranchisement, deteriorating education and anti-intellectualism.”

    A senior research fellow working for the positive evolution of the information environment said, “Only a small fraction of the population (aged, educated, affluent – i.e., ready to pay for news) will have good, balanced, fair, accurate, timely, contextualized information.”

    Theme 2: The information environment will not improve because technology will create new challenges that can’t or won’t be countered effectively and at scale
    BY JANNA ANDERSON AND LEE RAINIE

    Many respondents who expect no improvement in the information environment argue that certain actors in government and business, along with other individuals with propaganda agendas and special interests, are turning technology to their favor in spreading misinformation. There are too many of them, and they are clever enough that they will continue to infect the online information environment, according to these experts.

    A clear articulation of this view came from Howard Greenstein, adjunct professor of management studies at Columbia University. He argued, “This is an asymmetric problem. It is much easier for single actors and small groups to create things that are spread widely, and once out, are hard to ‘take back.’” Moreover, the process of distinguishing between legitimate information and questionable material is very difficult, those who support this line of reasoning said.

    An anonymous respondent wrote, “Whack-a-mole seems to be our future. There is an inability to prevent new ways of disrupting our information systems. New pathways will emerge as old ones are closed.”

    Those generally acting for themselves and not the public good have the advantage, and they are likely to stay ahead in the information wars
    Eric Burger, research professor of computer science and director of the Georgetown Center for Secure Communications in Washington, D.C., replied, “Distinguishing between fake news, humor, strange-but-true news or unpopular news is too hard for humans to figure out, let alone a computer.”

    While technology may stop bots from spreading fake news, I don’t think it will be that easy to stop people who want to believe the fake news and/or make up the fake news.
    ANONYMOUS PROGRAM OFFICER

    Wendell Wallach, a transdisciplinary scholar focused on the ethics and governance of emerging technologies at The Hastings Center, wrote, “While means will be developed to filter out existing forms of misinformation, the ability to undermine core values will continue to be relatively easy while steps to remediate destructive activities will be much harder and more costly. Furthermore, a gap will expand as technological possibilities speed ahead of their ethical-legal oversight. Those willing to exploit this gap for ideological purposes and personal gain will continue to do so.”

    Justin Reich, assistant professor of comparative media studies at MIT, noted, “Strategies to label fake news will require algorithmic or crowd-sourced approaches. Purveyors of fake news are quite savvy at reverse engineering and gaming algorithms, and equally adept at mobilizing crowds to apply ‘fake’ labels to their positions and ‘trusted’ labels to their opponents.”

    Sean Goggins, an associate professor and sociotechnical data scientist, wrote, “Our technical capacity to manipulate information will continue to grow. With investment tilted toward for-profit enterprise and the intelligence community and away from public-sector research like that sponsored by the National Science Foundation, it’s doubtful that technology for detecting misinformation will keep up with technologies designed to spread misinformation.”

    An associate professor of communication studies at a Washington-based university said, “The fake news problem is not one that can be fixed with engineering or technological intervention short of a total reimagination of communication network architecture.”

    Fredric Litto, professor emeritus at the University of São Paulo in Brazil, wrote, “The incredibly complex nature of contemporary information technology will inevitably make for a continuing battle to reduce (note: I dare not say eliminate) false and undesirable ‘news’ and other information permeating electronic media. Without a foolproof method of truly eliminating the possibility of anonymity – and I cannot see this really happening by 2027 – there will be no end to the malicious use of most, if not all, modes of communication.”

    Michel Grossetti, research director at CNRS (French National Center for Scientific Research), commented, “It is the old story of the bullet and the cuirass. Improvement on one side, improvement on the other.”

    Daniel Berleant, author of the book “The Human Race to the Future,” predicted, “Digital and psychological technologies for the spreading of misinformation will continue to improve, and there will always be actors motivated to use it. Ways to prevent it will develop as well but will be playing catch-up rather than taking the lead.”

    John Lazzaro, a retired electrical engineering and computing sciences professor at the University of California, Berkeley, wrote, “I don’t think society can reach a consensus on what constitutes misinformation, and so trying to automate the removal of misinformation won’t be possible.”

    Andreas Birkbak, assistant professor at Aalborg University in Copenhagen, said, “The information environment will not improve because there is no way to automate fact checking. Facts are context-dependent.”

    A North American program officer wrote, “While technology may stop bots from spreading fake news, I don’t think it will be that easy to stop people who want to believe the fake news and/or make up the fake news.”

    A researcher based in North America said, “News aggregators such as Facebook will get better at removing low-information content from their news feeds but the amount of mis/disinformation will continue to increase.”

    Joseph Konstan, distinguished professor of computer science and engineering at the University of Minnesota, observed, “Those trying to manipulate the public have great resources and ingenuity. While there are technologies that can help identify reliable information, I have little confidence that we are ready for widespread adoption of these technologies (and the censorship risks that relate to them).”

    A former software systems architect replied, “Bad actors will always find ways to work around technical measures. In addition, it is always going to be human actors involved in the establishment of trust relationships and those can be gamed. I do not envision media organizations being willing participants.”

    Can technology detect and flag trustworthy information? A North American research scientist said the idea of basing likely veracity on people’s previous information-sharing doesn’t always work, writing, “People don’t just share information because they think it’s true. They share to mark identity. Truth-seeking algorithms, etc. don’t address this crucial component.”

    A vice president for an online information company wrote, “It is really hard to automatically determine that some assertion is fake news or false. Using social media and ‘voting’ is overcome by botnets, for example.”

    J. Cychosz, a content manager and curator for a scientific research organization, commented, “False information has always been around and will remain. Technology will emerge that helps identify falsehoods, and culture will shift, but there will always be those who find a path around.”

    Philippa Smith, research manager and senior lecturer in new media at Auckland University of Technology, noted, “Efforts to keep pace with technology and somehow counteract the spread of misinformation or fake news may be more difficult than we imagine. I have concerns that the horse has bolted when it comes to trying to improve the information environment.”

    Frank Odasz, president of Lone Eagle Consulting, observed, “Having watched online scams of all kinds evolve to be increasingly insidious, I expect this trend will continue and our best cybersecurity will forever be catching up with, not eradicating [it]. The battle between good and evil is accelerated digitally.”

    Ed Terpening, an industry analyst with the Altimeter Group, replied, “Disinformation will accelerate as institutions we’ve thought of as unbiased widen polarization by hiding or interpreting facts to fulfill an agenda.”

    Basavaraj Patil, principal architect at AT&T, wrote, “The rapid pace of technological change and the impact of false information on a number of aspects of life are key drivers.”

    Bradford W. Hesse, chief of the health communication and informatics research branch of the U.S. National Cancer Institute, said, “Communication specialists have been dealing with the consequences of propaganda, misinformation and misperceived information from before and throughout the Enlightenment. What has changed is the speed with which new anomalies are detected and entered into the public discourse. The same accelerated capacity will help move the needle on social discourse about the problem, while experimenting with new solutions.”

    Liam Quin, an information specialist at the World Wide Web Consortium, said the information environment is unlikely to be improved because “human nature won’t change in such a short time, and people will find ways around technology.”

    Alan Inouye, director of public policy for the American Library Association, commented, “New technologies will continue to provide bountiful opportunities for mischief. We’ll be in the position of playing defense as new abuses or attacks arise.” However, he also added, “This will be a future that is, on balance, not worse than today’s situation.”

    A distinguished engineer for a major provider of IT solutions and hardware warned that any sort of filtering system will flag, filter or delete useful content along with the misinformation: “It’s not possible to censor the untrustworthy news without filtering some trustworthy news. That struggle means the situation is unlikely to improve.”

    Weaponized narratives and other false content will be magnified by social media, online filter bubbles and AI
    Some respondents noted that the people best served by the manipulation of public sentiment, arousing fear and anger and obfuscating reality, are encouraged by their success now and that gives them plenty of incentive to make things worse in the next decade. As a professor and author based in the United States put it, “Too many people have realized that lying helps their cause.”

    An anonymous respondent based in Asia/Southeast Asia replied, “We are being ‘gamed,’ simply put.”

    Just as it’s now straightforward to alter an image, it’s already becoming much easier to manipulate and alter documents, audio, and video, and social media users help these fires spread much faster than we can put them out.
    MARTIN SHELTON

    Alexis Rachel, user researcher and consultant, said, “The logical progression of things at this point (unless something radical occurs) is that there will be increasingly more ‘sources’ of information that are unverified and unvetted – a gift from the internet and the ubiquitous publishing platform it is. All it takes is something outrageous and plausible enough to go viral, and once out there, it becomes exceedingly difficult to extinguish – fact or fiction.”

    Martin Shelton, a security researcher with a major technology company, said, “Just as it’s now straightforward to alter an image, it’s already becoming much easier to manipulate and alter documents, audio, and video, and social media users help these fires spread much faster than we can put them out.”

    Matt Stempeck, a director of civic technology, noted, “The purveyors of disinformation will outpace fact-checking groups in both technology and compelling content unless social media platforms are able to stem the tide.”

    Alf Rehn, chair of management and organization studies at Åbo Akademi University, commented, “Better algorithms will sort out some of the chaff [and may improve the overall information environment] but at the same time the weaponization of fake news will develop. As strange as it seems, we may enter a time of less, but ‘better’ [more effective] fake news.”

    An anonymous respondent wrote, “Distrust of academics and scientists is so high that it’s hard to imagine how to construct a fact-checking body that would be trusted by the broader population.”

    The most-effective tech solutions to misinformation will endanger people’s dwindling privacy options, and they are likely to limit free speech and remove the ability for people to be anonymous online
    While some people believe more surveillance and requirements for identity authentication are go-to solutions for reining in the negative impacts of misinformation, a number of these experts said bad actors will evade these measures and platform providers, governments and others taking these actions will expand unwanted surveillance and curtail civil liberties.

    Fred Davis, a futurist based in North America, wrote, “Automated efforts to reduce fake news will be gamed, just like search is. That’s 20 years of gaming the system – search engine optimization and other things that corrupt the information discovery process have been in place for over 20 years, and the situation is still bad. Also, it may be difficult to implement technology because it could also be used for mass censorship. Mass censorship would have a very negative effect on free speech and society in general.”

    Adam Powell, project manager at the Internet of Things Emergency Response Initiative at the University of Southern California, said, “The democratization of the internet, and of information on the internet, means just that: Everyone has and will have access to receiving and creating information, just as at a watercooler. Not only won’t the internet suddenly become ‘responsible,’ it shouldn’t, because that is how totalitarian regimes flourish (see: Firewall, Great, of China).”

    An eLearning specialist observed, “Any system deeming itself to have the ability to ‘judge’ information as valid or invalid is inherently biased.” And a professor and researcher noted, “In an open society, there is no prior determination of what information is genuine or fake.”

    The owner of a consultancy replied, “We’re headed to a world where most people will use sources white-listed (explicitly or not) by third parties (e.g., Facebook, Apple, etc.).”

    A distinguished professor emeritus of political science at a U.S. university wrote, “Misinformation will continue to thrive because of the long (and valuable) tradition of freedom of expression. Censorship will be rejected.”

    A professor at a major U.S. university replied, “Surveillance technologies and financial incentives will generate greater surveillance.” A retired university professor predicted, “Increased censorship and mass surveillance will tend to create official ‘truths’ in various parts of the world. In the United States, corporate filtering of information will impose the views of the economic elite.”

    Among the respondents to this canvassing who recommended the removal of anonymity was Romella Janene El Kharzazi, a content producer and entrepreneur, who said, “One obvious solution is required authentication; fake news is spread anonymously and if that is taken away, then half of the battle is fought and won.” A research scientist based in Europe predicted, “The different actors will take appropriate measures – including efficient interfaces for reporting and automatic detection – and implement efficient decision mechanisms for the censorship of such content.”

    A senior researcher and distinguished fellow for a major futures consultancy observed, “Reliable fact checking is possible. Google in particular has both the computational resources and talent to successfully launch a good service. Facebook may also make progress, perhaps in a public consortium including Google. Twitter is problematic and would need major re-structuring including a strict, true names policy for accounts – which is controversial among some privacy sectors.”

    A retired consultant and strategist for U.S. government organizations replied, “Regardless of technological improvements, the change agents here are going to have to be, broadly speaking, U.S. Supreme Court rulings on constitutional interpretations of free speech, communication access and any number of other constitutional issues brought to the fore by many actors at both the state and national level, and these numerous judicial decisions are, in turn, affected by citizen opinion and behavior.”

    Anonymous respondents also commented:

    “The means and speed of dissemination have changed [the information environment]. It cannot be legislated without limiting free speech.”
    “It’s impossible to filter content without bias.”
    “The internet is designed to be decentralized; not with the purpose of promoting accuracy or social order.”
    “There is no way – short of overt censorship – to keep any given individual from expressing any given thought.”
    “Blocking (a.k.a. censoring) information is just too dangerous.”
    “I do not think it can be stopped without doing a lot of damage to freedom of speech.”
    “Forces of evil will get through the filters and continue to do damage while the majority will lose civil rights and many will be filtered or banned for no good reason.”
    “It’s a hard problem to solve fairly.”

    Theme 3: The information environment will improve because technology will help label, filter or ban misinformation and thus upgrade the public’s ability to judge the quality and veracity of content
    BY JANNA ANDERSON AND LEE RAINIE

    Many respondents who said they hope for or expect an improvement in the information environment 10 years from now mentioned ways in which new technological and verification solutions might be implemented. A number of these proposed solutions rest on the hope that technology can be created to evaluate content – making it “assessable.”

    Technology moves fast and humans adapt more slowly, but we have a proven capability to solve problems we create with technology.
    ROBERT BELL

    Andrea Forte, associate professor at Drexel University, said, “As mainstream social media take notice of information quality as an important feature of the online environment, there will be a move towards designing for what I call ‘assessability’ – interfaces that help people appropriate assessments of information quality.”

    Filippo Menczer, professor of informatics and computing at Indiana University, noted, “Technical solutions can be developed to incorporate journalistic ethics into social media algorithms, in a way similar to email spam filters.”
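    Menczer’s spam-filter analogy translates directly into code. The sketch below is purely illustrative – it assumes the scikit-learn library, and the four labeled headlines are hypothetical rather than drawn from any real dataset – but it uses the same naive Bayes machinery early email filters relied on, emitting a probability rather than a verdict:

        # Spam-filter-style scoring of headlines (illustrative sketch).
        # The training headlines and labels below are hypothetical.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        headlines = [
            "Scientists publish peer-reviewed study on vaccine safety",
            "City council approves budget after public hearing",
            "SHOCKING: miracle cure THEY don't want you to see",
            "Celebrity secretly replaced by clone, insiders say",
        ]
        labels = ["reliable", "reliable", "suspect", "suspect"]

        # Train a naive Bayes classifier on word features, as email
        # spam filters have done for two decades.
        model = make_pipeline(TfidfVectorizer(), MultinomialNB())
        model.fit(headlines, labels)

        # Score a new headline; the output is a probability, not a verdict.
        scores = model.predict_proba(["Miracle cure shocks doctors"])[0]
        for cls, p in zip(model.classes_, scores):
            print(f"{cls}: {p:.2f}")

    As with spam, the hard part is not the classifier but the labels – which is precisely where the journalistic ethics Menczer mentions would have to enter.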

    Scott Fahlman, professor emeritus of AI and language technologies at Carnegie Mellon University, commented, “For people who are seriously trying to figure out what to believe, there will be better online tools to see which things are widely regarded as true and which have been debunked.”

    Robert Bell, co-founder of the Intelligent Community Forum, commented, “Technology moves fast and humans adapt more slowly, but we have a proven capability to solve problems we create with technology.”

    Joanna Bryson, associate professor and reader at University of Bath and affiliate with the Center for Information Technology Policy at Princeton University, responded, “We are in the information age, and I believe good tools are likely to be found in the next few years.”

    David J. Krieger, director of the Institute for Communication & Leadership in Lucerne, Switzerland, commented, “The information environment will improve because a data-driven society needs reliable information, and it is possible to weed out the false information.”

    Andrew McStay, professor of digital life at Bangor University in Wales, wrote, “Undoubtedly, fake news and weaponised information will increase in sophistication, but so will attempts to combat it. For example, the scope to analyse at the level of metadata is a promising opportunity. While it is an arms race, I do not foresee a dystopian outcome.”

    Clifford Lynch, director of the Coalition for Networked Information, noted, “The severity of the problem has now been recognized fairly widely, and while I expect an ongoing ‘arms race’ in the coming decade, I think that we will make some progress on the problem.”

    A CEO and research director noted, “There are multiple incentives, economic and political, to solve the problem.”

    An anonymous respondent said, “The public will insist that online platforms take more responsibility for their actions and provide more tools to ensure information veracity.”

    Likely tech-based solutions include adjustments to algorithmic filters, browsers, apps and plug-ins and the implementation of ‘trust ratings’

    Matt Mathis, a research scientist at Google, responded, “The missing piece is the concept of ‘an original source.’ For science, this is an experiment; for history (and news), an eyewitness account by somebody who was (verifiably) present. Adding ‘how/why we know this’ to non-original sources will help the understanding that facts are verifiable.”
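    Mathis’s “how/why we know this” idea can be made concrete with a toy provenance record in which every derived claim carries a hash of the record it was drawn from, so a reader (or a crawler) can walk the chain back to an original source. The schema and field names below are invented for illustration and follow no real standard:

        # Toy provenance chain (illustrative sketch; invented schema).
        import hashlib, json
        from dataclasses import dataclass, asdict

        @dataclass
        class Provenance:
            claim: str
            source_kind: str      # e.g., "experiment", "eyewitness", "secondary"
            original_source: str  # who or what produced the primary evidence
            derived_from: str     # hash of the upstream record; "" if primary

        def record_hash(p: Provenance) -> str:
            # Canonical JSON so an identical record always hashes the same.
            blob = json.dumps(asdict(p), sort_keys=True).encode()
            return hashlib.sha256(blob).hexdigest()

        primary = Provenance(
            claim="Water boils at 100 C at sea level",
            source_kind="experiment",
            original_source="lab notebook, 2017-03-02",
            derived_from="",
        )
        secondary = Provenance(
            claim="The boiling point of water is 100 C",
            source_kind="secondary",
            original_source="textbook citing the experiment",
            derived_from=record_hash(primary),
        )

        # Anyone republishing the secondary claim can verify its chain:
        assert secondary.derived_from == record_hash(primary)
        print("chain verified:", record_hash(secondary))

    Cryptographically signing such records, rather than merely hashing them, would be the natural next step; the point here is only the chain-walking idea behind “how/why we know this.”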

    Federico Pistono, entrepreneur, angel investor and researcher with Hyperloop TT, commented, “Algorithms will be tailored to optimize more than clicks – as this will be required by advertisers and consumers alike – and deep learning approaches will improve.”

    Tatiana Tosi, netnographer at Plugged Research, commented, “The information environment will improve due to new artificial-intelligence bots that will verify the information. This should balance privacy and human rights in the automated environment.”

    A web producer/developer for a U.S.-funded scientific agency predicted, “The reliance on identity services for real-world, in-person interactions, which start with trust in web-based identification, will force reliability of information environments to improve.”

    An associate professor of business at a university in Australia commented, “Artificial intelligence technologies are advancing quickly enough to create an ‘Integrity Index’ for news sources even down to the level of individual commentators. Of course, other AI engines will attempt to game such a system. I can envisage an artificial blogger that achieves high levels of integrity before dropping the big lie just in time for an election. Big lies take a day or more to be disproven so it may just work, but the penalty for a big lie, or any lie, can be severe so everyone who gained from the big lie will be tainted.”
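    The severe penalty this respondent envisions is easy to model. In the toy update rule below – the weights are invented purely for illustration – a source’s score recovers only slowly through accurate reporting, while a detected “big lie” halves it at a stroke:

        # Toy "Integrity Index" with asymmetric updates (illustrative).
        def update_index(index: float, claim_true: bool, severe: bool = False) -> float:
            if claim_true:
                return min(1.0, index + 0.01 * (1.0 - index))  # slow recovery
            penalty = 0.5 if severe else 0.1                   # lies cost far more
            return max(0.0, index - penalty * index)

        index = 0.9
        for _ in range(50):            # 50 accurate stories in a row
            index = update_index(index, claim_true=True)
        print(f"before the big lie: {index:.3f}")

        index = update_index(index, claim_true=False, severe=True)
        print(f"after the big lie:  {index:.3f}")

    The asymmetry is the design point: under such a rule, the election-eve big lie the respondent describes would pay off only if the liar never needed the index again.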

    A distinguished engineer for one of the world’s largest networking technologies companies commented, “Certificate technologies already exist to validate a website’s sources and are in use for financial transactions. These will be used to verify sources for information in the future. Of course, there will always be people who look for information (true or false) that validates their biases.”
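
    The certificate machinery the engineer mentions is already exposed in ordinary programming languages. Here is a minimal sketch using Python’s standard library that retrieves a site’s TLS certificate after validating it against the operating system’s trusted roots; the hostname is just an example.

        import socket
        import ssl

        def fetch_certificate(hostname: str, port: int = 443) -> dict:
            """Return the TLS certificate a host presents, after validation
            against the operating system's trusted certificate authorities."""
            context = ssl.create_default_context()
            with socket.create_connection((hostname, port), timeout=5) as sock:
                with context.wrap_socket(sock, server_hostname=hostname) as tls:
                    return tls.getpeercert()

        cert = fetch_certificate("example.com")
        print(cert["subject"], cert["notAfter"])  # who the cert names, and its expiry

    Extending this chain-of-trust idea from “this is really my bank” to “this story really came from this newsroom” is the step the respondent anticipates.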

    Ayaovi Olevie Kouami, chief technology officer at the Free and Open Source Software Foundation for Africa, said, “The actual framework of the internet ecosystem could have a positive impact on the information environment by setting up all the requisite institutions, beginning with DNSSEC, IXPs, FoE, CIRT/CERT/CSIRT, etc.”

    Jean Paul Nkurunziza, a consultant based in Africa, commented, “The expected mass adoption of the IPv6 protocol will allow every device to have a public IP address and then allow the tracking of the origin of any online publication.”

    Mark Patenaude, vice president for innovation, cloud and self-service technology at ePRINTit Cloud Technology, replied, “New programming tech and knowledge will create a new language that will teach us to recognize malicious, false, misleading information by gathering all news and content sources and providing us with accurate and true information.”

    Hazel Henderson, futurist and CEO of Ethical Markets Media, said, “Global ethical standards and best practices are being developed in the many domains affected. New verification technologies, including blockchain and smart contracts, will help.”

    An anonymous respondent based in North America who has confidence things may be improved listed a series of technologies likely to be effective, writing: “Artificial intelligence, machine learning, exascale computing from everywhere, quantum computing, the Internet of Things, sensors, big data science and global collaborative NREN (National Research and Education Network) alliances.”

    An anonymous respondent based in Europe warned, “Technical tools and shields to filter and recognize manipulations will be more effective than attempts at education in critical thinking for end users.”

    Anonymous survey participants also responded:

    “Relatively simple steps and institutional arrangements can minimize the malign influence of misinformation.”
    “Machines are going to get increasingly better at validating accuracy of information and will report on it.”
    “Artificial intelligence technologies will advance a lot, making it easier to produce fake news that is difficult to discover and identify.”
    “Technology for mass verification should improve as will the identification of posters. Fakers will still exist but hopefully the half-life of their information will shrink.”
    “Things will improve due to [better tracking of the] provenance of data and security and privacy laws.”

    Regulatory remedies could include software liability law, required identities and the unbundling of social networks like Facebook
    A number of respondents said that evidence suggests people and internet content platform providers can’t solve this problem and argued there will be pressure for regulatory reforms that hold consistently bad actors responsible.

    I hope regulators will recognise that social media companies are publishers, not technology companies, and therefore must take responsibility for what they carry.
    ANONYMOUS RESPONDENT

    An associate professor at a major Canadian university said, “As someone who has watched the information-retrieval community develop over the past 15 years – dealing with spam, link farms, etc. – given a strong enough incentive, technologies will advance to address the challenge of misinformation. This may, however, be unevenly distributed, and may be more effective in domains such as e-business where there is a financial incentive to combat misinformation.”

    An anonymous respondent wrote, “I hope regulators will recognise that social media companies are publishers, not technology companies, and therefore must take responsibility for what they carry. Perhaps then social media companies will limit the publication of false advertising and misinformation.”

    A professor of media and communication based in Europe said, “It will be very difficult to assign penalties to culprits when platforms deny responsibility for any wrongdoing by their ‘users.’ Accountability and liability should definitely be assumed by platform operators who spread news and information, regardless of its source and even if unwittingly. Government has very limited power to [prevent] ‘fake news’ or ‘misinformation’ but it can definitely help articulate which actors in society are responsible.”

    A senior vice president for government relations predicted, “Governments should and will impose additional obligations on platforms to increase their responsibility for content on their services.”

    One possibility that a notable share of respondents mentioned is the requirement of an authenticated ID for every user of a platform. An anonymous respondent said, “Bad actors should be banned from access, but this means that a biography or identification of some sort would be necessary for all participants.”

    Those in support of requiring internet users to provide a real identity when participating online also mentioned the establishment of a reputation system. A partner in a services and development company based in Switzerland commented, “A bad reputation is the best penalty for a liar. It is the job of society to organize itself in a way to make sure that the bad reputation is easily visible. It should also extend to negligence and any other related behaviour allowing the spread of misinformation. Penal law alone is too blunt a tool and should not be regarded as a solution. Modern reputation tools (similar in approach to what financial audits and ratings have achieved in the 20th century) need to be built and their use must become an expected standard (just like financial audits are now a legal requirement).”
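
    As a toy illustration of the reputation tooling this respondent imagines, here is a hypothetical Python sketch in which a publisher’s score decays sharply for claims later shown false and recovers slowly for verified ones. The asymmetric constants are arbitrary assumptions, not anything proposed in the survey.

        class Reputation:
            """Toy reputation tracker: verified claims help, debunked ones hurt more."""

            def __init__(self, score: float = 0.5):
                self.score = score  # 0.0 (discredited) .. 1.0 (trusted)

            def record(self, claim_was_true: bool) -> None:
                # Asymmetric update: a lie costs five times what a truth earns,
                # so that "a bad reputation is the best penalty for a liar."
                delta = 0.02 if claim_was_true else -0.10
                self.score = min(1.0, max(0.0, self.score + delta))

        outlet = Reputation()
        for verdict in (True, True, False):
            outlet.record(verdict)
        print(round(outlet.score, 2))  # 0.44 - one lie erased both truths and more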

    An anonymous activist/user wrote, “Loss of anonymity might be a way of ensuring some discipline in the system, yet the institutions which would be deciding such punishments today have no credibility with most of the population.”

    An anonymous ICT for development consultant and retired professor commented, “Government best plays a regulating role and laws are punitive; so both regulation and laws should be stringently applied.”

    A post-doctoral fellow at a center for governance and innovation replied, “Jail time and civil damages should be applied where injuries are proven. Strictly regulate non-traditional media, especially social media.”

    An associate professor at Brown University wrote, “Essentially we are talking about the regulation of information, which is nearly impossible since information can be produced by anyone. Government can establish ethical guidelines, perhaps similar to the institutional review boards that regulate scientific research. Or it can be done outside government, like a Better Business Bureau.”

    An anonymous respondent based in Europe wrote, “Publicity, monetary fines and definitely jail terms, depending on the scope and consequences of spreading false information. In terms of the government role in prevention, it should not be different than any other area, including sound legal regulation, strengthened capacities [to] identify false information and stop [it] at early stages using legal mechanisms, education and awareness-raising of citizens, as well as higher ethical stand[ard]s (or zero tolerance) for public officials walking on the edge.”

    A postdoctoral scholar based in North America wrote, “If we are talking about companies such as Facebook, I do think there is room for discussion on the federal level of their responsibility as, basically, a private utility. Regulation shouldn’t be out of the question.”

    A legal researcher based in Asia/Southeast Asia said, “Stop them from using any internet. Government should create regulations for internet companies to prevent the distribution of false information.”

    A professor of humanities said, “Penalties are a nice idea, but who will decide which instances of ‘fake news’ require greater penalties than others? The bureaucracy to make these decisions would have to be huge.”

    Theme 4: The information environment will improve, because people will adjust and make things better
    BY JANNA ANDERSON AND LEE RAINIE

    Most respondents who expect an improvement in the information environment in the coming years put their faith in maturing – and more discerning – information consumers finding ways to cope personally and band together to effect change.

    Fake news and information manipulation are no longer ‘other people’s problems.’ This new awareness of the importance of media will shift resources, education and behaviors across society.
    PAMELA RUTLEDGE

    Alexios Mantzarlis, director of the International Fact-Checking Network based at the Poynter Institute for Media Studies, commented, “While the risk of misguided solutions is high, lots of clever people are trying to find ways to make the online information ecosystem healthier and more accurate. I am hopeful their aggregate effect will be positive.”

    Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp., observed, “Globally, we have more people with more tools with more access to more information – and yes, more conflicting intent – than ever before; but, while messy and confusing, this will ultimately improve the information environment. We will continue to widen access to all types of information – access for citizen journalists, professionals, technical experts, others – so while the information environment becomes more diverse, the broader arc of human knowledge bends towards revelation and clarity; only mass suppression will stop the paid and unpaid information armies from discovering and revealing the truth.”

    A North American research scientist replied, “I’m an optimist, and believe we are going through a period of growing pains with the spread of knowledge. In the next decade, we’ll create better ways to suss out truth.”

    Sharon Tettegah, professor at the University of Nevada, commented, “As we learn more about the types of information, we will be able to isolate misinformation and reliable sources.”

    Pamela Rutledge, director of the Media Psychology Research Center, noted, “Fake news and information manipulation are no longer ‘other people’s problems.’ This new awareness of the importance of media will shift resources, education and behaviors across society.”

    Dariusz Jemielniak, professor of organization studies in the department of management in networked and digital societies (MiNDS) at Kozminski University, said, “There are a number of efforts aimed at eliminating fake news, and we as a society are going to make them work.”

    Misinformation has always been with us and people have found ways to lessen its impact. The problems will become more manageable as people become more adept at sorting through material

    Many respondents described the online realm as simply yet another step in human and communications evolution and said history’s lessons here should be comforting. They argued that previous information revolutions have inspired people to invent new ways to handle information overload, the proliferation of misinformation and the opportunities for schemers to manipulate emerging systems. The more hopeful among these experts believe that dynamic will play out again in the digital age.

    Society always adjusts to new media and responds to weaknesses and flaws. Individuals will adjust, as will the technology.
    ANONYMOUS JOURNALISM AND COMMUNICATIONS DEAN

    A professor of media studies at a European university wrote, “The history of technology shows repeatedly that as a new technology is introduced – whatever the intentions of the designers and manufacturers – bad actors will find ways to exploit it in darker, more dangerous ways. In the short run, they can succeed, sometimes spectacularly; in the long run, however, we usually find ways to limit and control the damage.”

    A futurist/consultant replied, “We’re seeing the same kinds of misinformation that used to be in supermarket tabloids move online – it’s the format that has changed, not the human desire for salacious and dubious news.”

    Robin James, an associate professor of philosophy at a North American university, wrote, “The original question assumes that things have recently gotten worse. Scholars know that phenomena like patriarchy and white supremacy have created ‘epistemologies of ignorance’ that have been around for hundreds of years. ‘Fake news’ is just a new variation on this.”

    The dean of one of the top 10 journalism and communications schools in the United States replied, “Society always adjusts to new media and responds to weaknesses and flaws. Individuals will adjust, as will the technology.”

    Lokman Tsui, assistant professor at the School of Journalism and Communication at The Chinese University of Hong Kong, commented, “The information environment will improve. This is not a new question; we had concerns about fake news when radio broadcasting and mass media first appeared (for example, Orson Welles’ ‘War of the Worlds’ broadcast). People will develop literacy. Standards, norms and conventions to separate advertising from ‘organic’ content will develop. Bad actors who profit from fake news will be identified and targeted.”

    Adam Nelson, a developer at Amazon, replied, “We had yellow journalism a hundred years ago and we have it now. We’re at a low point of trust, but people will begin to see the value of truth once people become more comfortable with what social platforms do and how they work.”

    Axel Bruns, professor at the Digital Media Research Centre at the Queensland University of Technology, commented, “Moral panics over new media platforms are nothing new. The web, television, radio, newspapers and even the alphabet were seen as making it easier to spread misinformation. The answer is media literacy amongst the public, which always takes some years to catch up with the possibilities of new media technologies.”

    An anonymous respondent who predicts improvement replied, “Powerful social trends have a life cycle, and the pendulum typically swings back over time.”

    An anonymous respondent said, “It is the nature of the technical development that politics and regulatory forces are only able to react ex post, but they will.”

    A senior researcher at a U.S.-based nonprofit research center replied, “The next generation of news and information users will be more attuned to the environment of online news and will hopefully be more discerning as to its veracity. While there are questions as to whether the digital native generation can accurately separate real news from fake, they at least will have the technical and experiential knowledge that the older generations mostly do not.”

    Many respondents expressed faith that technologists would be at the forefront of helping people meet the challenges of misinformation. A managing partner and fellow in economics predicted, “In order to avoid censorship, the internet will remain relatively open, but technology will develop to more effectively warn and screen for fact-inaccurate information. Think of it as an automated ‘PolitiFact’ that will point out b******* passively to the reader.”
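
    A real “automated PolitiFact” is far more than a few lines of code, but the passive-flagging idea can be sketched: compare incoming text against a store of claims human fact-checkers have already ruled on and surface the verdict. The claim store and similarity threshold below are invented purely for illustration.

        from difflib import SequenceMatcher

        # Hypothetical store of claims that human fact-checkers have already rated.
        FACT_CHECKS = {
            "the earth is flat": "false",
            "vaccines cause autism": "false",
        }

        def flag_claim(text: str, threshold: float = 0.8):
            """Return the verdict of the closest known fact-check, or None."""
            def similarity(known: str) -> float:
                return SequenceMatcher(None, text.lower(), known).ratio()

            best = max(FACT_CHECKS, key=similarity)
            return FACT_CHECKS[best] if similarity(best) >= threshold else None

        print(flag_claim("The Earth is flat"))    # 'false'
        print(flag_claim("It rained yesterday"))  # None - no match on record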

    An author and journalist based in North America said, “Social media, technology and legacy media companies have an ethical and economic incentive to place a premium on trusted, verified news and information. This will lead to the creation of new digital tools to weed out hoaxes and untrusted sources.”

    Susan Price, lead experience strategist at Firecat Studio, observed, “There will always be a demand for trusted information, and human creativity will continue to be applied to create solutions to meet that demand.”

    Dane Smith, president of the public policy research and equity advocacy group Growth & Justice, noted, “I’m an optimist. Truth will find a way and prevail.”

    Louisa Heinrich, founder of Superhuman Ltd., commented, “The need to tell our stories to one another is a deeply rooted part of human nature, and we will continue to seek out better ways of doing so. This drive, combined with the ongoing upward trend of accessibility of technology, will lead more people to engage with the digital information environment, and new trust frameworks will emerge as old ones collapse.”

    Michael R. Nelson, public policy executive at Cloudflare, replied, “Some news sites will continue to differentiate themselves as sources of verified, unbiased information, and as these sites learn how to better distinguish themselves from ‘fake news’ sites, more and more advertisers will pay a premium to run their ads on such sites.”

    Steven Polunsky, writer with the Social Strategy Network, replied, “As with most disruptive events, people will adjust to accommodate needs and the changing environment.”

    Liz Ananat, an associate professor of public policy and economics at a major U.S. university wrote, “It will likely get worse first, but over 10 years, civil society will respond with resources and innovation in an intensive effort. Historically, when civil society has banded together and given its all to fight destructive forces, it has been successful.”

    Jane Elizabeth, senior manager at the American Press Institute, said, “The information environment will improve because the alternative is too costly. Misinformation and disinformation will contribute to the crumbling of a democratic system of government.”

    A number of these respondents said they expect information platform providers to police the environment in good faith, implementing the screening of content and/or other solutions while still protecting rights such as free speech.

    A principal network architect for a major edge cloud platform company replied, “Retooling of social networking platforms will likely, over time, reduce the value of stupid/wrong news.”

    A senior solutions architect for a global provider of software engineering and IT consulting services wrote, “The problem of fake news is largely a problem of untrusted source[s]. Online media platforms delegated the role of human judgment to algorithms and bots. I expect that these social media platforms will begin to exercise more discretion in what is posted when.”

    An anonymous respondent said, “Information platforms optimized for the internet are in their infancy. Like early e-commerce models, which merely sought to replicate existing, known systems, there will be massive shifts in understanding and therefore optimizing new delivery platforms in the future.”

    An anonymous respondent wrote, “Google and other outlets like Facebook are taking measures to become socially responsible content promoters. Combined with research trends in AI and other computing sectors, this may help improve the ‘fake news’ trends by providing better attribution channels.”

    Adam Gismondi, a researcher at the Institute for Democracy & Higher Education at Tufts University, predicted, “Ultimately, the information distributors – primarily social media platform companies, but others as well – will be forced, through their own economic self-interest and public pushback, to play a pivotal role in developing filters and signals that make the information environment easier for consumers to navigate.”

    Anonymous respondents shared these related remarks:

    “Everything we know about how human ingenuity and persistence has shaped the commercial and military (and philanthropic) drivers of the internet and the web suggests to me that we will continue to ensure this incredible resource remains useful and beneficial to our development.”
    “The tide of false information has to be stemmed. The alternative will be dystopia.”
    “People will gain in sophistication, especially after witnessing the problems caused by the spread of misinformation in this decade. Vetting will be more sophisticated, and readers/viewers will be more alert to the signs that a source is not reliable.”
    “I have hope in human goodness.”
    “Over the next 10 years, users will become much more savvy and less credulous on average.”
    “People will develop better practices for dealing with information online.”

    Crowdsourcing will work to highlight verified facts and block those who propagate lies and propaganda. Some also have hopes for distributed ledgers (blockchain)
    Some respondents expressed optimism about people’s capacity to improve the visibility of the most useful content, including through human-machine evaluation of content that identifies sources, grades their credibility and usefulness, and possibly flags, tags or bans propagators of misinformation. An anonymous respondent wrote, “AI, blockchain and crowdsourcing appear to have promise.”

    There will be new forms of crowdsourcing – a radical kind of curation – participation in which will improve critical-thinking skills and will mitigate the effects of misinformation.
    JACK PARK

    An assistant professor at a university in the U.S. Midwest wrote, “Crowd-based systems show promise in this area. Consider some Reddit forums where people are called out for providing false information … if journalists were called out/tagged/flagged by large numbers of readers rather than their bosses alone, we would be inching the pebble forward.”

    But whose “facts” are being verified in this setting? Ned Rossiter, professor of communication at Western Sydney University, argued, “Regardless of advances in verification systems, information environments are no longer enmeshed in the era of broadcast media and national publics or ‘imagined communities’ on national scales. The increasing social, cultural and political fragmentation will be a key factor in the ongoing contestation of legitimacy. Informational verification merely amplifies already existing conditions.”

    Richard Rothenberg, professor and associate dean at the School of Public Health at Georgia State University, said, “It is my guess that the dark end of the internet is relatively small but it has an outsized presence. … If nothing else, folks have demonstrated enormous resourcefulness, particularly in crowd endeavors, and I believe methods for assuring veracity will be developed.”

    An anonymous research scientist based in North America wrote, “A system that enables commentary on public assertions by certified, non-anonymous reviewers – such that the reviewers themselves would be subject to Yelp-like review – might work, with the certification provided by Verisign-like organizations. Wikipedia is maybe a somewhat imperfect prototype for the kind of system I’m thinking of.”
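
    One way to read this proposal is as reputation-weighted voting: each certified reviewer’s verdict counts in proportion to how the crowd has, in turn, rated that reviewer. A hypothetical Python sketch (the weights and the 0.5 “no information” default are invented):

        def weighted_verdict(votes: list[tuple[bool, float]]) -> float:
            """votes: (says_accurate, reviewer_weight) pairs from certified reviewers.
            Returns the weight-share of reviewers calling the assertion accurate."""
            mass = sum(weight for _, weight in votes)
            support = sum(weight for ok, weight in votes if ok)
            return support / mass if mass else 0.5  # 0.5 = no information

        # Three certified reviewers; the third has a poor Yelp-like rating.
        print(weighted_verdict([(True, 0.9), (True, 0.8), (False, 0.2)]))  # ~0.89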

    A Ph.D. candidate in informatics commented, “It is possible to create systems that are reliable and trusted, but probably not unhackable. I imagine there could be systems that leverage the crowd to check facts in real time. Computational systems would be possible, but it would be very difficult to create algorithms we could trust.”

    Jack Park, CEO at TopicQuests Foundation, predicted, “There will be new forms of crowdsourcing – a radical kind of curation – participation in which will improve critical-thinking skills and will mitigate the effects of misinformation.”

    Some respondents also pointed out that the rise of additional platforms where people can publish useful information could be a positive force. An anonymous respondent wrote, “The rise of more public platforms for media content (online opinion/editorials and platforms such as Medium) gives me confidence that as information is shared, knowledge will increase so that trust and reliability will grow. Collaboration is key here.”

    Blockchain systems were mentioned by a number of respondents – a senior expert in technology policy based in Europe commented, “… use blockchain to verify news” – but with mixed support, as many hedged their responses. A journalist who writes about science and technology said, “We can certainly create blockchain-like systems that are pretty reliable. Nothing is ever perfect, though, and trusted systems are often hard to use.”

    The president of a center for media literacy commented, “The technology capability [of potential verification systems] is immature and the costs are high. Blockchain technology offers great promise and hope.”

    A journalist and experience strategist at one of the world’s top five technology companies said, “The blockchain can be used to create an unhackable verification system. However, this does not stop the dissemination of ‘fake news,’ it simply creates a way to trace information.”

    A chief executive officer said, “Can P2P, blockchain, with attribution be unhackable? We need a general societal move to more transparency.”
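
    With or without a full blockchain, the core mechanism these respondents are gesturing at is a tamper-evident hash chain: each entry’s hash covers the previous entry, so history cannot be edited silently even though falsehoods can still be published. A minimal Python sketch follows; a real distributed ledger would add signatures, consensus and replication.

        import hashlib
        import json
        import time

        def make_block(record: dict, prev_hash: str) -> dict:
            """Append-only log entry whose hash covers the previous entry's hash."""
            block = {"record": record, "prev": prev_hash, "ts": time.time()}
            payload = json.dumps(block, sort_keys=True).encode()
            block["hash"] = hashlib.sha256(payload).hexdigest()
            return block

        def verify(chain: list) -> bool:
            """Recompute every hash; any edit to an earlier entry breaks the chain."""
            for i, block in enumerate(chain):
                payload = json.dumps(
                    {k: block[k] for k in ("record", "prev", "ts")}, sort_keys=True
                ).encode()
                if block["hash"] != hashlib.sha256(payload).hexdigest():
                    return False
                if i > 0 and block["prev"] != chain[i - 1]["hash"]:
                    return False
            return True

        genesis = make_block({"headline": "original story"}, prev_hash="0" * 64)
        chain = [genesis, make_block({"headline": "follow-up"}, genesis["hash"])]
        print(verify(chain))  # True

    Note how this matches the strategist’s caveat above: the chain traces who published what and when; it cannot, by itself, stop a lie from entering the record.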

    Theme 5: Tech can’t win the battle. The public must fund and support the production of objective, accurate information. It must also elevate information literacy to be a primary goal of education
    BY JANNA ANDERSON AND LEE RAINIE

    A large share of respondents said that technology alone can’t improve the information environment. Among these respondents, most pointed out two areas of concern: 1) the need for better funding of, and support for, journalism that serves the common good – the attention economy of the digital age does not support journalism of the general quality of the late-20th-century news media, which was fairly well respected for serving the public good with information that helped create an informed citizenry capable of sound decisions; and 2) the need for massive efforts to imbue the public with much better information literacy skills, which requires an education effort that reaches people of all ages, everywhere.

    Funding and support must be directed to the restoration of a well-fortified, ethical and trusted public press
    Many respondents said the information environment can’t be improved without more well-staffed, financially stable, independent news organizations capable of rising above the clamor of false and misleading content to deliver accurate, trusted content.

    The credibility of the journalism industry is at stake and the livelihood of many people is hanging in the balance of finding the tools, systems and techniques for validating the credibility of news.
    THOMAS FREY

    Susan Landau, a North American scientist/educator, wrote, “[The answer to] the underlying question [of] whether this dissemination will expand or not lies with many players, many in the private sector. How will the press handle ‘fake news’? How will the internet companies do so? And how will politicians, at least politicians post-Trump? The rise of ‘fake news’ is a serious threat to democracy. Post-election [U.S. 2016], some in the press have been pursuing news with the same care and incisiveness that we saw in the Watergate era, but others are not. We have a serious threat here, but it is not clear that interests are aligned in responding to it. And it is not cheap to do so: securing sites against hacking is very difficult when the threat comes from a powerful nation state. Is there a way to create trusted, unhackable verification systems? This depends on what the use case is; it is not a 0-1 answer, but an answer in scales of grey. … If society cannot adequately protect itself against the co-opting of public information by bad actors, then democracy itself is at serious risk. We have had this problem for quite some time. … What has changed is the scope and scale of these efforts, partially through domestic funding, partially through foreign actors and partially through the ability of digital technologies to change the spread of ‘false news.’ What is needed to protect society against the co-opting of public information is not only protecting the sources of the information, but also creating greater public capability to discern nonsense from sense. … I do not see a role for government in preventing the spread of ‘fake news’ – that comes too close to government control of speech – but I do see one for government in preventing tampering with news and research organizations, disrupting flows of information, etc.”

    Timothy Herbst, senior vice president at ICF International, noted, “We have no choice but to come up with mechanisms to improve our information environment. The implications of not doing so will further shake trust and credibility in our institutions needed for a growing and stable democracy. Artificial intelligence (AI) should help but technological solutions won’t be enough. We also need high-touch solutions and a reinforcement of norms that value accuracy to address this challenge.”

    Peter Jones, associate professor in strategic foresight and innovation at OCAD University in Toronto, predicted, “By 2027 decentralized internet services will displace mainstream news, as corporate media continues to erode trust and fails to find a working business model. Field-level investigative journalism will be crowdfunded by smaller consortiums, as current news organizations will have moved into entertainment, as CNN already has.”

    A senior international communications advisor commented, “I don’t believe that the next 10 years will yield a business model that will replace the one left behind – particularly with respect to print journalism, which in the past offered audiences more in-depth coverage than was possible with video or radio. Today, print journalists effectively work for nothing [and] are exposed to liability and danger that would have been unheard of 25 years ago. Moreover, the separation between the interests of those corporations interested in disseminating news and editorial has all but closed – aside from a few noteworthy exceptions. Moreover, consumers of media appear to be having a harder time distinguishing spurious from credible sources – this could be the end result of decades of neglect regarding the public school system, a growing reliance on unsourced and uncross-checked social media or any number of other factors. Bottom line is that very few corporations seem willing [to] engage in a business enterprise that has become increasingly unfeasible from a financial point of view.”

    A futurist/consultant based in Europe said, “News has always been biased, but the apparent value of news on the internet has been magnified and so the value of exploiting it has also increased. Where there is such perceived value, the efforts to generate misleading news, false news and fake news will increase.”

    An anonymous respondent wrote, “There are too many pressures from the need to generate ‘clicks’ and increase advertising revenue.”

    There were complaints about news organizations in survival mode that neglect their role of informing the public in favor of pandering to it to stay afloat. Other experts worried about the quality of reporting in an age when newsrooms have been decimated.

    An anonymous respondent wrote, “The talent pool the media system draws its personnel from will further deteriorate. Media personnel are influenced by defective information, and – even more – the quality of inferences and interpretations will decrease.”

    Some expressed concerns about finding unbiased accounts of the world in an online environment that becomes more cluttered all the time with content that offers nothing of the kind. An anonymous survey participant wrote, “I worry that sources of information will proliferate to the point at which it will be difficult to discern relatively unbiased sources from sources that are trying to communicate a point of view independent of supporting facts.”

    Thomas Frey, executive director and senior futurist at the DaVinci Institute, replied, “The credibility of the journalism industry is at stake and the livelihood of many people is hanging in the balance of finding the tools, systems and techniques for validating the credibility of news.”

    Eileen Rudden, co-founder of LearnLaunch, wrote, “The lack of trust in established institutions is at the root of the issue. Trust will need to be re-established.”

    An international internet policy expert said, “Demand for trusted actors will rise.”

    This is not an easy fix, by any means. Kelly Garrett, associate professor in the School of Communication at Ohio State University, said, “Although technology has altered how people communicate, it is not the primary source of distrust in authority, expertise, the media, etc. There are no simple technical solutions to the erosion of trust in those who produce and disseminate knowledge.”

    Rob Lerman, a retired information science professional, commented, “The combination of an established media which has encouraged opinion-based ‘news,’ the relative cheapness of websites, the proliferation of state-based misinformation and the seeming laziness of news consumers seems like an insurmountable obstacle to the improvement of the information environment.”

    Elevate information literacy: It must become a primary goal at all levels of education
    A number of participants in this canvassing urged an all-out effort to expand people’s knowledge about the ways in which misinformation is prepared and spread – an education in ways they can be wise and well-informed citizens in the digital age.

    The only way is to reduce the value of fake news by ensuring that people do not fall for it …
    JACQUELINE MORRIS

    Jeff MacKie-Mason, university librarian and professor of information science and economics at the University of California, Berkeley, commented, “One wonder of the internet is that it created a platform on which essentially anyone can publish anything, at essentially zero cost. That will become only more true. As a result, there will be a lot of information pollution. What we must do is better educate information consumers and provide better systems for reputation to help us distinguish the wheat from the chaff.”

    Sharon Roberts, a Ph.D. candidate, wrote, “Social changes will be the ones that will affect our perception of the information environment. Just as there is still 1-888 psychic call-line content on television and ‘Nigerian princes’ promising money [are still] sending me email, it is a social understanding that those are scams that has curtailed their [proliferation], not any actual TV or email technology ‘trusted methods.’”

    Sharon Haleva-Amir, lecturer in the School of Communication at Bar Ilan University in Israel, said, “I fear that the phenomenon of fake news will not improve due to two main reasons: 1) There are too many interested actors in this field (both business and politics wise) who gain from dispersion of false news and therefore are interested in keeping things the way they are; 2) Echo chambers and filter bubbles will continue to exist as these attitudes are typical to people’s behavior offline and online. In order to change that, people will have to be educated since early childhood about the importance of both [the] credibility of sources as well as variability of opinions that create the market of ideas.”

    Sandra Garcia-Rivadulla, a librarian based in Latin America, replied, “It will be more important to educate people to be able to curate the information they get more effectively.”

    Jacqueline Morris, a respondent who did not share additional personal details, replied, “I doubt there will be systems that will halt the proliferation of fake news. … The only way is to reduce the value of fake news by ensuring that people do not fall for it, basically, by educating the population.”

    Mike O’Connor, a self-employed entrepreneur, wrote, “The internet is just like real life; bad actors will find ways to fool people. Healthy skepticism will be part of the mix.”

    Tomslin Samme-Nlar, technical lead at Dimension Data in Australia, commented, “I expect the information environment to improve if user-awareness programs and campaigns are incorporated in whatever solutions are designed to combat fake news.”

    Geoff Scott, CEO of Hackerati, commented, “This isn’t a technical or information problem; it’s a social problem. Fake news works because it supports the point of view of the people it targets, which makes them feel good, right or vindicated in their beliefs. It takes critical thinking to overcome this, which requires effort and education.”

    Andreas Vlachos, lecturer in artificial intelligence at the University of Sheffield, commented, “I believe we will educate the public to identify misinformation better.”

    Iain MacLaren, director of the Centre for Excellence in Learning & Teaching at the National University of Ireland, Galway, commented, “The fact that more people are now fully aware of the existence of fake news, or propaganda, as it used to be known, means that there is increasing distrust of unverified/unrecognised providers of news and information. … I would like to hope, therefore, that a more sophisticated, critical awareness is growing across society, and I certainly hear much to that effect amongst the young people/students I work with. This also shows the importance of education.”

    Greg Wood, director of communications planning and operations for the Internet Society, replied, “The information environment will remain problematic – rumors, false information and outright lies will continue to propagate. However, I have hope that endpoints (people) can become more sophisticated consumers and thus apply improved filters. The evolution of email spam and how it has been dealt with provides a rough analogy.”
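
    Wood’s spam analogy is apt because email spam was tamed with well-understood techniques. Below is a toy naive Bayes filter of the kind that underlies spam detection, retargeted at junk headlines purely as an illustration; the four training examples are invented and far too few for real use.

        import math
        from collections import Counter

        TRAINING = [
            ("miracle cure doctors hate this trick", "junk"),
            ("you won a free prize click now", "junk"),
            ("city council approves new budget", "ok"),
            ("study finds modest effect in trial", "ok"),
        ]

        def train(examples):
            counts = {"junk": Counter(), "ok": Counter()}
            totals = Counter()
            for text, label in examples:
                words = text.split()
                counts[label].update(words)
                totals[label] += len(words)
            return counts, totals

        def score(text, counts, totals, label):
            # Log-likelihood with add-one smoothing (class priors omitted:
            # the toy training set is balanced).
            vocab = len(set(counts["junk"]) | set(counts["ok"]))
            return sum(
                math.log((counts[label][w] + 1) / (totals[label] + vocab))
                for w in text.lower().split()
            )

        counts, totals = train(TRAINING)
        headline = "free miracle prize"
        j = score(headline, counts, totals, "junk")
        o = score(headline, counts, totals, "ok")
        print("junk" if j > o else "ok")  # prints 'junk'

    As with spam, the filter never needs to be perfect; it only needs to raise the cost and lower the reach of junk enough that producing it stops paying.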

    Some people said, though, that information-literacy efforts, while possibly somewhat helpful in some cases, will not have an effect in many situations.

    Sam Punnett, research officer at TableRock Media, replied, “The information environment will improve but what will determine this will be a matter of individual choice. Media literacy, information literacy, is a matter of choosing to be educated.”

    David Manz, a cybersecurity scientist, replied, “Technology exists and will be created to attribute statements to their source in an easy-to-understand manner. However, this will still require the public to want to know the quality and source of their information.”

    Carol Chetkovich, professor emerita of public policy at Mills College, commented, “My negative assessment of the information environment has to do primarily with my sense that consumers of media (the public at large) are not sufficiently motivated and well-enough educated to think critically about what they read. There will always be some garbage put out by certain sources, so – even though it’s important that garbage be countered by good journalism – without an educated public, the task of countering fake news will be impossible.”

    Peter and Trudy Johnson-Lenz, founders of the online learning community Awakening Technology, combined on this response: “If we rely on technological solutions to verify trust and reliability of facts, then the number of states of the control mechanisms must be greater than or equal to the number of states being controlled. With bots and trolls and all sorts of disinformation, that’s virtually impossible. There are probably some tech solutions, but that won’t solve the entire problem. And walling off some sections of the information ecosystem as ‘trusted’ or ‘verified fact-filled’ defeats the purpose of open communication. … If you study microtargeting during the 2016 election, it’s clear that Facebook in particular was used to spread disinformation and propaganda and discourage voting in a very effective manner. This kind of activity is hard to discern and uncover in real time, it adds greatly to the polluted ecosystem and it is virtually impossible to control. Ultimately, people are going to have to make critical-thinking discernments themselves. Unfortunately, there are people who have no interest in doing that, and in fact discourage anyone else from doing that. The echo chamber is noisy and chaotic and full of lies. The only hope is some combination of technological advances to trust and verify, people being willing to take the time to listen, learn and think critically, and a rebuilding of trust. In our accelerating world, that’s a very big ask! For an eye-opening perspective on acceleration, see Peter Russell’s recent essay, ‘Blind Spot: The Unforeseen End of Accelerating Change.’”
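
    The Johnson-Lenzes’ point about control states is a restatement of Ashby’s Law of Requisite Variety from cybernetics. In its entropy form (this gloss and notation are ours, not the respondents’), the uncertainty left in outcomes O is bounded below by the variety of disturbances D minus the variety of the regulator R:

        % Ashby's Law of Requisite Variety, entropy form (gloss):
        % only variety in the regulator can absorb variety in the disturbances.
        H(O) \;\ge\; H(D) - H(R)

    Since bots and troll farms can grow the disturbance variety H(D) far faster than any moderation system grows its own H(R), the inequality supports their conclusion that purely technical control is “virtually impossible.”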

    Bruce Edmonds, a respondent who shared no additional identifying details, noted, “Lack of trust and misinformation are social problems that will not be solved with technical or central fixes. Rather, political and new normative standards will need to be developed in society.”

    Anonymous respondents wrote:

    “Bad information has always been produced and promulgated. The challenge remains for individuals to stay skeptical, consider numerous sources and consider their biases.”
    “The way to solve the issue is not so much in designing systems for detecting and eliminating fake news but rather in educating people to manage information appropriately. Media and information literacy is the answer.”
    “Continued misinformation will help people to learn first-hand how bad information functions in any system.”

    Acknowledgments
    BY JANNA ANDERSON AND LEE RAINIE

    This report is a collaborative effort based on the input and analysis of the following individuals.

    Primary researchers
    Janna Anderson, Director, Elon University’s Imagining the Internet Center
    Lee Rainie, Director, Internet and Technology Research

    Research team
    Aaron Smith, Associate Director, Research
    Claudia Deane, Vice President, Research

    Editorial and graphic design
    Margaret Porteus, Information Graphics Designer
    Shannon Greenwood, Copy Editor

    Communications and web publishing
    Shannon Greenwood, Associate Digital Producer
    Tom Caiazza, Communications Manager
    “El revolucionario: te meteré la bota en el culo"

  9. #8
    Senior Member Emil El Zapato's Avatar
    Join Date
    3rd April 2017
    Location
    Earth I
    Posts
    12,360
    Thanks
    37,087
    Thanked 43,426 Times in 12,053 Posts
    Absentee Ballot vs. Mail-In Ballot: Is There A Difference?

    As if the coronavirus pandemic wasn’t already challenging enough, the US will be holding a general election in the midst of it. Many people are rightfully concerned that traditional, in-person voting could spread COVID-19, and so some states are changing (or considering changing) their voting rules to make it easier for eligible people to vote by mail.

    Voting by mail can be done by what’s called an absentee ballot or mail-in ballot. But there is a lot of confusion—and misinformation—around these methods, which vary widely state by state. What’s more, some people use these terms interchangeably, others mean different things by them, and yet others employ different words altogether.

    Yes, it’s complicated. But we’ve got a primer for you on absentee ballots vs. mail-in ballots. And please note: treat this article as general guidance, and consult your local election officials about when and how you may vote. To get started, visit usa.gov and vote.gov.

    What is an absentee ballot?
    Let’s start with some election basics. Normally, most US voters cast their ballots in person in a polling booth at a polling place/station based on where they are registered to vote. A ballot is the physical form (or electronic voting machine equivalent) that a voter fills out; it lists the candidates, issues, and so on that a person votes on.

    An absentee ballot is a ballot used to cast an absentee vote, which is submitted, usually by mail, by an absentee. Absentee, here, refers to a person who can’t physically be present at a voting center on Election Day. Absentee voting in America goes back to the Civil War era, and every state allows this kind of voting in some form—and federal law, in fact, requires ballots be sent to military and overseas voters for federal elections.

    To get an absentee ballot, a registered voter must request one through their state government, which accepts or rejects the application. When someone is approved to vote absentee, election officials mail the voter an absentee ballot, which they complete and sign, and return by mail or, under certain circumstances, fax. Officials can reject absentee ballots if they are improperly filled out, and voters face steep penalties if they falsify any information.

    All states, again, send absentee ballots to military and overseas voters who request them. In 16 states*, an absentee ballot is the only form of voting through the mail that is allowed by law, and the voter is required to give a reason why they can’t go to a voting location on Election Day. Exact rules vary, but qualifying reasons may include the following:

    • Being out of the county where they are registered to vote
    • Being a student living outside of the county
    • Having an illness or disability
    • Working or being on jury duty during voting hours
    • Serving as an election worker or poll watcher
    • Having religious beliefs or practices that prevent them from going to a voting center
    • Being in prison but still able to vote


    *Alabama, Arkansas, Connecticut, Delaware, Indiana, Kentucky, Louisiana, Massachusetts, Mississippi, Missouri, New Hampshire, New York, South Carolina, Tennessee, Texas, West Virginia

    What is a no-excuse absentee ballot?

    Twenty-nine states and the District of Columbia** use what’s sometimes called a no-excuse absentee ballot. It works like the excuse-required absentee ballot used in the 16 states above, except that a registered voter doesn’t have to give a reason (excuse) for not being at their polling location on Election Day. The states themselves, however, may simply call this ballot an absentee ballot.

    The lexical wrinkles don’t stop there! Some of these states (e.g., Pennsylvania) may refer to the no-excuse absentee ballot as a mail-in ballot.

    **Alaska, Arizona, California, D.C., Florida, Georgia, Idaho, Illinois, Iowa, Kansas, Maine, Maryland, Michigan, Minnesota, Montana, Nebraska, Nevada, New Jersey, New Mexico, North Carolina, North Dakota, Ohio, Oklahoma, Pennsylvania, Rhode Island, South Dakota, Vermont, Virginia, Wisconsin, Wyoming

    So, what is a mail-in ballot?
    Five states—Washington, Oregon, Colorado, Utah, and Hawaii—already conduct their elections through a mail-in process that’s often referred to as all-mail voting. Registered voters in these states automatically receive a mail ballot, which is sent to their address before Election Day and mailed back by the voter or deposited at a voting location or secure dropbox by a certain time on Election Day.

    In these states, the term absentee ballot can specifically refer to a ballot that is requested by a voter who will be out of the state (e.g., for college, traveling, etc.) at the time of the election, and so can’t receive their ballot at their registered address.

    What general mail-in ballots are called varies by state, as you can already tell. In all-mail voting states, the following names may be used:

    • advance ballot
    • ballots by mail
    • by-mail ballot
    • mail ballot
    • mail-in ballot
    • mailed ballot


    Absentee ballot vs. mail-in ballot
    So, all absentee ballots are sent through the mail (or, very occasionally, by fax), but not all ballots sent through the mail are absentee ballots.

    The takeaway:

    • An absentee ballot is generally used in every state to refer to a ballot filled out by a voter who cannot, for various reasons, physically make it to a voting location on Election Day.
    • A mail-in ballot is used more broadly to refer to ballots sent through the mail, including in all-mail voting states and in some forms of absentee voting.
    • In popular discussion, some people use absentee ballot and mail-in ballot interchangeably to mean voting by mail, regardless of the reason. Many others, though, use absentee ballot specifically for ballots mailed when a person can’t vote in person, and mail-in ballot in the context of voting policies that enable all people to vote by mail.

    Misinformation about voting by mail

    Since we’re here, let’s put to bed some myths and misinformation about voting by mail:

    Voting fraud is extremely rare in the US, and voting by mail is no exception. In the past 20 years, over 250,000,000 votes have been cast by mail, and according to data from the Heritage Foundation, there have been only 1,285 proven cases of voter fraud resulting in 1,100 convictions.
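
    Even taking those figures at face value, the arithmetic is stark: 1,285 proven cases set against more than 250,000,000 mailed ballots is roughly 0.0005 percent, or about one documented case for every 195,000 ballots cast by mail.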

    Studies by organizations such as Stanford University have found that voting by mail does not favor either major political party in vote share or turnout. Expanding access to voting by mail is generally considered beneficial for all voters and their ability to exercise their right to vote.
    For more information on how and when you can vote, and how the coronavirus pandemic may have impacted how you can vote this year, consult your own state’s elections office website, or use resources like vote.org.

    And whether you vote in person or by mail, vote—regardless of what you call the ballot!
    “El revolucionario: te meteré la bota en el culo"

  10. #9
    Senior Member Emil El Zapato's Avatar
    Join Date
    3rd April 2017
    Location
    Earth I
    Posts
    12,360
    Thanks
    37,087
    Thanked 43,426 Times in 12,053 Posts
    Here's 2 good links for anyone suffering from conspiracitis:

    https://www.reddit.com/r/ReQovery/

    https://www.reddit.com/r/QAnonCasualties/
    “El revolucionario: te meteré la bota en el culo"

  11. The Following User Says Thank You to Emil El Zapato For This Useful Post:

    Elen (29th November 2020)

  12. #10
    Senior Member Emil El Zapato's Avatar
    Join Date
    3rd April 2017
    Location
    Earth I
    Posts
    12,360
    Thanks
    37,087
    Thanked 43,426 Times in 12,053 Posts
    How to talk about conspiracy theories
    Jon Ward

    The holidays are coming up. And although gatherings will probably be smaller this year, or taking place virtually rather than in person, you may find yourself in a difficult conversation with a friend or family member.

    Now normally, a good rule of thumb is to avoid politics at the dinner table altogether — but this year, in the aftermath of a bitterly divisive election, and with a global pandemic raging, there may be no way around it. Political arguments are a part of life, but increasingly in the U.S. they take place between people who disagree over not just policies but also objective reality, posing a dilemma over how to respond when confronted with misinformation, baseless internet rumors or conspiracy theories.


    A mother and daughter having a disagreement at Christmastime. (Getty Images)

    Well, first, maybe don’t start by calling them conspiracy theories. The term is occasionally useful. But to those accused of harboring them, it increasingly comes across as a pat dismissal, a way of closing off discussion. It might be helpful, however, to point out the difference between a proven conspiracy and an unproven conspiracy theory, and we’ll talk about some of those differences in a moment.

    Here are the telltale signs of a conspiracy theory:

    • Negative evidence. A lack of evidence is itself treated as proof of the cover-up. Often someone who asks for evidence is then painted as closed-minded and potentially even part of the plot to suppress the truth.
    • Errant data. Conspiracy theories often rely on obscure statistical or historical data, meant to suggest a sophisticated approach but which doesn’t stand up to analysis.
    • An imaginary master plan. A hallmark of a conspiracy theory is that it discounts the possibility of coincidence or random events. There are no accidents; everything is part of the plot, or the counterplot. Of course, that’s not how the world works. Ask yourself: Is this true of anything in your own life?
    • A cabal behind the scenes. There is a shadowy, often nameless villain or group of bad guys pulling the strings.
    • Circular reasoning or contradictory claims. Conspiracy theories don’t hew to deductive logic.
    • Skepticism toward accepted truth. If you hear someone saying that we can’t actually know for sure what happened, that’s a hallmark of conspiracy theories.
    • Self-justifying rationales. Reality itself — the existence of a plausible explanation, even one backed by evidence — is part of the plot, because “that’s what they want you to believe.”


    So how to talk about it?

    If it’s simply someone who is not sure what to think, then talk about media literacy and the standards for distinguishing good information from bad.

    The Cornell Alliance for Science, in its Conspiracy Theory Handbook, recommends four basic questions to help someone assess the credibility of information.

    The four questions are:

    • Do I recognize the news organization that posted the story?
    • Does the information in the post seem believable?
    • Is the post written in a style that I expect from a professional news organization?
    • Is the post politically motivated?


    Other good questions to discuss are whether the information is coming from an organization that has layers of editing and vetting of information. Are the names of the people who run the organization public, so they are accountable? Are the people who wrote the article or created the content identified — and are they real? (Try a web search. Any legitimate reporter or writer will have left a trail of information on the internet.)


    A couple quarreling while preparing a meal. (Getty Images)

    These are all good questions that can lead to a discussion of how legitimate news organizations work and how there is a lot of information on the internet with no accountability and no standards.

    For the person who seems committed to believing their conspiracy theory, there might be other approaches that work better.

    One is to ask questions and use curiosity — with healthy but friendly skepticism — to untangle the assumptions underlying someone’s beliefs. Those presuppositions are usually what’s driving someone to believe in a grand conspiracy anyway, so it’s worthwhile to spend time there.

    Questions like “What makes you believe that?” and “What do you think is driving all this?” are one way to probe a person’s deeper motivations, and perhaps could take the conversation in a more personal direction that allows for building relational trust. Trust is the basis for truth telling.

    Another approach is to discuss real historical conspiracies and how they came to light: the Watergate scandal, or the CIA’s domestic spying program in the late ’60s and early ’70s, or the Catholic Church sexual abuse scandal, or the use of extraordinary rendition and torture of military detainees by the U.S. government after 9/11, or the tobacco industry’s deceit of the public about the health effects of smoking.


    Newspaper headlines on Aug. 8, 1974, being read by tourists in front of the White House tell of history in the making. (Bettmann Archive/Getty Images)

    All came to light through investigative journalism, the courage of whistleblowers working with the press, the justice system or congressional investigations. Tools like Freedom of Information Act requests have been crucial as well.

    It may help to create some common ground to talk about the idea of conspiracies, and how it’s wrong to say there is no such thing. This could be a bridge to discussing the distinguishing marks of the real conspiracies that have come to light, and how they differ from the unproven ones that people often fixate on.

    Empathy and humility can be great weapons for the truth. They might not give you the satisfaction of telling someone they’re a fool — but that’s not the point.
    “El revolucionario: te meteré la bota en el culo"

  13. #11
    Senior Member Emil El Zapato's Avatar
    Join Date
    3rd April 2017
    Location
    Earth I
    Posts
    12,360
    Thanks
    37,087
    Thanked 43,426 Times in 12,053 Posts
    Posted today by a deluded moron:

    Source: https://www.youtube.com/watch?v=IgsTy_krQp4


    The reality:
    Trump loses Pennsylvania ‘election fraud’ appeal
    President’s legal team will attempt to bring election case to Supreme Court


    In a blow to Donald Trump’s legal campaign to challenge the results of the 2020 presidential election, a federal appellate court has rejected his emergency appeal in a Pennsylvania lawsuit.

    The 3-0 rejection of the case, which sought another chance to overturn the Pennsylvania results after a lower court tossed out the suit last week, is yet another loss in the Trump campaign’s attempt to undermine president-elect Joe Biden’s victory.

    Mr Biden won the state by roughly 80,000 votes, according to the Pennsylvania secretary of state’s office.

    A blistering ruling from the Third Circuit’s Stephanos Bibas – a Trump-appointed judge – said “free, fair elections are the lifeblood of our democracy”.

    “Charges of unfairness are serious,” he said in his ruling on Friday. "But calling an election unfair does not make it so. Charges require specific allegations and then proof. We have neither here."

    In his opinion, attached to the 21-page ruling, Judge Bibas said the Trump campaign has tried to “repackage” its claims about state law governing poll observers as “unconstitutional discrimination”.
    The case alleged that poll watchers were denied access to the vote-counting process, and that the state illegally allowed counties to decide whether voters could fix mail-in ballots with missing signatures or secrecy envelopes.

    “Yet its allegations are vague and conclusory,” the judge said. “It never alleges that anyone treated the Trump campaign or Trump votes worse than it treated the Biden campaign or Biden votes. And federal law does not require poll watchers or specify how they may observe. It also says nothing about curing technical state-law errors in ballots. Each of these defects is fatal.”

    The judge also argued that the number of ballots the campaign has targeted is far too small, relative to the margin of victory, to have any meaningful impact.

    “And it never claims fraud or that any votes were cast by illegal voters,” Judge Bibas said. “Plus, tossing out millions of mail-in ballots would be drastic and unprecedented, disenfranchising a huge swath of the electorate and upsetting all down-ballot races too. That remedy would be grossly disproportionate to the procedural challenges raised."

    The president’s attorney Rudy Giuliani – who has repeatedly insisted that the election was marred by fraud, despite not presenting evidence in court – declared the lower-court ruling earlier this month was a victory that would bring the case closer to the US Supreme Court.

    Mr Giuliani, who appeared at a hearing with GOP state lawmakers in Pennsylvania on Wednesday, intends to take the appellate court decision to the nation’s high court.

    “The activist judicial machinery in Pennsylvania continues to cover up the allegations of massive fraud,” campaign attorney Jenna Ellis and Mr Giuliani said in a statement following Friday’s decision.

    Those “activist” judges on the appeals court’s three-judge panel were nominated by Republicans.

    “We are very thankful to have had the opportunity to present proof and the facts” to the state legislature, they said. “On to SCOTUS!”

    In the lower-court ruling, US district judge Matthew Brann called the campaign’s filings a “Frankenstein’s monster” that was “haphazardly stitched together”, in his rejection of Mr Giuliani’s attempt to amend the complaint a second time. Friday’s ruling agreed with that decision.

    More reality:
    The period for lodging a formal objection to Act 77 lasted 180 days, and none was filed: case closed.

    The rest is outrageous b*llsh*t, as the federal appellate judge pointed out in the article above.
    “El revolucionario: te meteré la bota en el culo"

  14. #12
    Senior Member Emil El Zapato's Avatar
    Join Date
    3rd April 2017
    Location
    Earth I
    Posts
    12,360
    Thanks
    37,087
    Thanked 43,426 Times in 12,053 Posts
    Fake news story:

    More than 2,000 fake votes were found at Wisconsin Recount in Dane County on Friday!
    (The Wisconsin recount of 2020 election ballots is currently underway in Wisconsin)
    https://www.thegatewaypundit.com/202...-gop-observer/


    Here's the reality:

    Nearly 400 uncounted ballots found in Wisconsin recount
    SCOTT BAUER Associated Press Nov 25, 2020


    Election workers recount presidential ballots Saturday at the Wisconsin Center in Milwaukee.

    Nearly 400 absentee ballots cast in Milwaukee that were not opened on Election Day were discovered as part of a recount Tuesday, a mistake the city’s top elections official attributed to human error.

    Democrat Joe Biden won Wisconsin by 20,600 votes and President Donald Trump paid for a recount in just Milwaukee and Dane counties — the counties with the most votes for Biden.

    The 386 uncounted ballots were found on the fourth day of the recount.

    The city’s top elections official, Claire Woodall-Vogg, said not counting the 386 ballots on Nov. 3 was due to an error by new election inspectors. The unopened ballots were discovered underneath ballots that had been counted, she said. The county Board of Canvassers voted unanimously to count the ballots as part of the recount.

    “If there’s one positive to come out of the recount it’s that indeed that every vote is being counted, including these 386,” Woodall-Vogg said.

    As the recount in Wisconsin continued, Republicans filed a lawsuit Tuesday asking the Wisconsin Supreme Court to block certification of the state’s presidential election results. Trump’s campaign has raised objections to broad categories of ballots, including all absentee ballots cast in person.

    The lawsuit is the latest for the campaign that has suffered a series of legal defeats in other states. On Tuesday, Biden’s win in Pennsylvania was certified. That win, giving him Pennsylvania’s 20 electoral votes, had put him over the 270 needed and led The Associated Press to declare Biden the president-elect four days after Election Day. Biden has collected 306 overall electoral votes to Trump’s 232.

    Wisconsin has 10 electoral votes.

    Wisconsin’s recount got off to a slow start last week as elections officials addressed a myriad of complaints from Trump’s attorneys and observers. But as of Tuesday, the work was “very close to being back on schedule” and could be completed as soon as Wednesday, said Brian Rothgery, spokesman for the Milwaukee County Board of Supervisors.

    The recount is about 36% complete in Dane County and only “slightly behind schedule,” said Dane County Clerk Scott McDonell.

    Neither county planned to work on Thanksgiving. They must complete the recount by Dec. 1, the deadline for certifying the vote.

    More Reality:

    Joe Biden gains votes in Wisconsin county after Trump-ordered recount
    Milwaukee recount, part of a $3m recount effort paid for by the Trump campaign, boosts Democratic president-elect days before the state must certify its result


    A recount in Wisconsin’s largest county demanded by President Donald Trump’s election campaign ended on Friday with the president-elect, Joe Biden, gaining votes.

    After the recount in Milwaukee county, Biden made a net gain of 132 votes, out of nearly 460,000 cast. Overall, the Democrat gained 257 votes to Trump’s 125.


    Trump’s campaign had demanded recounts in two of Wisconsin’s most populous and Democratic-leaning counties, after he lost Wisconsin to Biden by more than 20,000 votes. The two recounts will cost the Trump campaign $3m. Dane county is expected to finish its recount on Sunday.

    Overall, Biden won November’s US presidential election with 306 electoral college votes to Trump’s 232. Biden also leads by more than 6m in the popular vote tally.

    After the recount ended, the Milwaukee county clerk, George Christenson, said: “The recount demonstrates what we already know: that elections in Milwaukee county are fair, transparent, accurate and secure.”

    The Trump campaign is still expected to mount a legal challenge to the overall result in Wisconsin, but time is running out. The state is due to certify its presidential result on Tuesday.

    On Friday, Trump’s legal team suffered yet another defeat when a federal appeals court in Philadelphia rejected the campaign’s latest effort to challenge the state’s election results.

    Trump’s lawyers said they would take the case to the supreme court despite the Philadelphia judges’ assessment that the “campaign’s claims have no merit”.

    Judge Stephanos Bibas wrote for the three-judge panel: “Free, fair elections are the lifeblood of our democracy. Charges of unfairness are serious. But calling an election unfair does not make it so. Charges require specific allegations and then proof. We have neither here.”

    Trump continued to maintain without evidence that there was election fraud in the state, tweeting early on Saturday: “The 1,126,940 votes were created out of thin air. I won Pennsylvania by a lot, perhaps more than anyone will ever know.”

    Meanwhile, Trump’s baseless claims of electoral fraud in Georgia are increasingly worrying his own party. Republicans are concerned that the chaos caused by Trump’s stance and his false comments on the conduct of the election in the key swing state, which Biden won for the Democrats, could hinder his party’s efforts to retain control of the Senate.

    A runoff for the state’s two Senate seats is scheduled for early January and if the Democrats clinch both seats, it will give them control of the upper house as well as the House of Representatives.

    When asked about his previous baseless claims of fraud in Georgia during a Thanksgiving Day press conference, Trump said he was “very worried” about them, saying: “You have a fraudulent system.” He then called the state’s Republican secretary of state, Brad Raffensperger, who has defended the state’s election process, an “enemy of the people”.

    Such attacks have Republicans worried as they seek to motivate Georgia voters to come to the polls in January, volunteer for their Senate campaigns and – perhaps most importantly of all – dig deep into their pockets to pay for the unexpected runoff races.

    In particular Trump’s comments have spurred conspiracy theories that the state’s electoral system is rigged and prompted some of his supporters to make calls for a boycott of the coming vote – something that local Georgia Republicans desperately do not want. “His demonization of Georgia’s entire electoral system is hurting his party’s chances at keeping the Senate,” warned an article published by Politico.
    Last edited by Emil El Zapato, 28th November 2020 at 16:29.
    “El revolucionario: te meteré la bota en el culo"

  15. The Following User Says Thank You to Emil El Zapato For This Useful Post:

    Elen (30th November 2020)

  16. #13
    Senior Member Emil El Zapato's Avatar
    Join Date
    3rd April 2017
    Location
    Earth I
    Posts
    12,360
    Thanks
    37,087
    Thanked 43,426 Times in 12,053 Posts
    Fed Up? Japanese Youth and Rising Nationalism

    Thomas Berger, Associate Professor, Department of International Relations, Boston University; Alexis Dudden, Sue and Eugene Mercy Associate Professor of History, Connecticut College; Daqing Yang, Associate Professor of History and International Affairs, The Elliott School of International Affairs, The George Washington University; Toshihiro Nakayama (commentator), Visiting Fellow, Center for Northeast Asia Policy Studies, The Brookings Institution.


    The consensus of the speakers at this Feb. 22 Asia Program event was that nationalism among Japanese youth is no more extensive than among other age groups, and that it is not a rising trend. Put in historical context, this is the fifth time since World War II that there has been concern about rising nationalism in Japan, and at no time in the past have any of these concerns been realized. However, what is new today is that Japanese politicians are much less constrained by the bureaucracy; they also feel the need to cater to a wider audience and adopt a more populist approach. Furthermore, nationalism among Japan's near neighbors, China and South Korea, is growing. Thus, there is always the danger of some small event triggering an overreaction that escalates into a "nightmare" scenario no one would want.

    Daqing Yang led off the discussion with some excellent polling data, noting that love of country has remained fairly constant over the past few decades at a little over 50 percent. In an NHK poll in 2000, more young people said they knew little about World War II than in a similar poll conducted in 1980. He also said the polls indicate young people in their twenties have more in common with people in their seventies than they do with their parents' generation. The only polling data indicating a significant shift in attitude concerned China: a higher percentage of Japanese, including youth, have a lower opinion of China because of the Tiananmen Incident of 1989 and anti-Japanese demonstrations in China in recent years. There is also no doubt that the end of the Cold War, unease over the rise of China, and the economic uncertainty of the 1990s have combined to give Japanese youth an increased sense of anxiety, which has created an opening for nationalist appeals. However, this has also had the effect of distancing Japanese youth from politics: the young tend to be the least likely to vote.

    Alexis Dudden gave a spirited presentation asserting that without youth and nationalism, there would be no modern Japan. She referred back to the 19th century Meiji Era to note that most of those leaders were in their twenties when they started planning to build Japan into a "modern" nation. She also asserted that in the early 1870s, these leaders carefully planned to take over Korea. They succeeded in doing so in 1910, and then Japanese leaders tried to use similar tactics to conquer China in the 1930s. Dudden advanced the interesting theory that Japanese foreign policy toward China is almost always first "practiced" on Korea. Returning to the topic of Japanese youth, she mentioned that they refer to themselves as the "super-flat" generation. Everything is on the surface; there is no deep meaning to anything, including history. Dudden then moved to a discussion of the politics of apology, asserting that from 1965, the time of normalization between Japan and South Korea, until 1995, there was an official "apology era" which allowed Japan to make "hollow" apologies to South Korea in order to advance political and economic relations. However, these apologies paid no real attention to the victims. Since 1995, these victim groups, such as Korea's sex slaves, have become more politically active. This has produced a backlash, where the Japanese assert that they have apologized enough and are never appreciated, and the victims of Japanese aggression charge Japan with insincere apologies. She said the backlash in Japan often leads to a distortion in history, most exemplified by the popular right-wing cartoonist and novelist Yoshinori Kobayashi, who, Dudden asserted, twists history to appeal to emotions for personal financial gain. Kobayashi's attitude toward the sex slaves, for example, is that they made ten times more money than the average Japanese soldier, thereby benefiting from the war, so why would anyone need to apologize to them?

    Thomas Berger agreed with Professor Yang that there is a certain crisis among Japanese youth. He noted that the economic downturn in Japan in the 1990s was the longest among advanced nations since the depression of the 1930s. Berger stated that concern about rising nationalism has been a constant since World War II; the current episode is in fact the fifth in the last fifty years. Such episodes, he said, always follow the same sequence of steps. First comes an international event (such as North Korea firing a missile over Japan in 1998) that galvanizes concern within Japan. Next, people on the right begin to agitate for a revival of patriotism in Japan and a stronger defense posture. When the right does this, the center and left resist. The result is some adjustment and change, but not all that much. Yet the situation today, especially in Japanese domestic politics, differs somewhat from before. The power of the bureaucracy is weakening, for example, and changes in the Japanese electoral system are producing more populist politicians, who feel they have to appeal to broader audiences. At the same time, the domestic political systems of South Korea and China are producing not simply trumped-up emotionalism but grassroots nationalism. When such South Korean or Chinese nationalism comes face-to-face with apology fatigue in Japan, where the Japanese feel that the more they apologize, the more they get "spat upon," the results could be troubling. One horrible event, such as the killing of a Chinese tourist by a Japanese fanatic, could set off an escalating political crisis.

    Toshihiro Nakayama agreed that among Japanese youth, those under 30, there is a sense of rising frustration regarding foreign criticism. On the other hand, right-wing flag-raising nationalism leaves a majority of the population, including the youth, absolutely cold. He asserted that the problems of the graduates of the 1990s were not just about economics, but involved a lack of a sense of identity as well. He noted the U.S. has its Declaration of Independence and its Constitution, but in Japan, "there is little to connect us to our past." Therefore, the Yasukuni Shrine, the source of so much international controversy when the prime minister visits there, is in Nakayama's opinion the strongest symbol to connect the Japanese people to their past. He continued that the Yasukuni Shrine is, as a political symbol, more about who the Japanese are than any conscious anti-Korean or anti-Chinese statement. Picking up on Dr. Dudden's remarks about Yoshinori Kobayashi, Nakayama said he's like Rush Limbaugh or Bill O'Reilly in the U.S. Most Japanese youth pay little attention to Kobayashi. Nakayama concluded by asserting that he doubted the current nationalistic trend in Japan has much momentum, and felt that while some sentiment for it exists, there is no real political infrastructure to sustain it.
    “El revolucionario: te meteré la bota en el culo"

  17. The Following User Says Thank You to Emil El Zapato For This Useful Post:

    Elen (30th November 2020)

  18. #14
    Senior Member United States Dreamtimer's Avatar
    Join Date
    7th April 2015
    Location
    Patapsco Valley
    Posts
    14,610
    Thanks
    70,673
    Thanked 62,024 Times in 14,520 Posts
    I have not managed to read much of this yet. It will take time.

  19. The Following User Says Thank You to Dreamtimer For This Useful Post:

    Emil El Zapato (30th November 2020)

  20. #15
    Senior Member Emil El Zapato's Avatar
    Join Date
    3rd April 2017
    Location
    Earth I
    Posts
    12,360
    Thanks
    37,087
    Thanked 43,426 Times in 12,053 Posts
    Hi DT,

    you're not really one of the people that should read it. It's there for the 50 Million+
    “El revolucionario: te meteré la bota en el culo"
