JOHN CABOT UNIVERSITY
COURSE CODE: "EXP 1024"
COURSE NAME: "Unravelling the Web of Deception - Detecting and Combating Online Disinformation and Misinformation"
SEMESTER & YEAR:
Fall 2025
SYLLABUS
INSTRUCTOR:
John Christopher Fiegener
EMAIL: [email protected]
HOURS:
FRI 2:00 PM - 6:00 PM. The course meets on September 19, October 3, October 17, and November 7.
TOTAL NO. OF CONTACT HOURS:
45
CREDITS:
1
PREREQUISITES:
Students may take a maximum of three 1-credit courses within the 120-credit graduation requirement.
OFFICE HOURS:
by appointment
COURSE DESCRIPTION:
In this course, students will recognize and debunk false claims and explore forms of disinformation, misinformation, persuasion, coercion, intimidation, and exploitation commonly encountered online. Students will use open-source intelligence tools to verify text, audio, and video content; learn techniques to assess sources and original context; examine current harmful disinformation trends; and study practices designed to mitigate the effects of harmful false narratives. A focal point will be the fast-evolving role of AI-powered tools and techniques in the social media information ecosystem, and efforts to regulate AI online activity to protect privacy, slow the spread of harmful disinformation, and promote creativity, productivity, and fulfilment. Students will track real-time election cycles, climate and health science coverage, hot wars and conflicts, culture wars, and other issues of interest across social media platforms. Research will focus on disinformation trends in fields such as public health and climate, elections and politics, law, technology, and business affairs. Students will be expected to study and present concepts that promote a healthier information ecosystem.
SUMMARY OF COURSE CONTENT:
This course provides theoretical and practical knowledge to help students understand how and why information is manipulated online, who promotes disinformation and how it differs from misinformation, what defines a harmful narrative, and how to verify the factual basis of narratives that may be misleading, false, unbelievable, or accurate. We examine practical innovations at the intersection of artificial intelligence and social media, with an emphasis on the promises and threats of AI-enhanced social media, such as the developing ability to micro-target and tailor messages, influence public opinion, and manipulate individual users at scale and pace with near-realistic quality. The ability to create audio-visual deepfake content, and the potential impact of generative AI on challenging what we see, hear, and believe to be true, will be discussed. Students will research a topic, issue, or area of interest to identify false narratives and manipulation campaigns targeting it, understand the means and reasons behind their spread, describe their harmful impact, and recommend a mitigation strategy. A report will be submitted by the final class session, at which time students will be encouraged to share their findings.
LEARNING OUTCOMES:
By the end of the course, students will be able to:
- Recognize and debunk false claims, and identify forms of disinformation, misinformation, persuasion, coercion, intimidation, and exploitation commonly encountered online.
- Use open-source intelligence tools to verify text, audio, and video content, and apply techniques to assess sources and original context.
- Examine current harmful disinformation trends and practices designed to mitigate the effects of harmful false narratives.
- Track real-time political and business developments and measure the impact of online influence operations from state and non-state actors.
- Demonstrate advanced media literacy proficiency for the future.
TEXTBOOK:
REQUIRED RESERVED READING:
RECOMMENDED RESERVED READING:
GRADING POLICY
-ASSESSMENT METHODS:
| Assignment | Guidelines | Weight |
| Pass/Fail | To pass the course, students are expected to attend every lesson and participate, produce a fact check, design a research topic, produce a written paper, and present findings to the class during the final lesson. | |
-ASSESSMENT CRITERIA:
A: Work of this quality directly addresses the question or problem raised and provides a coherent argument displaying an extensive knowledge of relevant information or content. This type of work demonstrates the ability to critically evaluate concepts and theory and has an element of novelty and originality. There is clear evidence of a significant amount of reading beyond that required for the course.
B: This is a highly competent level of performance that directly addresses the question or problem raised. There is a demonstration of some ability to critically evaluate theory and concepts and relate them to practice. Discussions reflect the student's own arguments and are not simply a repetition of standard lecture and reference material. The work does not suffer from any major errors or omissions and provides evidence of reading beyond the required assignments.
C: This is an acceptable level of performance and provides answers that are clear but limited, reflecting the information offered in the lectures and reference readings.
D: This level of performance demonstrates that the student lacks a coherent grasp of the material. Important information is omitted and irrelevant points included. In effect, the student has barely done enough to persuade the instructor that s/he should not fail.
F: This work fails to show any knowledge or understanding of the issues raised in the question. Most of the material in the answer is irrelevant.
-ATTENDANCE REQUIREMENTS:
Attendance at and active participation in all four class sessions are required to pass the course (see Assessment Methods above).
ACADEMIC HONESTY
As stated in the university catalog, any student who commits an act of academic
dishonesty will receive a failing grade on the work in which the dishonesty occurred.
In addition, acts of academic dishonesty, irrespective of the weight of the assignment,
may result in the student receiving a failing grade in the course. Instances of
academic dishonesty will be reported to the Dean of Academic Affairs. A student
who is reported twice for academic dishonesty is subject to summary dismissal from
the University. In such a case, the Academic Council will then make a recommendation
to the President, who will make the final decision.
STUDENTS WITH LEARNING OR OTHER DISABILITIES
John Cabot University does not discriminate on the basis of disability or handicap.
Students with approved accommodations must inform their professors at the beginning
of the term. Please see the website for the complete policy.
SCHEDULE
Session 1: Friday September 19, 2pm-6pm
Overview: What Is the Problem?
- What is the Web of Deception?
- Why does the web facilitate the spread of false and sometimes harmful narratives?
- What are mis- and disinformation? A brief history of disinformation BC (before computers) and AD (after digital)
- How and why is information manipulated online?
- Who is targeted, and who promotes disinformation?
- What are the degrees of intent, and what is the difference between mis- and disinformation?
- What defines a harmful narrative?
- Examine how false narratives spread and how to measure their harmful impact.
- What are the trends in their dissemination and efforts to combat them?
- What is the impact of various disinformation campaigns, influence operations, scams, and false narratives?
- How do they affect public perception? What is institutionalized disinformation, and what is its impact on trust in institutions?
- Online investigation and verification processes:
- Ways to verify the factual basis of narratives that may be misleading, false, unbelievable, or accurate. (More details in Session 2 on open-source intelligence and investigation techniques)
- Learn inoculation techniques to identify and counter false narratives.
- Develop critical thinking skills to systematically assess source credibility, recognize biases, and differentiate between facts and opinions.
- Evaluate the motives behind disinformation stories.
- Consider the potential harm caused by specific narratives.
- Track real-time political and business developments, assess the influence of online operations by state and non-state actors, and build advanced media literacy skills for the future.
- Follow news updates in politics, technology, business, environmental, health sciences, hot wars, conflicts, culture war issues, and other topics of interest across social media platforms.
Reading (Online resources to be assigned)
Session 2: Friday October 3, 2pm-6pm
Tools and Techniques To Investigate Disinformation
- How to spot misinformation and disinformation
- OSINT (Open Source Intelligence) and tools used to verify and investigate information online
- Online Platforms and online resources: use open-source intelligence tools to verify text, audio, and video content, learn techniques to assess sources and original context
- Live verification exercises
- Discuss student research topics, report structure, resources, and data collection
Reading (Online resources to be assigned)
Session 3: Friday October 17, 2pm-6pm
AI: The Good, the Bad, and the Ugly
- Artificial Intelligence and artificially created and assisted disinformation:
- Rapidly evolving role of AI-powered tools and methods in the social media information ecosystem
- AI can produce false and sometimes harmful content cheaply and at scale, making it an accelerant.
- AI for detecting AI-generated content
- AI to fight disinformation, AI the neutralizer.
- Trends in political disinformation, manipulation, and influence operations using AI:
- Practices and innovations at the intersection of artificial intelligence and social media
- Promises and threats from AI-enhanced social media, such as the ability to micro-target and tailor messages, influence public opinion, and manipulate individual users at scale and pace with near-real quality (NRQ).
- Creating audio-visual deepfake content, and the potential of generative AI to challenge our perception of reality and what we see, hear, and believe.
- Psychological factors and brute force intrusion on our mental state and understanding of reality.
- Overcoming language and cultural barriers to share views and experiences could bring us closer together… or not.
- AI tools to combat disinformation
- AI tools to make disinformation
Reading (Online resources to be assigned)
Session 4: Friday November 7, 2pm-6pm
Class presentations and discussions
- Models and methods to contain and mitigate harmful online narratives
- Imagining what a healthy global information ecosystem might look like