|
|
JOHN CABOT UNIVERSITY
COURSE CODE: "EXP 1024"
COURSE NAME: "Unravelling the Web of Deception - Detecting and Combating Online Disinformation and Misinformation"
SEMESTER & YEAR:
Fall 2024
|
SYLLABUS
INSTRUCTOR:
John Christopher Fiegener
EMAIL: [email protected]
HOURS:
FRI 2:00PM-6:00PM. Course meets on September 13, October 4, October 25, and November 8
TOTAL NO. OF CONTACT HOURS:
15
CREDITS:
1
PREREQUISITES:
Students can take a maximum of three 1-credit courses within the 120-credit graduation requirement.
OFFICE HOURS:
|
|
COURSE DESCRIPTION:
In this course, students will recognize and debunk false claims and explore forms of disinformation, misinformation, persuasion, coercion, intimidation, and exploitation commonly encountered online. Students will use open-source intelligence tools to verify text, audio, and video content, learn techniques to assess sources and original context, examine current harmful disinformation trends, and study practices designed to mitigate the effects of harmful false narratives. A focus of the course will be the fast-evolving role of AI-powered tools and techniques in the social media information ecosystem, and efforts to regulate AI online activity to protect privacy, slow the spread of harmful disinformation, and promote creativity, productivity, and fulfilment. Students will track real-time election cycles, climate and health science coverage, hot wars and conflicts, culture wars, and other issues of interest across social media platforms. Research will focus on disinformation trends in fields such as public health and climate, elections and politics, law, technology, and business affairs. Students will be expected to study and present concepts to promote a healthier information ecosystem.
|
SUMMARY OF COURSE CONTENT:
This course will provide theoretical and practical knowledge to help understand how and why information is manipulated online, who promotes disinformation and how it differs from acts of misinformation, what defines a harmful narrative, and how to verify the factual basis of narratives that may be misleading, false, incredible, unbelievable, or accurate. We examine practical innovations at the intersection of artificial intelligence and social media, with an emphasis on the promises and threats of AI-enhanced social media, such as a developing ability to micro-target and tailor messages and to mislead, misinform, persuade, or manipulate individual users at a scale, a pace, and with a quality never before known. The ability to create audio-visual deepfake content, and the potential of generative AI to challenge what we see and hear and what we believe to be true, will be discussed. Students will research a topic, issue, or area of interest to identify false narratives and manipulation campaigns aimed at it, understand the means by which and reasons why they spread, describe the harmful impact, and recommend a mitigation strategy. A report will be submitted by the final session, when students will be encouraged to share their findings.
|
LEARNING OUTCOMES:
Students will recognize and debunk false claims and explore forms of disinformation, misinformation, persuasion, coercion, intimidation, and exploitation commonly encountered online. Students will use open-source intelligence tools to verify text, audio, and video content, learn techniques to assess sources and original context, examine current harmful disinformation trends, and study practices designed to mitigate the effects of harmful false narratives. Students will track real-time election cycles, climate and health science coverage, hot wars and conflicts, culture wars, and other issues of interest across social media platforms.
|
TEXTBOOK:
|
REQUIRED RESERVED READING:
RECOMMENDED RESERVED READING:
|
GRADING POLICY
-ASSESSMENT METHODS:
Assignment | Guidelines | Weight |
pass/fail | To pass the course, students are expected to attend every lesson and participate, produce a fact check, design a research topic, produce a written paper, and present findings to the class during the final lesson | 100% |
-ASSESSMENT CRITERIA:
A: Work of this quality directly addresses the question or problem raised and provides a coherent argument displaying an extensive knowledge of relevant information or content. This type of work demonstrates the ability to critically evaluate concepts and theory and has an element of novelty and originality. There is clear evidence of a significant amount of reading beyond that required for the course.
B: This is a highly competent level of performance that directly addresses the question or problem raised. There is a demonstration of some ability to critically evaluate theory and concepts and relate them to practice. Discussions reflect the student's own arguments and are not simply a repetition of standard lecture and reference material. The work does not suffer from any major errors or omissions and provides evidence of reading beyond the required assignments.
C: This is an acceptable level of performance and provides answers that are clear but limited, reflecting the information offered in the lectures and reference readings.
D: This level of performance demonstrates that the student lacks a coherent grasp of the material. Important information is omitted and irrelevant points included. In effect, the student has barely done enough to persuade the instructor that s/he should not fail.
F: This work fails to show any knowledge or understanding of the issues raised in the question. Most of the material in the answer is irrelevant.
-ATTENDANCE REQUIREMENTS:
Attendance at all four sessions is required. As described under Assessment Methods, this is a pass/fail course: to pass, students must attend every lesson and participate.
|
|
ACADEMIC HONESTY
As stated in the university catalog, any student who commits an act of academic
dishonesty will receive a failing grade on the work in which the dishonesty occurred.
In addition, acts of academic dishonesty, irrespective of the weight of the assignment,
may result in the student receiving a failing grade in the course. Instances of
academic dishonesty will be reported to the Dean of Academic Affairs. A student
who is reported twice for academic dishonesty is subject to summary dismissal from
the University. In such a case, the Academic Council will then make a recommendation
to the President, who will make the final decision.
|
STUDENTS WITH LEARNING OR OTHER DISABILITIES
John Cabot University does not discriminate on the basis of disability or handicap.
Students with approved accommodations must inform their professors at the beginning
of the term. Please see the website for the complete policy.
|
|
SCHEDULE
|
|
Session 1: September 13, 2024, 2:00PM to 6:00PM
-Define course objectives, discuss term references, and a brief history of internet lies and half-truths
-Media literacy and the basics of online investigation and verification processes
-Platform guidelines and EU and US legislative and legal efforts to address online disinformation
-How and why dis- and misinformation circulates
-Who uses it, who makes it, and who benefits from harmful false narratives
-How to measure the impact of harmful false narratives
-Case studies
-Reading (Online resources to be assigned)
Session 2: October 4, 2024, 2:00PM to 6:00PM
-OSINT (Open Source Intelligence) and tools used to verify and investigate information online
-Online platforms and online resources
-Live verification exercises
-How to spot misinformation and disinformation
-Case studies
-Discuss student research topics, report structure, resources, and data collection
-Reading (Online resources to be assigned)
Session 3: October 25, 2024, 2:00PM to 6:00PM
-Artificial intelligence: riding the wave of AI-assisted disinformation to Election 2024
This session focuses on the use of AI tools to create and counter online misinformation. It looks specifically at state-sponsored and individual approaches to using AI resources for disinformation campaigns that can impact the 2024 US Presidential election process.
With 11 days to go before the election, we will examine top trends in political disinformation, manipulation, and influence operations and analyze their impact.
-Artificial intelligence and the coming wave of artificially created and assisted disinformation
-AI tools to combat disinformation
-AI tools to make disinformation
-Case studies
-Reading (Online resources to be assigned)
Session 4: November 8, 2024, 9:00AM to 1:00PM
-Class presentations and discussions
-Models and methods to contain and mitigate harmful online narratives
-Imagining what a healthy global information ecosystem looks like
|
Session | Session Focus | Reading Assignment | Other Assignment | Meeting Place/Exam Dates |
Friday, September 13, 2:00PM to 6:00PM | Overview: what is the problem? | Online reading material | | |
Friday, October 4, 2:00PM to 6:00PM | OSINT and verification tools and techniques | | | |
Friday, October 25, 2:00PM to 6:00PM | AI tools to combat and make disinformation | | | |
Friday, November 8, 9:00AM to 1:00PM | Class presentations and discussions | | Final paper and presentation | |
|