Group Leader: Dr. Kevin Mills
Department: Philosophy
Project Description: Data Ownership and Artificial Intelligence

Data has become digital gold, as it is the raw material that makes artificial intelligence possible (perhaps it will turn out to be fool’s gold, but set this aside). This has spawned a sort of gold rush, where companies with valuable data increasingly lock it down and try to monetize it. But a lot of valuable data was user-generated (e.g., answers on Stack Exchange or posts on Twitter or Reddit), sometimes took considerable effort to produce, and was often submitted to platforms with the expectation that it would be made freely available to everybody. This situation raises a host of interesting ethical questions.

Examples include:
- Should companies be allowed to monetize user-generated content? On what terms?
- Should people be allowed to scrape publicly available data to train AI models? Are there limits on this?
- What kinds of rules regarding data ownership would help AI reach its maximum potential? Should this be our aim?
- What laws currently govern data ownership? Are these laws good ones?

This SERC scholars group is a weekly reading group that will take up these or related questions. Much of our work will be philosophical and foundational, and will explore the ethical underpinnings of data ownership and intellectual property. Enrolled scholars will be expected to: (i) spend a few hours each week doing readings (we can decide what to read together); (ii) attend weekly meetings to discuss these readings; and (iii) either assist in the production of a public-facing report, or participate in a public-facing debate on these matters (we will decide amongst ourselves exactly what deliverable we would like to produce; nobody will be forced to debate publicly unless they want to).
Meeting Times: In person, Mondays, 1-2pm
Group Leader: Dr. Nikki Stevens
Department: DUSP
Project Description: Big Data

As data is created with greater variety, in larger volume, and at higher velocity, it is becoming Big Data. But what exactly is “Big Data,” and what kind of influence is it having? Is holding massive amounts of data about people ethical? Can we use it to improve people’s quality of life, or does it increase our exposure to violence? This group will survey Big Data’s influence at individual, community, and national scales. Together, we will think through Big Data (and its computational cousins, AI and ML) and its role in contemporary life. Topics will include the use of Big Data in contexts as diverse as criminal justice, marketing and advertising, and healthcare. Participants will develop projects investigating Big Data, which could range from philosophical inquiry to computational projects that use Big Data in innovative ways.

Meeting Times: In person, Mondays, 3:30-4:30pm
Group Leaders: Dr. Walter Gerych & Dr. Amir Reisizadeh
Department: EECS, CSAIL, & LIDS
Project Description: How Fair are Generative Models?

Generative models such as Large Language Models (LLMs) have revolutionized sectors including healthcare, the criminal justice system, and finance, offering unparalleled capabilities in data analysis, decision-making, and personalized services. In healthcare, LLMs can assist in diagnosing diseases and recommending treatments; in the justice system, they can help predict crime patterns and suggest fair sentencing; in finance, they can analyze market trends and assist in credit scoring. However, deploying generative models in these critical domains necessitates a rigorous emphasis on demographic fairness. Ensuring that these models operate without bias toward any demographic group is crucial to upholding ethical standards and public trust. Unfair biases in generative models can lead to significant negative consequences, such as misdiagnoses in healthcare, unjust sentencing in the justice system, and discriminatory lending practices in finance. Incorporating fairness into the development and deployment of LLMs and other generative models is therefore not only a technical requirement but also a moral imperative, ensuring equitable and just outcomes for all individuals regardless of their demographic backgrounds.

In this project, we aim to study the fairness characteristics of generative models across different tasks and propose remedies to mitigate the bias and unfairness embedded through the pre-training data. We foresee at least two main subgroups within this project: one focusing on the bias of generative models in clinical settings, and one studying the fairness characteristics of LLMs used for general classification tasks and developing fairness mitigation remedies (a short illustrative sketch of one common fairness metric follows this entry).

Meeting Times: In person, Mondays, 5-6pm
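To make “fairness characteristics” concrete: below is a minimal sketch, assuming a binary classification setting, of the demographic parity gap, i.e., the spread in positive-prediction rates across demographic groups. The function name, data, and group labels are hypothetical illustrations, not part of the group’s actual materials.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs (1 = positive decision,
                 e.g., "approve loan").
    groups:      iterable of group labels (hypothetical demographic tags),
                 aligned with predictions.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    # Per-group rate of positive decisions; a gap of 0.0 means every
    # group receives positive decisions at the same rate.
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy, made-up data for illustration only.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5
```

Demographic parity is only one of several competing fairness criteria (equalized odds and calibration are others), and which criterion is appropriate depends on the application.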
Group Leader: Dr. Karim Nader
Department: Philosophy
Project Description: Gamification

Gamification is the process of using game design strategies to encourage people to reach some set goal. If you got points for doing laundry or a reward for cleaning the dishes, you’d be more motivated to do your chores. This is why some people believe that we should gamify important aspects of our lives: your Apple Watch turns fitness into a game to get you moving, and your employer creates fun incentives to make you more productive. But there is a risk that gamification oversimplifies richer values. Fitness is not all about step count, after all, so we’d be wrong to pursue a certain number of steps thinking it is sufficient for physical health. Can we preserve the motivational power of gamification while avoiding its downsides? That’s the question we will aim to answer.

Meeting Times: In person, Tuesdays, 3-4pm
Group Leader: Dr. Lula Chen
Department: Research Director, MIT GOV/LAB
Project Description: Generative AI & Democracy

We live in an era where technology has changed the ways we interact in a democracy, and generative AI accelerates that change. Generative AI opens more and more ways to improve democracy, but also new ways to threaten it. This working group will give students an opportunity to delve deeper into how generative AI can impact democracy, in ways that strengthen it and that are socially and ethically responsible. We will cover several topics related to generative AI and democracy (information/misinformation, elections, government responsiveness/decision-making, online deliberation, etc.) and discuss how generative AI can be used to foster a healthy democracy.

Students will then work together on a group project to study how generative AI can be used to improve democracy in a socially and ethically responsible way. The first semester will be used to familiarize students with the topic and to design a project. The second semester will be used to implement that project. These projects will be determined in consultation with the students, and can include working on an online deliberation platform with MIT GOV/LAB.

This group is also a unique opportunity to bring together diverse perspectives and ideas. Participants will include undergraduate (or graduate) students at MIT, HBCUs, tribal colleges, and minority-serving institutions. We will have a weekly virtual meeting.
Meeting Times: Virtual, Tuesdays, 3-4pm
Group Leader: Dr. Michal Masny
Department: Philosophy
Project Description: Deepfakes: Epistemology, Ethics, and Politics

According to a recent survey by YouGov (2023), the spread of misleading deepfakes is the single biggest concern about the use of AI among Americans. This SERC Scholar Reading Group will explore the ethical, political, and epistemological dimensions of this concern and examine how it can be mitigated. Discussion topics will include: non-consensual deepfake pornography; how deepfakes can be used to influence electoral processes; the psychology of misinformation; and the efficacy of legal, technological, and educational countermeasures.

In the fall semester, we will meet weekly to discuss recent journal articles, newspaper stories, podcasts, and case studies. In the spring semester, we will split into smaller groups to pursue projects guided by student interests. These might include research papers contributing to philosophical, psychological, and computer science literatures; reports assessing the efficacy of selected countermeasures; or resources aimed at educating the public about the potential misuses of deepfakes.
Meeting Times: In person, Tuesdays, 4-5pm
Group Leader: Dr. Michelle Spektor
Department: HASTS
Project Description: Surveillance

Surveillance has always been part of everyday life, but innovations in AI and other emerging technologies have shifted the ways in which governments, corporations, and our own communities can track us. How did we get here, and what can and should we do about it?

This group examines the social, political, and ethical implications of surveillance technologies. In the fall, participants will study key theories and global histories of surveillance; the spring semester will cover methods for researching surveillance’s human impacts. Topics include specific technologies, such as ID cards, facial recognition, and spyware, and their roles in policing, government bureaucracy, healthcare, and other arenas. Participants will develop individual projects on surveillance technologies of their choice while tackling big questions: What are the consequences of surveillance for privacy, discrimination, and political power? How should surveillance technologies be regulated and designed? And should they be implemented at all?
Meeting Times: In person, Wednesdays, 2-3pm