I am a Professor of European History at the School of European Languages, Culture and Society of University College London (UCL) and, since 2010, an Associate Director of the UCL Centre for Digital Humanities. My research interests encompass various aspects of contemporary history, with a current focus on the history of early twentieth-century internationalism and on digital approaches to historical inquiry. I am a seminar convenor at the Institute of Historical Research (IHR), part of the University of London’s School of Advanced Study (SAS), and a co-founder of the Richard Deswarte Prize for Digital History. Recent publications include Pieter Geyl and Britain: Encounters, Controversies, Impact (University of London Press, 2022; co-edited with Stijn van Rossem) and The European Unity League: Sir Max Waechter and the Idea of Europe, 1904–1924 (London: Bloomsbury, 2025, forthcoming).
Lu Liu is a doctoral student in the Department of Information Studies, UCL. Her research focuses on developing a Gold Standard Corpus (GSC) for assessing the capabilities of Named Entity Recognition (NER) on texts of the long eighteenth century (late seventeenth to early nineteenth centuries). This research aims to compare different NER approaches and drive progress in historical NER.
Dr Andreas Vlachidis is Associate Professor in Information Science at UCL’s Department of Information Studies, teaching modules in Information Science Technology and in Natural Language Processing and Text Analysis. He has a strong track record in managing multidisciplinary research projects and fostering cross-disciplinary collaborations. As Technical Lead and Co-Investigator of the multimillion-pound UKRI-funded Sloane Lab, he led efforts in data unification, aggregation, and knowledge base development. Currently, as Principal Investigator of the AHRC-DFG MeDoraH project, he leads UK–Germany research on oral history archival sources, semantics, and Digital Humanities through knowledge graphs, network analysis, and ontology-driven methods. His interdisciplinary expertise spans Information Science, Digital Humanities, and Computer Science, with a research focus on Text Analysis, Information Extraction, Semantic Data Modelling, and Knowledge Bases.
I am an assistant professor and researcher at Peking University, China. My research focuses on the impact of digital platforms, big data, and artificial intelligence on society.
My current research explores the intersection of technology and society, focusing on digital media platforms, algorithmic divides, and the comparative study of internet use and information practices across different regions. With a DPhil from the University of Oxford, my work spans both theoretical and applied research, aiming to understand and address the evolving digital inequalities in our increasingly connected world.
I teach courses on media and society, data storytelling, and the frontiers of computational social science, where I integrate my research insights into the classroom. I am committed to advancing our understanding of the digital landscape and its implications for society, particularly in the context of China and other Global South regions.
When I am not teaching or writing, you will find me in a ballet or Pilates class, or sipping a nice cup of loose-leaf green tea.
Emeritus Professor of Digital Humanities at the Department of Information Studies, University College London (UCL) and Visiting Professor at the Department of Information Management, Peking University (PKU).
Professor Mahony played a key role in establishing and developing the graduate programme in Digital Humanities at UCL. He served as Programme Director from its inception in 2010 until 2017, when he became Director of the UCL Centre for Digital Humanities, a position he held until his retirement in 2020. He has lectured extensively and published widely on education and pedagogy within the field of Digital Humanities. His research interests include digital humanities, education, communication, information studies, digital storytelling, equality, diversity, and inclusion (EDI), and the open agenda.
Next, Professor Jin Jianbin of Tsinghua University offered an in-depth examination of the trustworthiness of generative AI and the fact-checking of its output. Presenting his research on the credibility of generative AI in science communication, he discussed studies of generative AI's trustworthiness, the phenomenon of "AI sycophancy", and the fact-checking of AI-generated content, in a systematic and forward-looking treatment. He noted that although AI can promote public understanding of and engagement with science, easing information overload by organising information and lowering barriers to learning through interactive interfaces, problems such as "hallucination", the amplification of scientific bias, and "AI sycophancy" pose serious credibility challenges. He went on to argue that improved algorithms and platform governance are needed to curb these tendencies.
Then, Professor Wang Guoyan of the Research Center for Science Communication at Soochow University drew on her own experience as a user and a researcher to offer vivid examples of AI empowering science communication, from the perspectives of scientific and technical intelligence, research enablement, and science education. She also drew attention to the knowledge overload, bias, and explainability problems that technological progress brings, as well as its substitution effect on interpersonal communication. She discussed in particular the risks of "deceptive AI" and looked ahead to potential risks, and responses to them, in areas such as embodied intelligence and human-computer interaction.
Finally, Professor Zhou Qingshan of the Department of Information Management at Peking University examined, from a macro perspective, the systemic challenges and "paradigm revolution" brought about by generative AI, and offered a forward look at the future of human-machine relations. He argued that science and technology communication must respond to this revolutionary change, and called on the academic community to reach consensus and establish norms for the application of generative AI in science communication. He paid particular attention to information-disadvantaged groups in the AI era, such as older people, people with disabilities, and those with lower scientific literacy, who may face a problem of "knowledge disablement" in distinguishing AI-generated content.
In the interactive discussion session, the participants engaged in a lively exchange on topics including scientific research methods in the AI era, the trustworthiness of generative AI in research, mass communication versus science communication, and open science and open data. The discussion covered the reliability and validity of mixed research methods, how to establish the viability of prompt engineering as a scientific method, differences in terminology between science and technology communication and mass communication, and the importance of narrative in science communication.
The symposium not only offered a comprehensive analysis of AI applications in science and technology communication, but also provided the academic community with valuable insights and forward-looking reflections for navigating the changes of a new era.