CSE Distinguished Seminar Series

Guest Talks 2020


Upcoming Lectures




Previous Talks



Titus Winters - September 30


Shifting Left: Cost vs Fidelity and Emerging Truths 


Abstract: DevOps practitioners regularly talk about “shifting left” - thinking about scalability, security, QA, Ops, etc., earlier in the software engineering process. Doing so allows issues to be addressed earlier, and leads to more stable software, lower costs, and higher developer velocity. This talk will show that the shift-left mentality is perhaps more fundamental: “shifting left” encapsulates a basic tradeoff between fidelity in defect finding and defect cost, both for defects that are found and for those that slip through. Your process is most effective when you’ve shifted defect finding and product fitness practices left in such a way that most releases are quick, easy non-events. When viewed as a currency, this concept ties final product fitness to standard engineering workflow practices, and “shifting left” emerges as more than a buzzword - it’s the recognition of a real truth in Software Engineering as a discipline.

Bio: Titus is a Senior Staff Software Engineer at Google, where he has worked since 2010. At Google, he is the library lead for Google’s C++ codebase: 250 million lines of code that will be edited by 12K distinct engineers in a month. He served for several years as chair of the subcommittee for the design of the C++ standard library. For the last 9 years, Titus and his teams have been organizing, maintaining, and evolving the foundational components of Google’s C++ codebase using modern automation and tooling. Along the way he has started several Google projects that are believed to be among the top 10 largest refactorings in human history. That unique scale and perspective have informed all of his thinking on the care and feeding of software systems. His most recent project is the book “Software Engineering at Google” (aka “The Flamingo Book”), published by O’Reilly in early 2020.





Johannes Schöning - October 14


The Importance of HCI Perspectives for Next-Generation Spatial User Interfaces


Abstract: Catastrophic incidents associated with GPS devices and other personal navigation technologies are all too common: a tourist drives his rental car across a beach and directly into the Atlantic Ocean, a person in Belgium intending to drive to a nearby train station ends up in Croatia, a family traveling on a dirt road gets stranded for four days in the Australian outback. Often we blame these accidents on human error, but as HCI researchers we have a deep understanding that humans make mistakes, and it is our responsibility to analyse the failures and improve the technological design to minimise the chances of human error.
In my talk, I give an overview of how we design, develop and evaluate the next generation of such spatial user interfaces through the lens of HCI. I will outline our approaches to helping people navigate, perceive and interact with space, from personal navigation technologies to climbing in virtual worlds, and I will present the spectrum of our work bridging the fields of human-computer interaction (HCI), geographic information science and ubiquitous interface technologies.

Bio: I am a Lichtenberg Professor and Professor of Human-Computer Interaction (HCI) at the University of Bremen in Germany. In Bremen I am the co-director of the Bremen Spatial Cognition Center (BSCC) and co-chairman of the TZI (Technologie-Zentrum Informatik und Informationstechnik). Before that, I was a visiting lecturer at UCL, UK, where I helped set up the Intel Collaborative Research Institute for Sustainable Cities, and I held a faculty position at Hasselt University, Belgium. In addition, I am a visiting professor at the Interactive Technologies Institute, Portugal. Previously, I worked in Saarbrücken, where I was a senior consultant at the German Research Centre for Artificial Intelligence (DFKI). During my time at DFKI, I received a PhD in computer science from Saarland University (2010), supported by the Deutsche Telekom Labs in Berlin. I obtained my Master’s degree in Geoinformatics at the Institute for Geoinformatics, University of Münster (2007). www.johannesschoening.de





Mark Billinghurst - October 21


Towards Empathic Computing: Next-Generation Collaborative Technologies


Abstract: In this presentation, I review next-generation technologies for collaboration in a post-COVID world. People have been researching collaborative tools for many years, but the global pandemic forced large numbers of people to work from home for the first time. The experience of long-term use of video conferencing and other collaborative tools soon highlighted the limitations of current technology. I review some of the lessons learned from this widespread use and discuss how new approaches to collaboration using AR and VR could overcome some of the limitations of desktop tools. In particular, AR and VR can enable more natural ways of working together remotely, while new directions such as Empathic Computing open up further possibilities for collaboration. As remote working becomes the new normal, I will also discuss opportunities and directions for future work.

Bio: Professor Mark Billinghurst is the Director of the Empathic Computing Laboratory at the University of South Australia and the University of Auckland. A pioneer in the fields of Augmented and Virtual Reality, Professor Billinghurst has been researching AR and VR for over 25 years, publishing more than 550 research papers on topics such as Collaborative AR and VR, Multimodal Interfaces, Mobile AR, and Empathic Computing. In 2013 he was elected a Fellow of the Royal Society of New Zealand, and in 2019 he received the ISMAR Career Impact Award in recognition of his lifetime contribution to AR research and commercialization.





Saurabh Bagchi - October 28


Dependability: Meet Data Analytics


Abstract: We live in a data-driven world, as everyone around us has been telling us for some time. Everything is generating data, in large volumes and at high rates, from the sensors embedded in our physical spaces to the large number of machines in data centers that are monitored for a wide variety of metrics. The question that we pose is: can all this data be used to improve the dependability of computing systems?
Dependability is the property that a computing system continues to provide its functionality despite the introduction of faults, either accidental faults (design defects, environmental effects, etc.) or maliciously introduced faults (security attacks, external or internal). We have been addressing the dependability challenge through large-scale data analytics applied end-to-end from the small (networked embedded systems, mobile and wearable devices) [e.g., NeurIPS-20, Sensys-20, UsenixSec-20, NDSS-20, DSN-19, UsenixSec-18, S&P-17] to the large (edge and cloud systems, distributed machine learning clusters) [e.g., DSN-20, UsenixATC-20, UsenixATC-19, ICS-19, TDSC-18]. In this talk, I will first give a high-level view of how data analytics has been brought to bear on dependability challenges, and key insights arising from work done by the technical community broadly. Then I will do a deep dive into the problem of configuring complex systems to meet dependability and performance requirements, using data-driven decisions. The first detailed item is in the small: how to perform analytics on streaming video close to the source of the data, such as on an embedded or mobile device, while providing performance guarantees. The second is in the large: how to reconfigure clustered NoSQL databases in the face of changing workloads while preserving availability.

Bio: Saurabh Bagchi is a Professor in the School of Electrical and Computer Engineering and the Department of Computer Science at Purdue University in West Lafayette, Indiana. He is the founding Director of a university-wide resiliency center at Purdue called CRISP (2017-present) and co-lead on the WHIN center for IoT testbeds for digital agriculture and advanced manufacturing. He is the recipient of the Alexander von Humboldt Research Award (2018), an Adobe Research award (2017), the AT&T Labs VURI Award (2016), the Google Faculty Award (2015), and the IBM Faculty Award (2014). He serves on the IEEE Computer Society Board of Governors and is a member of the International Federation for Information Processing (IFIP). Saurabh's research interests are in distributed systems and dependable computing. He is proudest of the 21 PhD and about 50 Master's students who have graduated from his research group and who are in various stages of building wonderful careers in industry or academia. In his group, he and his students have far too much fun building and breaking real systems for the greater good. Saurabh received his MS and PhD degrees from the University of Illinois at Urbana-Champaign and his BS degree from the Indian Institute of Technology Kharagpur, all in Computer Science.






Paul Groth - November 4


The Challenge of Constructive Data Search


Abstract: A central challenge in our modern information environment is how to find data and unify it from a multitude of diverse sources. Data discovery remains a particularly challenging activity for researchers. I will present our work looking at how researchers go about searching for and evaluating data, based on in-depth social science inquiry and a unique survey of over 1600 researchers. Building on these insights, I will outline the challenge of constructive data search - building datasets on the fly from multiple sources. Finally, I will discuss our work on the automatic construction of integrated data in the form of knowledge graphs.

Bio: Paul Groth is Professor of Algorithmic Data Science at the University of Amsterdam, where he leads the Intelligent Data Engineering Lab (INDElab). He holds a Ph.D. in Computer Science from the University of Southampton (2007) and has done research at the University of Southern California, the Vrije Universiteit Amsterdam and Elsevier Labs. His research focuses on intelligent systems for dealing with large amounts of diverse, contextualized knowledge, with particular emphasis on web and science applications. This includes research in data provenance, data integration and knowledge sharing. Previously, Paul led the design of a number of large-scale data integration and knowledge graph construction efforts in the biomedical domain. Paul was co-chair of the W3C Provenance Working Group that created a standard for provenance interchange. He has also contributed to the emergence of community initiatives to build a better scholarly ecosystem, including altmetrics and the FAIR data principles. Paul is co-author of “Provenance: an Introduction to PROV” and “The Semantic Web Primer: 3rd Edition” as well as numerous academic articles. He blogs at http://thinklinks.wordpress.com.




Susan Holmes - November 11


Zoom: https://videoconf-colibri.zoom.us/j/949837545


Using the Space of Phylogenetic Trees: Computational and Mathematical Solutions to Biological Problems


Abstract: Phylogenetic trees are important in the study of the evolution of diseases: the human microbiome, HIV and Covid-19 are just a few examples. The mathematical construction of a space of all trees enables the computation of "average" trees in the sense of Fréchet. This space has the property of being negatively curved (CAT(0)), and some of its mathematical properties have consequences for the algorithms we use for combining trees and data. My talk will provide an overview of how these mathematical results can help statistical inference on biological problems that use phylogenies, along with some pointers to results and open problems in the area.
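As background for the abstract's notion of a Fréchet "average" tree: the Fréchet mean generalises the ordinary average to a metric space. Given trees T_1, ..., T_n in the space of trees with geodesic distance d, it is the tree minimising the sum of squared distances to them, and the CAT(0) property mentioned above guarantees that this minimiser is unique:

\[ \bar{T} \;=\; \arg\min_{T \in \mathcal{T}} \sum_{i=1}^{n} d(T, T_i)^2 \]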


Bio: Professor of Statistics, Stanford University. Trained in the French School of Data Analysis (Analyse des Données) in the 1980s, Professor Holmes is a data scientist specialized in exploring and visualizing complex biological data. She is interested in integrating the information provided by phylogenetic trees, community interaction graphs and metabolic networks with sequencing data and clinical covariates in biological contexts such as the immune system and cancer, resilience and biomarker detection in the human microbiome, and drug resistance in HIV. The methods she develops use computational statistics and nonparametric, computer-intensive methods such as the bootstrap and MCMC to draw inferences about many complex biological phenomena, and are made available as open source projects in Bioconductor and R. She teaches many courses in Statistics and Bioinformatics to biologists and mathematicians and has written a book with Wolfgang Huber that is freely available online at http://bios221.stanford.edu/book/. More information: http://statweb.stanford.edu/~susan/




Henderik Proper - November 18


Zoom: https://videoconf-colibri.zoom.us/j/949837545


Domain Modelling - Understanding the things we talk about


Abstract: Whenever we study, or reflect on, complex phenomena such as buildings, information systems, business models, organisations, etc., we tend to use an abstraction of the actual phenomenon. Such abstractions enable us to focus on those properties of the phenomenon at hand that matter in relation to some purpose we may have (e.g. to understand, to assess, or to change the phenomenon). Such abstractions 'stand model for' the actual phenomenon (with regard to the relevant properties), and are therefore generally regarded as (domain) models.
In general, a domain model provides an explicit (i.e. human-understandable) representation of the structure and semantics of selected aspects of some domain of interest. Depending on the application context, domain models may take the form of, for example, data models, semantic models, system dynamics models, information models, enterprise models, domain ontologies, or knowledge graphs. In each case, these models are used to explicitly capture domain knowledge. In other words, domain models allow us to clarify and understand the things we talk, and reason, about.
As such, domain models are used, and useful, in many more application contexts. For instance, in present-day society we can observe a clear increase in the role and use of knowledge-intensive computing technologies, including (explainable) AI, data science & modelling, and digital twins. The application, and operational use, of these technologies also requires relevant domain knowledge to be captured in terms of domain models (for example as domain ontologies or knowledge graphs).
In this webinar, I will discuss some of the general foundations of domain modelling, including (1) the notion of a model itself, (2) what it means to create a model, and (3) the role of modelling languages. Based on this, I will then explore some of the key research challenges regarding domain modelling (applicable across different application contexts), including human-model interaction, collaborative modelling, optimising the return on modelling effort (RoME), and the tension between standardising modelling languages and the need for purpose-specific languages.

Over the past years, my research and industrial work have allowed me to study the use of domain models in the context of enterprise engineering and architecting. During the webinar we will therefore also pay specific attention to the challenges this application area poses for domain modelling.

Bio: Prof.dr. Henderik A. Proper, Erik to friends, is an FNR PEARL Laureate, and Head of Academic Affairs at the Luxembourg Institute of Science and Technology (LIST) in Luxembourg, and senior research manager within its IT for Innovative Services (ITIS) department. He also holds an adjunct chair in Computer Science at the University of Luxembourg.
Erik has a mixed background, covering a variety of roles in both academia and industry. His professional passion is the further development of the field of enterprise engineering, and of enterprise modelling in particular. His long experience in teaching and coaching a wide variety of people enables him to involve and engage others in this development. He has co-authored several journal papers, conference publications and books. His main research interests include enterprise engineering and enterprise modelling, covering enterprise architecture, systems theory, business/IT alignment and conceptual modelling.
Erik received his Master's degree from the University of Nijmegen, The Netherlands in May 1990, and received his PhD (with distinction) from the same University in April 1994. In his Doctoral thesis he developed a theory for conceptual modelling of evolving application domains, yielding a formal specification of evolving information systems.
After receiving his PhD, Erik became a senior research fellow at the Computer Science Department of the University of Queensland, Brisbane, Australia. During that period he also conducted research in the Asymetrix Research Lab at that university for Asymetrix Corp, Seattle, Washington. In 1995 he became a lecturer at the School of Information Systems of the Queensland University of Technology, Brisbane, Australia. During this period he was also seconded as a senior researcher to the Distributed Systems Technology Centre (DSTC), a Cooperative Research Centre funded by the Australian government.
From 1997 to 2001, Erik worked in industry, first as a consultant at Origin, Amsterdam, The Netherlands, and later as a research consultant and principal scientist at the Ordina Institute for Research and Innovation, Gouda, The Netherlands.
In June 2001, Erik returned to academia, where he became an adjunct Professor at the Radboud University Nijmegen. In September 2002, Erik obtained a full-time Professorship position at the Radboud University Nijmegen.
In January 2008, he went back to combining industry and academia, coupling his Professorship with consulting and innovation work at Capgemini, with the aim of tying his theoretical and practical work more closely together. Finally, in May 2010 Erik moved to the Luxembourg Institute of Science and Technology as a PEARL chair, while continuing his chair at the Radboud University Nijmegen in the Netherlands. As of June 2017, Erik also holds an adjunct chair at the University of Luxembourg.




 

Cathy Mulligan - November 25 - CANCELLED


Zoom: https://videoconf-colibri.zoom.us/j/949837545

Blockchain and Europe’s Common, Decentralised Future


Abstract: Since the dawn of the computing era, we have experienced fluctuations between centralised and decentralised computation - firstly with mainframes, then with PCs, through to cloud computing, and finally to smartphones and IoT devices. For the vast majority of computing’s life, computational capacity has been located within the realms of the corporate sphere – hidden behind large capital investments and firewalls. As of 2004, however, two separate but deeply intertwined things occurred: firstly, the world saw the emergence of true Open APIs – APIs that permitted anyone with an internet connection to access data (e.g. on Facebook or Twitter); and secondly, the same amount of computing power that took mankind to the moon was placed into the hands of end-users, not just companies. The end result of those two things was blockchain – a new form of decentralisation that challenges not just our notions of centralised/decentralised computing, but the very foundations of our economy itself. Whole new forms of business models are enabled by blockchain – from decentralised data marketplaces to decentralised food production. Indeed, as the next wave of decentralisation descends upon us – in the form of AI on the edge and AI on the device – blockchain will play a critical role in the emergence of a new European economy – a truly digital economy. More importantly, blockchain can help us build a European economy that is environmentally, economically and socially sustainable. This talk will outline the interdisciplinary path that Europe needs to follow in order to achieve this as our world comes to terms with its past and faces its uncertain future – together but decentralised.

Bio: Dr Cathy Mulligan has over 25 years' experience in technology across both industry & academia; she is an Honorary Researcher at UCL, visiting researcher at Imperial College & VP/Region CTO for North & West Europe at Fujitsu. Prior to joining Fujitsu, she was Co-Director of the Centre for Cryptocurrency Research and Engineering at Imperial College, where she helped develop over 45 proofs of concept around Blockchain. Cathy was an advisor to several governmental reports, including the UK Chief Scientific Advisor's report on blockchain for government. Within her work at UCL, Cathy leads the work around "DataNet" - a blockchain-based infrastructure designed to rebalance and redistribute the geo-political balance of technology across the world. She has a strong interest & track record in using research to influence policy in different arenas, including high-level policy discussions at various levels across governments, NGOs, UN, OECD & EU around various digital technologies including 5G, blockchain & IoT. She is the author of 7 technology books (academic press). She is a member of the World Economic Forum’s Data Policy Global Future Council and was a founding member of the World Economic Forum's Blockchain Council. Cathy has a strong commitment to sustainability and equitable access to technology, evidenced by her membership of the UN Secretary-General's High-Level Panel on Digital Cooperation. She received her Master's and PhD from the University of Cambridge and her BSc (Hons 1) from UNSW, Australia.