Keynote Speakers

Sven Giesselbach
Fraunhofer Institute - IAIS, Germany

Topics

Foundation Models, Large Language Models, Deep Learning

Biography

Sven Giesselbach leads the Natural Language Understanding (NLU) team at the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS). His team develops solutions for medical, legal, and general document understanding, which at their core build upon (large) pre-trained language models. He is also part of the Lamarr Institute and the OpenGPT-X project, in which he investigates various aspects of Foundation Models. Drawing on his experience from more than 25 natural language understanding projects, he studies the effect of Foundation Models on the execution of NLU projects and the novel challenges and requirements that arise with them. He has published several papers on Natural Language Processing and Understanding, focusing on the creation of application-ready NLU systems and the integration of expert knowledge at various stages of the solution design. Most recently, he co-authored the book “Foundation Models for Natural Language Processing – Pre-trained Language Models Integrating Media”, published by Springer Nature.

Gerhard Paaß and Sven Giesselbach, Foundation Models for Natural Language Processing – Pre-trained Language Models Integrating Media, Springer, May 2023.

https://link.springer.com/book/9783031231896

Talk



Vivek Natarajan
Google, USA

Topics

Large Language Models, Deep Learning, Natural Language Processing

Biography

Vivek Natarajan is a Research Scientist at Google leading research at the intersection of large language models (LLMs) and biomedicine. In particular, Vivek is the lead researcher behind Med-PaLM and Med-PaLM 2, the first AI systems to obtain passing and expert-level scores, respectively, on US Medical Licensing Examination questions. Med-PaLM was recently published in Nature and has been featured in Scientific American, The Wall Street Journal, The Economist, STAT News, CNBC, Forbes, and New Scientist, among others. More recently, Vivek also led the development of Med-PaLM M, the first demonstration of a generalist biomedical AI system.

Over the years, Vivek’s research has been published in well-regarded journals and conferences such as Nature, Nature Medicine, Nature Biomedical Engineering, JMLR, CVPR, ICCV, and NeurIPS. It also forms the basis for several regulated medical device products in clinical trials at Google, including the NHS AI Award-winning breast cancer detection system Mammo Reader and the skin condition classification system DermAssist.

Prior to Google, Vivek worked on multimodal assistant systems at Facebook AI Research, where he published award-winning research, was granted multiple patents, and deployed AI models to products at scale serving hundreds of millions of users.

Talk



Panos Pardalos
University of Florida, USA

Topics

Global Optimization, Mathematical Modeling, Energy Systems, Financial Applications, and Data Sciences

Biography

Panos Pardalos was born in Drosato (Mezilo), Argithea, Greece, in 1954 and graduated from Athens University (Department of Mathematics). He received his PhD in Computer and Information Sciences from the University of Minnesota. He is a Distinguished Emeritus Professor in the Department of Industrial and Systems Engineering at the University of Florida and an affiliated faculty member of the Biomedical Engineering and Computer & Information Science & Engineering departments.

Panos Pardalos is a world-renowned leader in Global Optimization, Mathematical Modeling, Energy Systems, Financial Applications, and Data Sciences. He is a Fellow of AAAS, AAIA, AIMBE, EUROPT, and INFORMS and was awarded the 2013 Constantin Caratheodory Prize of the International Society of Global Optimization. In addition, he was awarded the 2013 EURO Gold Medal bestowed by the Association of European Operational Research Societies. This medal is the preeminent European award given to Operations Research (OR) professionals for “scientific contributions that stand the test of time.”

Panos Pardalos has been awarded a prestigious Humboldt Research Award (2018–2019). The Humboldt Research Award is granted in recognition of a researcher’s entire achievements to date: fundamental discoveries, new theories, and insights that have had a significant impact on their discipline.

Panos Pardalos is also a member of several Academies of Sciences, and he holds several honorary PhD degrees and affiliations. He is the Founding Editor of Optimization Letters and Energy Systems, and Co-Founder of the Journal of Global Optimization, Computational Management Science, and Springer Nature Operations Research Forum. He has published over 600 journal papers and edited/authored over 200 books. He is one of the most cited authors in his field and has graduated 71 PhD students so far. Details can be found at www.ise.ufl.edu/pardalos.

Panos Pardalos has lectured and given invited keynote addresses worldwide in countries including Austria, Australia, Azerbaijan, Belgium, Brazil, Canada, Chile, China, Czech Republic, Denmark, Egypt, England, France, Finland, Germany, Greece, Holland, Hong Kong, Hungary, Iceland, Ireland, Italy, Japan, Lithuania, Mexico, Mongolia, Montenegro, New Zealand, Norway, Peru, Portugal, Russia, South Korea, Singapore, Serbia, South Africa, Spain, Sweden, Switzerland, Taiwan, Turkey, Ukraine, United Arab Emirates, and the USA.

Talk



Raniero Romagnoli
Almawave, Italy

Biography

Raniero Romagnoli is currently the CTO of Almawave, which he joined in 2011, with responsibility for defining and implementing the company’s technology strategy, with a special focus on its R&D labs. He helps Almawave create and evolve its products and solutions, which are based on proprietary Natural Language Processing technology that leverages speech and text information and communications to govern processes and improve both self-service and assisted engagement with users. Before joining Almawave, Raniero worked for two years at RSA, and before that for almost ten years at Hewlett Packard, across different technology areas and divisions, covering roles in both Product Management and R&D in the intelligent support systems area. Raniero has broad experience in the artificial intelligence field, starting from his research activities in the late ’90s on Machine Learning and Neural Networks for image processing, then in the security space, and, since joining Almawave, in the field of speech and text analysis.

Talk



Johannes Schmidt-Hieber
University of Twente, The Netherlands

Topics

Mathematics of Artificial Neural Networks, Biological Neural Networks, Deep Learning

Biography

Johannes Schmidt-Hieber was born in Freiburg im Breisgau, Germany, in 1984. He received the master’s degree from the University of Göttingen, Germany, in 2007, and the joint Ph.D. degree from the University of Göttingen and the University of Bern, Switzerland, in 2010. His Ph.D. was followed by two one-year post-doctoral visits at Vrije Universiteit Amsterdam, The Netherlands, and ENSAE, Paris, France. From 2014 to 2018, he was an Assistant Professor at the University of Leiden. Since 2018, he has been a Full Professor at the University of Twente, The Netherlands. His research interests lie in mathematical statistics, including nonparametric Bayes and statistical theory for deep neural networks. He serves as an Associate Editor for the Annals of Statistics, Bernoulli, and Information and Inference.

Prof. Schmidt-Hieber’s ERC Consolidator Grant project has been selected by the ERC as one of four highlighted projects.

Talk



Michal Valko
Meta Paris, France

Topics

Fine-Tuning of LLMs, Large Language Models, Reinforcement Learning with Human Feedback

Biography

Michal is a Principal Llama Engineer at Meta Paris, a tenured researcher at Inria, and the lecturer of the master’s course Graphs in Machine Learning at l’ENS Paris-Saclay. Michal is primarily interested in designing algorithms that require as little human supervision as possible. That is why he works on methods and settings that can deal with minimal feedback, such as deep reinforcement learning, bandit algorithms, self-supervised learning, and self-play. Michal has recently worked on representation learning, world models, and deep (reinforcement) learning algorithms that have some theoretical underpinning. In the past, he has also worked on sequential algorithms with structured decisions, where exploiting the structure leads to provably faster learning. Michal is now working on large language models (LLMs), in particular providing algorithmic solutions for their scalable fine-tuning and alignment. He received his Ph.D. in 2011 from the University of Pittsburgh under the supervision of Miloš Hauskrecht and was a postdoc of Rémi Munos before getting a permanent position at Inria in 2012 and starting Google DeepMind Paris in 2018.

Talk