All talks take place at 5:00 p.m. in SR 3325 at Ernst-Abbe-Platz 2!
Robots with Human-Level Intelligence: Is It Possible? Dangerous? Inevitable?
Intelligence is of great importance in our society and culture. But intelligence also remains mysterious and is notoriously difficult to reproduce in technological artifacts. So far, every attempt to produce an intelligent machine (for example, Deep Blue, Watson, or AlphaGo) has brought about technological progress but has failed to significantly advance our understanding of intelligence. Maybe all of these machines were missing something? Maybe they were missing a body? And maybe, therefore, our next attempt to build a truly intelligent machine should take the form of a robot? I will argue why I believe this is a good idea and present our attempts to move toward this goal. However, rather than shooting for mankind's most advanced intellectual challenges, such as the games of chess and Go, I will advocate first replicating the surprising abilities of cockatoos. And cockatoo robots sound so much less dangerous than the apocalyptic robots portrayed in Hollywood movies.
Prevention Instead of Restarting: What Can Be Done About Software Aging?
Software aging is a phenomenon of gradual performance degradation of running software processes due to resource or memory leaks, accumulation of rounding errors, or other types of corrupted process state. The underlying software defects typically manifest only after a long incubation time and are frequently discovered only in production. Consequently, such problems can incur substantial follow-up costs, including decreased availability and performance of production systems, the need for workarounds (e.g., controlled restarts), and high debugging effort for localizing the causes.
In this talk, we first present the results of an empirical study of more than 400 software aging issues found in 11 Apache Foundation (Java) projects. We discuss several surprising findings, including the prevalence of seemingly easy-to-fix defects such as incorrect exception handling, and the fact that most defects are found by manual code review. Our findings lead to several tangible implications for more effective prevention and diagnosis of software aging.
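One defect class highlighted above, incorrect exception handling, can be sketched in a few lines. The example below is an illustrative Python analogue (the studied projects were Java), with invented function names.

```python
import os
import tempfile

def parse_header_leaky(path):
    # Defect pattern: if readline() raises, f.close() is never reached,
    # so the file descriptor leaks. Leaked handles accumulate over long
    # uptimes -- a typical software-aging failure mode.
    f = open(path)
    line = f.readline()
    f.close()  # skipped on every exceptional exit path above
    return line

def parse_header_fixed(path):
    # Fix: the context manager closes the handle on all exit paths,
    # including exceptions.
    with open(path) as f:
        return f.readline()

# Demonstrate the fixed variant on a throwaway file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("header\ndata\n")
header = parse_header_fixed(path)
os.remove(path)
print(header.strip())  # → header
```

The defect is "easy to fix" in exactly the sense the study describes: the correction is a one-line structural change, yet the failure it prevents only appears after long incubation.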
In the second part, we focus on the challenge of finding leak-related defects with the software testing processes widely deployed in industry today. The inherent problem is that software tests should run as fast as possible, which contradicts the long incubation time of visible failures. To address this difficulty, we propose an approach for automated leak detection that compares the memory allocation behavior of successive software versions under development. This approach comes in two flavors: to detect whether new memory leaks may have been introduced at all, we compare the overall memory usage of two software versions and apply statistical anomaly detection methods. In the second variant, we compare the behavior of each code location that allocates heap memory. The latter variant can pinpoint the root causes of potential memory leaks, but incurs increased testing overhead and requires good code coverage by the existing test code.
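The per-allocation-site variant can be sketched as follows. This is a minimal illustration, not the talk's actual implementation: the measurements are invented, and a robust modified z-score stands in for whatever anomaly detection method the authors use.

```python
from statistics import median

def flag_suspicious_sites(old, new, threshold=3.5):
    """Compare per-allocation-site memory usage (bytes after a fixed
    test workload) of two software versions; flag sites whose growth
    is a statistical outlier relative to normal fluctuation."""
    growth = {s: new[s] - old[s] for s in old.keys() & new.keys()}
    med = median(growth.values())
    mad = median(abs(g - med) for g in growth.values())
    if mad == 0:
        return []
    # Modified z-score: robust against the outlier itself dominating
    # the spread estimate, which matters with few allocation sites.
    return sorted(s for s, g in growth.items()
                  if 0.6745 * (g - med) / mad > threshold)

# Invented measurements: Cache.put grows far more between versions
# than ordinary version-to-version fluctuation would explain.
v1 = {"Cache.put": 10_000, "Parser.read": 5_000, "Log.write": 2_000, "Pool.get": 3_000}
v2 = {"Cache.put": 90_000, "Parser.read": 5_200, "Log.write": 2_100, "Pool.get": 3_050}
print(flag_suspicious_sites(v1, v2))  # → ['Cache.put']
```

Flagging a site only narrows the search to a code location; confirming it as a true leak (rather than, say, a legitimately larger cache) still requires inspection, which is the debugging-effort trade-off the talk describes.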
Mining cohort data to meet medical research demands
Data mining methods are increasingly used in medical applications, for example to analyze electronic health records (EHR). Can EHR be used for medical research, and if so, how? I start this talk with an example application in which millions of EHR have been used for a retrospective study of a rare disease. Then, I turn to the more traditional contexts of data analysis in medical research, starting with learning on epidemiological data.
Epidemiological studies encompass a modest number of participants, for whom a very large number of variables are recorded. Such data are painstakingly collected, often over several years. They are properly prepared and excellently described. Starting with the simple task of classifying cohort participants with respect to a specific outcome, I elaborate on the emerging challenges and on the increasingly complex mining solutions that become part of the knowledge discovery workflow. I also present first results on constraint-based learning for dimensionality reduction and classification.
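As a toy illustration of the classification setting described above (few participants, many variables), the sketch below pairs a crude variance-based dimensionality reduction with a nearest-centroid classifier. The data are entirely invented for the example and bear no relation to any real cohort; the constraint-based methods from the talk are not shown.

```python
from statistics import mean, pvariance

def top_variance_features(X, k):
    """Indices of the k variables with the highest variance across
    participants -- a simple dimensionality-reduction step for data
    with many more variables than samples."""
    n_features = len(X[0])
    var = [pvariance([row[j] for row in X]) for j in range(n_features)]
    return sorted(range(n_features), key=lambda j: var[j], reverse=True)[:k]

def nearest_centroid(X, y, x_new, features):
    """Assign x_new to the class whose centroid is closest, measured
    only on the selected features."""
    centroids = {}
    for label in set(y):
        rows = [row for row, lab in zip(X, y) if lab == label]
        centroids[label] = [mean(row[j] for row in rows) for j in features]
    def sq_dist(c):
        return sum((x_new[j] - cj) ** 2 for j, cj in zip(features, c))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab]))

# Invented cohort: 6 participants, 8 variables, binary outcome.
# Only variables 0 and 2 actually separate the two classes.
X = [
    [1.0, 0.2, 5.1, 0.1, 0.0, 3.2, 0.1, 0.2],
    [1.1, 0.3, 5.0, 0.1, 0.1, 3.1, 0.2, 0.1],
    [0.9, 0.1, 5.2, 0.2, 0.0, 3.3, 0.1, 0.3],
    [3.0, 0.2, 1.0, 0.1, 0.1, 3.2, 0.1, 0.2],
    [3.1, 0.3, 1.1, 0.2, 0.0, 3.1, 0.2, 0.1],
    [2.9, 0.1, 0.9, 0.1, 0.1, 3.3, 0.1, 0.2],
]
y = [0, 0, 0, 1, 1, 1]
feats = top_variance_features(X, 2)  # selects the two informative variables
pred = nearest_centroid(X, y, [3.0, 0.2, 1.0, 0.1, 0.0, 3.2, 0.1, 0.2], feats)
print(feats, pred)  # → [2, 0] 1
```

With far more variables than participants, a classifier fit on all variables overfits easily; reducing dimensionality first is the standard remedy, and the talk's constraint-based learning addresses exactly this reduction step in a more principled way.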