WS 2017/18


All lectures take place at 17:00 in SR 3325 at Ernst-Abbe-Platz 2!

18.10.2017 Hans-Peter Seidel (MPI Saarbrücken)

3D Image Analysis and Synthesis – The World inside the Computer

During the last three decades, computer graphics has established itself as a core discipline within computer science and information technology. Two decades ago, most digital content was textual. Today it has expanded to include audio, images, video, and a variety of graphical representations. New and emerging technologies such as multimedia, social networks, digital television, digital photography, the rapid development of new sensing devices, telecommunication and telepresence, virtual reality, and the 3D internet further indicate the potential of computer graphics in the years to come.
Typical of the field is the combination of very large data sets with the demand for fast, and possibly interactive, high-quality visual feedback. Furthermore, the user should be able to interact with the environment in a natural and intuitive way.

In order to address the challenges mentioned above, a new and more integrated scientific view of computer graphics is required. In contrast to the classical approach to computer graphics, which takes as input a scene model -- consisting of a set of light sources, a set of objects (specified by their shape and material properties), and a camera -- and uses simulation to compute an image, we prefer the more integrated view of '3D Image Analysis and Synthesis' for our research.
We consider the whole pipeline from data acquisition through data processing to rendering in our work. In our opinion, this point of view is necessary in order to exploit the capabilities and perspectives of modern hardware, both on the input side (sensors, scanners, digital photography, digital video) and on the output side (graphics hardware, multiple platforms). Our vision and long-term goal is the development of methods and tools to efficiently handle the huge amount of data during the acquisition process, to extract structure and meaning from the abundance of digital data, and to turn this into graphical representations that facilitate further processing, rendering, and interaction.
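The classical scene-model-to-image simulation mentioned above (light sources, objects with shape and material, a camera) can be made concrete with a minimal ray caster. This is an illustrative sketch only, not code from the speaker's group; the scene (one sphere, one directional light, Lambert shading) is a made-up example:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def intersect_sphere(origin, direction, center, radius):
    """Smallest positive t with |origin + t*direction - center| = radius,
    assuming direction is unit length; None if the ray misses."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width=48, height=24):
    """Simulate one sphere lit by one directional light (Lambert term),
    writing the image as ASCII shades."""
    center, radius = (0.0, 0.0, -3.0), 1.0
    light = normalize([1.0, 1.0, 1.0])   # direction towards the light
    shades = " .:-=+*#%@"
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # camera at the origin; map pixel to the image plane z = -1
            x = 2 * (i + 0.5) / width - 1
            y = 1 - 2 * (j + 0.5) / height
            d = normalize([x, y, -1.0])
            t = intersect_sphere((0, 0, 0), d, center, radius)
            if t is None:
                row += " "
            else:
                hit = [t * dk for dk in d]
                n = normalize([h - c for h, c in zip(hit, center)])
                lam = max(0.0, sum(nk * lk for nk, lk in zip(n, light)))
                row += shades[int(lam * (len(shades) - 1))]
        rows.append(row)
    return "\n".join(rows)

print(render())
```

Every step of the integrated view discussed in the talk replaces or feeds one ingredient of this forward simulation: acquisition supplies the geometry and materials, processing structures them, rendering runs the simulation.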

In this presentation I will highlight some of our ongoing research by means of examples.  Topics covered include 3D reconstruction and digital geometry processing, shape analysis and shape design, motion and performance capture, and 3D video processing.

20.10.2017 Hannes Mehnert (University of Cambridge, UK)

Whom must we trust when using software? Approaches to radically shrinking the "Trusted Computing Base"

ATTENTION!

Different time, different location!

Friday, 20.10.2017, 16:00

Carl-Zeiss-Straße 3 · Lecture Hall 3



13.12.2017 Erhard Rahm (Universität Leipzig)

Scalable graph analytics

Many big data applications in business and science require the management and analysis of huge amounts of graph data. Suitable systems to manage and analyze such graph data must meet a number of challenging requirements, including support for an expressive graph data model, powerful query and graph-mining capabilities, ease of use, as well as high performance and scalability. In this talk, we discuss current system approaches for the management and analysis of "big graph data". In particular, we introduce a new research framework called Gradoop, developed at the German Big Data center ScaDS, which is built on the so-called Extended Property Graph Data Model, with dedicated support for analyzing not only single graphs but also collections of graphs. We also discuss current and future research challenges.
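The distinguishing feature of the Extended Property Graph Data Model -- logical graphs that carry their own labels and properties over shared vertex and edge sets, grouped into collections -- can be sketched in a few lines. Python is used here purely for illustration (Gradoop itself is a Java framework on Apache Flink), and all class and field names below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    vid: int
    label: str
    props: dict = field(default_factory=dict)

@dataclass
class Edge:
    src: int
    dst: int
    label: str
    props: dict = field(default_factory=dict)

@dataclass
class LogicalGraph:
    """A graph that, like vertices and edges, has a label and properties.
    It references elements of shared sets, so one vertex may belong to
    several logical graphs at once."""
    label: str
    props: dict
    vertex_ids: set
    edge_indices: set

# shared element sets
vertices = [
    Vertex(0, "Person", {"name": "Alice"}),
    Vertex(1, "Person", {"name": "Bob"}),
    Vertex(2, "Forum",  {"title": "Graphs"}),
]
edges = [
    Edge(0, 1, "knows"),
    Edge(0, 2, "memberOf"),
    Edge(1, 2, "memberOf"),
]

# a graph COLLECTION: two overlapping logical graphs
collection = [
    LogicalGraph("community", {"interest": "graphs"}, {0, 1, 2}, {0, 1, 2}),
    LogicalGraph("friends", {}, {0, 1}, {0}),
]

# a collection-level operator: select the graphs matching a predicate
def select(coll, pred):
    return [g for g in coll if pred(g)]

small = select(collection, lambda g: len(g.vertex_ids) <= 2)
print([g.label for g in small])  # ['friends']
```

Operators such as `select` take collections to collections, which is what lets an analysis pipeline treat "a set of communities" as a first-class value rather than as one big graph.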

Bio: Erhard Rahm is full professor of databases at the computer science institute of the University of Leipzig, Germany. His current research focuses on Big Data, graph analytics, and data integration. He has authored several books and more than 200 peer-reviewed journal and conference publications. His research has been recognized several times, in particular with the renowned 10-year best-paper award of the VLDB (Very Large Databases) conference series and the Influential Paper Award of the ICDE (Int. Conf. on Data Engineering) conference series. Prof. Rahm is co-director of ScaDS (competence center for SCAlable Data services and Solutions) Dresden/Leipzig, the German center of excellence on Big Data.


17.01.2018 Ulrich Meyer (Goethe Universität, Frankfurt am Main)

Algorithm Engineering for very large graphs

Very large graphs arise naturally in such diverse domains as social networks, web search, computational biology, and machine learning. Even simple tasks like traversing these graphs become challenging once they can no longer be stored in the main memory of a single machine. If the graph is stored in external memory, many algorithms that perform very well on internal-memory graphs become inefficient because of the large number of I/Os they incur. To alleviate this I/O bottleneck, many external-memory graph traversal algorithms with provable worst-case guarantees have been designed. In the talk I will highlight some techniques used in the design and engineering of such algorithms and survey the state of the art in I/O-efficient graph traversal algorithms. I will also report on recent work concerning the generation of massive scale-free networks, such as social networks, protein-protein interaction networks, or semantic networks, under resource constraints.
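One classic technique from this area is level-synchronous BFS in the style of Munagala and Ranade: instead of probing a per-vertex visited flag for every edge (each probe a random access, i.e. a separate I/O when the graph lives on disk), all neighbours of the current level are gathered in one batch, sorted and deduplicated, and the previous two levels are subtracted -- scan and sort operations, which are cheap in the external-memory model. The in-memory Python sketch below only illustrates the access pattern (dictionaries stand in for disk-resident adjacency lists) and assumes an undirected graph, which is what makes the two-level subtraction correct:

```python
def em_style_bfs(adj, source):
    """Level-synchronous BFS over an UNDIRECTED graph: every neighbour of
    level t lies in level t-1, t, or t+1, so subtracting the current and
    previous levels from the batched neighbour set yields exactly the
    next level -- no per-edge visited lookups needed."""
    level = {source: 0}
    prev, curr = set(), {source}
    depth = 0
    while curr:
        # one batched, sorted scan over the adjacency lists of the level
        neighbours = sorted({w for v in curr for w in adj[v]})
        nxt = {w for w in neighbours if w not in curr and w not in prev}
        depth += 1
        for w in nxt:
            level[w] = depth
        prev, curr = curr, nxt
    return level

# small undirected example (adjacency lists are symmetric)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(em_style_bfs(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

On disk, the "gather neighbours" step becomes a single sequential scan of concatenated adjacency lists and the deduplication becomes an external sort, which is why the I/O count depends on sorting complexity rather than on the number of edges times one random access each.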


31.01.2018 Paolo Bientinesi (RWTH Aachen)

Teaching computers linear algebra

In the mid-1950s, the computing world was revolutionized by the advent of "The IBM Mathematical Formula Translating System" (FORTRAN), a program -- nowadays universally recognized as the first complete compiler -- that allowed scientists to express calculations in a "high-level", portable language. Both FORTRAN and C were, and still are, much better solutions than computer-specific code, but they still require users to reduce their mathematical formulas to scalar computations. Indeed, computers only operate on scalars and small arrays, while scientists operate with vectors, matrices, and higher-dimensional objects. In the past 60 years there has been tremendous progress in the world of programming languages and compilers, and many languages and libraries (Matlab, Julia, Armadillo, Eigen, ...) now make it possible to code directly in terms of matrices; however, in terms of efficiency, these solutions are still far from what human experts achieve. In a nutshell, none of these tools knows linear algebra well enough to compete with humans. In this talk I present the Linear Algebra Mapping Problem (LAMP), that is, the problem of efficiently computing linear algebra expressions from a set of available building blocks, and the compiler Linnea, our initial solution to the problem.
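The gap between "code in terms of matrices" and "what human experts achieve" shows up already in the parenthesization of A·B·v: the two orders are mathematically identical but differ enormously in cost, and evaluating left to right is exactly what a naive mapping does. A self-contained sketch of this one aspect of LAMP (the cost model, counting one multiply-add per scalar product term, is standard; the example dimensions are made up):

```python
def matmul(A, B):
    """Naive dense product of nested lists; an (m x k) by (k x n) product
    costs m*k*n scalar multiply-adds, the cost counted by `cost` below."""
    m, k, n = len(A), len(B), len(B[0])
    assert len(A[0]) == k
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

def cost(m, k, n):
    return m * k * n

# The two parenthesisations of A @ B @ v agree numerically ...
n = 3
A = [[i + j for j in range(n)] for i in range(n)]
B = [[i * j + 1 for j in range(n)] for i in range(n)]
v = [[1], [2], [3]]
assert matmul(matmul(A, B), v) == matmul(A, matmul(B, v))

# ... but not in cost. For n x n matrices A, B and an n x 1 vector v:
n = 1000
left = cost(n, n, n) + cost(n, n, 1)    # (A B) v : one matrix-matrix product
right = cost(n, n, 1) + cost(n, n, 1)   # A (B v) : two matrix-vector products
print(left // right)  # 500: the left-to-right order does 500x more work
```

A LAMP solver faces this choice compounded across whole expressions, plus the choice of which library building block (e.g. a triangular vs. general solve) realizes each node; parenthesization is only the simplest instance.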