Supercomputing has advanced over the years into an indispensable, strategic key technology for theoretical and experimental research as well as for industrial modelling, prototyping and product optimization. However, the concurrency of supercomputers has increased on all levels and will soon reach tens of millions of threads using hundreds of petabytes of memory. Supercomputing and Computational Science and Engineering (CSE) must aggressively tackle these challenges on the hardware, software and algorithmic level to sustain the dramatic advances through optimized simulation models, algorithms, codes and tools as well as competitive supercomputer systems. In particular, the “Supercomputing” part of the KCIST topic focusses on application scaling, code optimization, advanced algorithms, novel multi-core architectures, model-driven parallel design, data integration and scientific visualisation.
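Why application scaling at tens of millions of threads is hard can be illustrated with Amdahl's law, a standard textbook model (not a method specific to KCIST): the fraction of a code that remains serial caps the achievable speedup, no matter how many threads are added. A minimal sketch:

```python
def amdahl_speedup(parallel_fraction, n_threads):
    """Ideal speedup under Amdahl's law.

    parallel_fraction: share of the runtime that parallelizes perfectly.
    The remaining serial share bounds the speedup at 1 / (1 - parallel_fraction).
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_threads)

# Even a code that is 95% parallel cannot exceed ~20x speedup,
# regardless of concurrency level:
print(round(amdahl_speedup(0.95, 10_000_000), 2))  # ~20.0
```

This is why the focus areas above pair raw concurrency with algorithmic work: shrinking the serial fraction (and communication overhead, which this simple model ignores) matters more than adding threads.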
The vast amounts of data produced by the variety of complex, large-scale experiments, detectors, observations, measurements, surveys and simulations in nearly all fields of science and engineering pose new challenges. Similar challenges exist in several domains in industry: large-scale databases, mission-critical and confidential corporate data, or the internet economy. Extracting knowledge from the ever-increasing volume, velocity and variety of Big Data leads to demanding requirements for storage, networking and analysis capacities. What is more, the capabilities and functionalities of modern data management and analysis technologies, methods, algorithms and tools have to keep up with these growing demands. In particular, the “Big Data” part of the KCIST topic focusses on data management, data security, efficient algorithms and data structures, data analytics, classification, outlier detection and data visualisation.
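Outlier detection, one of the analytics tasks named above, can be illustrated with a minimal sketch. The z-score method below is a generic textbook technique chosen for illustration, not a method the text prescribes; the data and threshold are invented:

```python
import statistics

def zscore_outliers(values, threshold=2.0):
    """Flag values whose distance from the mean exceeds
    `threshold` sample standard deviations (simple illustrative method)."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Hypothetical sensor readings with one anomalous measurement:
data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 42.0]
print(zscore_outliers(data))  # → [42.0]
```

At Big Data scale the same idea is typically applied in streaming or distributed form, since the full value list no longer fits on one node; the core statistic, however, stays the same.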