Kropf, Peter
Search results
A Roadmap for Research in Sustainable Ultrascale Systems
2018, Sousa, Leonel, Kropf, Peter, Kuonen, Pierre, Prodan, Radu, Trinh, Tuan Anh, Carretero, Jesus
The COST Action IC1305 (NESUS) proposes in this research roadmap research objectives and twelve associated recommendations which, in combination, can help bring about the notable changes required to make sustainable ultrascale computing systems a reality. Moreover, they are useful for industry and stakeholders in defining a path towards ultrascale systems.
Methodological Approach to Data-Centric Cloudification of Scientific Iterative Workflows
2016-12-14, Kropf, Peter
The computational complexity and the constantly increasing amount of input data for scientific computing models are threatening their scalability. In addition, this is leading towards more data-intensive scientific computing, thus raising the need to combine techniques and infrastructures from the HPC and big data worlds. This paper presents a methodological approach to cloudify generalist iterative scientific workflows, with a focus on improving data locality and preserving performance. To evaluate this methodology, it was applied to a hydrological simulator, EnKF-HGS. The design was implemented using Apache Spark, and assessed in a local cluster and in Amazon Elastic Compute Cloud (EC2) against the original version to evaluate performance and scalability.
Applications for ultrascale computing
2015, Kropf, Peter
Studies of complex physical and engineering systems, represented by multi-scale and multi-physics computer simulations, have an increasing demand for computing power, especially when the simulations of realistic problems are considered. This demand is driven by the increasing size and complexity of the studied systems or by time constraints. Ultrascale computing systems offer a possible solution to this problem. Future ultrascale systems will be large-scale complex computing systems combining technologies from high performance computing, distributed systems, big data, and cloud computing. Thus, the challenge of developing and programming complex algorithms on these systems is twofold. Firstly, the complex algorithms have to be either developed from scratch, or redesigned in order to yield high performance, while retaining correct functional behaviour. Secondly, ultrascale computing systems impose a number of non-functional cross-cutting concerns, such as fault tolerance or energy consumption, which can significantly impact the deployment of applications on large complex systems. This article discusses the state of the art of programming for current and future large-scale systems, with an emphasis on complex applications. We derive a number of programming and execution support requirements by studying several computing applications that the authors are currently developing, and discuss their potential and necessary upgrades for ultrascale execution.
Introduction: Integrated Computer-Aided Engineering
2013, Kropf, Peter
This special issue is based on the 15th International Conference on Computer Supported Cooperative Work in Design (CSCWD 2011) held in Lausanne, Switzerland, on June 8–11, 2011. CSCWD is a series of annual international conferences (http://www.cscwd.org/) organized by the IEEE SMC Technical Committee on Computer Supported Cooperative Work in Design to provide a forum for researchers and practitioners involved in different but related domains to confront research results and discuss key problems in the design of complex artifacts. The scope includes the research and development fields of collaboration technologies and their applications to the design of processes, products, systems, and services in industries and societies. From about 130 papers presented in technical sessions at the conference, authors of the twenty-six papers considered to be the most innovative and original in terms of collaboration technologies and engineering applications were invited to submit "substantially extended and updated manuscripts with additional original computational materials based on their most recent research" for possible publication in this issue. It was also noted that "the overlap between the new submission and the paper published in the conference proceedings should not be more than 50%." Each submitted extended manuscript was subsequently reviewed by 4 to 7 reviewers using the journal review form. The six manuscripts included in this issue are those that successfully passed through two rounds of the journal's rigorous review process.
Efficient Broadcasting Algorithm in Harary-like Networks
2017-8-1, Bhabak, Puspal, Harutyunyan, Hovhannes, Kropf, Peter
In this paper, we analyze the properties of Harary graphs and some derivatives with respect to the achievable performance of communication within network structures based on these graphs. In particular, we define Cordal-Harary graphs on n nodes, which can be constructed for any even n and any odd degree between 3 and 2[log n] - 1. We also present a simple algorithm for fast message broadcasting in this network. Our analysis shows that when the nodes of a Cordal-Harary graph have logarithmic degree, the broadcasting time will be as small as [log n], which is the minimum possible value for a network on n nodes. All these properties show that the Cordal-Harary graph is a very good network architecture for parallel processing.
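The broadcast setting in this abstract can be illustrated with a small simulation. The sketch below builds a chordal ring (a simplified stand-in for the Cordal-Harary construction, not the paper's exact definition) and runs a greedy telephone-model broadcast in which every informed node informs at most one uninformed neighbour per round; the function names and chord offsets are illustrative assumptions, and the greedy scheme is not the paper's algorithm.

```python
from collections import defaultdict

def chordal_ring(n, chords):
    """Ring on n nodes plus chord edges at the given offsets.
    A simplified stand-in for the Cordal-Harary construction."""
    adj = defaultdict(set)
    for v in range(n):
        for off in [1] + list(chords):
            adj[v].add((v + off) % n)
            adj[(v + off) % n].add(v)
    return adj

def broadcast_rounds(adj, source=0):
    """Telephone model: per round, each informed node calls at most
    one uninformed neighbour. Returns the number of rounds needed."""
    informed = {source}
    rounds = 0
    while len(informed) < len(adj):
        newly = set()
        for v in informed:
            for u in adj[v]:
                if u not in informed and u not in newly:
                    newly.add(u)   # v informs u in this round
                    break
        informed |= newly
        rounds += 1
    return rounds
```

Since the informed set can at most double each round, any such schedule needs at least ceil(log2 n) rounds, which is the lower bound the abstract refers to.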
Lessons Learned from Applying Big Data Paradigms to a Large Scale Scientific Workflow
2016-11-14, Kropf, Peter
The increasing amount of data related to the execution of scientific workflows has raised awareness of their shift towards parallel data-intensive problems. In this paper, we report our experience of combining the traditional high-performance computing and grid-based approaches for scientific workflows with Big Data analytics paradigms. Our goal was to assess and discuss the suitability of such data-intensive-oriented mechanisms for production-ready workflows, especially in terms of scalability, focusing on a key element in the Big Data ecosystem: the data-centric programming model. Hence, we reproduced the functionality of an MPI-based iterative workflow from the hydrology domain, EnKF-HGS, using the Spark data analysis framework. We conducted experiments on a local cluster, and we relied on our results to discuss promising directions for further research.
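The data-centric programming model mentioned in this abstract can be sketched in miniature: each iteration maps a model update over ensemble members, then reduces them to a global statistic that parameterises the next iteration, mirroring how an iterative Spark job chains map and reduce stages over partitioned data. `model_step` and the nudging constant below are hypothetical stand-ins, not part of EnKF-HGS.

```python
from functools import reduce

def model_step(member, forcing):
    """Hypothetical per-member simulation step (stand-in for the
    model run each Spark task would wrap)."""
    return 0.9 * member + forcing

def run_iterative_workflow(ensemble, forcings):
    """Data-centric iteration: map a step over all members, reduce
    to an ensemble mean, and feed it into the next iteration."""
    for forcing in forcings:
        ensemble = [model_step(m, forcing) for m in ensemble]        # map
        mean = reduce(lambda a, b: a + b, ensemble) / len(ensemble)  # reduce
        # Nudge members toward the ensemble mean (analysis-like step).
        ensemble = [m + 0.5 * (mean - m) for m in ensemble]
    return ensemble
```

In Spark, the list comprehension would become an RDD `map` and the mean an RDD `reduce`, with the ensemble staying partitioned across the cluster between iterations.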
Network Performance of the JBoss Application Server
2013-10-22, Benothman, Nabil, Clere, Jean-Frederic, Schiller, Eryk, Kropf, Peter, Maucherat, Remy
JBoss Application Server (AS) uses java.io and the Apache Portable Runtime (APR) project to provide its HTTP connectors. Due to new features in upcoming specifications of the Java Enterprise Edition (Java EE), the existing connectors shall be replaced by modern non-blocking input/output (I/O). In this study, we review some modern I/O frameworks, such as NIO.2 introduced in Java SE 7 and XNIO3 developed by JBoss. We compare their network performance by running a series of stress tests on client-server applications of limited functionality. As a result, we select NIO.2 as the most appropriate framework to specify and implement a new JBoss connector. Finally, we compare our newly implemented Java connector against the existing APR-based one by means of network performance measurements.
A LRAAM-based Partial Order Function for Ontology Matching in the Context of Service Discovery
2017-6-14, Ludolph, Hendrik, Babin, Gilbert, Kropf, Peter
The demand for Software as a Service is increasing heavily in the cloud era. With this demand comes a proliferation of third-party service offerings to fulfill it. It thus becomes crucial for organizations to find and select the right services to integrate into their existing tool landscapes. Ideally, this is done automatically and continuously, with the objective of always providing the best possible support to changing business needs. In this paper, we explore an artificial neural network implementation, an LRAAM, as the specific oracle to control the selection process. We implemented a proof of concept and conducted experiments to explore the validity of the approach. We show that our implementation of the LRAAM performs correctly under specific parameters. We also identify limitations of using an LRAAM in this context.
Cloudification of a Legacy Hydrological Simulator using Apache Spark
2016-9-14, Kropf, Peter, Lapin, Andrei, Carretero, Jesus, Caíno-Lores, Silvina
The field of hydrology usually relies on complex multiphysics systems and data collected from geographically distributed sensors in order to obtain good-quality predictions and analyses of how water moves through the environment. Nowadays, the computational resources needed to run such complex simulators, and the increasing size of the datasets related to the models, have raised interest in distributed infrastructures like clouds. This paper presents the results of applying a cloudification methodology to a legacy hydrological simulator (HydroGeoSphere), wrapped with an ensemble Kalman filter. This work describes how the methodology was applied, the particularities of its implementation and configuration for the Apache Spark iterative map-reduce platform, and the results of an evaluation on a commodity cluster against an MPI implementation of the simulator.
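For illustration, the ensemble Kalman filter wrapped around the simulator in this abstract can be reduced to its one-dimensional analysis step with perturbed observations. This is a textbook-style sketch under simplifying assumptions, not the EnKF-HGS or HydroGeoSphere implementation, and the names are illustrative.

```python
import random
import statistics

def enkf_analysis(ensemble, observation, obs_error_var, rng):
    """One scalar EnKF analysis step: each forecast member moves
    toward a perturbed observation by the Kalman gain."""
    var_f = statistics.variance(ensemble)    # forecast ensemble spread
    gain = var_f / (var_f + obs_error_var)   # Kalman gain in 1-D
    return [m + gain * (observation + rng.gauss(0.0, obs_error_var ** 0.5) - m)
            for m in ensemble]
```

In the cloudified workflow, each forecast member would come out of an independent simulator run (one task per member), and the analysis step is the point where the ensemble is gathered and updated before the next iteration.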
SCSC '13: Proceedings of the 2013 Summer Computer Simulation Conference
2013, Kropf, Peter, Bruzzone, Agostino, Solis, Adriano