Showing items 1 - 10 of 15
- Publication (Open Access): Have a Seat on the ErasureBench: Easy Evaluation of Erasure Coding Libraries for Distributed Storage Systems. We present ErasureBench, an open-source framework to test and benchmark erasure coding implementations for distributed storage systems under realistic conditions. ErasureBench automatically instantiates and scales a cluster of storage nodes, and can seamlessly leverage existing failure traces. As a first example, we use ErasureBench to compare three coding implementations: a (10,4) Reed-Solomon (RS) code, a (10,6,5) locally repairable code (LRC), and a partition of the data source into ten pieces without error correction. Our experiments show that LRC and RS codes require the same repair throughput when used with small storage nodes, since cluster and network management traffic dominate in this regime. With large storage nodes, read and write traffic increases, and our experiments confirm the theoretical and practical trade-offs between the storage overhead and repair bandwidth of RS and LRC codes.
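The (k, n) principle underlying both codes can be sketched with a toy single-parity scheme. This is purely illustrative, assuming nothing about the RS or LRC implementations ErasureBench actually benchmarks: data is striped into k blocks, one XOR parity block is appended (n = k + 1), and any single lost block can be rebuilt from the survivors. Real codes such as (10,4) RS tolerate more erasures at a higher storage and repair cost.

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> list[bytes]:
    """Stripe `data` into k equally sized blocks (zero-padded) and append
    one XOR parity block, yielding n = k + 1 blocks."""
    size = -(-len(data) // k)  # ceiling division
    blocks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    return blocks + [reduce(xor, blocks)]

def repair(blocks: list) -> list:
    """Reconstruct the single missing block (marked None): XOR-ing all the
    surviving blocks, parity included, yields exactly the lost one."""
    missing = blocks.index(None)
    blocks[missing] = reduce(xor, (b for b in blocks if b is not None))
    return blocks

data = b"hello distributed storage!"
stored = encode(data, k=5)   # 5 data blocks + 1 parity block
stored[2] = None             # simulate losing one storage node
restored = repair(stored)
assert b"".join(restored[:5]).rstrip(b"\0") == data
```

Note how repairing one block requires reading all surviving blocks, which is precisely the repair-bandwidth cost that LRC codes reduce by adding local parities.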
- Publication (Open Access): SGX-FS: Hardening a File System in User-Space with Intel SGX
- Publication (Open Access): SafeFS: A Modular Architecture for Secure User-Space File Systems (One FUSE to rule them all)
- Publication (Open Access): Block placement strategies for fault-resilient distributed tuple spaces: an experimental study (Springer, 2017-06-19)
Barbi, Roberta; Buravlev, Vitaly; Antares Mezzina, Claudio. The tuple space abstraction provides an easy-to-use programming paradigm for distributed applications. Intuitively, it behaves like a distributed shared memory, where applications write and read entries (tuples). When deployed over a wide area network, the tuple space needs to cope efficiently with faults of links and nodes. Erasure coding techniques are increasingly popular for dealing with such catastrophic events, in particular due to their storage efficiency with respect to replication. When a client writes a tuple into the system, it is first striped into k blocks and encoded into n > k blocks, in a fault-redundant manner. Then, any k out of the n blocks are sufficient to reconstruct and read the tuple. This paper presents several strategies to place those blocks across the set of nodes of a wide area network, which together form the tuple space. We present the performance trade-offs of different placement strategies by means of simulations and a Python implementation of a distributed tuple space. Our results reveal important differences in the efficiency of the different strategies, for example in terms of block fetching latency, and show that having some knowledge of the underlying network graph topology is highly beneficial.
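The k-out-of-n read described above can be sketched as follows. The node latencies and the two placement policies here are illustrative stand-ins, not the strategies evaluated in the paper: since a read only needs the k fastest of the n placed blocks, a topology-aware placement that prefers nearby nodes bounds the block-fetching latency.

```python
import random

# Hypothetical client-to-node latencies (ms) for a 10-node WAN deployment;
# these values and both strategies are illustrative assumptions.
LATENCY = {f"node{i}": lat for i, lat in
           enumerate([12, 15, 20, 45, 60, 80, 95, 110, 140, 180])}

def read_latency(placement: list, k: int) -> int:
    """Reading a tuple needs any k of the n placed blocks, fetched in
    parallel, so the read completes when the k-th fastest node answers."""
    return sorted(LATENCY[node] for node in placement)[k - 1]

def place_random(n: int) -> list:
    """Baseline: spread the n blocks over n nodes chosen at random."""
    return random.sample(list(LATENCY), n)

def place_topology_aware(n: int) -> list:
    """Topology-aware: place the n blocks on the n closest nodes."""
    return sorted(LATENCY, key=LATENCY.get)[:n]

random.seed(7)
k, n = 3, 5
print("random placement :", read_latency(place_random(n), k), "ms")
print("topology-aware   :", read_latency(place_topology_aware(n), k), "ms")
```

With these numbers the topology-aware read latency is the k-th smallest latency overall, a lower bound no random 5-node placement can beat.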
- Publication (Open Access): Blockchain-Based Metadata Protection for Archival Systems
- Publication (Open Access): Topology-aware protocols, tools and applications for large-scale distributed systems. Large-scale distributed systems offer scalable solutions to the ever-increasing demand for efficient online services. Examples of such services include data dissemination, group and membership management, distributed indexing and storage, data streaming, etc. The internal mechanisms of these large-scale systems rely on cooperation among thousands of host machines, deployed at geographically distant sites. The cooperation is typically implemented by message passing (MP). Pragmatically speaking, MP consists of the exchange of sequences of bytes through physical and logical routing layers. The physical and logical interconnections between the hosts, i.e., their topology, define the routes of the messages. These topologies consistently affect the routing behavior of application-level messages. They expose physical properties (i.e., delays, available bandwidth, loss rate, etc.) as well as dynamic characteristics (number of hops, connectivity, contention on a specific link, failure of the end nodes, etc.). The proper design of distributed systems requires taking the underlying topologies into account.
This thesis presents protocols, tools and applications that consider adapting to the routing topology substrate as a key design aspect for large-scale distributed systems.
First, we address the problem of creating anonymous and confidential communication channels on large-scale networks. These networks make the design of such confidential communication systems challenging from many perspectives: their scale, the unpredictable crashes of nodes, the inability to establish direct node-to-node communication channels, etc. We present Whisper, a protocol for establishing anonymous and confidential communication channels under such challenging network topology conditions, together with its possible applications.
Then, we observe the need to easily evaluate distributed systems under varying network topology conditions. As a matter of fact, despite the vast literature on the topic, we still lack an integrated tool for topology emulation that is easy-to-use, scalable, featuring multi-user support, concurrent deployments, non-dedicated access, and platform portability. This thesis contributes SplayNet, an integrated tool to support rapid development and evaluation of distributed systems under different network topology conditions.
Finally, this thesis presents Brisa and LayStream, respectively a data-dissemination protocol and a video-streaming application. The two systems share the common goal of providing reliable dissemination over large-scale networks. Brisa efficiently organizes the nodes so as to react quickly to failures of nodes or of the underlying routing topology. LayStream presents the lessons learnt in supporting a demanding distributed application, such as video streaming, on top of a principled composition of gossip protocols.
- Publication (Open Access): SAFETHINGS: Data Security by Design in the IoT. Despite years of research and the long-lasting promise of a pervasive "Internet of Things", it is only recently that a truly convincing number of connected things have been deployed in the wild. New services are now being built on top of these things and make it possible to realize the IoT vision. However, the integration of things into complex and interconnected systems is still only in the hands of their manufacturers and of cloud providers supporting IoT integration platforms. Several issues associated with data privacy arise from this situation. Not only do users need to trust manufacturers and IoT platforms to handle their data, but integration between heterogeneous platforms is still only incipient. In this position paper, we chart a new IoT architecture, SAFETHINGS, that aims at enabling data privacy by design and that we believe can serve as the foundation for a more comprehensive IoT integration. The SAFETHINGS architecture is based on two simple but powerful conceptual component families, the cleansers and the blenders, that allow data owners to regain control of IoT data and its processing.
- Publication (Open Access): FaaSdom: A Benchmark Suite for Serverless Computing. Serverless computing has become a major trend among cloud providers. With serverless computing, developers fully delegate to the cloud provider the tasks of managing the servers, dynamically allocating the required resources, and handling availability and fault-tolerance matters. In doing so, developers can focus solely on the application logic of their software, which is then deployed and completely managed in the cloud. Despite its increasing popularity, not much is known about the actual system performance achievable on currently available serverless platforms. Specifically, it is cumbersome to benchmark such systems in a language- or runtime-independent manner. Instead, one must resort to a full application deployment in order to later take informed decisions on the most convenient solution along several dimensions, including performance and economic cost. FaaSdom is a modular architecture and proof-of-concept implementation of a benchmark suite for serverless computing platforms. It currently supports the mainstream serverless cloud providers (AWS, Azure, Google, IBM), a large set of benchmark tests and a variety of implementation languages. The suite fully automates the deployment, execution and clean-up of these tests, providing insights (including historical data) on the performance observed by serverless applications. FaaSdom also integrates a model to estimate the budget cost of deployments across the supported providers. FaaSdom is open source and available at https://github.com/bschitter/benchmark-suite-serverless-computing.
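The kind of latency measurement such a suite automates can be sketched as follows. This is a minimal stand-in, not FaaSdom's actual API: the workload function, statistics, and names are hypothetical, and a real run would issue HTTPS requests to a deployed function endpoint rather than call a local function.

```python
import statistics
import time

def invoke(workload, *args):
    """Time one invocation in milliseconds. Against a real platform this
    would be an HTTPS request to the deployed function's endpoint; a local
    call stands in here so the sketch is self-contained."""
    start = time.perf_counter()
    workload(*args)
    return (time.perf_counter() - start) * 1000.0

def benchmark(workload, runs=20):
    """Repeat the invocation and report simple latency statistics."""
    latencies = sorted(invoke(workload, 100_000) for _ in range(runs))
    return {"median_ms": statistics.median(latencies),
            "p95_ms": latencies[max(int(0.95 * runs) - 1, 0)]}

def cpu_workload(n):
    """Hypothetical CPU-bound test, standing in for a benchmark workload."""
    return sum(i * i for i in range(n))

print(benchmark(cpu_workload))
```

Reporting percentiles rather than a single average matters on serverless platforms, where cold-start invocations produce a long latency tail.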
- Publication (Open Access): On the Cost of Safe Storage for Public Clouds: an Experimental Evaluation. Cloud-based storage services such as Dropbox, Google Drive and OneDrive are increasingly popular for storing enterprise data, and they have already become the de facto choice for cloud-based backup of hundreds of millions of regular users. Drawn by the wide range of services they provide, the absence of upfront costs and 24/7 availability across all personal devices, customers are well aware of the benefits that these solutions can bring. However, most users tend to forget, or worse ignore, some of the main drawbacks of such cloud-based services, namely in terms of privacy. Data entrusted to these providers can be leaked by hackers, disclosed upon a governmental agency's subpoena, or even accessed directly by the storage providers (e.g., for commercial benefit). While solutions exist to prevent or alleviate these problems, they typically require direct intervention from the clients, like encrypting their data before storing it, and reduce the benefits provided, such as easy data sharing between users. This practical experience report studies a wide range of security mechanisms that can be used atop standard cloud-based storage services. We present the details of our evaluation testbed and discuss the design choices that have driven its implementation. We evaluate several state-of-the-art techniques with varying security guarantees, responding to user-assigned security and privacy criteria. Our results reveal the various trade-offs of the different techniques by means of representative workloads on top of industry-grade storage services.