Dr. Marco Serafini


I am a Scientist at the Qatar Computing Research Institute, where I work at the intersection of database systems and distributed systems. Previously I was a postdoc at the Yahoo! Research Barcelona lab (R.I.P.).


Current Projects

I am currently involved in three projects:

Arabesque: A system for distributed graph mining

Graph mining algorithms such as frequent subgraph mining, motif counting, and dense subgraph discovery are powerful tools for extracting interesting subgraphs from a large graph. They are very important in areas such as social networks, the semantic web, and bioinformatics. However, they have high computational complexity, mainly because they need to enumerate a huge number of subgraphs of the input graph. Arabesque is a system that helps programmers implement graph mining algorithms by abstracting away subgraph enumeration. It offers a simple functional API where applications basically just have to specify a boolean filter function, which takes a subgraph as input and returns whether that subgraph is interesting. Arabesque transparently distributes and optimizes the enumeration process across multiple servers running Hadoop, and it keeps all of its state in memory to avoid the cost of random disk accesses.
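To make the filter-based API concrete, here is a minimal Java sketch of what such an application could look like. The Subgraph and SubgraphFilter types are illustrative placeholders, not Arabesque's actual classes; the example keeps only subgraphs that form a clique, a typical building block of motif and dense-subgraph mining.

```java
import java.util.List;

// Hypothetical stand-in for the candidate subgraph handed to the application.
interface Subgraph {
    List<Integer> vertices();       // vertices in this candidate subgraph
    boolean hasEdge(int u, int v);  // adjacency in the input graph
}

// Hypothetical filter interface: the application only decides whether a
// subgraph is interesting; enumeration and distribution are the system's job.
@FunctionalInterface
interface SubgraphFilter {
    boolean filter(Subgraph s);
}

public class CliqueFilterExample {
    // Example application logic: keep only subgraphs that form a clique.
    static final SubgraphFilter CLIQUES = s -> {
        List<Integer> vs = s.vertices();
        for (int i = 0; i < vs.size(); i++) {
            for (int j = i + 1; j < vs.size(); j++) {
                if (!s.hasEdge(vs.get(i), vs.get(j))) {
                    return false;
                }
            }
        }
        return true;
    };
}
```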

Arabesque is an open-source system running on top of Giraph and (soon) Spark. Check out the project homepage at arabesque.io.


Database elasticity (Accordion, E-Store, Clay)

Cloud computing platforms give the opportunity to reduce the hardware cost of running a database management system by dynamically adding and removing servers from a distributed cluster as the load changes. To leverage this flexibility, applications need to be designed to be elastic, i.e., to transfer data and computation to and from servers whenever needed. Elastic databases abstract away the complexity of elasticity from applications: they detect changes in load online and devise a data and load migration plan that can accommodate these changes. I have worked on three systems for database elasticity: Accordion, which partitions the database into small blocks and transfers them as needed based on online monitoring information; E-Store, which uses a two-tiered approach to identify and migrate hot tuples at very fine granularity; and Clay, which extends E-Store to support arbitrary transactional access patterns involving multiple tuples.
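As a rough illustration of the two-tiered idea behind E-Store, here is a small Java sketch. It assumes per-tuple access counts have already been collected by a monitoring component, and it reduces "placement" to assigning a tuple or a block of tuples to a server id; all names, thresholds, and the round-robin policy are hypothetical simplifications, not E-Store's actual algorithm.

```java
import java.util.HashMap;
import java.util.Map;

public class TwoTierPlacement {
    static Map<Long, Integer> plan(Map<Long, Long> accessCounts,
                                   int numServers, long hotThreshold,
                                   long blockSize) {
        Map<Long, Integer> placement = new HashMap<>();
        int nextHotServer = 0;
        for (Map.Entry<Long, Long> e : accessCounts.entrySet()) {
            long tupleId = e.getKey();
            if (e.getValue() >= hotThreshold) {
                // Tier 1: the few hottest tuples are placed individually,
                // spread round-robin across servers to even out skew.
                placement.put(tupleId, nextHotServer++ % numServers);
            } else {
                // Tier 2: the cold remainder is placed in coarse blocks.
                placement.put(tupleId, (int) ((tupleId / blockSize) % numServers));
            }
        }
        return placement;
    }
}
```

Treating the cold tail in coarse blocks keeps the placement metadata small, while individual placement of hot tuples is what absorbs the skew.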

The project was a collaboration with Michael Stonebraker’s group at MIT.


Load balancing in distributed stream processing systems

Distributed stream processing systems allow running user-defined operators on an input stream of key-value pairs. Operators can also produce tuples that form the input stream of other operators. Each input stream is partitioned into multiple sub-streams based on the keys of the tuples. For scalability, sub-streams can be assigned to different physical servers running the same operator. Input streams can be very skewed, so it is important to balance load carefully by assigning the right sub-streams to the right servers. However, it is often impossible to predict which keys will be more expensive than others. In this project, we identify online mechanisms to detect hot keys and balance load even in the presence of very high skew. We first used a technique called partial key grouping (aka "the power of both choices"), which involves replicating each operator instance on two servers and selecting the right replica dynamically on a tuple-by-tuple basis. The technique has been integrated into the main release of Apache Storm. Our second technique deals with settings with a large number of servers, where balancing load is harder and more than two choices may be needed.
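The first technique is easy to see in code. Below is a minimal Java sketch of partial key grouping under simplified assumptions: each key hashes to two candidate workers, and every incoming tuple is routed to whichever of the two currently has less load. The second hash function and the load metric (a simple tuple counter) are illustrative, not Storm's actual implementation.

```java
public class PartialKeyGrouping {
    private final long[] load;  // tuples routed so far, per downstream worker

    PartialKeyGrouping(int numWorkers) {
        load = new long[numWorkers];
    }

    // Each key has exactly two candidate workers, derived from two hashes;
    // every tuple goes to whichever candidate is currently less loaded.
    int route(String key) {
        int a = Math.floorMod(key.hashCode(), load.length);
        int b = Math.floorMod((key + "#").hashCode(), load.length);  // illustrative second hash
        int choice = (load[a] <= load[b]) ? a : b;
        load[choice]++;
        return choice;
    }
}
```

Because a key's tuples may end up split across two workers, this scheme fits operators whose results can be aggregated downstream from partial per-worker results.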



Previous Projects

ASC model (@ Yahoo!/QCRI)

Prior to joining QCRI, I worked at Yahoo! Research in Barcelona, where I studied the problem of protecting large-scale distributed systems from hardware data corruption, which is expected to become more and more frequent in future generations of CPUs because of reduced geometry and voltage levels. Unreliable commodity servers are typically used to store data that is critical from an infrastructure viewpoint (e.g., configuration metadata), a personal viewpoint (e.g., emails), or a financial viewpoint (e.g., ad clicks and billing information). Critical systems like Google’s Mesa, for example, detect corruption errors through application-specific replication techniques. I proposed a new formal model of Arbitrary State Corruption (ASC), which captures the essential aspects of data corruption faults in an application-independent manner. I used the model to develop PASC, a runtime library that can provably detect data corruption errors in any event-based distributed system without using replication. A follow-up system, SEI, makes PASC more practical by supporting multiple threads, largely eliminating its memory overhead, and making it easier to transparently harden existing applications.
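To give a flavor of how replication-free corruption detection can work, here is a heavily simplified Java sketch in the spirit of PASC: the process keeps two copies of its state, runs every event handler on both, and compares the outputs before anything leaves the process. All types and names are illustrative, not PASC's actual design or API.

```java
import java.util.Objects;
import java.util.function.BiFunction;

public class HardenedProcess<S, E, O> {
    private final S primary;   // first copy of the process state
    private final S shadow;    // independent second copy of the same state
    private final BiFunction<S, E, O> handler;  // deterministic event handler

    HardenedProcess(S primary, S shadow, BiFunction<S, E, O> handler) {
        this.primary = primary;
        this.shadow = shadow;
        this.handler = handler;
    }

    // Run every event on both state copies and compare the outputs before
    // they leave the process; a mismatch reveals that one copy (or one
    // execution) was corrupted, so the output is withheld.
    O onEvent(E event) {
        O out1 = handler.apply(primary, event);
        O out2 = handler.apply(shadow, event);
        if (!Objects.equals(out1, out2)) {
            throw new IllegalStateException("state corruption detected");
        }
        return out1;
    }
}
```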


ZooKeeper replication protocol (@ Yahoo!)

ZooKeeper is a popular open-source system used to store metadata and to coordinate distributed systems. It is used by many products and open-source projects. While at Yahoo! Research, I also worked on the ZooKeeper Atomic Broadcast (Zab) protocol, which is at the heart of ZooKeeper. People often have a hard time understanding the difference between active replication protocols like Paxos, which exchange operations, and passive ones like Zab, which exchange state updates. I introduced the notion of a barrier to provide a simple criterion for distinguishing the two.
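The active/passive distinction can be made concrete with a toy Java sketch, given below, where the replicated state is reduced to a single counter. An active replica receives the operation itself and re-executes it, while a passive backup installs the state update computed by the primary without re-executing anything. The sketch is purely illustrative and glosses over ordering, which is exactly where the difficulty lies: a state update only makes sense when applied to the same state the primary had when it generated it.

```java
public class ReplicationStyles {
    long state = 0;

    // Active replication (Paxos-style): every replica receives the operation
    // and re-executes it, so operations must be deterministic.
    void applyOperation(String op) {
        if (op.equals("increment")) {
            state++;
        }
    }

    // Passive replication (Zab-style): only the primary executes the
    // operation; backups install the resulting state update as-is.
    void applyStateUpdate(long newState) {
        state = newState;
    }
}
```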


Data placement in social networking systems (@ Yahoo!)

At Yahoo! I also developed social piggybacking, a technique that uses graph mining on social graphs to increase the throughput of social event feed systems.


Dealing with severe faults in replicated systems (@ TUD)

I received my PhD from TU Darmstadt. My thesis introduced the concept of Eventual Linearizability, a correctness condition for systems that are usually strongly consistent and degrade consistency only when necessary. It also introduced Scrooge, a fast Byzantine fault-tolerant algorithm that runs on a small set of replicas.



Professional Service

I have been or will be a PC member of VLDB 2018, ICDE 2018, SOSP 2017, VLDB 2017, EuroSys 2017, WWW 2017, DSN 2017, IC2E 2017, APSys 2017, EuroSys 2016, ICDCS 2016, ICDE 2016, SRDS 2016, IC2E 2016, SIGMOD 2016 demo, ICDCS 2015, WWW 2015, SRDS 2015, OPODIS 2014, SSS 2014, Middleware 2012 (industry track), EDCC 2012, and SOFSEM 2011.

I was co-PC chair of the PaPoC 2015 workshop (Principles and Practice of Consistency for Distributed Data), co-located with EuroSys 2015.

Awards

My PhD thesis, “Efficient and Low-Cost Fault-Tolerance for Web-Based Systems”, was nominated for the “Best PhD Thesis of 2010” award by the German, Swiss, and Austrian computer science societies and the German chapter of the ACM.