Three more papers published by Systems Group on Temporal Joins and Privacy-Preserving Computation

2023/11/29

Find our work at SSTD 2023, CloudDB 2023, and TrustKDD 2023.

A New Primitive for Processing Temporal Joins

Authors: Meghdad Mirabi, Leila Fathi, Anton Dignös, Johann Gamper, and Carsten Binnig

This paper presents the extended temporal aligner as a temporal primitive and proposes a set of reduction rules that employ this primitive to convert temporal join operators into their non-temporal counterparts. The rules cover all types of temporal joins, including inner join, outer joins, and anti-join. Preliminary experimental results demonstrate that the combination of the extended temporal aligner and the reduction rules processes temporal join queries efficiently.
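To make the alignment idea concrete, here is a minimal Python sketch of how a temporal inner join can be reduced to interval intersection on matching keys. The names (Tuple, align_inner) and the half-open interval convention are illustrative assumptions, not the paper's API; the extended temporal aligner and reduction rules in the paper are more general and also cover outer joins and anti-join.

```python
# Illustrative sketch only: a temporal inner join on key k reduces to
# pairing matching tuples and intersecting their validity intervals.
from collections import namedtuple

# Each tuple carries a half-open validity interval [start, end).
Tuple = namedtuple("Tuple", ["key", "value", "start", "end"])

def align_inner(r, s):
    """Temporal inner join: pair tuples with equal keys whose
    validity intervals overlap, emitting the intersection interval."""
    out = []
    for a in r:
        for b in s:
            if a.key == b.key:
                lo, hi = max(a.start, b.start), min(a.end, b.end)
                if lo < hi:  # non-empty overlap
                    out.append((a.value, b.value, lo, hi))
    return out

r = [Tuple("x", "r1", 1, 10)]
s = [Tuple("x", "s1", 5, 15)]
print(align_inner(r, s))  # [('r1', 's1', 5, 10)]
```

After this alignment step, the result has ordinary (non-temporal) relational form, which is what allows the paper's reduction rules to hand the rest of the work to a standard join operator.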

QFilter: Towards Fine-Grained Access Control for Aggregation Query Processing over Secret Shared Data

Authors: Meghdad Mirabi, Carsten Binnig

This paper presents QFilter, a privacy-preserving and communication-efficient solution that integrates an Attribute-Based Access Control (ABAC) model into query processing. QFilter enables the specification and enforcement of fine-grained access control policies tailored to secret-shared data. It processes aggregation SQL queries with “count,” “sum,” and “avg” functions under both conjunctive (“AND”) and disjunctive (“OR”) equality conditions, without any inter-server communication. QFilter is secure against honest-but-curious adversaries, and preliminary experiments demonstrate its applicability for privacy-preserving query processing over secret-shared data, with tuple-level access control incurring the lowest overhead.
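As background for the communication-free aggregation claim, the following Python sketch shows the standard additive secret-sharing principle that lets each server aggregate its shares locally. How QFilter evaluates equality predicates and enforces ABAC policies over shares is the paper's contribution and is not reproduced here; the modulus, share count, and function names are illustrative assumptions.

```python
# Illustrative sketch only: additive secret sharing over a prime field,
# the property that makes local (communication-free) aggregation possible.
import random

P = 2**61 - 1  # assumed field modulus, a large prime

def share(x, n=3):
    """Split value x into n additive shares that sum to x mod P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

# Each server holds one share per tuple; a SUM aggregate is computed
# locally on each server by summing its own shares, with no inter-server
# communication until the client reconstructs the final result.
column = [10, 20, 30]
server_shares = list(zip(*(share(v) for v in column)))  # one tuple per server
local_sums = [sum(s) % P for s in server_shares]
print(reconstruct(local_sums))  # 60
```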

SafeML: A Privacy-Preserving Byzantine-Robust Framework for Distributed Machine Learning

Authors: Meghdad Mirabi, René Klaus Nikiel, Carsten Binnig

This paper introduces SafeML, a distributed machine learning framework that addresses privacy and Byzantine-robustness concerns during model training. It employs secret sharing and data masking to secure all computations, and it uses computational redundancy together with a robust confirmation method to prevent Byzantine nodes from corrupting model updates at each training iteration. Theoretical analysis and preliminary experimental results demonstrate the security and correctness of SafeML for model training.
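To illustrate the redundancy-and-confirmation idea in isolation, here is a minimal Python sketch in which each update is computed by several replicas and accepted only on a majority vote. SafeML additionally secret-shares and masks the underlying data, which is omitted here; the function names and the 2f + 1 replication factor are illustrative assumptions, not the paper's protocol.

```python
# Illustrative sketch only: accept an update only if a majority of the
# replicas that computed it agree on the result.
from collections import Counter

def confirmed_update(replica_results):
    """Return the result reported by a strict majority of replicas."""
    value, votes = Counter(replica_results).most_common(1)[0]
    if votes > len(replica_results) // 2:
        return value
    raise ValueError("no majority: update rejected")

# With at most f Byzantine workers, replicating each computation on
# 2f + 1 workers guarantees that honest results form a majority.
honest, byzantine = 0.125, 9.99
print(confirmed_update((honest, byzantine, honest)))  # 0.125
```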