Machine learning models are ubiquitous nowadays, but only a few of them are transparent about their internal workings. To remedy this, local explanations aim to show users how and why a learned model produces a certain output for a given input (data sample). However, most existing approaches for local explainability are oriented toward image or text data and thus cannot leverage the structure and properties of tabular data. Therefore, in this paper we present Quest, a new framework for generating explanations that are a better fit for tabular data. The main idea is to create explanations in the form of relational predicates (called queries hereafter) that approximate the behavior of a classifier around the given sample. In an initial evaluation, we show anecdotally how Quest performs on a real-world tabular data set compared to existing approaches that can be applied to tabular data.