%root: main.tex
\section{Introduction}
Most modern production databases, e.g., Postgres and Oracle, use bag semantics. In contrast, most implementations of probabilistic databases (PDBs) are built for set semantics, where the annotation polynomial of an output tuple is represented in disjunctive normal form (DNF) and computing the probability of the tuple is \#P-hard in general. For the bag-semantics analogue of the DNF representation, a sum of products (SOP) of the input tuples' variables, the corresponding problem of computing the expected multiplicity of an output tuple can be solved in time linear in the size of the representation; perhaps for this reason bag PDBs have been considered easy and have received little attention. In this work we show that once the annotation polynomial is stored in a more succinct representation, such as those produced by factorized databases, the complexity landscape becomes much more nuanced: exact computation of the expected multiplicity becomes hard in general, while a linear-time approximation algorithm with $\epsilon/\delta$ guarantees is still possible.

In both settings, each output tuple is annotated with a polynomial over a set of variables $\vct{X}$ (one variable per input tuple) that records which input tuples contribute to the output tuple. In the bag setting the annotation polynomial is built from the operators $+$ and $\times$ with constants from $\mathbb{N}$, and it encodes the multiplicity of the tuple in the output; its expectation over the input distribution is the tuple's expected multiplicity. In the set setting, the annotation polynomial only records whether the tuple is present in a given possible world, and it is the expectation of this polynomial that yields the probability that the tuple appears in the output, a computation that, as noted above, is \#P-hard in general. In the bag setting, the naive algorithm that computes the expectation over the SOP form cannot hope to do better than time linear in the size of that form.

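For illustration (a purely illustrative example, not a query used in our later results), consider a bag PDB with relations $R$ and $S$, where the tuples of $R$ are annotated with the variables $X_1, X_2$ and the tuples of $S$ with $Y_1, Y_2$. The single tuple output by the query $\pi_{\emptyset}(R \times S)$, whose multiplicity is the number of tuples in the cross product, is annotated with the polynomial
\[
  \poly(\vct{X}) \;=\; (X_1 + X_2)\cdot(Y_1 + Y_2) \;=\; X_1Y_1 + X_1Y_2 + X_2Y_1 + X_2Y_2,
\]
where the left-hand side is a compressed (factorized) representation and the right-hand side is the expanded SOP representation of the same polynomial.
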
There is limited prior work on bag-semantics PDBs. This work leverages prior work on factorized databases~\cite{DBLP:conf/tapp/Zavodny11} to obtain efficient computation over output polynomials, with theoretical guarantees. A subtlety that is easily overlooked when set-semantics intuition is carried over to bags is computational rather than semantic: a compressed representation of a polynomial always evaluates to the same value as its expanded SOP form, and hence the two always have the same expectation. In set semantics, however, expectation is not linear over disjunction, so there is no general way to push the expectation into a compressed representation. In bag semantics, expectation is linear over addition, which allows the expectation to be pushed through a compressed representation and computed directly on it.

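Continuing the illustrative example above, and assuming the variables are independent with $\expct\pbox{X_i} = \prob_{X_i}$ and $\expct\pbox{Y_j} = \prob_{Y_j}$, linearity of expectation over $+$ and independence across $\times$ give
\[
  \expct\pbox{(X_1+X_2)(Y_1+Y_2)}
  \;=\; \sum_{i,j} \expct\pbox{X_i}\,\expct\pbox{Y_j}
  \;=\; (\prob_{X_1} + \prob_{X_2})(\prob_{Y_1} + \prob_{Y_2}),
\]
so the expectation can be computed directly on the compressed form by substituting probabilities for variables. No analogous identity holds for disjunction in the set setting: in general $\expct\pbox{X_1 \vee X_2} \neq \expct\pbox{X_1} + \expct\pbox{X_2}$.
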
Most modern PDB implementations consider an output polynomial only in its expanded SOP form (a notable exception is ProvSQL, which uses circuits). Any computation over such a representation cannot hope to take less than linear time in the number of monomials, which can be exponential in the size of the input in the general case.

The landscape for bags changes when we think of annotation polynomials in a compressed form rather than in the traditionally implemented DNF. Many implementations, including MayBMS~\cite{DBLP:conf/icde/AntovaKO07a}, MystiQ~\cite{DBLP:conf/sigmod/BoulosDMMRS05}, GProM~\cite{AF18}, and Orion~\cite{DBLP:conf/sigmod/SinghMMPHS08}, use an encoding that is essentially an enumeration of all the monomials of the DNF. A major reason is the customary fixed-size-attribute convention of classical database engines: annotations break this convention, since operators such as projection and join can grow an annotation arbitrarily with the size of the data, so these systems flatten the provenance polynomial into individual monomials and store each monomial as a row of a table, which already costs $O(n^2)$ time in the size of the input tables to materialize the monomials and forecloses any compressed representation. PDBs that do allow factorized polynomials, e.g., Sprout~\cite{DBLP:conf/icde/OlteanuHK10}, assume such an encoding under set semantics. With compressed encodings, the problem in bag semantics is actually hard in a non-obvious way.

Should an implementation allow a compressed form of the polynomial, computations over the polynomial can be done in better than linear time in the number of monomials of the expanded SOP form. When the polynomial is given in SOP form, runtime measured in the number of monomials and runtime measured in the size of the representation coincide; the two diverge once compressed representations are allowed as input. The naive algorithm for computing the expectation of a polynomial, given probability values for the variables in $\vct{X}$, generates all monomials and computes each of their probabilities. Factorized polynomials in the bag setting instead allow the expectation to be computed in time linear in the number of terms of the compressed representation, with the corresponding probability values substituted for the variables. The probabilities in question are the probabilities that tuples exist in the input database; that is, the input to the arbitrary query $Q$ is a set PDB, while our scheme takes as its input the \textit{output polynomial} generated by the query over that input database.

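To see how large the gap can be (again purely as an illustration, assuming independent variables as above), a factorized polynomial such as
\[
  \poly(\vct{X}) \;=\; \prod_{i=1}^{k} (X_i + Y_i)
\]
has representation size linear in $k$, while its expanded SOP form contains $2^k$ monomials. Evaluating the expectation on the factorized form, by substituting the corresponding probabilities for the variables, takes $O(k)$ arithmetic operations, whereas the naive algorithm that enumerates monomials takes time exponential in $k$.
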
As implied above, we define hard to mean anything greater than linear time in the number of monomials of the SOP form. In this work we show that computing the expectation of the output polynomial, even for the query class $CQ$, which allows only projections and joins, over a $\ti$ in which all tuples have probability $\prob$, is hard in the general case. Allowing compressed representations of the polynomial, however, paves the way for an approximation algorithm that runs in linear time with $\epsilon/\delta$ guarantees.

The size of the input to our problem thus ranges from the size of the compressed form of the polynomial (a lower bound) to the size of its SOP form (an upper bound). To approximate the expectation we use an expression tree to model the query output polynomial, which naturally accommodates polynomials in compressed form.

\paragraph{Problem Definition/Known Results/Our Results/Our Techniques}
This work addresses the problem of efficiently performing computations over the output query polynomial. We focus on computing the expectation of the polynomial resulting from a query over a PDB, a problem that, to the best of our knowledge, has received little study. Our results show that the problem is hard (superlinear) in the general case, via reductions from problems in graph theory with known hardness results. We further introduce a linear-time approximation algorithm with guaranteed confidence bounds, and we prove the claimed runtime and confidence bounds. The algorithm accepts an expression tree that models the output polynomial, samples uniformly from the expression tree, and outputs an approximation within the claimed bounds in the claimed runtime.

\paragraph{Interesting Mathematical Contributions}
This work shows an equivalence between the polynomial $\poly$ and $\rpoly$, where $\rpoly$ is obtained from $\poly$ by setting every exponent $e > 1$ to $1$ across all variables in all monomials. The equivalence holds when $\vct{X}$ ranges over $\{0, 1\}^\numvar$. In this setting we further prove that $\rpoly(\prob,\ldots, \prob)$ is exactly $\expct\pbox{\poly(\vct{X})}$. This equivalence facilitates building an algorithm that approximates $\rpoly(\prob,\ldots, \prob)$ and, in turn, the expectation of $\poly(\vct{X})$.

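As a small example of this equivalence (a hypothetical polynomial chosen only to illustrate the definitions), consider
\[
  \poly(X, Y) \;=\; (X + Y)^2 \;=\; X^2 + 2XY + Y^2,
  \qquad
  \rpoly(X, Y) \;=\; X + 2XY + Y.
\]
When $X, Y \in \{0, 1\}$ we have $X^2 = X$ and $Y^2 = Y$, so $\poly$ and $\rpoly$ agree on all such inputs. If, as in the $\ti$ setting, $X$ and $Y$ are independent and each has probability $\prob$, then
\[
  \expct\pbox{\poly(X, Y)} \;=\; \expct\pbox{X} + 2\,\expct\pbox{X}\expct\pbox{Y} + \expct\pbox{Y}
  \;=\; \prob + 2\prob^2 + \prob \;=\; \rpoly(\prob, \prob).
\]
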
Another interesting result in this work is a reduction from counting the number of 3-paths, 3-matchings, and triangles in an arbitrary graph, problems known to require superlinear time in the general case, to the exact computation of $\rpoly(\prob,\ldots, \prob)$; we show in Theorem~2.1 that exact computation of $\rpoly(\prob, \ldots, \prob)$ is therefore hard by our definition. We finally propose, and prove correct, an approximation algorithm for $\rpoly(\prob,\ldots, \prob)$: a linear-time algorithm with guaranteed $\epsilon/\delta$ bounds. The algorithm leverages the efficiency of compressed polynomial input by taking as input an expression tree of the output polynomial, which allows factorized forms of the polynomial to be supplied and sampled from efficiently. One subtlety that comes up in the discussion of the algorithm is that its input is the output polynomial of the query rather than the query's input database. Our results are therefore linear in the size of the output polynomial rather than in the size of the input database; depending on the structure of the query, the former may be larger or smaller than the latter.
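To convey the flavor of the estimator (a simplified sketch only, under the assumption that the expression tree supports sampling a monomial of $\rpoly$ uniformly at random; the actual algorithm and its analysis appear later), suppose $\rpoly(\vct{X}) = \sum_{\ell=1}^{M} c_\ell\, m_\ell$, where each monomial $m_\ell$ is a product of $d_\ell$ distinct variables. Drawing indices $\ell_1, \ldots, \ell_N$ uniformly from $\{1, \ldots, M\}$ and returning
\[
  \frac{M}{N} \sum_{k=1}^{N} c_{\ell_k}\, \prob^{\,d_{\ell_k}}
\]
yields an unbiased estimate of $\rpoly(\prob, \ldots, \prob)$, and standard concentration bounds then yield $\epsilon/\delta$ guarantees; the technical work lies in supporting such sampling, and the resulting analysis, in time linear in the size of the compressed representation.
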
\section{Outline of the rest of the paper}
\begin{enumerate}
\item Background Knowledge and Notation
\begin{enumerate}
\item Review notation for PDBs
\item Review the use of semirings for generating output polynomials
\item Review the translation of semiring operators to RA operators
\item Polynomial formulation and notation
\end{enumerate}
\item Hardness via reductions from problems in graph theory
\begin{enumerate}
\item $\rpoly$ and its equivalence to $\expct\pbox{\poly}$ when $\vct{X} \in \{0, 1\}^\numvar$
\item Results for SOP polynomial
\item Results for compressed version of polynomial
\item ~\cref{lem:const-p} proof
\end{enumerate}
\item Approximation Algorithm
\begin{enumerate}
\item Description of the Algorithm
\item Theoretical guarantees
\item Possible extension to BIDBs, with experiments on BIDBs if time permits
\end{enumerate}
\item Future Work
\item Conclusion
\end{enumerate}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% End: