%!TEX root=./main.tex

\subsection{Relationship to Deterministic Query Runtimes}\label{sec:gen}

We formalize our claim from \Cref{sec:intro} that a linear approximation algorithm for our problem implies that PDB queries (under bag semantics) can be answered (approximately) in the same runtime as deterministic queries, under reasonable assumptions.
Finally, we generalize our result from expectation to higher moments.

|
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%\revision{
%\subsection{Cost Model, Query Plans, and Runtime}
%As in the introduction, we could consider polynomials to be represented as an expression tree.
%However, they do not capture many of the compressed polynomial representations that we can get from query processing algorithms on bags, including the recent work on worst-case optimal join algorithms~\cite{ngo-survey,skew}, factorized databases~\cite{factorized-db}, and FAQ~\cite{DBLP:conf/pods/KhamisNR16}. Intuitively, the main reason is that an expression tree does not allow for `sharing' of intermediate results, which is crucial for these algorithms (and other query processing methods as well).
%}
%
%\label{sec:circuits}

\mypar{The cost model}

%\label{sec:cost-model}
So far our analysis of \Cref{prob:intro-stmt} has been in terms of the size of the lineage circuits.
We now show that this model corresponds to the behavior of a deterministic database by proving that, for any \raPlus query $\query$ and \bi $\pdb$, we can construct a compressed circuit for $\poly$ whose size and construction time are linear in the runtime of a general class of query processing algorithms evaluating $\query$ on $\pdb$'s \dbbaseName $\dbbase$.
% Note that by definition, there exists a linear relationship between the input sizes $|\pxdb|$ and $|\dbbase|$ (i.e., $\exists c, \db \in \pxdb$ s.t. $\abs{\pxdb} \leq c \cdot \abs{\db}$).
% \footnote{This is a reasonable assumption because each block of a \bi represents entities with uncertain attributes.
% In practice there is often a limited number of alternatives for each block (e.g., which of five conflicting data sources to trust). Note that all \tis trivially fulfill this condition (i.e., $c = 1$).}
%That is, for \bis that fulfill this restriction, approximating the expectation of results of SPJU queries has only a constant factor overhead over deterministic query processing (using one of the algorithms for which we prove the claim).
% with the same complexity as it would take to evaluate the query on a deterministic \emph{bag} database of the same size as the input PDB.
We adopt a minimalistic compute-bound model of query evaluation drawn from the worst-case optimal join literature~\cite{skew,ngo-survey}.


%
\noindent\resizebox{1\linewidth}{!}{
\begin{minipage}{1.0\linewidth}
\begin{align*}
\qruntime{R,\dbbase} & = \abs{\dbbase.R} &
\qruntime{\sigma Q, \dbbase} & = \qruntime{Q,\dbbase} &
\qruntime{\pi Q, \dbbase} & = \qruntime{Q,\dbbase} + \abs{Q(\dbbase)}
\end{align*}\\[-15mm]
\begin{align*}
\qruntime{Q \cup Q', \dbbase} & = \qruntime{Q, \dbbase} + \qruntime{Q', \dbbase} + \abs{Q(\dbbase)} + \abs{Q'(\dbbase)} \\
\qruntime{Q_1 \bowtie \ldots \bowtie Q_n, \dbbase} & = \qruntime{Q_1, \dbbase} + \ldots + \qruntime{Q_n,\dbbase} + \abs{Q_1(\dbbase) \bowtie \ldots \bowtie Q_n(\dbbase)}
\end{align*}
\end{minipage}
}\\
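As a worked illustration (ours, obtained purely by unfolding the rules above), consider the query $\pi(R \bowtie S)$:
\[
\qruntime{\pi(R \bowtie S), \dbbase} = \qruntime{R \bowtie S, \dbbase} + \abs{(R \bowtie S)(\dbbase)} = \abs{\dbbase.R} + \abs{\dbbase.S} + 2\cdot\abs{R(\dbbase) \bowtie S(\dbbase)},
\]
i.e., the join output size is paid twice: once to materialize the join, and once as the input size of the projection.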

Under this model, a query $Q$ evaluated over database $\dbbase$ has runtime $O(\qruntime{Q,\dbbase})$.
We assume that full table scans are used for every base relation access. We can model index scans by treating an index scan query $\sigma_\theta(R)$ as a base relation.
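Purely as an illustration (ours, not part of the formal development), the recurrence defining $\qruntime{Q,\dbbase}$ can be transcribed as a recursion over a query-plan tree. The encoding below is hypothetical: plan nodes are tuples, and intermediate result sizes $\abs{Q(\dbbase)}$ are supplied explicitly, since the cost model charges for them but does not compute them.

```python
def qruntime(plan, base_sizes):
    """Cost-model recurrence over a plan tree.

    plan: nested tuples encoding the query; base_sizes maps a base
    relation name R to |D.R|. Intermediate output sizes |Q(D)| are
    stored on the nodes that must pay for them.
    """
    op = plan[0]
    if op == "rel":                  # base relation R: cost |D.R|
        return base_sizes[plan[1]]
    if op == "select":               # sigma Q: cost of Q only
        return qruntime(plan[1], base_sizes)
    if op == "project":              # pi Q: cost of Q plus |Q(D)|
        _, child, child_out = plan
        return qruntime(child, base_sizes) + child_out
    if op == "union":                # Q u Q': both inputs plus both outputs
        _, left, right, left_out, right_out = plan
        return (qruntime(left, base_sizes) + qruntime(right, base_sizes)
                + left_out + right_out)
    if op == "join":                 # Q1 join ... join Qn: all inputs plus join output
        _, children, join_out = plan
        return sum(qruntime(c, base_sizes) for c in children) + join_out
    raise ValueError(f"unknown operator: {op}")
```

For example, $\pi(R \bowtie S)$ with $\abs{\dbbase.R}=10$, $\abs{\dbbase.S}=20$, and join output size $30$ costs $10+20+30$ for the join plus $30$ for the projection input, i.e., $90$, matching the recurrence.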

It can be verified that worst-case optimal join algorithms~\cite{skew,ngo-survey}, as well as query evaluation via factorized databases~\cite{factorized-db} (and work on FAQs~\cite{DBLP:conf/pods/KhamisNR16}), can be modeled as select-union-project-join queries (though the size of these queries is data dependent).\footnote{This claim can be verified by, e.g., inspecting the {\em Generic-Join} algorithm in~\cite{skew} and the {\em factorize} algorithm in~\cite{factorized-db}.} It can further be verified that the above cost model on the corresponding $\raPlus$ join queries correctly captures their runtime.

%
%We now make a simple observation on the above cost model:
%\begin{proposition}
%\label{prop:queries-need-to-output-tuples}
%The runtime $\qruntime{Q}$ of any query $Q$ is at least $|Q|$
%\end{proposition}
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%

We are now ready to formally state our claim with respect to \Cref{prob:intro-stmt}:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{Corollary}\label{cor:cost-model}
Given an $\raPlus$ query $\query$ over a \ti $\pdb$ with \dbbaseName $\dbbase$, we can compute a $(1\pm\eps)$-approximation of the expectation of each output tuple in $\query(\pdb)$, with probability at least $1-\delta$, in time
%
\[
O_k\left(\frac{1}{\eps^2}\cdot\qruntime{\query,\dbbase}\cdot \log{\frac{1}{\conf}}\cdot \log(n)\right).
\]
\end{Corollary}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% End: