%!TEX root=./main.tex
\subsection{Relationship to Deterministic Query Runtimes}\label{sec:gen}
%We formalize our claim from \Cref{sec:intro} that a linear approximation algorithm for our problem implies that PDB queries (under bag semantics) can be answered (approximately) in the same runtime as deterministic queries under reasonable assumptions.
%Lastly, we generalize our result for expectation to other moments.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\revision{
%\subsection{Cost Model, Query Plans, and Runtime}
%As in the introduction, we could consider polynomials to be represented as an expression tree.
%However, they do not capture many of the compressed polynomial representations that we can get from query processing algorithms on bags, including the recent work on worst-case optimal join algorithms~\cite{ngo-survey,skew}, factorized databases~\cite{factorized-db}, and FAQ~\cite{DBLP:conf/pods/KhamisNR16}. Intuitively, the main reason is that an expression tree does not allow for `sharing' of intermediate results, which is crucial for these algorithms (and other query processing methods as well).
%}
%
%\label{sec:circuits}
%\mypar{The cost model}
%\label{sec:cost-model}
%So far our analysis of \Cref{prob:intro-stmt} has been in terms of the size of the lineage circuits.
%We now show that this model corresponds to the behavior of a deterministic database by proving that for any \raPlus query $\query$, we can construct a compressed circuit for $\poly$ and \bi $\pdb$ of size and runtime linear in that of a general class of query processing algorithms for the same query $\query$ on $\pdb$'s \dbbaseName $\dbbase$.
% Note that by definition, there exists a linear relationship between input sizes $|\pxdb|$ and $|\dbbase|$ (i.e., $\exists c, \db \in \pxdb$ s.t. $\abs{\pxdb} \leq c \cdot \abs{\db})$).
% \footnote{This is a reasonable assumption because each block of a \bi represents entities with uncertain attributes.
% In practice there is often a limited number of alternatives for each block (e.g., which of five conflicting data sources to trust). Note that all \tis trivially fulfill this condition (i.e., $c = 1$).}
%That is, for \bis that fulfill this restriction, approximating the expectation of the results of SPJU queries has only a constant factor overhead over deterministic query processing (using one of the algorithms for which we prove the claim).
% with the same complexity as it would take to evaluate the query on a deterministic \emph{bag} database of the same size as the input PDB.
We adopt a minimalistic compute-bound model of query evaluation drawn from the worst-case optimal join literature~\cite{skew,ngo-survey} to define $\qruntime{\cdot,\cdot}$, which is given recursively as follows:
%
\noindent\resizebox{1\linewidth}{!}{
\begin{minipage}{1.0\linewidth}
\begin{align*}
\qruntime{R,\dbbase} & = \abs{\dbbase.R} &
\qruntime{\sigma Q, \dbbase} & = \qruntime{Q,\dbbase} &
\qruntime{\pi Q, \dbbase} & = \qruntime{Q,\dbbase} + \abs{Q(\dbbase)}
\end{align*}\\[-15mm]
\begin{align*}
\qruntime{Q \cup Q', \dbbase} & = \qruntime{Q, \dbbase} + \qruntime{Q', \dbbase} + \abs{Q(\dbbase)} + \abs{Q'(\dbbase)} \\
\qruntime{Q_1 \bowtie \ldots \bowtie Q_n, \dbbase} & = \qruntime{Q_1, \dbbase} + \ldots + \qruntime{Q_n,\dbbase} + \abs{Q_1(\dbbase) \bowtie \ldots \bowtie Q_n(\dbbase)}
\end{align*}
\end{minipage}
}\\
Under this model, a query $Q$ evaluated over a database $\dbbase$ has runtime $O(\qruntime{Q,\dbbase})$.
We assume that full table scans are used for every base relation access. We can model index scans by treating an index scan query $\sigma_\theta(R)$ as a base relation.
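As a purely illustrative example (the query and relations here are hypothetical), consider $\query = \pi(R \bowtie S)$. Unrolling the definition above gives
\begin{align*}
\qruntime{\pi(R \bowtie S), \dbbase} & = \qruntime{R \bowtie S, \dbbase} + \abs{(R \bowtie S)(\dbbase)}\\
& = \abs{\dbbase.R} + \abs{\dbbase.S} + 2\cdot\abs{\dbbase.R \bowtie \dbbase.S},
\end{align*}
i.e., the cost of scanning both inputs plus the size of the join result, which is charged once by the join and once more by the projection that consumes it.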
It can be verified that worst-case optimal join algorithms~\cite{skew,ngo-survey}, as well as query evaluation via factorized databases~\cite{factorized-db} (and work on FAQs~\cite{DBLP:conf/pods/KhamisNR16}), can be modeled as $\raPlus$ queries (though the size of these queries is data dependent),\footnote{This claim can be verified by, e.g., examining the {\em Generic-Join} algorithm in~\cite{skew} and the {\em factorize} algorithm in~\cite{factorized-db}.} and that the above cost model on the corresponding $\raPlus$ join queries correctly captures their runtime.
More specifically, \Cref{lem:circ-model-runtime} and \Cref{to-be-decided} show that for any $\raPlus$ query $\query$ and \dbbaseName $\dbbase$, there exists a circuit $\circuit$ such that $\timeOf{\abbrStepOne}(\query,\dbbase,\circuit)$ and $\abs{\circuit}$ are both $O(\qruntime{\query, \dbbase})$. Recall that we assumed these two bounds when moving from \Cref{prob:big-o-joint-steps} to \Cref{prob:intro-stmt}.
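For intuition only (the formal construction is given by \Cref{lem:circ-model-runtime}), consider again the hypothetical query $\pi(R \bowtie S)$ and assume a standard lineage-circuit construction in which every base tuple contributes an input gate, every join output tuple a $\times$ gate over the lineages of its constituent tuples, and every group of tuples merged by the projection a $+$ gate. Under this assumption the circuit built alongside query evaluation satisfies
\[
\abs{\circuit} \;\leq\; \abs{\dbbase.R} + \abs{\dbbase.S} + 2\cdot\abs{\dbbase.R \bowtie \dbbase.S} \;=\; \qruntime{\pi(R \bowtie S), \dbbase},
\]
matching the cost model term by term: input gates correspond to the scan costs, $\times$ gates to the join output term, and $+$ gates to the projection output term.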
%
%We now make a simple observation on the above cost model:
%\begin{proposition}
%\label{prop:queries-need-to-output-tuples}
%The runtime $\qruntime{Q}$ of any query $Q$ is at least $|Q|$
%\end{proposition}
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
%We are now ready to formally state our claim with respect to \Cref{prob:intro-stmt}:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{Corollary}\label{cor:cost-model}
% Given an $\raPlus$ query $\query$ over a \ti $\pdb$ with \dbbaseName $\dbbase$, we can compute a $(1\pm\eps)$-approximation of the expectation for each output tuple in $\query(\pdb)$ with probability at least $1-\delta$ in time
%
% \[
% O_k\left(\frac 1{\eps^2}\cdot\qruntime{Q,\dbbase}\cdot \log{\frac{1}{\conf}}\cdot \log(n)\right)
% \]
%\end{Corollary}
%Atri: The above is no longer needed
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% End: