%root: main.tex
\section{$1 \pm \epsilon$ Approximation Algorithm}
Unless explicitly stated otherwise, we assume that any polynomial under discussion is in the standard monomial basis, i.e., an SOP form in which non-distinct monomials have been collapsed into a single monomial whose coefficient reflects the number of monomials combined.
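To make the standard monomial basis concrete, the following is a small Python sketch (ours, purely illustrative and not part of the paper's algorithms) that represents a polynomial as a map from exponent vectors to coefficients; multiplying in this representation collapses non-distinct monomials automatically.

```python
from collections import defaultdict

def poly_mul(p, q):
    """Multiply two polynomials given as {exponent tuple: coefficient} maps.

    Summing into a dict keyed by the exponent vector collapses non-distinct
    monomials into one entry, so the product is already in the standard
    monomial basis."""
    result = defaultdict(int)
    for expo_p, coeff_p in p.items():
        for expo_q, coeff_q in q.items():
            combined = tuple(a + b for a, b in zip(expo_p, expo_q))
            result[combined] += coeff_p * coeff_q
    return dict(result)

# (x + 2y) * (2x - y); exponent tuples are (degree of x, degree of y).
p = {(1, 0): 1, (0, 1): 2}
q = {(1, 0): 2, (0, 1): -1}
print(poly_mul(p, q))  # {(2, 0): 2, (1, 1): 3, (0, 2): -2}, i.e. 2x^2 + 3xy - 2y^2
```

Note how the $-xy$ and $4xy$ cross terms are combined into the single entry with coefficient $3$.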
Before proceeding, we introduce some useful notation.
\begin{Definition}[Expression Tree]\label{def:express-tree}
An expression tree $\vari{\polytree}$ is a binary tree whose internal nodes are from the set $\{+, \times\}$ and whose leaf nodes are either numerical coefficients or variables. The members of $\vari{\polytree}$ are \vari{type}, \vari{val}, \vari{partial}, \vari{children}, and \vari{weight}, where \vari{type} is the type of value stored in node $\vari{\polytree}$, \vari{val} is the value stored in node $\vari{\polytree}$, \vari{partial} is the sum of $\vari{\polytree}$'s coefficients, \vari{children} is a list of $\vari{\polytree}$'s children, and \vari{weight} is the probability of $\vari{\polytree}$ being sampled.
\end{Definition}
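As a concrete (and purely illustrative) rendering of \cref{def:express-tree}, the node structure might be sketched in Python as follows; the field names mirror the members listed above, and the example tree is one of our own choosing.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class ExprNode:
    """One node of an expression tree.

    type     -- '+', 'x', 'num', or 'var'
    val      -- coefficient (for 'num') or variable name (for 'var')
    partial  -- subtree coefficient value, filled in by a later pass
    weight   -- probability of sampling this node from its parent
    children -- subtrees; empty for leaf nodes
    """
    type: str
    val: Union[float, str, None] = None
    partial: float = 0.0
    weight: float = 0.0
    children: List["ExprNode"] = field(default_factory=list)

# The subtree modelling the factor (2x - y), i.e. +(x(2, x), x(-1, y)):
factor = ExprNode("+", children=[
    ExprNode("x", children=[ExprNode("num", 2), ExprNode("var", "x")]),
    ExprNode("x", children=[ExprNode("num", -1), ExprNode("var", "y")]),
])
```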
Note that $\polytree$ is the expression tree corresponding to a general polynomial $\poly$, and is therefore generally \textit{not} in the standard monomial basis.
\begin{Definition}[Poly]\label{def:poly-func}
Denote by $poly(\polytree)$ the function that takes as input an expression tree $\polytree$ and outputs the polynomial in the same factored form as $\polytree$.
\end{Definition}
\begin{Definition}[Expression Tree Set]\label{def:express-tree-set}$\expresstree{\smb}$ is the set of all possible expression trees whose corresponding polynomial in the standard monomial basis is $\smb$.
\end{Definition}
Note that \cref{def:express-tree-set} implies that $\polytree \in \expresstree{\smb}$.
\begin{Definition}[Expanded T]\label{def:expand-tree}
$\expandtree$ is the pure SOP expansion of $\polytree$, where non-distinct monomials are not combined.
\end{Definition}
To illustrate \cref{def:expand-tree} with an example, consider when $poly(\polytree)$ is $(x + 2y)(2x - y)$. (For preciseness, note that $\polytree$ models the second factor with a $+$ node whose right child carries a coefficient of $-1$ for the variable $y$; that subtree $\polytree_S$ is $+(\times(2, x), \times(-1, y))$, and one can see that $poly(\polytree_S)$ is indeed equivalent to $(2x - y)$.) The pure expansion is then $2x^2 - xy + 4xy - 2y^2 = \expandtree$.
\begin{Definition}[Positive T]\label{def:positive-tree}
Let $\abstree$ denote the expression tree obtained by replacing each coefficient $c_i$ in $\polytree$ with its absolute value $|c_i|$ and then converting the resulting tree $\polytree'$ into the expression tree that directly models the standard monomial basis of $poly(\polytree')$.
\end{Definition}
Using the same polynomial from the above example, $poly(\abstree) = (x + 2y)(2x + y) = 2x^2 +xy +4xy + 2y^2 = 2x^2 + 5xy + 2y^2$.
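A small self-contained Python check (ours, illustrative only) of the relationship between \cref{def:expand-tree} and \cref{def:positive-tree}: evaluating the absolute-coefficient tree at all ones equals the sum of the absolute coefficients over the uncombined expansion, here $9$ for $(x + 2y)(2x - y)$.

```python
# Pure expansion of poly(T) = (x + 2y)(2x - y): one signed coefficient per
# monomial, with non-distinct monomials deliberately NOT combined.
# Nodes are ('+', left, right), ('x', left, right), ('num', c), ('var', name).

def expand(node):
    """Return the list of signed coefficients of the pure SOP expansion."""
    tag = node[0]
    if tag == 'num':
        return [node[1]]
    if tag == 'var':
        return [1]
    if tag == '+':
        return expand(node[1]) + expand(node[2])
    return [a * b for a in expand(node[1]) for b in expand(node[2])]

tree = ('x',
        ('+', ('var', 'x'), ('x', ('num', 2), ('var', 'y'))),
        ('+', ('x', ('num', 2), ('var', 'x')), ('x', ('num', -1), ('var', 'y'))))

print(expand(tree))  # [2, -1, 4, -2]
# |T|(1,...,1) = (1 + 2) * (2 + 1) = 9 equals the sum of |c_i| over the
# uncombined expansion: |2| + |-1| + |4| + |-2| = 9.
assert sum(abs(c) for c in expand(tree)) == (1 + 2) * (2 + 1) == 9
```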
In the subsequent subsections we lay the groundwork to prove the following theorem.
\begin{Theorem}\label{lem:approx-alg}
For any query polynomial $\poly(\vct{X})$, an approximation of $\rpoly(\prob_1,\ldots, \prob_n)$ can be computed in $O\left(|\poly|\cdot k \cdot \frac{\log\frac{1}{\conf}}{\error^2}\right)$ time, within a $1 \pm \error$ multiplicative error with probability $\geq 1 - \conf$, where $k$ denotes the product width of $\poly$.
\end{Theorem}
\subsection{Approximating $\rpoly$}
\subsubsection{Description}
Algorithm~\ref{alg:mon-sam} approximates $\rpoly$ by employing some auxiliary methods on its input $\polytree$, sampling $\polytree$ $\ceil{\frac{2\log{\frac{2}{\conf}}}{\error^2}}$ times, and then outputting an estimate of $\rpoly$ within a multiplicative error of $1 \pm \error$ with probability $1 - \conf$.
\subsubsection{Pseudo Code}
\begin{algorithm}[H]
\caption{$\approxq$($\inputextree$, $\vct{p}$, $\conf$, $\error$)}
\label{alg:mon-sam}
\begin{algorithmic}[1]
\Require \inputextree: Binary Expression Tree
\Require $\vct{p}$: Vector
\Require $\conf$: Real
\Require $\error$: Real
\Ensure \vari{acc}: Real
\State $\accum \gets 0$
\State $\numsamp \gets \ceil{\frac{2 \log{\frac{2}{\conf}}}{\error^2}}$
\State $(\vari{\inputextree}_\vari{mod}, \vari{size}) \gets $ \onepass($\inputextree$)\Comment{\onepass \;and \sampmon \;defined subsequently}
\For{\vari{i} \text{ in } $\left[1\text{ to }\numsamp\right]$}\Comment{Perform the required number of samples}
\State $(\vari{Y}_\vari{i}, \vari{c}_\vari{i}) \gets $ \sampmon($\inputextree_\vari{mod}$)
\State $\vari{temp} \gets 1$
\For{$\vari{x}_{\vari{j}}$ \text{ in } $\vari{Y}_{\vari{i}}$}
\State \vari{temp} $\gets$ \vari{temp} $\times \; \vari{\prob}_\vari{j}$ \Comment{$\vari{p}_\vari{j}$ is the probability of $\vari{x}_\vari{j}$ from input $\vct{p}$}
\EndFor
\State \vari{temp} $\gets$ \vari{temp} $\times\; \vari{c}_\vari{i}$
\State $\accum \gets \accum + \vari{temp}$\Comment{Store the sum over all samples}
\EndFor
\State $\accum \gets \accum \times \frac{\vari{size}}{\numsamp}$
\State \Return \accum
\end{algorithmic}
\end{algorithm}
\subsubsection{Correctness}
\begin{Lemma}\label{lem:mon-samp}
Algorithm \ref{alg:mon-sam} outputs an estimate of $\rpoly(\prob,\ldots, \prob)$ within a multiplicative error of $1 \pm \error$ with probability $1 - \conf$, in $O\left(\frac{\log{\frac{1}{\conf}}}{\error^2} \cdot |\polytree|\right)$ time.
\end{Lemma}
\begin{proof}[Proof of Lemma \ref{lem:mon-samp}]
Consider $\expandtree$, and let $c_i$ be the coefficient of the $i^{th}$ monomial and $\distinctvars_i$ the number of distinct variables appearing in the $i^{th}$ monomial. As will be seen, the sampling scheme samples each monomial $i$ of $\expandtree$ with probability $\frac{|c_i|}{\abstree(1,\ldots, 1)}$; call this sampling scheme $\mathcal{S}$. Now consider $\rpoly$ and note that $\coeffitem{i}$ is the value of the $i^{th}$ monomial term in $\rpoly(\prob_1,\ldots, \prob_n)$. Let $\setsize$ be the number of monomials in $\expandtree$ and let $\coeffset$ be the set $\{c_1,\ldots, c_\setsize\}.$
Consider now a set of $\samplesize$ random variables $\vct{\randvar}$, where each $\randvar_i$ is distributed according to $\mathcal{S}$ and takes the value $sign(c_j)\cdot\prob^{\distinctvars_j}$ when the $j^{th}$ monomial is drawn. Then for each random variable $\randvar_i$, it is the case that $\expct\pbox{\randvar_i} = \sum_{j = 1}^{\setsize}\frac{c_j \cdot \prob^{\distinctvars_j}}{\sum_{j = 1}^{\setsize}|c_j|} = \frac{\rpoly(\prob,\ldots, \prob)}{\abstree(1,\ldots, 1)}$. Let $\hoeffest = \frac{1}{\samplesize}\sum_{i = 1}^{\samplesize}\randvar_i$. It is also true that
\[\expct\pbox{\hoeffest} = \expct\pbox{ \frac{1}{\samplesize}\sum_{i = 1}^{\samplesize}\randvar_i} = \frac{1}{\samplesize}\sum_{i = 1}^{\samplesize}\expct\pbox{\randvar_i} = \frac{1}{\samplesize}\sum_{i = 1}^{\samplesize}\sum_{j = 1}^{\setsize}\frac{c_j \cdot \prob^{\distinctvars_j}}{\abstree(1,\ldots, 1)} = \frac{\rpoly(\prob,\ldots, \prob)}{\abstree(1,\ldots, 1)}.\]
\begin{Lemma}\label{lem:hoeff-est}
Given $\samplesize$ random variables $\vct{\randvar}$ with distribution $\mathcal{S}$ over expression tree $\polytree$, the estimate $\abstree(1,\ldots, 1)\cdot\hoeffest$ is within an additive $\error \cdot \abstree(1,\ldots, 1)$ error of $\rpoly(\prob,\ldots, \prob)$ with probability $1 - \conf$, provided $\samplesize \geq \frac{2\log{\frac{2}{\conf}}}{\error^2}$.
\end{Lemma}
\begin{proof}[Proof of Lemma \ref{lem:hoeff-est}]
Since every $\randvar_i$ in $\vct{\randvar}$ lies within the range $[-1, 1]$, Hoeffding's inequality gives $P\pbox{~\left| \hoeffest - \expct\pbox{\hoeffest} ~\right| \geq \error} \leq 2\exp{-\frac{2\samplesize^2\error^2}{2^2 \samplesize}} \leq \conf$.
Solving for the number of samples $\samplesize$ we get
\begin{align}
&\conf \geq 2\exp{-\frac{2\samplesize^2\error^2}{4\samplesize}}\label{eq:hoeff-1}\\
&\frac{\conf}{2} \geq \exp{-\frac{2\samplesize^2\error^2}{4\samplesize}}\label{eq:hoeff-2}\\
&\frac{2}{\conf} \leq \exp{\frac{2\samplesize^2\error^2}{4\samplesize}}\label{eq:hoeff-3}\\
&\log{\frac{2}{\conf}} \leq \frac{2\samplesize^2\error^2}{4\samplesize}\label{eq:hoeff-4}\\
&\log{\frac{2}{\conf}} \leq \frac{\samplesize\error^2}{2}\label{eq:hoeff-5}\\
&\frac{2\log{\frac{2}{\conf}}}{\error^2} \leq \samplesize.\label{eq:hoeff-6}
\end{align}
Equation \cref{eq:hoeff-1} results from computing the sum in the denominator of the exponential. Equation \cref{eq:hoeff-2} is the result of dividing both sides by $2$. Equation \cref{eq:hoeff-3} follows from taking the reciprocal of both sides, which flips the inequality sign. We then derive \cref{eq:hoeff-4} by taking the natural logarithm of both sides, and \cref{eq:hoeff-5} by cancelling common factors. We arrive at the final result \cref{eq:hoeff-6} by multiplying both sides by $\frac{2}{\error^2}$.
Thus, by Hoeffding, $\samplesize \geq \frac{2\log{\frac{2}{\conf}}}{\error^2}$ samples suffice to achieve the claimed additive error bound.
\end{proof}
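As a quick numerical sanity check of the sample-count bound just derived, a short Python helper (ours, purely illustrative; the function name is our own):

```python
import math

def required_samples(error, conf):
    """Smallest integer N with N >= 2*ln(2/conf) / error^2, i.e. the number
    of samples the derivation above says suffices for additive error `error`
    with failure probability `conf`."""
    return math.ceil(2 * math.log(2 / conf) / error ** 2)

print(required_samples(0.1, 0.05))  # 738
print(required_samples(0.5, 0.5))   # 12
```

Note the inverse-quadratic dependence on the error: halving $\error$ quadruples the number of samples, while tightening $\conf$ costs only a logarithmic factor.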
\begin{Corollary}\label{cor:adj-err}
Setting $\error = \error' \cdot \frac{\rpoly(\prob,\ldots, \prob)}{\abstree(1,\ldots, 1)}$ achieves a $1 \pm \error'$ multiplicative error bound.
\end{Corollary}
\begin{proof}[Proof of Corollary \ref{cor:adj-err}]
Lemma \ref{lem:hoeff-est} gives an additive error guarantee of $\error \cdot \abstree(1,\ldots, 1)$. Setting $\error = \error' \cdot \frac{\rpoly(\prob,\ldots, \prob)}{\abstree(1,\ldots, 1)}$ then yields
\[\left|\abstree(1,\ldots, 1)\cdot\hoeffest - \rpoly(\prob,\ldots, \prob)\right| \leq \error \cdot \abstree(1,\ldots, 1) = \error'\cdot\rpoly(\prob,\ldots, \prob),\]
which is exactly a $1 \pm \error'$ multiplicative error bound on $\rpoly(\prob,\ldots, \prob)$.
\end{proof}
Note that Hoeffding requires the sum of the random variables to be divided by the number of samples, while to recover an estimate of $\rpoly$ we must scale back up by $\abstree(1,\ldots, 1)$. Therefore $\frac{\accum}{\numsamp}$ estimates the expected value of a single sample, and multiplying by $\abstree(1,\ldots, 1)$ yields the estimate of $\rpoly(\prob,\ldots, \prob)$. This concludes the proof of Lemma~\ref{lem:mon-samp}.
\end{proof}
\subsubsection{Run-time Analysis}
First, Algorithm~\ref{alg:mon-sam} calls \textsc{OnePass}, which takes $O(|\polytree|)$ time. It then calls \textsc{Sample} $\ceil{\frac{2\log{\frac{2}{\conf}}}{\error^2}}$ times, each call taking at most $O(|\polytree|)$ time. This gives an overall runtime of $O(\frac{\log{\frac{1}{\conf}}}{\error^2}\cdot|\polytree|)$.
\subsection{OnePass Algorithm}
\subsubsection{Description}
Auxiliary Algorithm~\ref{alg:one-pass} computes the weighted distribution over $\expandtree$. This consists of two parts: computing the sum of the absolute values of the coefficients ($\abstree(1,\ldots, 1)$), and computing the sampling distribution over the monomial terms of $\expandtree$, all without ever materializing $\expandtree$.
Algorithm~\ref{alg:one-pass} takes $\polytree$ as input, modifies $\polytree$ in place with the appropriate weight distribution across all nodes, and finally returns $\abstree(1,\ldots, 1)$. For concreteness, consider the example where $poly(\polytree) = (x_1 + x_2)(x_1 - x_2) + x_2^2$. The expression tree $\polytree$ is then $+\left(\times\left(+\left(x_1, x_2\right), +\left(x_1, -x_2\right)\right), \times\left(x_2, x_2\right)\right)$.
To compute $\abstree(1,\ldots, 1)$, Algorithm~\ref{alg:one-pass} makes a bottom-up traversal of $\polytree$ and performs the following computations. For a leaf node, the absolute value of its coefficient is used if it is numeric, and the value $1$ if it is a variable. When a $+$ node is visited, the values of its children are summed; for a $\times$ node, the values of its children are multiplied. The algorithm returns the value computed at the root upon termination.
Algorithm~\ref{alg:one-pass} computes the weighted probability distribution in the same bottom-up traversal. When a leaf node is encountered, its value is saved if it is a coefficient. When a $\times$ node is visited, the values of its children are multiplied and recorded at the $\times$ node. When a $+$ node is visited, the algorithm computes and saves the relative probability of each of its children: it takes the sum of the children's absolute coefficient values and divides each child's absolute coefficient value by that sum. Lastly, the partial value of the subtree's coefficients is stored at the $+$ node. Upon termination, all appropriate nodes have been annotated accordingly.
For the running example, after one pass \cref{alg:one-pass} has learned to sample the two children of the root $+$ node with probabilities $P\left(\times\left(+\left(x_1, x_2\right), +\left(x_1, -x_2\right)\right)\right) = \frac{4}{5}$ and $P\left(\times\left(x_2, x_2\right)\right) = \frac{1}{5}$. Similarly, for the two inner $+$ nodes of the root's left child, call them $+_1$ and $+_2$ and write $l$ and $r$ for their left and right children, we have $P_{+_1}(l) = P_{+_1}(r) = P_{+_2}(l) = P_{+_2}(r) = \frac{1}{2}$. The sampling probabilities within each inner $+$ node coincide because, in each case, the children of each parent $+$ node share the same $|c_i|$.
The following pseudo code assumes that $\polytree$ has the following members: $\polytree.val$ holds the value stored at the node, $\polytree.children$ contains all children of $\polytree$, $\polytree.weight$ is the probability of choosing $\polytree$ when sampling, and $\polytree.partial$ is the coefficient value of $\polytree$'s subtree. Owing to the recursive nature of trees, a child of $\polytree$ is itself an expression tree. The function $isnum(\cdot)$ returns true if its argument is numeric.
\subsubsection{Pseudo Code}
\begin{algorithm}[h!]
\caption{\onepass$(\inputextree)$}
\label{alg:one-pass}
\begin{algorithmic}[1]
\Require \inputextree: Binary Expression Tree
\Ensure \inputextree: Binary Expression Tree
\Ensure \vari{sum}: Real
\State $\vari{sum} \gets 1$
\If{$\inputextree.\vari{type} = +$}
\State $\accum \gets 0$
\For{$child$ in $\inputextree.\vari{children}$}\Comment{Sum up all children coefficients}
\State $(\vari{T}, \vari{s}) \gets \onepass(child)$
\State $\accum \gets \accum + \vari{s}$
\EndFor
\State $\inputextree.\vari{partial} \gets \accum$
\For{$child$ in $\inputextree.\vari{children}$}\Comment{Record distributions for each child}
\State $child.\vari{weight} \gets \frac{\vari{child.partial}}{\inputextree.\vari{partial}}$
\EndFor
\State $\vari{sum} \gets \inputextree.\vari{partial}$
\State \Return (\inputextree, \vari{sum})
\ElsIf{$\inputextree.\vari{type} = \times$}
\State $\accum \gets 1$
\For{$child \text{ in } \inputextree.\vari{children}$}\Comment{Compute the product of all children coefficients}
\State $(\vari{T}, \vari{s}) \gets \onepass(child)$
\State $\accum \gets \accum \times \vari{s}$
\EndFor
\State $\inputextree.\vari{partial}\gets \accum$
\State $\vari{sum} \gets \inputextree.\vari{partial}$
\State \Return (\inputextree, \vari{sum})
\ElsIf{$\inputextree.\vari{type} = numeric$}\Comment{Base case}
\State $\vari{sum} \gets |\inputextree.\vari{val}|$
\State \Return (\inputextree, \vari{sum})
\Else
\State \Return (\inputextree, \vari{sum})
\EndIf
\end{algorithmic}
\end{algorithm}
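For illustration, \onepass\ can be transcribed into Python as follows (a sketch of ours, using plain dictionaries for nodes rather than the ADT of \cref{def:express-tree}); the assertions check the weights computed for the running example.

```python
def one_pass(node):
    """Annotate a tree of dict nodes with 'partial' and child 'weight's,
    returning |T|(1,...,1); a direct transcription of the pseudo code.

    Node format (ours): {'type': '+'|'x'|'num'|'var', 'val': ..., 'children': [...]}."""
    if node['type'] == 'num':
        node['partial'] = abs(node['val'])
    elif node['type'] == 'var':
        node['partial'] = 1          # a variable contributes 1 at (1,...,1)
    elif node['type'] == '+':
        node['partial'] = sum(one_pass(c) for c in node['children'])
        for c in node['children']:   # record the sampling distribution
            c['weight'] = c['partial'] / node['partial']
    else:                            # 'x' node: multiply child partials
        node['partial'] = 1
        for c in node['children']:
            node['partial'] *= one_pass(c)
    return node['partial']

def var(name): return {'type': 'var', 'val': name, 'children': []}
def num(c):    return {'type': 'num', 'val': c, 'children': []}
def add(*cs):  return {'type': '+', 'children': list(cs)}
def mul(*cs):  return {'type': 'x', 'children': list(cs)}

# Running example: (x1 + x2)(x1 - x2) + x2^2.
tree = add(mul(add(var('x1'), var('x2')),
               add(var('x1'), mul(num(-1), var('x2')))),
           mul(var('x2'), var('x2')))
assert one_pass(tree) == 5                               # |T|(1,...,1)
assert [c['weight'] for c in tree['children']] == [0.8, 0.2]
```

The two assertions match the worked example above: the root's children are sampled with probabilities $\frac{4}{5}$ and $\frac{1}{5}$.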
\subsubsection{Correctness of Algorithm~\ref{alg:one-pass}}
\begin{Lemma}\label{lem:one-pass}
Algorithm~\ref{alg:one-pass} correctly computes $\abstree(1,\ldots, 1)$ for each subtree $S$ of $\polytree$. For the children of $+$ nodes, it correctly computes the weighted distribution $\frac{|c_S|}{|T_S|(1,\ldots, 1)}$ across each child. All computations are performed in one traversal.
\end{Lemma}
\begin{proof}[Proof of Lemma ~\ref{lem:one-pass}]
Use proof by structural induction over the depth $d$ of the binary tree $\polytree$.
For the base case, $d = 0$, the root node is a leaf and therefore, by Definition~\ref{def:express-tree}, must be a variable or a coefficient. When it is a variable, \textsc{OnePass} returns $1$, and indeed $\abstree(1,\ldots, 1) = 1$. When the root is a coefficient, the absolute value of the coefficient is returned, which is again $\abstree(1,\ldots, 1)$. Since the root node cannot be a $+$ node, this proves the base case.
Let the inductive hypothesis be the assumption that Lemma~\ref{lem:one-pass} holds for all trees of depth $d \leq k$, for some $k \geq 0$.
We now prove that Lemma~\ref{lem:one-pass} holds for depth $k + 1$. The root of $\polytree$ has at most two children, $\polytree_L$ and $\polytree_R$, each of which is a subtree of depth at most $k$. By the inductive hypothesis, Lemma~\ref{lem:one-pass} holds for each existing child, and we are left with two possibilities for the root node. The first case is when the root is a $+$ node. Here Algorithm~\ref{alg:one-pass} computes $\abstree(1,\ldots, 1) = |T_L|(1,\ldots, 1) + |T_R|(1,\ldots, 1)$, which is correct. For the distribution over the children of $+$, Algorithm~\ref{alg:one-pass} computes $P(\polytree_i) = \frac{|T_i|(1,\ldots, 1)}{|T_L|(1,\ldots, 1) + |T_R|(1,\ldots, 1)}$, which is also correct. The second case is when the root is a $\times$ node. Algorithm~\ref{alg:one-pass} then computes the product of the subtree partial values, $|T_L|(1,\ldots, 1) \times |T_R|(1,\ldots, 1)$, which indeed equals $\abstree(1,\ldots, 1)$.
Since Algorithm~\ref{alg:one-pass} completes exactly one bottom-up traversal, all subtree values are computed, and this completes the proof.
\end{proof}
\subsubsection{Run-time Analysis}
The runtime of \textsc{OnePass} is straightforward: the algorithm visits each node of $\polytree$ exactly once, performing a constant number of operations per visit (constant because $\polytree$ is binary), which yields a runtime of $O(|\polytree|)$.
\subsection{Sample Algorithm}
Algorithm~\ref{alg:sample} takes $\polytree$ as input and produces a sample $\randvar_i$ according to the weighted distribution computed by \textsc{OnePass}. While one cannot compute $\expandtree$ in time better than $O(N^k)$, the algorithm, in the same spirit as \textsc{OnePass}, operates directly on $\polytree$ and produces a monomial of $\expandtree$ without ever materializing $\expandtree$.
Algorithm~\ref{alg:sample} selects a monomial from $\expandtree$ by the following top-down traversal. At a $+$ node, one subtree is chosen according to the previously computed weighted sampling distribution. At a $\times$ node, the monomials sampled from its subtrees are combined into one monomial. For a parent node whose children are leaf nodes, if the parent is a $\times$ node, each leaf is returned, with any coefficient reduced to its sign in $\{-1, 1\}$; if the parent is a $+$ node, one of the children is sampled as discussed previously. The algorithm concludes by outputting $sign(c_i)\cdot\prob^{\distinctvars_i}$. The pseudo code uses $isdist(\cdot)$ to denote a function that takes a single variable from $\vct{X}$ as input and, using an $O(1)$-lookup hash structure, reports whether the variable has already been seen while computing the current sample.
\subsubsection{Pseudo Code}
\begin{algorithm}
\caption{\sampmon(\inputextree)}
\label{alg:sample}
\begin{algorithmic}[1]
\Require \inputextree: Binary Expression Tree
\Ensure \vari{vars}: TreeSet
\Ensure \vari{sgn}: Integer in $\{-1, 1\}$
\State $\vari{vars} \gets new$ $TreeSet()$
\State $\vari{sgn} \gets 1$
\If{$\inputextree.\vari{type} = +$}\Comment{Sample at every $+$ node}
\State $\inputextree_{\vari{samp}} \gets$ Sample the left subtree ($\inputextree_{\vari{L}}$) or the right subtree ($\inputextree_{\vari{R}}$), choosing each with probability $\inputextree_{\vari{L}}.\vari{weight}$ and $\inputextree_{\vari{R}}.\vari{weight}$ respectively
\State $(\vari{v}, \vari{s}) \gets \sampmon(\inputextree_{\vari{samp}})$
\State $\vari{vars} \gets \vari{vars} \;\cup \;\vari{v}$
\State $\vari{sgn} \gets \vari{sgn} \times \vari{s}$
\State $\Return ~(\vari{vars}, \vari{sgn})$
\ElsIf{$\inputextree.\vari{type} = \times$}\Comment{Multiply the sampled values of all subtree children}
\For {$child$ in $\inputextree.\vari{children}$}
\State $(\vari{v}, \vari{s}) \gets \sampmon(child)$
\State $\vari{vars} \gets \vari{vars} \cup \vari{v}$
\State $\vari{sgn} \gets \vari{sgn} \times \vari{s}$
\EndFor
\State $\Return ~(\vari{vars}, \vari{sgn})$
\ElsIf{$\inputextree.\vari{type} = numeric$}\Comment{The leaf is a coefficient}
\State $\vari{sgn} \gets \vari{sgn} \times sign(\inputextree.\vari{val})$
\State $\Return ~(\vari{vars}, \vari{sgn})$
\ElsIf{$\inputextree.\vari{type} = var$}
\State $\vari{vars} \gets \vari{vars} \; \cup \; \{\;\inputextree.\vari{val}\;\}$\Comment{Add the variable to the set}
\State $\Return~(\vari{vars}, \vari{sgn})$
\EndIf
\end{algorithmic}
\end{algorithm}
\subsubsection{Correctness of Algorithm~\ref{alg:sample}}
\begin{Lemma}\label{lem:sample}
Algorithm~\ref{alg:sample} correctly samples the $i^{th}$ monomial of $\expandtree$ with probability $\frac{|c_i|}{\abstree(1,\ldots, 1)}$.
\end{Lemma}
\begin{proof}[Proof of Lemma~\ref{lem:sample}]
We prove the claim by structural induction on the depth $d$ of $\polytree$. For the base case $d = 0$, by Definition~\ref{def:express-tree} the root must be either a coefficient or a variable. When the root is a variable $x$, we have $P(\randvar_i = x) = 1$, and the algorithm correctly returns the variable together with sign $1$. When the root is a coefficient $c_i$, the tree contains the single monomial $c_i$; the algorithm returns $sign(c_i)$, and this sole monomial is sampled with probability $1 = \frac{|c_i|}{\abstree(1,\ldots, 1)}$, upholding correctness.
For the inductive hypothesis, assume that Lemma~\ref{lem:sample} holds for all trees of depth $d \leq k$, for some $k \geq 0$.
We now prove that Lemma~\ref{lem:sample} holds for $d = k + 1$. The root of $\polytree$ has up to two children $\polytree_L$ and $\polytree_R$, each of depth at most $k$. By the inductive hypothesis, Lemma~\ref{lem:sample} holds for each of them, and the probabilities computed on the sampled subgraph of nodes visited in $\polytree_L$ and $\polytree_R$ are therefore correct.
The root must then be either a $+$ or a $\times$ node. In the former case, Algorithm~\ref{alg:sample} samples from $\polytree_L$ and $\polytree_R$ according to the weights computed by Algorithm~\ref{alg:one-pass}, selecting either subtree with probability $P(T_L) = \frac{|T_L|(1,\ldots, 1)}{|T_L|(1,\ldots, 1) + |T_R|(1,\ldots, 1)}$ or $P(T_R) = \frac{|T_R|(1,\ldots, 1)}{|T_L|(1,\ldots, 1) + |T_R|(1,\ldots, 1)}$. By the inductive hypothesis the sampling probabilities within each subtree are correct, and combined with the fact that $|T_L|(1,\ldots, 1) + |T_R|(1,\ldots, 1) = \abstree(1,\ldots, 1)$, this proves the inductive step for the case when the root of $\polytree$ is $+$.
For the case when the root is a $\times$ node, the samples from both subtrees together compose one monomial, so no further sampling choice is necessary, and the algorithm correctly returns the combination of the samples from its subtrees. This is correct because a monomial with coefficient magnitudes $|c_L|$ and $|c_R|$ in the respective subtrees is produced with probability $\frac{|c_L| \cdot |c_R|}{|T_L|(1,\ldots, 1) \cdot |T_R|(1,\ldots, 1)}$, and $|T_L|(1,\ldots, 1) \cdot |T_R|(1,\ldots, 1) = \abstree(1,\ldots, 1)$. This concludes the proof.
\end{proof}
\subsubsection{Run-time Analysis}
Algorithm~\ref{alg:sample} has a runtime between $\Omega(depth(\polytree))$ and $O(|\polytree|)$.
When all internal nodes are $\times$ nodes, every node is visited with a constant number of operations, yielding $O(|\polytree|)$ time. When all internal nodes are $+$ nodes, the algorithm samples one of the two children at each node until it reaches a leaf, giving a runtime of $\Omega(depth(\polytree))$.
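Putting the pieces together, the following self-contained Python sketch (ours; for brevity it recomputes subtree weights on the fly instead of caching the \onepass\ annotations) estimates $\rpoly\left(\frac{1}{2}, \frac{1}{2}\right)$ for the earlier example $(x + 2y)(2x - y)$, whose exact value is $2\prob_x + 3\prob_x\prob_y - 2\prob_y = 0.75$.

```python
import math
import random

def approximate_rpoly(tree, probs, n_samples, rng):
    """Monte Carlo estimate in the style of Algorithm 1: draw monomials with
    probability |c_i| / |T|(1,...,1), average sign(c_i) times the product of
    p_j over the monomial's distinct variables, then rescale by |T|(1,...,1)."""
    total = abs_at_one(tree)
    acc = 0.0
    for _ in range(n_samples):
        vars_, sign = draw(tree, rng)
        term = sign
        for v in vars_:              # distinct variables only
            term *= probs[v]
        acc += term
    return acc * total / n_samples

def abs_at_one(node):
    """|T|(1,...,1): coefficients replaced by absolute values, variables set to 1."""
    tag = node[0]
    if tag == 'num':
        return abs(node[1])
    if tag == 'var':
        return 1
    vals = [abs_at_one(c) for c in node[1:]]
    return sum(vals) if tag == '+' else math.prod(vals)

def draw(node, rng):
    """Sample one monomial, returned as (set of distinct variables, sign)."""
    tag = node[0]
    if tag == 'num':
        return set(), (1 if node[1] >= 0 else -1)
    if tag == 'var':
        return {node[1]}, 1
    if tag == '+':                   # pick one child by its subtree weight
        child, = rng.choices(node[1:], weights=[abs_at_one(c) for c in node[1:]])
        return draw(child, rng)
    vars_, sign = set(), 1           # 'x' node: combine all children
    for c in node[1:]:
        v, s = draw(c, rng)
        vars_, sign = vars_ | v, sign * s
    return vars_, sign

# (x + 2y)(2x - y); rpoly(1/2, 1/2) = 0.75, |T|(1,...,1) = 9.
tree = ('x',
        ('+', ('var', 'x'), ('x', ('num', 2), ('var', 'y'))),
        ('+', ('x', ('num', 2), ('var', 'x')), ('x', ('num', -1), ('var', 'y'))))
est = approximate_rpoly(tree, {'x': 0.5, 'y': 0.5}, 50_000, random.Random(0))
print(est)  # close to 0.75
```

With the seeded generator above the estimate lands close to the exact value $0.75$, and increasing the sample count tightens the error as in \cref{lem:hoeff-est}.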