%root: main.tex
\section{$1 \pm \epsilon$ Approximation Algorithm}
Before proceeding, we introduce some useful definitions and notation.
\begin{Definition}[Polynomial]\label{def:polynomial}
The expression $\poly(\vct{X})$ is a polynomial if it satisfies the standard mathematical definition of a polynomial and is additionally in the standard monomial basis.
\end{Definition}
To clarify \cref{def:polynomial}, a polynomial in the standard monomial basis is one whose monomials are in SOP form, and whose non-distinct monomials have been collapsed into one distinct monomial, with a coefficient that accurately reflects the number of monomials so combined.
\begin{Definition}[Expression Tree]\label{def:express-tree}
An expression tree $\etree$ is a binary tree whose internal nodes are from the set $\{+, \times\}$ and whose leaf nodes are either numerical coefficients or variables. The members of $\etree$ are \vari{type}, \vari{val}, \vari{partial}, \vari{children}, and \vari{weight}, where \vari{type} is the type of value stored in node $\etree$, \vari{val} is the value stored in node $\etree$, \vari{partial} is the sum of $\etree$'s coefficients, \vari{children} is the list of $\etree$'s children, and \vari{weight} is the probability of $\etree$ being sampled.
\end{Definition}
Note that $\etree$ encodes an expression generally \textit{not} in the standard monomial basis.
\begin{Definition}[poly$(\cdot)$]\label{def:poly-func}
Denote by $poly(\etree)$ the function that takes as input an expression tree $\etree$ and outputs its corresponding polynomial.
\end{Definition}
\begin{Definition}[Expression Tree Set]\label{def:express-tree-set}$\etreeset{\smb}$ is the set of all possible expression trees each of whose corresponding polynomial in the standard monomial basis is $\smb$.
\end{Definition}
Note that \cref{def:express-tree-set} implies $\etree \in \etreeset{\smb}$ when $\smb$ is the standard monomial basis form of $poly(\etree)$.
\begin{Definition}[Expanded T]\label{def:expand-tree}
$\expandtree$ is the pure SOP expansion of $\etree$, where non-distinct monomials are not combined after the product operation. The logical view of \expandtree ~is a list of tuples $(c_i, v_i)$, where $c_i$ is the coefficient and $v_i$ the set of variables of the $i^{th}$ monomial.\end{Definition}
To illustrate \cref{def:expand-tree} with an example, consider the product $(x + 2y)(2x - y)$ and its expression tree $\etree$. The pure expansion of the product is $2x^2 - xy + 4xy - 2y^2$, so $\expandtree$ is logically viewed as $[(2, x^2), (-1, xy), (4, xy), (-2, y^2)]$. (To be precise, note that $\etree$ would use a $+$ node to model the second factor $\etree_\vari{R}$, storing a child coefficient of $-1$ for the variable $y$: the subtree $\etree_\vari{R}$ would be $+(\times(2, x), \times(-1, y))$, which is indeed equivalent to $(2x - y)$.)
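For concreteness, the following Python sketch, a minimal illustration of \cref{def:express-tree} and \cref{def:expand-tree} under a hypothetical \texttt{Node} representation (the names \texttt{Node} and \texttt{expand} are ours, not part of the formal development), materializes $\expandtree$ for the example above:
\begin{verbatim}
# Minimal sketch of an expression tree and its pure expansion
# (hypothetical API, for illustration only).
class Node:
    def __init__(self, typ, val=None, children=()):
        self.type = typ            # '+', '*', 'num', or 'var'
        self.val = val             # coefficient (num) or variable name (var)
        self.children = list(children)
        self.partial = None        # annotated later by OnePass
        self.weight = None         # annotated later by OnePass

def expand(t):
    """Pure SOP expansion E(T): list of (coefficient, variable-tuple)
    pairs; non-distinct monomials are NOT combined."""
    if t.type == 'num':
        return [(t.val, ())]
    if t.type == 'var':
        return [(1, (t.val,))]
    if t.type == '+':
        return [m for c in t.children for m in expand(c)]
    result = [(1, ())]             # t.type == '*': cross-multiply children
    for c in t.children:
        result = [(c1 * c2, tuple(sorted(v1 + v2)))
                  for (c1, v1) in result for (c2, v2) in expand(c)]
    return result

# (x + 2y)(2x - y): expand(T) yields
# [(2, ('x','x')), (-1, ('x','y')), (4, ('x','y')), (-2, ('y','y'))]
T = Node('*', children=[
    Node('+', children=[Node('var', 'x'),
                        Node('*', children=[Node('num', 2), Node('var', 'y')])]),
    Node('+', children=[Node('*', children=[Node('num', 2), Node('var', 'x')]),
                        Node('*', children=[Node('num', -1), Node('var', 'y')])])])
\end{verbatim}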
\begin{Definition}[Positive T]\label{def:positive-tree}
Let $\abstree$ denote the resulting expression tree when each coefficient $c_i$ in $\etree$ is exchanged with its absolute value $|c_i|$.
\end{Definition}
Using the same polynomial as in the above example, $poly(\abstree) = (x + 2y)(2x + y) = 2x^2 + xy + 4xy + 2y^2 = 2x^2 + 5xy + 2y^2$.
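Continuing the sketch, $\abstree$ can be obtained by a straightforward recursive copy (again a hypothetical helper, assuming the \texttt{Node} class above):
\begin{verbatim}
def abs_tree(t):
    """Return |T|: a copy of t with every coefficient replaced
    by its absolute value."""
    if t.type == 'num':
        return Node('num', abs(t.val))
    return Node(t.type, t.val, [abs_tree(c) for c in t.children])
\end{verbatim}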
\begin{Definition}[Evaluation]\label{def:exp-poly-eval}
Given an expression tree $\etree$ and its polynomial $poly(\etree)$, evaluation at a point $\vct{X}$, where each variable $X_i$ is bound to a specific value, consists of substituting the values of $\vct{X}$ for the variables in $\etree$ and $poly(\etree)$. Note that $\etree(\vct{X}) = poly(\etree)(\vct{X})$.
\end{Definition}
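For example, with $\etree$ encoding $(x + 2y)(2x - y)$ and the binding $x = 2$, $y = 1$, both sides evaluate to $\etree(2, 1) = (2 + 2 \cdot 1)(2 \cdot 2 - 1) = 12 = poly(\etree)(2, 1)$.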
In the subsequent subsections we lay the groundwork to prove the following theorem.
\begin{Theorem}\label{lem:approx-alg}
For any query polynomial $\poly(\vct{X})$, an approximation of $\rpoly(\prob_1,\ldots, \prob_n)$ can be computed in time $O\left(|\poly|\cdot k \frac{\log\frac{1}{\conf}}{\error^2}\right)$, within a $1 \pm \error$ multiplicative factor with probability $\geq 1 - \conf$, where $k$ denotes the product width of $\poly$.
\end{Theorem}
\subsection{Approximating $\rpoly$}
\subsubsection{Description}
Algorithm~\ref{alg:mon-sam} approximates $\rpoly$ by employing some auxiliary methods on its input $\etree$, sampling $\etree$ $\ceil{\frac{2\log{\frac{2}{\conf}}}{\error^2}}$ times, and then outputting an estimate of $\rpoly$ within a multiplicative error of $1 \pm \error$ with probability $\geq 1 - \conf$.
\subsubsection{Pseudo Code}
\begin{algorithm}[H]
\caption{$\approxq$($\etree$, $\vct{p}$, $\conf$, $\error$)}
\label{alg:mon-sam}
\begin{algorithmic}[1]
\Require \etree: Binary Expression Tree
\Require $\vct{p}$: Vector
\Require $\conf$: Real
\Require $\error$: Real
\Ensure \vari{acc}: Real
\State $\accum \gets 0$
\State $\numsamp \gets \ceil{\frac{2 \log{\frac{2}{\conf}}}{\error^2}}$
\State $(\vari{\etree}_\vari{mod}, \vari{size}) \gets $ \onepass($\etree$)\Comment{\onepass \;and \sampmon \;defined subsequently}
\For{\vari{i} \text{ in } $\left[1\text{ to }\numsamp\right]$}\Comment{Perform the required number of samples}
\State $(\vari{Y}_\vari{i}, \vari{c}_\vari{i}) \gets $ \sampmon($\etree_\vari{mod}$)
\State $\vari{temp} \gets 1$
\For{$\vari{x}_{\vari{j}}$ \text{ in } $\vari{Y}_{\vari{i}}$}
\State \vari{temp} $\gets$ \vari{temp} $\times \; \vari{\prob}_\vari{j}$ \Comment{$\vari{p}_\vari{j}$ is the probability of $\vari{x}_\vari{j}$ from input $\vct{p}$}
\EndFor
\State \vari{temp} $\gets$ \vari{temp} $\times\; \vari{c}_\vari{i}$
\State $\accum \gets \accum + \vari{temp}$\Comment{Store the sum over all samples}
\EndFor
\State $\vari{acc} \gets \vari{acc} \times \frac{\vari{size}}{\numsamp}$
\State \Return \vari{acc}
\end{algorithmic}
\end{algorithm}
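To make the pseudo code concrete, here is a minimal Python sketch of Algorithm~\ref{alg:mon-sam}, assuming the hypothetical \texttt{Node} class from the earlier sketch; \texttt{one\_pass} and \texttt{sample\_mon} are sketched after Algorithms~\ref{alg:one-pass} and~\ref{alg:sample} below, and all names are ours:
\begin{verbatim}
import math

def approximate_rpoly(T, p, delta, eps):
    """Sketch of Algorithm 1: estimate rpoly(p_1,...,p_n) from
    expression tree T; p maps variable names to probabilities."""
    N = math.ceil(2 * math.log(2 / delta) / eps ** 2)  # sample count
    T, size = one_pass(T)            # size = |T|(1,...,1); sets weights
    acc = 0.0
    for _ in range(N):
        vars_i, sign_i = sample_mon(T)   # one monomial of E(T)
        val = sign_i
        for x in vars_i:                 # distinct variables: p^{d_i}
            val *= p[x]
        acc += val                       # sum over all samples
    return acc * size / N                # scale back up by |T|(1,...,1)
\end{verbatim}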
\subsubsection{Correctness}
\begin{Lemma}\label{lem:mon-samp}
Algorithm \ref{alg:mon-sam} outputs an estimate of $\rpoly(\prob,\ldots, \prob)$ within a multiplicative $1 \pm \error$ factor of $\rpoly(\prob,\ldots, \prob)$ with probability $1 - \conf$, in $O\left(|\etree| + \frac{\log{\frac{1}{\conf}}}{\error^2} \cdot k \cdot depth(\etree)\right)$ time.
\end{Lemma}
\begin{proof}[Proof of Lemma \ref{lem:mon-samp}]
Consider $\expandtree$ and let $c_i$ be the coefficient of the $i^{th}$ monomial and $\distinctvars_i$ be the number of distinct variables appearing in the $i^{th}$ monomial. As will be seen, the sampling scheme samples the $i^{th}$ monomial of $\expandtree$ with probability $\frac{|c_i|}{\abstree(1,\ldots, 1)}$. Call this sampling scheme $\mathcal{S}$. Now consider $\rpoly$ and note that $\coeffitem{i}$ is the value of the $i^{th}$ monomial term in $\rpoly(\prob_1,\ldots, \prob_n)$. Let $\setsize$ be the number of terms in $\expandtree$ and let $\coeffset$ be the set $\{c_1,\ldots, c_{\setsize}\}.$
Consider now a set of $\samplesize$ random variables $\vct{\randvar}$, where each $\randvar_i$ is distributed according to $\mathcal{S}$, taking the value $sign(c_j) \cdot \prob^{\distinctvars_j}$ when the $j^{th}$ monomial is sampled. Then for each random variable $\randvar_i$, it is the case that $\expct\pbox{\randvar_i} = \sum_{j = 1}^{\setsize}\frac{c_j \cdot \prob^{\distinctvars_j}}{\abstree(1,\ldots, 1)} = \frac{\rpoly(\prob,\ldots, \prob)}{\abstree(1,\ldots, 1)}$. Let $\hoeffest = \frac{1}{\samplesize}\sum_{i = 1}^{\samplesize}\randvar_i$. By linearity of expectation,
\[\expct\pbox{\hoeffest} = \frac{1}{\samplesize}\sum_{i = 1}^{\samplesize}\expct\pbox{\randvar_i} = \frac{1}{\samplesize}\sum_{i = 1}^{\samplesize}\sum_{j = 1}^{\setsize}\frac{c_j \cdot \prob^{\distinctvars_j}}{\abstree(1,\ldots, 1)} = \frac{\rpoly(\prob,\ldots, \prob)}{\abstree(1,\ldots, 1)}.\]
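For intuition, instantiating this expectation on the earlier example $(x + 2y)(2x - y)$, where $\abstree(1,\ldots, 1) = (1 + 2)(2 + 1) = 9$, and summing over the four terms of $\expandtree$:
\[\expct\pbox{\randvar_i} = \frac{2\prob - \prob^2 + 4\prob^2 - 2\prob}{9} = \frac{3\prob^2}{9} = \frac{\rpoly(\prob, \prob)}{\abstree(1,\ldots, 1)},\]
since $\rpoly(\prob, \prob) = 3\prob^2$ once exponents are collapsed.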
\begin{Lemma}\label{lem:hoeff-est}
Given $\samplesize$ random variables $\vct{\randvar}$ distributed according to $\mathcal{S}$ over expression tree $\etree$, the estimate $\hoeffest$ satisfies $P\pbox{\left|\hoeffest - \expct\pbox{\hoeffest}\right| \geq \error} \leq \conf$, i.e.\ an additive $\error \cdot \abstree(1,\ldots, 1)$ error bound on the scaled estimate, whenever $\samplesize \geq \frac{2\log{\frac{2}{\conf}}}{\error^2}$.
\end{Lemma}
\begin{proof}[Proof of Lemma \ref{lem:hoeff-est}]
Since each $\randvar_i$ in $\vct{\randvar}$ takes values in the range $[-1, 1]$, by Hoeffding it is the case that $P\pbox{\left| \hoeffest - \expct\pbox{\hoeffest} \right| \geq \error} \leq 2\exp{-\frac{2\samplesize^2\error^2}{2^2 \samplesize}} \leq \conf$.
Solving for the number of samples $\samplesize$ we get
\begin{align}
&\conf \geq 2\exp{-\frac{2\samplesize^2\error^2}{4\samplesize}}\label{eq:hoeff-1}\\
&\frac{\conf}{2} \geq \exp{-\frac{2\samplesize^2\error^2}{4\samplesize}}\label{eq:hoeff-2}\\
&\frac{2}{\conf} \leq \exp{\frac{2\samplesize^2\error^2}{4\samplesize}}\label{eq:hoeff-3}\\
&\log{\frac{2}{\conf}} \leq \frac{2\samplesize^2\error^2}{4\samplesize}\label{eq:hoeff-4}\\
&\log{\frac{2}{\conf}} \leq \frac{\samplesize\error^2}{2}\label{eq:hoeff-5}\\
&\frac{2\log{\frac{2}{\conf}}}{\error^2} \leq \samplesize.\label{eq:hoeff-6}
\end{align}
\Cref{eq:hoeff-1} results from computing the sum in the denominator of the exponent. \Cref{eq:hoeff-2} divides both sides by $2$. \Cref{eq:hoeff-3} takes the reciprocal of both sides, which flips the inequality. \Cref{eq:hoeff-4} takes the natural log of both sides, and \cref{eq:hoeff-5} cancels common factors. Multiplying both sides by $\frac{2}{\error^2}$ yields \cref{eq:hoeff-6}, the number of samples necessary to achieve the claimed additive error bound.
\end{proof}
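As a concrete instantiation of \cref{eq:hoeff-6}: for $\error = 0.1$ and $\conf = 0.05$, the bound requires $\samplesize \geq \frac{2\log{40}}{0.01} \approx 738$ samples, independent of the size of $\etree$.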
\begin{Corollary}\label{cor:adj-err}
Setting $\error = \error' \cdot \frac{\rpoly(\prob,\ldots, \prob)}{\abstree(1,\ldots, 1)}$ achieves a $1 \pm \error'$ multiplicative error bound.
\end{Corollary}
\begin{proof}[Proof of Corollary \ref{cor:adj-err}]
Since the additive error is $\error \cdot \abstree(1,\ldots, 1)$, setting $\error = \error' \cdot \frac{\rpoly(\prob,\ldots, \prob)}{\abstree(1,\ldots, 1)}$ gives an additive error of $\error' \cdot \rpoly(\prob,\ldots, \prob)$, which is exactly a $1 \pm \error'$ multiplicative error bound.
\end{proof}
Note that Hoeffding's inequality applies to the average of the random variables, while estimating $\rpoly$ requires scaling back up by $\abstree(1,\ldots, 1)$. Therefore $\frac{\vari{acc}}{\samplesize}$ estimates $\frac{\rpoly(\prob,\ldots, \prob)}{\abstree(1,\ldots, 1)}$, and multiplying by $\abstree(1,\ldots, 1)$ yields the estimate of $\rpoly(\prob,\ldots, \prob)$, exactly as in the last two lines of Algorithm~\ref{alg:mon-sam}. This concludes the proof of Lemma~\ref{lem:mon-samp}.
\end{proof}
\subsubsection{Run-time Analysis}
First, Algorithm~\ref{alg:mon-sam} calls \onepass, which takes $O(|\etree|)$ time. It then calls \sampmon\ $O\left(\frac{\log{\frac{1}{\conf}}}{\error^2}\right)$ times, where by \cref{lem:alg-sample-runtime} each call takes $O(k \cdot depth(\etree))$ time. This gives an overall runtime of $O\left(|\etree| + \frac{\log{\frac{1}{\conf}}}{\error^2} \cdot k \cdot depth(\etree)\right)$.
\subsection{OnePass Algorithm}
\subsubsection{Description}
Auxiliary Algorithm~\ref{alg:one-pass} computes the weighted distribution over $\expandtree$. This consists of two parts: computing the sum of the absolute values of the coefficients ($\abstree(1,\ldots, 1)$), and computing the distribution over the monomial terms of $\expandtree$, all without ever materializing $\expandtree$.
Algorithm~\ref{alg:one-pass} takes $\etree$ as input, modifies $\etree$ in place with the appropriate weight distribution across all nodes, and finally returns $\abstree(1,\ldots, 1)$. For concreteness, consider the example where $poly(\etree) = (x_1 + x_2)(x_1 - x_2) + x_2^2$. The expression tree $\etree$ is then $+\left(\times\left(+\left(x_1, x_2\right), +\left(x_1, -x_2\right)\right), \times\left(x_2, x_2\right)\right)$.
To compute $\abstree(1,\ldots, 1)$, Algorithm~\ref{alg:one-pass} makes a bottom-up traversal of $\etree$ and performs the following computations. For a leaf node whose value is a coefficient, the absolute value of the coefficient is saved; a leaf node holding a variable contributes $1$. When a $+$ node is visited, the values of its children are summed. Finally, for the case of a $\times$ node, the values of the children are multiplied. The algorithm returns the total value upon termination.
Algorithm~\ref{alg:one-pass} computes the weighted probability distribution in the same bottom-up traversal. When a leaf node is encountered, its value is saved if it is a coefficient. When a $\times$ node is visited, the values of its children are multiplied and recorded at the $\times$ node. When a $+$ node is visited, the algorithm computes and saves the relative probability of each of its children: it takes the sum of the children's absolute coefficient values and, for each child, divides the child's absolute coefficient value by that sum. Lastly, the partial value of the subtree's coefficients is stored at the $+$ node. Upon termination, all appropriate nodes have been annotated accordingly.
For the running example, after one pass, \cref{alg:one-pass} would have learned to sample the two children of the root $+$ node with $P\left(\times\left(+\left(x_1, x_2\right), +\left(x_1, -x_2\right)\right)\right) = \frac{4}{5}$ and $P\left(\times\left(x_2, x_2\right)\right) = \frac{1}{5}$. Similarly, denoting the two inner $+$ nodes of the root's left child by $+_1$ and $+_2$, and writing $l$ and $r$ for their left and right children, we get $P_{+_1}(l) = P_{+_1}(r) = P_{+_2}(l) = P_{+_2}(r) = \frac{1}{2}$. Note that in this example the sampling probabilities for the children of each inner $+$ node are equal to one another, because both parents have the same number of children and, in each case, the children share the same $|c_i|$.
The following pseudo code assumes that $\etree$ has the following members: $\etree.val$ holds the value stored by $\etree$, $\etree.children$ contains all children of $\etree$, $\etree.weight$ is the probability of choosing $\etree$, and $\etree.partial$ is the coefficient of $\etree$. Owing to the recursive nature of trees, a child of $\etree$ is itself an expression tree. The function $isnum(\cdot)$ returns true if its argument is numeric.
\subsubsection{Pseudo Code}
\begin{algorithm}[h!]
\caption{\onepass$(\etree)$}
\label{alg:one-pass}
\begin{algorithmic}[1]
\Require \etree: Binary Expression Tree
\Ensure \etree: Binary Expression Tree
\Ensure \vari{sum}: Real
\State $\vari{sum} \gets 1$
\If{$\etree.\vari{type} = +$}
\State $\accum \gets 0$
\For{$child$ in $\etree.\vari{children}$}\Comment{Sum up all children coefficients}
\State $(\vari{T}, \vari{s}) \gets \onepass(child)$
\State $\accum \gets \accum + \vari{s}$
\EndFor
\State $\etree.\vari{partial} \gets \accum$
\For{$child$ in $\etree.\vari{children}$}\Comment{Record distributions for each child}
\State $child.\vari{weight} \gets \frac{\vari{child.partial}}{\etree.\vari{partial}}$
\EndFor
\State $\vari{sum} \gets \etree.\vari{partial}$
\State \Return (\etree, \vari{sum})
\ElsIf{$\etree.\vari{type} = \times$}
\State $\accum \gets 1$
\For{$child \text{ in } \etree.\vari{children}$}\Comment{Compute the product of all children coefficients}
\State $(\vari{T}, \vari{s}) \gets \onepass(child)$
\State $\accum \gets \accum \times \vari{s}$
\EndFor
\State $\etree.\vari{partial}\gets \accum$
\State $\vari{sum} \gets \etree.\vari{partial}$
\State \Return (\etree, \vari{sum})
\ElsIf{$\etree.\vari{type} = numeric$}\Comment{Base case}
\State $\vari{sum} \gets |\etree.\vari{val}|$
\State \Return (\etree, \vari{sum})
\Else\Comment{$\etree.\vari{type} = var$; a variable leaf contributes $1$}
\State \Return (\etree, \vari{sum})
\EndIf
\end{algorithmic}
\end{algorithm}
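A minimal Python rendering of \onepass, under the same assumed \texttt{Node} representation as the earlier sketches, is:
\begin{verbatim}
def one_pass(t):
    """Sketch of Algorithm 2: annotate t bottom-up and return
    (t, |t|(1,...,1)), assuming the Node class sketched earlier."""
    if t.type == '+':
        t.partial = 0.0
        for child in t.children:
            _, s = one_pass(child)
            t.partial += s                # sum children's values
        for child in t.children:
            child.weight = child.partial / t.partial  # sampling weights
        return t, t.partial
    if t.type == '*':
        t.partial = 1.0
        for child in t.children:
            _, s = one_pass(child)
            t.partial *= s                # multiply children's values
        return t, t.partial
    if t.type == 'num':
        t.partial = abs(t.val)            # base case: |coefficient|
        return t, t.partial
    t.partial = 1.0                       # variable leaf contributes 1
    return t, t.partial
\end{verbatim}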
\subsubsection{Correctness of Algorithm~\ref{alg:one-pass}}
\begin{Lemma}\label{lem:one-pass}
Algorithm~\ref{alg:one-pass} correctly computes $|\etree_S|(1,\ldots, 1)$ for each subtree $\etree_S$ of $\etree$. For the children $\etree_S$ of each $+$ node $\etree_+$, it correctly computes the weight $\frac{|\etree_S|(1,\ldots, 1)}{|\etree_+|(1,\ldots, 1)}$. All computations are performed in one traversal.
\end{Lemma}
\begin{proof}[Proof of Lemma ~\ref{lem:one-pass}]
We use proof by structural induction over the depth $d$ of the binary tree $\etree$.
For the base case, $d = 0$, the root node is a leaf and therefore, by \cref{def:express-tree}, must be a variable or a coefficient. When it is a variable, \onepass\ returns $1$, and indeed $\abstree(1,\ldots, 1) = 1$, which is correct. When the root is a coefficient, the absolute value of the coefficient is returned, which is again $\abstree(1,\ldots, 1)$. Since the root node cannot be a $+$ node, this proves the base case.
For the inductive hypothesis, assume that \cref{lem:one-pass} holds for all trees of depth $d \leq k$, for some $k \geq 0$.
Now we prove that \cref{lem:one-pass} holds for depth $k + 1$. The root of $\etree$ has at most two children, $\etree_L$ and $\etree_R$, each of depth at most $k$. By the inductive hypothesis, \cref{lem:one-pass} holds for each existing child, and we are left with two possibilities for the root node. The first case is when the root is a $+$ node. Algorithm~\ref{alg:one-pass} then computes $\abstree(1,\ldots, 1) = |\etree_L|(1,\ldots, 1) + |\etree_R|(1,\ldots, 1)$, which is correct. For the distribution over the children of $+$, Algorithm~\ref{alg:one-pass} computes $P(\etree_i) = \frac{|\etree_i|(1,\ldots, 1)}{|\etree_L|(1,\ldots, 1) + |\etree_R|(1,\ldots, 1)}$, as claimed. The second case is when the root is a $\times$ node. Algorithm~\ref{alg:one-pass} then computes the product of the subtree partial values, $|\etree_L|(1,\ldots, 1) \times |\etree_R|(1,\ldots, 1)$, which indeed equals $\abstree(1,\ldots, 1)$.
Since Algorithm~\ref{alg:one-pass} completes exactly one traversal, computing these values bottom-up, all subtree values are computed, and this completes the proof.
\end{proof}
\subsubsection{Run-time Analysis}
The runtime of \onepass\ is straightforward: the algorithm visits each node of $\etree$ exactly once, performing a constant number of operations per visit (in the binary case), leading to a runtime of $O(|\etree|)$.
\subsection{Sample Algorithm}
Algorithm~\ref{alg:sample} takes $\etree$ as input and produces a sample $\randvar_i$ according to the weighted distribution computed by \onepass. While one cannot compute $\expandtree$ in time better than $O(N^k)$, the algorithm, as with \onepass, operates directly on $\etree$, producing a sample from $\expandtree$ without ever materializing $\expandtree$.
Algorithm~\ref{alg:sample} selects a monomial from $\expandtree$ via the following top-down traversal. At a $+$ node, one subtree is chosen according to the previously computed weighted sampling distribution. At a $\times$ node, the monomials sampled from its subtrees are combined into one monomial. For a parent node whose children are leaf nodes, if the parent is a $\times$ node, then each leaf is returned, with any coefficient reduced to $-1$ or $1$ according to its sign; if the parent is a $+$ node, then one of the children is sampled as discussed previously. The algorithm outputs the sampled monomial's set of distinct variables together with its sign, from which Algorithm~\ref{alg:mon-sam} computes the value $sign(c_i)\cdot\prob^{d_i}$. The pseudo code stores the variables in a set (e.g., backed by an $O(1)$-lookup hash structure), so a variable that occurs multiple times in the sampled monomial is recorded only once, yielding the $d_i$ \emph{distinct} variables.
\subsubsection{Pseudo Code}
\begin{algorithm}
\caption{\sampmon(\etree)}
\label{alg:sample}
\begin{algorithmic}[1]
\Require \etree: Binary Expression Tree
\Ensure \vari{vars}: TreeSet
\Ensure \vari{sgn}: Integer in $\{-1, 1\}$
\State $\vari{vars} \gets new$ $TreeSet()$
\State $\vari{sgn} \gets 1$
\If{$\etree.\vari{type} = +$}\Comment{Sample at every $+$ node}
\State $\etree_{\vari{samp}} \gets$ Sample from left ($\etree_{\vari{L}}$) and right ($\etree_{\vari{R}}$) w.p. $\etree_{\vari{L}}.\vari{weight}$ and $\etree_{\vari{R}}.\vari{weight}$, as computed by \onepass
\State $(\vari{v}, \vari{s}) \gets \sampmon(\etree_{\vari{samp}})$
\State $\vari{vars} \gets \vari{vars} \;\cup \;\vari{v}$
\State $\vari{sgn} \gets \vari{sgn} \times \vari{s}$
\State $\Return ~(\vari{vars}, \vari{sgn})$
\ElsIf{$\etree.\vari{type} = \times$}\Comment{Multiply the sampled values of all subtree children}
\For {$child$ in $\etree.\vari{children}$}
\State $(\vari{v}, \vari{s}) \gets \sampmon(child)$
\State $\vari{vars} \gets \vari{vars} \cup \vari{v}$
\State $\vari{sgn} \gets \vari{sgn} \times \vari{s}$
\EndFor
\State $\Return ~(\vari{vars}, \vari{sgn})$
\ElsIf{$\etree.\vari{type} = numeric$}\Comment{The leaf is a coefficient}
\State $\vari{sgn} \gets \vari{sgn} \times sign(\etree.\vari{val})$
\State $\Return ~(\vari{vars}, \vari{sgn})$
\ElsIf{$\etree.\vari{type} = var$}
\State $\vari{vars} \gets \vari{vars} \; \cup \; \{\;\etree.\vari{val}\;\}$\Comment{Add the variable to the set}
\State $\Return~(\vari{vars}, \vari{sgn})$
\EndIf
\end{algorithmic}
\end{algorithm}
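A minimal Python rendering of \sampmon, again under the assumed \texttt{Node} representation and the weights annotated by \onepass, is:
\begin{verbatim}
import random

def sample_mon(t):
    """Sketch of Algorithm 3: sample one monomial of E(t); returns
    (set of distinct variables, sign in {-1, 1})."""
    if t.type == '+':             # pick one child by its OnePass weight
        r, cum = random.random(), 0.0
        for child in t.children:
            cum += child.weight
            if r <= cum:
                return sample_mon(child)
        return sample_mon(t.children[-1])  # guard against float rounding
    if t.type == '*':             # combine samples from all children
        vs, sgn = set(), 1
        for child in t.children:
            v, s = sample_mon(child)
            vs |= v
            sgn *= s
        return vs, sgn
    if t.type == 'num':
        return set(), (1 if t.val >= 0 else -1)
    return {t.val}, 1             # variable leaf
\end{verbatim}
Chaining the three sketches on the example tree \texttt{T} from earlier, \texttt{approximate\_rpoly(T, \{'x': 0.5, 'y': 0.5\}, 0.05, 0.1)} would return an estimate close to $\rpoly(0.5, 0.5) = 3 \cdot 0.25 = 0.75$.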
\subsubsection{Correctness of Algorithm ~\ref{alg:sample}}
\begin{Lemma}\label{lem:sample}
For every $(m,c)$ in \expandtree, $\sampmon(\etree)$ returns $m$ with probability $\frac{|c|}{\abstree(1,\ldots, 1)}$.
\end{Lemma}
\begin{proof}[Proof of Lemma ~\ref{lem:sample}]
First, note that for any monomial sampled by Algorithm~\ref{alg:sample}, the nodes traversed form a subgraph of $\etree$ that in general is \textit{not} a subtree. We thus seek to prove that the traversed subgraph produces the sampled monomial with the correct probability.
We prove this by structural induction on the depth $d$ of $\etree$. For the base case $d = 0$, by \cref{def:express-tree} the root must be either a coefficient or a variable. When the root is a variable $x$, \sampmon\ returns $(\{x\}, 1)$ with probability $1$, which is correct. When the root is a coefficient $c$, \sampmon\ returns $(\emptyset, sign(c))$, likewise with probability $1$.
For the inductive hypothesis, assume that \cref{lem:sample} holds for all trees of depth $d \leq k$, for some $k \geq 0$.
We now prove that \cref{lem:sample} holds for $d = k + 1$. The root of $\etree$ has up to two children $\etree_L$ and $\etree_R$, each of depth at most $k$, so by the inductive hypothesis \cref{lem:sample} holds for each of them; the probabilities computed on the sampled subgraphs of $\etree_L$ and $\etree_R$ are therefore correct.
The root must then be either a $+$ node or a $\times$ node.
Consider the case when the root is $\times$. We are sampling a term $(m, c)$ from $\expandtree$, where $m$ is the sampled monomial, and $m = m_L \times m_R$ with $m_L$ coming from $\etree_L$ and $m_R$ from $\etree_R$. The probability that \sampmon$(\etree_{L})$ returns $m_L$ is $\frac{|c_{m_L}|}{|\etree_L|(1,\ldots, 1)}$, and symmetrically for $m_R$. The probability of sampling $m$ is then $\frac{|c_{m_L}| \cdot |c_{m_R}|}{|\etree_L|(1,\ldots, 1) \cdot |\etree_R|(1,\ldots, 1)}$. Since $|c| = |c_{m_L}| \cdot |c_{m_R}|$ and $\abstree(1,\ldots, 1) = |\etree_L|(1,\ldots, 1) \cdot |\etree_R|(1,\ldots, 1)$, the monomial $m$ is sampled with the correct probability $\frac{|c|}{\abstree(1,\ldots, 1)}$.
For the case when the root is a $+$ node, \sampmon\ samples monomial $m$ from one of its children. Without loss of generality, suppose $m = m_L$ comes from $\etree_L$. The root's left child is chosen with probability $\frac{|\etree_L|(1,\ldots, 1)}{\abstree(1,\ldots, 1)}$, and by the inductive hypothesis $m_L$ is then sampled from $\etree_L$ with probability $\frac{|c_{m_L}|}{|\etree_L|(1,\ldots, 1)}$. Multiplying the two, \sampmon\ samples $(m, c)$ of $\expandtree$ with probability $\frac{|\etree_L|(1,\ldots, 1)}{\abstree(1,\ldots, 1)} \cdot \frac{|c_{m_L}|}{|\etree_L|(1,\ldots, 1)} = \frac{|c|}{\abstree(1,\ldots, 1)}$, and correctness follows.
\end{proof}
\subsubsection{Run-time Analysis}
\begin{Lemma}\label{lem:alg-sample-runtime}
For $k = deg(\etree)$, Algorithm~\ref{alg:sample} has runtime $O(k \cdot depth(\etree))$.
\end{Lemma}
\begin{proof}[Proof of Lemma~\ref{lem:alg-sample-runtime}]
For any $\etree$ of degree $k$, the number of leaf nodes visited is $O(k)$, since at most $k$ variable leaves and $k$ coefficient leaves are visited. Likewise, within each level of the binary tree $\etree$, $O(k)$ nodes are visited. It follows that Algorithm~\ref{alg:sample} runs in $O(k \cdot depth(\etree))$ time.
\end{proof}