Todos for appendix and S4

Aaron Huber 2020-12-19 16:13:42 -05:00
parent 9b9b575f6a
commit ec1e0854f1
2 changed files with 7 additions and 3 deletions


@@ -192,8 +192,12 @@ The algorithm to prove~\Cref{lem:approx-alg} follows from the following observat
Given the above, the algorithm is a sampling based algorithm for the above sum: we sample $(v,c)\in \expandtree{\etree}$ with probability proportional\footnote{We could have also uniformly sampled from $\expandtree{\etree}$ but this gives better parameters.}
%\AH{Regarding the footnote, is there really a difference? I \emph{suppose} technically, but in this case they are \emph{effectively} the same. Just wondering.}
%\AR{Yes, there is! If we used the uniform distribution then in our bounds we will have a parameter that depends on the largest $\abs{coef}$, which e.g. could be dependent on $n$. But with the weighted probability distribution, we avoid paying this price. Though I guess perhaps we can say for the kinds of queries we consider these coefficients are all constants?}
to $\abs{c}$ and compute $Y=\indicator{\monom\mod{\mathcal{B}}\not\equiv 0}\cdot \prod_{X_i\in \var\inparen{v}} p_i$. Taking enough samples and computing the average of $Y$ gives us our final estimate. Algorithm~\ref{alg:mon-sam} has the details.
\OK{Even if the proof is offloaded to the appendix, it would be useful to state the formula for $N$ (line 4 of \Cref{alg:mon-sam}), along with a pointer to the appendix.}
to $\abs{c}$ and compute $Y=\indicator{\monom\mod{\mathcal{B}}\not\equiv 0}\cdot \prod_{X_i\in \var\inparen{v}} p_i$. Taking $\numsamp$ samples and computing the average of $Y$ gives us our final estimate. Algorithm~\ref{alg:mon-sam} has the details for $\approxq$. The derivation of $\numsamp$ (\Cref{alg:mon-sam-global2}) can be found in~\Cref{app:subsec-th-mon-samp}, and from those results, one can further see that
\begin{equation*}
2\exp{\left(-\frac{\samplesize\error^2}{2}\right)}\leq \conf \implies\samplesize \geq \frac{2\log{\frac{2}{\conf}}}{\error^2}.
%\exp{\left(-\frac{\samplesize\error^2}{2}\right)}\leq \frac{\conf}{2}\\
%\frac{\samplesize\error^2}{2}\geq \log{\frac{2}{\conf}}\\
\end{equation*}
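As a hedged illustration only (not the paper's \Cref{alg:mon-sam}), the following minimal Python sketch shows the estimator described above; the names expanded_tree, vanishes_mod_B, and prob are hypothetical, and for simplicity it assumes $\expandtree{\etree}$ is materialized as an explicit list of (coefficient, monomial) pairs, which sidesteps the sampling-from-the-tree machinery of \Cref{alg:mon-sam}.

import math
import random

def estimate_mean_Y(expanded_tree, vanishes_mod_B, prob, eps, delta):
    # expanded_tree: list of (coef, monom) pairs, one per (v, c) in expandtree(T)
    # vanishes_mod_B(monom): True iff monom mod B == 0 (the duplicate-block check)
    # prob: dict mapping each variable X_i to its probability p_i
    # Sample count from the bound above: N >= 2 ln(2/delta) / eps^2.
    n_samples = math.ceil(2 * math.log(2 / delta) / eps ** 2)
    weights = [abs(coef) for coef, _ in expanded_tree]  # sample proportional to |c|
    total = 0.0
    for _ in range(n_samples):
        coef, monom = random.choices(expanded_tree, weights=weights, k=1)[0]
        if vanishes_mod_B(monom):
            y = 0.0                      # indicator is 0: monomial vanishes mod B
        else:
            y = 1.0
            for x in set(monom):         # var(v): the distinct variables of the monomial
                y *= prob[x]
        total += y
    # Empirical mean of Y; the paper's full algorithm rescales this estimate
    # (e.g. by the total coefficient mass) to recover the quantity of interest.
    return total / n_samples

For a concrete (illustrative) instantiation of the bound, taking $\error = 0.1$ and $\conf = 0.05$ gives $\numsamp \geq 2\ln(40)/0.01 \approx 738$ samples.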
%We state the approximation algorithm in terms of a $\bi$.
%\subsubsection{Description}


@@ -379,7 +379,7 @@ The claim on the runtime follows since
which completes the proof.
We now return to the proof of~\Cref{lem:mon-samp}:
\subsection{Proof of Theorem \ref{lem:mon-samp}}
\subsection{Proof of Theorem \ref{lem:mon-samp}}\label{app:subsec-th-mon-samp}
Consider now the random variables $\randvar_1,\dots,\randvar_\numsamp$, where each $\randvar_i$ is the value of $\vari{Y}_{\vari{i}}$ after~\Cref{alg:mon-sam-product} is executed. In particular, note that we have
\[\randvar_i= \onesymbol\inparen{\monom\mod{\mathcal{B}}\not\equiv 0}\cdot \prod_{X_j\in \var\inparen{v}} p_j,\]
where the indicator variable handles the check in~\Cref{alg:check-duplicate-block}.
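A hedged reconstruction of the step this sets up (assuming each $\randvar_i$ takes values in an interval of length at most $2$, e.g.\ $[-1,1]$): since the $\randvar_i$ are i.i.d.\ and bounded, Hoeffding's inequality gives
\begin{equation*}
\Pr\left[\,\left|\frac{1}{\samplesize}\sum_{i=1}^{\samplesize}\randvar_i - \mathbb{E}\left[\randvar_1\right]\right| \geq \error\,\right] \leq 2\exp\left(-\frac{\samplesize\error^2}{2}\right),
\end{equation*}
and bounding the right-hand side by $\conf$ yields exactly the inequality inverted above to obtain $\samplesize \geq \frac{2\log{\frac{2}{\conf}}}{\error^2}$.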