%We begin the analysis by showing that with high probability an estimate is approximately $\numWorldsP$, where $p$ is a tuple's probability measure for a given TIPD. Note that
%\begin{equation}
%%\gVt{k\cdot}
%\numWorldsP = \numWorldsSum\label{eq:mu}.
%\end{equation}
%Furthermore, when $\genV$ is generalized to have elements in the range $\left[0, \infty\right]$, we obtain the result
We begin by claiming that the expectation of the estimate of the annotations summed across all worlds is $\sum\limits_{\wVec\in\pw}\genVParam{\wVec}$; formally,
To verify this claim, we argue that for all $\wVec\in\pw$, the expectation of the estimate of the annotation in a single world is that world's annotation, i.e., the output of $\genVParam{\wVec}$; that is,
Since \eqref{eq:single-est} holds, \eqref{eq:allWorlds-est} follows by linearity of expectation.
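The unbiasedness claim can be checked mechanically on a toy instance. The following Python sketch is a minimal illustration, not the paper's construction: a hypothetical annotation vector stands in for $\genV$, a fixed bucket assignment stands in for $\hash$, and exhaustive averaging over all sign assignments stands in for the expectation over $\pol$. The averaged estimate equals the true sum exactly, since every cross term cancels.

```python
from itertools import product

def sketch_estimate(v, h, s, num_buckets):
    """Build the bucketed sign-sketch and return the estimate of sum(v)."""
    buckets = [0] * num_buckets
    for i, x in enumerate(v):
        buckets[h[i]] += s[i] * x          # each bucket sums its signed values
    # per-item estimate is s[i] * buckets[h[i]]; the total estimate sums these
    return sum(s[i] * buckets[h[i]] for i in range(len(v)))

v = [3, 1, 4, 2]          # hypothetical annotation vector (one entry per world)
h = [0, 1, 0, 1]          # fixed hash of each item into 2 buckets
n, num_buckets = len(v), 2

# Average the estimate over ALL 2^n sign assignments: the exact expectation.
total = sum(sketch_estimate(v, h, s, num_buckets)
            for s in product([-1, 1], repeat=n))
expectation = total / 2 ** n
print(expectation, sum(v))   # the exact expectation matches the true sum
```

Averaging over every sign assignment computes the expectation exactly rather than sampling it, which is why the equality holds with no tolerance.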
%We can now take \eqref{eq:single-est}, substitute it in for \eqref{eq:allWorlds-est} and show by linearity of expectation that \eqref{eq:allWorlds-est} holds.
\item\eqref{eq:var_step-one} follows from substituting the definition of $\sketch$ and applying the commutativity of addition. Note that the constraint requiring $\hash$ to map both worlds to the same bucket follows from the definition of $\sketch$. The sum can then be rearranged, again by commutativity of addition, so that each component item in a bucket's sum is paired in a product with each of the $\pol$ values mapped to that bucket.
Note that four-wise independence is assumed across all four random variables of \eqref{eq:var-sum-w}. Zooming in on the products of the $\pol$ functions, we see that there are five possible equality patterns among the $\wVec$ variables. The following cases all assume each $\wVec$ to be from the set $\pw$. For $a, b, c, d \in\{1, 1', 2, 2'\}$ pairwise distinct:
With four random variables drawn from sets containing the same elements, there are five possibilities for how they relate to one another. This holds because the variables come from the same set, or from separate yet duplicate sets containing the same members, so any $\wVec$ variable can be equal or unequal to each of its counterparts, and a simple enumeration of the equalities (and non-equalities) partitions the set of all possible combinations. The variables may all be equal, as in $\distPattern{1}$. Three of the variables may be equal, with the fourth different, as in $\distPattern{4}$. Enumerating down to just two variables sharing an equality generates two cases, because the two remaining variables may themselves be equal or unequal: in $\distPattern{2}$, a pair of variables is equal and the remaining two are equal to each other but not to the first pair, while in $\distPattern{3}$, two variables are equal and the remaining variables are not equal to any of the others. Finally, all four may be different, as in $\distPattern{5}$.
The use of variable subscripts in the notation is necessary as different combinations of equal $\wVec$ variables produce different results in the variance computation, as we will see shortly.
We are interested in the cases whose expectation is nonzero, since terms with expectation zero contribute nothing to the summation in \eqref{eq:var-sum-w}. In expectation we have that
because for each equality the same element of the image of $\pol$ is multiplied by itself, so each equality contributes a factor of $1$, yielding a final product of $1$. For $\distPattern{3}$, $\distPattern{4}$, and $\distPattern{5}$, the product reduces to a product of two or more independent variables in $\{-1, 1\}$, each with zero mean, thus producing the following results:
For the distribution pattern $\cTwo$, we have three subsets $\distPattern{21}, \distPattern{22}, \distPattern{23}\subseteq\distPattern{2}$ to consider.
Note that for $\distPattern{22}$, the cardinality of a bucket appears as a multiplicative factor for each squared annotation. This is due to the constraint that $\wOne\neq\wOneP$ coupled with the additional constraint that $\hashP{\wOne}=\hashP{\wOneP}$: since $\wOneP$ must belong to the same bucket as $\wOne$ while not being equal to $\wOne$, each operand of the sum is the squared annotation, contributed once for each such $\wOneP$.
Looking at $\distPattern{23}$, we have a case similar to $\distPattern{22}$, but this time there is no multiplicative factor, since $\wOneP$ and $\wTwoP$ are constrained to equal their opposite $\wVec$ counterparts, which are the arguments of both $\genV$ terms.
\item The LHS is the expectation squared. We obtain the RHS by first squaring the sum and then rearranging the operands of the summation using the commutativity and associativity of addition.
%Our current analysis is limited to TIPDBs, where the annotations are in the boolean $\mathbb{B}$ set. Because this is the case, the square of any element is itself.
\item\eqref{eq:s23-four} follows since $\sum\limits_{\wVec\in\pw}\genVParam{\wVec}=\norm{\genV}_1$; the second term also relies on this fact, together with the assumption that $\hash$ distributes worlds uniformly across buckets.
\item\eqref{eq:spaceTwo} results from multiplying the first two terms in \eqref{eq:s23-four} and from the fact that $\sum\limits_{\wVec\in\pw}\genVParam{\wVec}^2=\norm{\genV}_2^2$.
\end{itemize}
\end{Justification}
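As a sanity check on the case analysis above, the five equality patterns and their sign expectations can be verified exhaustively. The Python sketch below is illustrative only: a small integer world set stands in for $\pw$, and independent uniform $\pm1$ signs stand in for $\pol$. It enumerates all 4-tuples, classifies each by its equality pattern, and computes the exact expectation of the product of the four signs, confirming that only the all-equal and two-pair patterns survive.

```python
from itertools import product
from fractions import Fraction

worlds = range(4)  # small stand-in for the set of possible worlds

def shape(t):
    """Equality pattern of a 4-tuple: sorted multiplicities, e.g. (2, 2)."""
    return tuple(sorted((t.count(x) for x in set(t)), reverse=True))

def expected_sign_product(t):
    """Exact E[s(a)s(b)s(c)s(d)] over independent uniform +-1 signs."""
    distinct = sorted(set(t))
    total = Fraction(0)
    for signs in product([-1, 1], repeat=len(distinct)):
        s = dict(zip(distinct, signs))
        total += s[t[0]] * s[t[1]] * s[t[2]] * s[t[3]]
    return total / 2 ** len(distinct)

# Group every 4-tuple of worlds by its pattern; record each pattern's expectations.
results = {}
for t in product(worlds, repeat=4):
    results.setdefault(shape(t), set()).add(expected_sign_product(t))

print(sorted(results))  # exactly five patterns occur
# only the all-equal (4,) and two-pair (2, 2) patterns have nonzero expectation
```

The five keys correspond to the five cases in the text; every pattern with a variable of odd multiplicity has expectation zero, because a lone $\pm1$ sign averages out.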
%In both equations, the sum of $\genVParam{\wVec}$ over all $\wVec \in \pw$ is $\numWorldsP$ since as noted in equation \eqref{eq:mu} we are summing the number of worlds a tuple $t$ appears in, and for a TIPDB, that is exactly 2 to the power of the number of tuples in the TIPDB (due to the independence of tuples) times tuple $t$'s probability.
In equation \eqref{eq:spaceOne} we have a multiplicative factor which, in expectation, is the number of worlds $|\pw|$ divided evenly across the $\sketchCols$ buckets, minus the one world that $\wVecPrime$ cannot be. This factor multiplies the sum of squares of the world values.
Equation \eqref{eq:spaceTwo} pairs each of the $|\pw|$ worlds with all of the other worlds appearing in the same bucket. The equation is first rearranged by allowing $\wVecPrime$ to duplicate $\wVec$ in the second summation and then subtracting the resulting extra product afterwards. The product in the expectation yields two factors: the first is simply the sum of the vector values, and the second is the same sum divided by the number of buckets. Finally, we subtract the quantity that should not be there, namely the sum of squares within a bucket arising from the case $\wVecPrime=\wVec$.
Recall that $\sdRel=\frac{\sd}{\mu}$.% where $\mu$ is defined as $\numWorldsP$ in \eqref{eq:mu} for TIDB and $\norm{\genV}\prob$ for general $\genV$ in \eqref{eq:gen-mu}.
Since the sketch runs multiple independent trials, a per-trial probability of exceeding the error bound $\errB$ that is smaller than one half ensures that the median of all trials stays within the error bound with high probability. Expressing the error relative to $\mu$ in Chebyshev's inequality yields
For the case when $\Delta=\mu\epsilon$, taking both Chebyshev bounds, setting them equal to each other, simplifying and solving for $\sketchCols$ results in
Notice that the constant term can be viewed as a vector of $1$'s with size $n$ (the size of $\genV$). Calling this vector $x$ and taking the L2 norm gives
Note that \eqref{eq:expandL1} can be further tightened by using a vector with ones appearing only in places where $\genV_i > 0$. This tightens \eqref{eq:norm1-cauchy} and \eqref{eq:norm1-sq-cauchy} by replacing the $|\pw|$ factor with $\norm{\genV}_0$.
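The norm relationships used here are easy to validate numerically. The sketch below (with a hypothetical sparse vector; the variable names are ours, not the paper's) checks both the Cauchy--Schwarz bound $\norm{\genV}_1 \le \sqrt{n}\,\norm{\genV}_2$ obtained from the all-ones vector and its tightening via the support size $\norm{\genV}_0$.

```python
import math

def norms(v):
    """Return the L0 (support size), L1, and L2 norms of a vector."""
    l0 = sum(1 for x in v if x != 0)
    l1 = sum(abs(x) for x in v)
    l2 = math.sqrt(sum(x * x for x in v))
    return l0, l1, l2

# hypothetical annotation vector with zero entries
v = [3.0, 0.0, 4.0, 0.0, 1.0, 0.0, 2.0, 0.0]
n = len(v)
l0, l1, l2 = norms(v)

# Cauchy-Schwarz with the all-ones vector: ||v||_1 <= sqrt(n) * ||v||_2
print(l1 <= math.sqrt(n) * l2)
# tightened with the indicator of the support: ||v||_1 <= sqrt(||v||_0) * ||v||_2
print(l1 <= math.sqrt(l0) * l2)
```

Because $\norm{\genV}_0 \le n$, the support-based bound is never worse, and it is strictly better whenever the vector has zero entries, as in this example.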
\item\eqref{eq:l2-bnd1} is the definition of the squared L2 norm.
\item\eqref{eq:l2-bnd2} is an upper bound on the L2 norm: the maximum entry of $\genV$ is greater than or equal to every other entry, so the bound is strict unless every entry equals the maximum.
\item\eqref{eq:l2-bnd3} is given by a simple substitution of notation.
\item\eqref{eq:l2-bnd4} is obtained by the equivalence of pushing the summation inside the product.
\item\eqref{eq:l2-bounds} is the result of substituting the definition of L1 norm.
\item\eqref{eq:sub-bounds1} results from substituting \eqref{eq:l2-bounds} for the L2 norm.
\item\eqref{eq:sub-bounds2} is obtained from substituting \eqref{eq:norm1-cauchy} for the L1 norm and \eqref{eq:norm1-sq-cauchy} for the L1 norm squared terms in both the numerator and denominator.
\item\eqref{eq:sub-bounds3} is the result of further substituting \eqref{eq:l2-bounds} for the newly introduced L2 norm terms in the numerator.
\item\eqref{eq:sub-bounds4} is the result of factoring out common terms in the numerator.
\item\eqref{eq:sub-bounds5} is the result of cancelling out common terms in the numerator and denominator.
\item\eqref{eq:sub-bounds-final} simply rearranges the two numerator terms for clarity.
In the above, recall that $\mu$, the expectation of an estimate, is $\sum\limits_{\wVec\in\pw}\genVParam{\wVec}$, as seen in equation \eqref{eq:allWorlds-est}.
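Returning to the median-of-trials argument used with Chebyshev's inequality above: if each independent trial exceeds the error bound with probability below one half, the median fails only when a majority of trials fail, and that majority event has an exact binomial tail probability. The computation below is illustrative; the per-trial failure probability $1/3$ is an arbitrary choice below $1/2$, not a value from the analysis.

```python
from math import comb
from fractions import Fraction

def median_failure_prob(p, trials):
    """Exact probability that at least half of the independent trials fail,
    which upper-bounds the chance the median estimate exceeds the bound."""
    need = (trials + 1) // 2                    # a majority of trials must fail
    return sum(Fraction(comb(trials, k)) * p**k * (1 - p)**(trials - k)
               for k in range(need, trials + 1))

p = Fraction(1, 3)   # hypothetical per-trial failure probability (below 1/2)
for t in [1, 5, 15, 35]:
    print(t, float(median_failure_prob(p, t)))
# the failure probability of the median shrinks as the number of trials grows
```

Using exact rational arithmetic avoids floating-point noise; the tail decays exponentially in the number of trials, which is what justifies taking the median.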