[SPARK-19591][ML][MLLIB][FOLLOWUP] Add sample weights to decision trees - fix tolerance
This is a follow-up to PR: https://github.com/apache/spark/pull/21632

## What changes were proposed in this pull request?

This PR tunes the tolerance used for deciding whether to add zero feature values to a value-count map (where the key is the feature value and the value is the weighted count of those feature values). In the previous PR the tolerance scaled with the square of the unweighted number of samples, which is too aggressive when the number of unweighted samples is large. Unfortunately, using just `Utils.EPSILON * unweightedNumSamples` is not enough either, so I multiplied that by a constant factor tuned via the testing procedure below.

## How was this patch tested?

I manually ran the sample-weight tests for the decision tree regressor to check whether the tolerance was large enough to exclude zero feature values. E.g. in SBT:

```
./build/sbt
> project mllib
> testOnly *DecisionTreeRegressorSuite -- -z "training with sample weights"
```

For validation, I added a print inside the `if` in the code below and verified that the tolerance was large enough that we would not include zero features (which don't exist in that test):

```scala
val valueCountMap = if (weightedNumSamples - partNumSamples > tolerance) {
  print("should not print this")
  partValueCountMap + (0.0 -> (weightedNumSamples - partNumSamples))
} else {
  partValueCountMap
}
```

Closes #23682 from imatiach-msft/ilmat/sample-weights-tol.

Authored-by: Ilya Matiach <ilmat@microsoft.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
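To illustrate why the quadratic scaling was too aggressive, here is a hypothetical sketch (not part of the patch) comparing the two tolerance formulas for a large sample count. `EPSILON` here is a stand-in for Spark's `Utils.EPSILON`, approximated by the double-precision machine epsilon via `java.lang.Math.ulp(1.0)`; the sample count is an assumed illustrative value.

```scala
// Hypothetical illustration (not from the patch): why the old quadratic
// tolerance is too aggressive for large sample counts.
object ToleranceSketch {
  // Stand-in for Spark's Utils.EPSILON: the double-precision machine
  // epsilon, ~2.22e-16.
  val EPSILON: Double = java.lang.Math.ulp(1.0)

  // Old tolerance: scales with the square of the unweighted sample count.
  def oldTolerance(n: Double): Double = EPSILON * n * n

  // New tolerance: scales linearly, with a tuned constant factor of 100.
  def newTolerance(n: Double): Double = EPSILON * n * 100

  def main(args: Array[String]): Unit = {
    val n = 1e9 // an assumed large unweighted sample count
    // The old tolerance is on the order of hundreds here, large enough to
    // swallow genuine weighted counts; the new one stays far below 1.
    println(s"old: ${oldTolerance(n)}, new: ${newTolerance(n)}")
  }
}
```

With a billion samples the old formula yields a tolerance in the hundreds, so a real weighted count of zero-valued features could fall inside it and be silently dropped; the linear formula keeps the tolerance many orders of magnitude below any plausible weighted count.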
This commit is contained in:
parent bc6f191451 · commit b3b62ba303
```diff
@@ -1050,8 +1050,11 @@ private[spark] object RandomForest extends Logging with Serializable {
       // Calculate the expected number of samples for finding splits
       val weightedNumSamples = samplesFractionForFindSplits(metadata) *
         metadata.weightedNumExamples
+      // scale tolerance by number of samples with constant factor
+      // Note: constant factor was tuned by running some tests where there were no zero
+      // feature values and validating we are never within tolerance
+      val tolerance = Utils.EPSILON * unweightedNumSamples * 100
       // add expected zero value count and get complete statistics
-      val tolerance = Utils.EPSILON * unweightedNumSamples * unweightedNumSamples
       val valueCountMap = if (weightedNumSamples - partNumSamples > tolerance) {
         partValueCountMap + (0.0 -> (weightedNumSamples - partNumSamples))
       } else {
```
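The hunk's logic can be sketched in isolation: zero feature values are not stored explicitly, so their weighted count is inferred from the gap between the expected total weight and the weight actually observed. This is a self-contained sketch with hypothetical inputs, not the Spark implementation; `EPSILON` again stands in for `Utils.EPSILON`, and the helper name `completeValueCounts` is invented for illustration.

```scala
// Standalone sketch (hypothetical, mirroring the patched hunk): infer the
// weighted count of zero-valued features from the missing weight.
object ZeroCountSketch {
  // Stand-in for Spark's Utils.EPSILON (double machine epsilon, ~2.22e-16).
  val EPSILON: Double = java.lang.Math.ulp(1.0)

  def completeValueCounts(
      partValueCountMap: Map[Double, Double], // feature value -> weighted count seen
      weightedNumSamples: Double,             // expected total sample weight
      unweightedNumSamples: Double): Map[Double, Double] = {
    // Total weight actually accounted for by observed (non-zero) values.
    val partNumSamples = partValueCountMap.values.sum
    // The patched tolerance: linear in the sample count, constant factor 100.
    val tolerance = EPSILON * unweightedNumSamples * 100
    if (weightedNumSamples - partNumSamples > tolerance) {
      // A meaningful amount of weight is missing: attribute it to value 0.0.
      partValueCountMap + (0.0 -> (weightedNumSamples - partNumSamples))
    } else {
      // The gap is within floating-point noise: no zero values to add.
      partValueCountMap
    }
  }
}
```

For example, if 5.0 units of weight are expected but only 3.0 are observed, the missing 2.0 is attributed to feature value 0.0; if all expected weight is observed, the map is returned unchanged.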