[SPARK-] [MLLIB] minor fix on tokenizer doc

A trivial fix for the comments of RegexTokenizer. Maybe this is too small, yet I just noticed it and think it can be quite misleading. I can create a jira if necessary.

Author: Yuhao Yang <hhbyyh@gmail.com>

Closes #7791 from hhbyyh/docFix and squashes the following commits:

cdf2542 [Yuhao Yang] minor fix on tokenizer doc
This commit is contained in:
parent d212a31422
commit 9c0501c5d0
@@ -50,7 +50,7 @@ class Tokenizer(override val uid: String) extends UnaryTransformer[String, Seq[S
 /**
  * :: Experimental ::
  * A regex based tokenizer that extracts tokens either by using the provided regex pattern to split
- * the text (default) or repeatedly matching the regex (if `gaps` is true).
+ * the text (default) or repeatedly matching the regex (if `gaps` is false).
  * Optional parameters also allow filtering tokens using a minimal length.
  * It returns an array of strings that can be empty.
  */
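The corrected comment can be illustrated with plain Scala regex operations (a minimal sketch of the two modes, not Spark's RegexTokenizer itself): when `gaps` is true the pattern describes the separators between tokens, so the text is split on it; when `gaps` is false the pattern describes the tokens, so it is matched repeatedly against the text.

```scala
// Sample input; the punctuation shows how the two modes differ.
val text = "Hello, Spark ML!"

// gaps = true (the default): the regex matches the gaps (separators),
// so the text is split wherever the pattern matches.
val byGaps = text.split("\\s+").toList

// gaps = false: the regex matches the tokens themselves,
// so it is applied repeatedly to extract every match.
val byMatch = "\\w+".r.findAllIn(text).toList
```

Splitting on whitespace keeps the punctuation attached to the words, while repeated matching of `\w+` extracts only the word characters, which is exactly the distinction the doc fix restores.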