SPARK-1916
The changes could be ported back to 0.9 as well.
Changed in.read to in.readFully so that the whole input stream is read, rather than just the first 1020 bytes.
This should be OK, considering that Flume caps the body size at 32K by default.
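The distinction matters because a single `InputStream.read(buf)` call is only guaranteed to return *some* bytes, not to fill the buffer, whereas `DataInput.readFully(buf)` loops until the buffer is full or throws `EOFException`. A minimal Java sketch (the patched code is Scala; the `ChunkedStream` class here is a hypothetical stand-in for a stream that delivers data in small chunks, as a socket or object stream can):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class ReadVsReadFully {
    // Deliberately returns at most 4 bytes per read() call, mimicking a
    // stream whose data arrives in chunks smaller than the buffer.
    static class ChunkedStream extends ByteArrayInputStream {
        ChunkedStream(byte[] buf) { super(buf); }
        @Override
        public int read(byte[] b, int off, int len) {
            return super.read(b, off, Math.min(len, 4));
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] body = new byte[16];

        // A single read() may stop after the first chunk: here it
        // returns 4, leaving 12 bytes of the buffer unfilled.
        byte[] partial = new byte[16];
        int n = new ChunkedStream(body).read(partial);
        System.out.println("read() returned " + n + " bytes");

        // readFully() keeps reading until the buffer is completely
        // filled (or throws EOFException if the stream ends early).
        byte[] full = new byte[16];
        new DataInputStream(new ChunkedStream(body)).readFully(full);
        System.out.println("readFully filled all " + full.length + " bytes");
    }
}
```

With `in.read(bodyBuff)`, a body longer than whatever the first read returns would be silently truncated; `in.readFully(bodyBuff)` guarantees the entire serialized body is read.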
Author: David Lemieux <david.lemieux@radialpoint.com>
Closes #865 from lemieud/SPARK-1916 and squashes the following commits:
a265673 [David Lemieux] Updated SparkFlumeEvent to read the whole stream rather than the first X bytes.
(cherry picked from commit 0b769b73fb)
Signed-off-by: Patrick Wendell <pwendell@gmail.com>
commit 4312cf0bad (parent 7801d44fd3)
@@ -63,7 +63,7 @@ class SparkFlumeEvent() extends Externalizable {
   def readExternal(in: ObjectInput) {
     val bodyLength = in.readInt()
     val bodyBuff = new Array[Byte](bodyLength)
-    in.read(bodyBuff)
+    in.readFully(bodyBuff)

     val numHeaders = in.readInt()
     val headers = new java.util.HashMap[CharSequence, CharSequence]
|