Currently we are using the Hadoop Deserializer class for Pangool's custom serialization: http://pangool.net/userguide/custom_serialization.html
But some serialization mechanisms need the buffer size in order to deserialize properly (e.g. Protocol Buffers). With the current API, the serializer has to write the buffer size to disk itself... but Pangool already writes that size! So we would be writing the buffer size twice.

That can be avoided by passing the buffer size to the deserialize() method. It would mean we'd have to stop using Hadoop's Deserializer class and define our own interface instead.
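Just to illustrate the idea, here's a minimal sketch of what such an interface could look like. The name SizeAwareDeserializer (and the whole shape of it) is hypothetical, not an actual Pangool or Hadoop API:

```java
import java.io.IOException;
import java.io.InputStream;

// Hypothetical replacement for Hadoop's Deserializer: deserialize() also
// receives the byte length Pangool already recorded for the field, so
// length-prefixed formats like Protocol Buffers don't need to write the
// size a second time.
public interface SizeAwareDeserializer<T> {

  void open(InputStream in) throws IOException;

  // 'size' is the number of bytes belonging to this object; implementations
  // (e.g. a Protocol Buffers one) can read exactly that many bytes from the
  // stream instead of relying on their own embedded length prefix.
  T deserialize(T reuse, int size) throws IOException;

  void close() throws IOException;
}
```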