@@ -88,7 +88,7 @@ class FileStreamSource(
   }
 
   /**
-   * Returns the next batch of data that is available after `start`, if any is available.
+   * Returns the data that is between the offsets (`start`, `end`].
Contributor Author commented: This doc should be updated, following `Source.getBatch()`'s doc change from "Returns the next batch of data that is available after `start`, if any is available." to "Returns the data that is between the offsets (`start`, `end`]."

   */
  override def getBatch(start: Option[Offset], end: Offset): DataFrame = {
    val startId = start.map(_.asInstanceOf[LongOffset].offset).getOrElse(-1L)
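The doc change above pins down the getBatch contract: `start` is exclusive, `end` is inclusive, and `start == None` means the batch begins with the first available record. A minimal, self-contained sketch of that range arithmetic (the helper name and the standalone `LongOffset` stand-in are illustrative, not code from this PR):

```scala
// Illustrative stand-in for Spark's LongOffset.
case class LongOffset(offset: Long)

// Resolve the (`start`, `end`] range to concrete batch ids.
def batchIdsFor(start: Option[LongOffset], end: LongOffset): Seq[Long] = {
  val startId = start.map(_.offset).getOrElse(-1L) // None => begin at the first record
  (startId + 1) to end.offset                      // exclusive of start, inclusive of end
}

// batchIdsFor(Some(LongOffset(2)), LongOffset(5)) == Seq(3L, 4L, 5L)
// batchIdsFor(None, LongOffset(1))                == Seq(0L, 1L)
```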
@@ -27,7 +27,7 @@ import org.apache.spark.sql.DataFrame
 trait Sink {
 
   /**
-   * Adds a batch of data to this sink. The data for a given `batchId` is deterministic and if
+   * Adds a batch of data to this sink. The data for a given `batchId` is deterministic and if
    * this method is called more than once with the same batchId (which will happen in the case of
    * failures), then `data` should only be added once.
    */
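Because `addBatch` can be re-invoked with the same `batchId` after a failure, a conforming sink has to deduplicate by batch id. A hedged sketch of one way to do that (the class name is hypothetical; it assumes the `Sink` trait and the `addBatch(batchId: Long, data: DataFrame)` signature shown in this file):

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.execution.streaming.Sink

// Illustrative sink that adds each batch exactly once, even when the
// engine replays a batch after a failure.
class ExactlyOnceSink extends Sink {
  @volatile private var lastCommittedBatchId: Long = -1L

  override def addBatch(batchId: Long, data: DataFrame): Unit = synchronized {
    if (batchId <= lastCommittedBatchId) {
      // Replay of an already-committed batch: `data` must not be added again.
    } else {
      data.collect() // materialize and commit the batch (destination-specific)
      lastCommittedBatchId = batchId
    }
  }
}
```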
@@ -34,7 +34,7 @@ trait Source {
   def getOffset: Option[Offset]
 
   /**
-   * Returns the data that is between the offsets (`start`, `end`]. When `start` is `None` then
+   * Returns the data that is between the offsets (`start`, `end`]. When `start` is `None` then
    * the batch should begin with the first available record. This method must always return the
    * same data for a particular `start` and `end` pair.
    */
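The last sentence of this doc, requiring the same data for a particular `start` and `end` pair, is what lets the engine replay a batch during recovery. A hedged sketch of a check for it (the helper is hypothetical, and the `execution.streaming` package path is assumed from this file's location):

```scala
import org.apache.spark.sql.execution.streaming.{Offset, Source}

// Two requests for the same (`start`, `end`] pair must yield identical data.
def isDeterministic(source: Source, start: Option[Offset], end: Offset): Boolean = {
  val first = source.getBatch(start, end).collect()
  val second = source.getBatch(start, end).collect()
  first.sameElements(second) // must hold for a contract-abiding Source
}
```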
@@ -91,7 +91,7 @@ case class MemoryStream[A : Encoder](id: Int, sqlContext: SQLContext)
   }
 
   /**
-   * Returns the next batch of data that is available after `start`, if any is available.
+   * Returns the data that is between the offsets (`start`, `end`].
    */
   override def getBatch(start: Option[Offset], end: Offset): DataFrame = {
     val startOrdinal =
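The diff is truncated at `val startOrdinal =`, but the shape of the implementation follows from the contract: `MemoryStream` holds its batches in order, so getBatch can translate (`start`, `end`] into an index range. A hedged sketch of that translation (the buffer and names are assumptions, since the real code is cut off above):

```scala
import scala.collection.mutable.ArrayBuffer

// Illustrative stand-in for Spark's LongOffset.
case class LongOffset(offset: Long)

def sliceBatches[T](batches: ArrayBuffer[T], start: Option[LongOffset], end: LongOffset): Seq[T] = {
  // The +1 shifts both bounds so that slice (start-inclusive, end-exclusive)
  // realizes (`start`, `end`]: exclusive of start, inclusive of end.
  val startOrdinal = start.map(_.offset).getOrElse(-1L).toInt + 1
  val endOrdinal = end.offset.toInt + 1
  batches.slice(startOrdinal, endOrdinal)
}

// sliceBatches(ArrayBuffer("a", "b", "c"), Some(LongOffset(0)), LongOffset(2)) == Seq("b", "c")
```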