@@ -236,6 +236,23 @@ class EdgeDataLoader:
 of blocks as computation dependency of the said minibatch for edge classification,
 edge regression, and link prediction.
 
+For each iteration, the object will yield
+
+* A tensor of input nodes necessary for computing the representation on edges, or
+  a dictionary of node type names and such tensors.
+
+* A subgraph that contains only the edges in the minibatch and their incident nodes.
+  Note that the graph has the same metagraph as the original graph.
+
+* If a negative sampler is given, another graph that contains the "negative edges",
+  connecting the source and destination nodes yielded from the given negative sampler.
+
+* A list of blocks necessary for computing the representation of the incident nodes
+  of the edges in the minibatch.
+
+For more details, please refer to :ref:`guide-minibatch-edge-classification-sampler`
+and :ref:`guide-minibatch-link-classification-sampler`.
+
 Parameters
 ----------
 g : DGLGraph
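The per-iteration output described above can be sketched in plain Python. The objects here are hypothetical stand-ins (real batches contain DGL tensors and graph objects), and the helper name `unpack_batch` is made up for illustration; the point is only the tuple shape: four items when a negative sampler is given, three otherwise.

```python
# Hypothetical sketch of unpacking one EdgeDataLoader batch.
# Plain strings stand in for the DGL tensors/graphs a real batch contains.
def unpack_batch(batch):
    if len(batch) == 4:
        # Negative sampler given: (input nodes, positive pair graph,
        # negative pair graph, list of blocks).
        input_nodes, pair_graph, neg_pair_graph, blocks = batch
    else:
        # No negative sampler: (input nodes, positive pair graph, blocks).
        input_nodes, pair_graph, blocks = batch
        neg_pair_graph = None
    return input_nodes, pair_graph, neg_pair_graph, blocks

print(unpack_batch(("input_nodes", "pos_graph", ["block0", "block1"])))
```

Normalizing to a single four-element shape like this keeps a downstream training loop free of branching on whether negative sampling is enabled.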
@@ -301,8 +318,9 @@ class EdgeDataLoader:
 >>> reverse_eids = torch.cat([torch.arange(E, 2 * E), torch.arange(0, E)])
 
 Note that the sampled edges as well as their reverse edges are removed from
-computation dependencies of the incident nodes. This is a common trick to avoid
-information leakage.
+computation dependencies of the incident nodes. That is, these edges will not
+be involved in neighbor sampling and message aggregation. This is a common
+trick to avoid information leakage.
 
 >>> sampler = dgl.dataloading.MultiLayerNeighborSampler([15, 10, 5])
 >>> dataloader = dgl.dataloading.EdgeDataLoader(
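The `reverse_eids` mapping built with `torch.cat` above can be checked with a small pure-Python sketch (lists in place of tensors, and a small `E` chosen for illustration): edge `i` maps to its reverse `i + E`, and the mapping is its own inverse.

```python
# Pure-Python equivalent of
# torch.cat([torch.arange(E, 2 * E), torch.arange(0, E)])
# for a hypothetical graph with E = 3 edge pairs.
E = 3
reverse_eids = list(range(E, 2 * E)) + list(range(0, E))

# The mapping is an involution: the reverse of edge i's reverse is i itself,
# which is what exclude='reverse_id' relies on.
for i in range(2 * E):
    assert reverse_eids[reverse_eids[i]] == i

print(reverse_eids)  # -> [3, 4, 5, 0, 1, 2]
```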