diff --git a/docs/extend/losses.md b/docs/extend/losses.md
index 4add13d0..825faf4c 100644
--- a/docs/extend/losses.md
+++ b/docs/extend/losses.md
@@ -93,7 +93,7 @@ There are also a few functions in ```self.distance``` that provide some of this
 
 ## Using ```indices_tuple```
 
-This is an optional argument passed in from the outside. (See the [overview](../../#using-losses-and-miners-in-your-training-loop) for an example.) It currently has 3 possible forms:
+This is an optional argument passed in from the outside. (See the [overview](../index.md#using-losses-and-miners-in-your-training-loop) for an example.) It currently has 3 possible forms:
 
 - ```None```
 - A tuple of size 4, representing the indices of mined pairs (anchors, positives, anchors, negatives)
diff --git a/docs/imgs/PNP_loss_equation.png b/docs/imgs/PNP_loss_equation.png
new file mode 100644
index 00000000..8bc2ca46
Binary files /dev/null and b/docs/imgs/PNP_loss_equation.png differ
diff --git a/docs/index.md b/docs/index.md
index 3503686b..bb572f86 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -13,7 +13,7 @@ This library contains 9 modules, each of which can be used independently within
 ## How loss functions work
 
 ### Using losses and miners in your training loop
-Let’s initialize a plain [TripletMarginLoss](losses/#tripletmarginloss):
+Let’s initialize a plain [TripletMarginLoss](losses.md#tripletmarginloss):
 ```python
 from pytorch_metric_learning import losses
 loss_func = losses.TripletMarginLoss()
 ```
@@ -95,8 +95,8 @@ If you're interested in [MoCo](https://arxiv.org/pdf/1911.05722.pdf)-style self-
 
 ## Highlights of the rest of the library
 
-- For a convenient way to train your model, take a look at the [trainers](trainers).
-- Want to test your model's accuracy on a dataset? Try the [testers](testers/).
+- For a convenient way to train your model, take a look at the [trainers](trainers.md).
+- Want to test your model's accuracy on a dataset? Try the [testers](testers.md).
 - To compute the accuracy of an embedding space directly, use [AccuracyCalculator](accuracy_calculation.md).
 
 If you're short of time and want a complete train/test workflow, check out the [example Google Colab notebooks](https://github.com/KevinMusgrave/pytorch-metric-learning/tree/master/examples).
diff --git a/docs/losses.md b/docs/losses.md
index 85509cf9..2c0bfd14 100644
--- a/docs/losses.md
+++ b/docs/losses.md
@@ -889,6 +889,30 @@ loss = loss_fn(embeddings, labels)
 ```python
 losses.PNPLoss(b=2, alpha=1, anneal=0.01, variant="O", **kwargs)
 ```
+
+**Equation**:
+
+![PNP_loss_equation](imgs/PNP_loss_equation.png){: style="height:300px"}
+
+**Parameters**:
+
+* **b**: The boundary of PNP-Ib (see equation 9 above). The paper uses 2.
+* **alpha**: The power of PNP-Dq (see equation 13 above). The paper uses 8.
+* **anneal**: The temperature of the sigmoid function. (The sigmoid function is `R` in the equations above.) The paper uses 0.01.
+* **variant**: The name of the variant. The options are {"Ds", "Dq", "Iu", "Ib", "O"}. The paper uses "Dq".
+
+**Default distance**:
+
+- [```CosineSimilarity()```](distances.md#cosinesimilarity)
+    - This is the only compatible distance.
+
+**Default reducer**:
+
+- [MeanReducer](reducers.md#meanreducer)
+
+**Reducer input**:
+
+* **loss**: The loss per element that has at least 1 positive in the batch. Reduction type is ```"element"```.
+
 ## ProxyAnchorLoss