Merged
2 changes: 1 addition & 1 deletion docs/extend/losses.md
@@ -93,7 +93,7 @@ There are also a few functions in ```self.distance``` that provide some of this

## Using ```indices_tuple```

This is an optional argument passed in from the outside. (See the [overview](../../#using-losses-and-miners-in-your-training-loop) for an example.) It currently has 3 possible forms:
This is an optional argument passed in from the outside. (See the [overview](../index.md#using-losses-and-miners-in-your-training-loop) for an example.) It currently has 3 possible forms:

- ```None```
- A tuple of size 4, representing the indices of mined pairs (anchors, positives, anchors, negatives)
Binary file added docs/imgs/PNP_loss_equation.png
6 changes: 3 additions & 3 deletions docs/index.md
@@ -13,7 +13,7 @@ This library contains 9 modules, each of which can be used independently within
## How loss functions work

### Using losses and miners in your training loop
Let’s initialize a plain [TripletMarginLoss](losses/#tripletmarginloss):
Let’s initialize a plain [TripletMarginLoss](losses.md#tripletmarginloss):
```python
from pytorch_metric_learning import losses
loss_func = losses.TripletMarginLoss()
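# A forward/backward sketch continuing from the line above (the tensor
# shapes below are illustrative assumptions, not part of the docs):
import torch
embeddings = torch.randn(16, 64, requires_grad=True)  # stand-in for model(data)
labels = torch.randint(0, 3, (16,))                    # integer class labels
loss = loss_func(embeddings, labels)
loss.backward()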
@@ -95,8 +95,8 @@ If you're interested in [MoCo](https://arxiv.org/pdf/1911.05722.pdf)-style self-

## Highlights of the rest of the library

- For a convenient way to train your model, take a look at the [trainers](trainers).
- Want to test your model's accuracy on a dataset? Try the [testers](testers/).
- For a convenient way to train your model, take a look at the [trainers](trainers.md).
- Want to test your model's accuracy on a dataset? Try the [testers](testers.md).
- To compute the accuracy of an embedding space directly, use [AccuracyCalculator](accuracy_calculation.md).

If you're short of time and want a complete train/test workflow, check out the [example Google Colab notebooks](https://github.com/KevinMusgrave/pytorch-metric-learning/tree/master/examples).
24 changes: 24 additions & 0 deletions docs/losses.md
@@ -889,6 +889,30 @@ loss = loss_fn(embeddings, labels)
```python
losses.PNPLoss(b=2, alpha=1, anneal=0.01, variant="O", **kwargs)
```
**Equation**:

![PNP_loss_equation](imgs/PNP_loss_equation.png){: style="height:300px"}

**Parameters**:

* **b**: The boundary of PNP-Ib (see equation 9 above). The paper uses 2.
* **alpha**: The power of PNP-Dq (see equation 13 above). The paper uses 8.
* **anneal**: The temperature of the sigmoid function. (The sigmoid function is `R` in the equations above.) The paper uses 0.01.
* **variant**: The name of the variant. The options are {"Ds", "Dq", "Iu", "Ib", "O"}. The paper uses "Dq".

**Default distance**:

- [```CosineSimilarity()```](distances.md#cosinesimilarity)
- This is the only compatible distance.

**Default reducer**:

- [MeanReducer](reducers.md#meanreducer)

**Reducer input**:

* **loss**: The loss per element that has at least 1 positive in the batch. Reduction type is ```"element"```.



## ProxyAnchorLoss