
Commit dd40036

Merge pull request #679 from KevinMusgrave/master
master into dev
2 parents bd7b5ed + 3a14f82

File tree

7 files changed: +40 additions, -89 deletions


.github/CODE_OF_CONDUCT.md

Lines changed: 0 additions & 76 deletions
This file was deleted.

CONTENTS.md

Lines changed: 2 additions & 0 deletions
@@ -15,6 +15,7 @@
 | [**CircleLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#circleloss) | [Circle Loss: A Unified Perspective of Pair Similarity Optimization](https://arxiv.org/pdf/2002.10857.pdf)
 | [**ContrastiveLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#contrastiveloss) | [Dimensionality Reduction by Learning an Invariant Mapping](http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf)
 | [**CosFaceLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#cosfaceloss) | - [CosFace: Large Margin Cosine Loss for Deep Face Recognition](https://arxiv.org/pdf/1801.09414.pdf) <br/> - [Additive Margin Softmax for Face Verification](https://arxiv.org/pdf/1801.05599.pdf)
+| [**DynamicSoftMarginLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#dynamicsoftmarginloss) | [Learning Local Descriptors With a CDF-Based Dynamic Soft Margin](https://openaccess.thecvf.com/content_ICCV_2019/papers/Zhang_Learning_Local_Descriptors_With_a_CDF-Based_Dynamic_Soft_Margin_ICCV_2019_paper.pdf)
 | [**FastAPLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#fastaploss) | [Deep Metric Learning to Rank](http://openaccess.thecvf.com/content_CVPR_2019/papers/Cakir_Deep_Metric_Learning_to_Rank_CVPR_2019_paper.pdf)
 | [**GeneralizedLiftedStructureLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#generalizedliftedstructureloss) | [In Defense of the Triplet Loss for Person Re-Identification](https://arxiv.org/pdf/1703.07737.pdf)
 | [**HistogramLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#histogramloss) | [Learning Deep Embeddings with Histogram Loss](https://arxiv.org/pdf/1611.00822.pdf)
@@ -33,6 +34,7 @@
 | [**PNPLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#pnploss) | [Rethinking the Optimization of Average Precision: Only Penalizing Negative Instances before Positive Ones is Enough](https://arxiv.org/pdf/2102.04640.pdf)
 | [**ProxyAnchorLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#proxyanchorloss) | [Proxy Anchor Loss for Deep Metric Learning](https://arxiv.org/pdf/2003.13911.pdf)
 | [**ProxyNCALoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#proxyncaloss) | [No Fuss Distance Metric Learning using Proxies](https://arxiv.org/pdf/1703.07464.pdf)
+| [**RankedListLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#rankedlistloss) | [Ranked List Loss for Deep Metric Learning](https://arxiv.org/abs/1903.03238)
 | [**SignalToNoiseRatioContrastiveLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#signaltonoiseratiocontrastiveloss) | [Signal-to-Noise Ratio: A Robust Distance Metric for Deep Metric Learning](http://openaccess.thecvf.com/content_CVPR_2019/papers/Yuan_Signal-To-Noise_Ratio_A_Robust_Distance_Metric_for_Deep_Metric_Learning_CVPR_2019_paper.pdf)
 | [**SoftTripleLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#softtripleloss) | [SoftTriple Loss: Deep Metric Learning Without Triplet Sampling](http://openaccess.thecvf.com/content_ICCV_2019/papers/Qian_SoftTriple_Loss_Deep_Metric_Learning_Without_Triplet_Sampling_ICCV_2019_paper.pdf)
 | [**SphereFaceLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#spherefaceloss) | [SphereFace: Deep Hypersphere Embedding for Face Recognition](https://arxiv.org/pdf/1704.08063.pdf)
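
For orientation, here is a minimal usage sketch of the two losses this PR adds to the table. The constructor arguments are assumptions (the `margin` and `Tn` values for RankedListLoss are hypothetical), so check the documentation pages linked above for the actual signatures:

```python
# Hedged sketch of the two newly listed losses.
# NOTE: constructor arguments are assumptions, not documented defaults;
# see the losses documentation linked in the table above.
import torch
from pytorch_metric_learning import losses

embeddings = torch.randn(32, 128)      # a batch of 32 embeddings
labels = torch.randint(0, 10, (32,))   # integer class labels

# Assuming the default constructor suffices:
dsm_loss = losses.DynamicSoftMarginLoss()(embeddings, labels)

# margin and Tn are hypothetical example values:
rll_loss = losses.RankedListLoss(margin=0.4, Tn=10)(embeddings, labels)
```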

README.md

Lines changed: 10 additions & 9 deletions
@@ -18,17 +18,16 @@
 
 ## News
 
+**December 15**: v2.4.0
+- Added [DynamicSoftMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#dynamicsoftmarginloss).
+- Added [RankedListLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#rankedlistloss).
+- See the [release notes](https://github.com/KevinMusgrave/pytorch-metric-learning/releases/tag/v2.4.0).
+- Thank you [domenicoMuscill0](https://github.com/domenicoMuscill0), [Puzer](https://github.com/Puzer), [interestingzhuo](https://github.com/interestingzhuo), and [GaetanLepage](https://github.com/GaetanLepage).
+
 **July 25**: v2.3.0
 - Added [HistogramLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#histogramloss)
 - Thank you [domenicoMuscill0](https://github.com/domenicoMuscill0).
 
-**June 18**: v2.2.0
-- Added [ManifoldLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#manifoldloss) and [P2SGradLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#p2sgradloss).
-- Added a `symmetric` flag to [SelfSupervisedLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#selfsupervisedloss).
-- See the [release notes](https://github.com/KevinMusgrave/pytorch-metric-learning/releases/tag/v2.2.0).
-- Thank you [domenicoMuscill0](https://github.com/domenicoMuscill0).
-
-
 ## Documentation
 - [**View the documentation here**](https://kevinmusgrave.github.io/pytorch-metric-learning/)
 - [**View the installation instructions here**](https://github.com/KevinMusgrave/pytorch-metric-learning#installation)
@@ -227,7 +226,7 @@ Thanks to the contributors who made pull requests!
 
 | Contributor | Highlights |
 | -- | -- |
-|[domenicoMuscill0](https://github.com/domenicoMuscill0)| - [ManifoldLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#manifoldloss) <br/> - [P2SGradLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#p2sgradloss) <br/> - [HistogramLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#histogramloss)
+|[domenicoMuscill0](https://github.com/domenicoMuscill0)| - [ManifoldLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#manifoldloss) <br/> - [P2SGradLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#p2sgradloss) <br/> - [HistogramLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#histogramloss) <br/> - [DynamicSoftMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#dynamicsoftmarginloss) <br/> - [RankedListLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#rankedlistloss) |
 |[mlopezantequera](https://github.com/mlopezantequera) | - Made the [testers](https://kevinmusgrave.github.io/pytorch-metric-learning/testers) work on any combination of query and reference sets <br/> - Made [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) work with arbitrary label comparisons |
 |[cwkeam](https://github.com/cwkeam) | - [SelfSupervisedLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#selfsupervisedloss) <br/> - [VICRegLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#vicregloss) <br/> - Added mean reciprocal rank accuracy to [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) <br/> - BaseLossWrapper|
 |[marijnl](https://github.com/marijnl)| - [BatchEasyHardMiner](https://kevinmusgrave.github.io/pytorch-metric-learning/miners/#batcheasyhardminer) <br/> - [TwoStreamMetricLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/trainers/#twostreammetricloss) <br/> - [GlobalTwoStreamEmbeddingSpaceTester](https://kevinmusgrave.github.io/pytorch-metric-learning/testers/#globaltwostreamembeddingspacetester) <br/> - [Example using trainers.TwoStreamMetricLoss](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/notebooks/TwoStreamMetricLoss.ipynb) |
@@ -246,7 +245,9 @@ Thanks to the contributors who made pull requests!
 | [layumi](https://github.com/layumi) | [InstanceLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#instanceloss) |
 | [NoTody](https://github.com/NoTody) | Helped add `ref_emb` and `ref_labels` to the distributed wrappers. |
 | [ElisonSherton](https://github.com/ElisonSherton) | Fixed an edge case in ArcFaceLoss. |
-| [stompsjo](https://github.com/stompsjo) | Improved documentation for NTXentLoss |
+| [stompsjo](https://github.com/stompsjo) | Improved documentation for NTXentLoss. |
+| [Puzer](https://github.com/Puzer) | Bug fix for PNPLoss. |
+| [GaetanLepage](https://github.com/GaetanLepage) | |
 | [z1w](https://github.com/z1w) | |
 | [thinline72](https://github.com/thinline72) | |
 | [tpanum](https://github.com/tpanum) | |

docs/extend/losses.md

Lines changed: 1 addition & 1 deletion
@@ -93,7 +93,7 @@ There are also a few functions in ```self.distance``` that provide some of this
 
 ## Using ```indices_tuple```
 
-This is an optional argument passed in from the outside. (See the [overview](../../#using-losses-and-miners-in-your-training-loop) for an example.) It currently has 3 possible forms:
+This is an optional argument passed in from the outside. (See the [overview](../index.md#using-losses-and-miners-in-your-training-loop) for an example.) It currently has 3 possible forms:
 
 - ```None```
 - A tuple of size 4, representing the indices of mined pairs (anchors, positives, anchors, negatives)
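
For readers of this hunk: in practice `indices_tuple` is usually the output of a miner, forwarded straight to the loss. A minimal sketch of that pattern (the specific miner and loss here are just examples):

```python
# indices_tuple is typically a miner's output, passed to the loss call.
import torch
from pytorch_metric_learning import losses, miners

embeddings = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))

miner = miners.MultiSimilarityMiner()      # example miner; returns pair indices
loss_fn = losses.TripletMarginLoss()       # example loss

indices_tuple = miner(embeddings, labels)  # the tuple-of-4 form (a1, p, a2, n)
loss = loss_fn(embeddings, labels, indices_tuple)

# Omitting it (indices_tuple=None) makes the loss use every pair/triplet
# that can be formed from the labels.
```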

docs/imgs/PNP_loss_equation.png

257 KB

docs/index.md

Lines changed: 3 additions & 3 deletions
@@ -13,7 +13,7 @@ This library contains 9 modules, each of which can be used independently within
 ## How loss functions work
 
 ### Using losses and miners in your training loop
-Let’s initialize a plain [TripletMarginLoss](losses/#tripletmarginloss):
+Let’s initialize a plain [TripletMarginLoss](losses.md#tripletmarginloss):
 ```python
 from pytorch_metric_learning import losses
 loss_func = losses.TripletMarginLoss()
@@ -95,8 +95,8 @@ If you're interested in [MoCo](https://arxiv.org/pdf/1911.05722.pdf)-style self-
 
 ## Highlights of the rest of the library
 
-- For a convenient way to train your model, take a look at the [trainers](trainers).
-- Want to test your model's accuracy on a dataset? Try the [testers](testers/).
+- For a convenient way to train your model, take a look at the [trainers](trainers.md).
+- Want to test your model's accuracy on a dataset? Try the [testers](testers.md).
 - To compute the accuracy of an embedding space directly, use [AccuracyCalculator](accuracy_calculation.md).
 
 If you're short of time and want a complete train/test workflow, check out the [example Google Colab notebooks](https://github.com/KevinMusgrave/pytorch-metric-learning/tree/master/examples).
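
To round out the hunk above, a self-contained sketch of the training loop that section describes. The model, optimizer, and toy data are stand-ins (assumptions); only the `loss_func(embeddings, labels)` call follows the documented pattern:

```python
# Training-loop sketch around losses.TripletMarginLoss.
# Model, optimizer, and data are placeholders, not the library's API.
import torch
from pytorch_metric_learning import losses

model = torch.nn.Linear(784, 128)   # stand-in embedding trunk
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_func = losses.TripletMarginLoss()

# Toy random batches in place of a real dataloader:
dataloader = [(torch.randn(32, 784), torch.randint(0, 10, (32,))) for _ in range(5)]

for data, labels in dataloader:
    optimizer.zero_grad()
    embeddings = model(data)
    loss = loss_func(embeddings, labels)   # triplets are formed from the labels
    loss.backward()
    optimizer.step()
```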

docs/losses.md

Lines changed: 24 additions & 0 deletions
@@ -902,6 +902,30 @@ loss = loss_fn(embeddings, labels)
 ```python
 losses.PNPLoss(b=2, alpha=1, anneal=0.01, variant="O", **kwargs)
 ```
+**Equation**:
+
+![PNP_loss_equation](imgs/PNP_loss_equation.png){: style="height:300px"}
+
+**Parameters**:
+
+* **b**: The boundary of PNP-Ib (see equation 9 above). The paper uses 2.
+* **alpha**: The power of PNP-Dq (see equation 13 above). The paper uses 8.
+* **anneal**: The temperature of the sigmoid function. (The sigmoid function is used for `R` in the equations above.) The paper uses 0.01.
+* **variant**: The name of the variant. The options are {"Ds", "Dq", "Iu", "Ib", "O"}. The paper uses "Dq".
+
+**Default distance**:
+
+- [```CosineSimilarity()```](distances.md#cosinesimilarity)
+- This is the only compatible distance.
+
+**Default reducer**:
+
+- [MeanReducer](reducers.md#meanreducer)
+
+**Reducer input**:
+
+* **loss**: The loss per element that has at least 1 positive in the batch. Reduction type is ```"element"```.
 
 
 ## ProxyAnchorLoss
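
Given the parameter documentation added above, a short instantiation sketch may help. The values mirror the paper's reported choices as stated in the hunk (variant "Dq", alpha 8, anneal 0.01), and the call pattern is the library's usual one:

```python
# PNPLoss with the paper's reported settings, per the parameter docs above.
import torch
from pytorch_metric_learning import losses

loss_fn = losses.PNPLoss(b=2, alpha=8, anneal=0.01, variant="Dq")

embeddings = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
loss = loss_fn(embeddings, labels)  # CosineSimilarity is the only compatible distance
```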
