PyTorch-DANN

A PyTorch implementation of the paper Unsupervised Domain Adaptation by Backpropagation.

Ganin, Y. & Lempitsky, V.
Unsupervised Domain Adaptation by Backpropagation
Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015
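
The core of the paper's approach is a gradient reversal layer placed between the feature extractor and the domain classifier: it acts as the identity in the forward pass and negates (and scales) the gradient in the backward pass. Below is a minimal sketch of such a layer in PyTorch; the class name and the lambda_ scaling argument are illustrative and not the exact identifiers used in this repository.

```python
import torch
from torch.autograd import Function

class GradientReversal(Function):
    """Identity in the forward pass; reverses and scales the gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. x is negated and scaled by lambda_; lambda_ itself gets no gradient.
        return grad_output.neg() * ctx.lambda_, None

# Usage: identity on the forward pass, -lambda_ * grad on the backward pass.
features = torch.randn(8, 128, requires_grad=True)
reversed_features = GradientReversal.apply(features, 1.0)
```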

Environment

  • Python 3.6
  • PyTorch 1.0

Note

  • Config() holds the configuration parameters for a specific task.
  • MNISTmodel() follows the structure in the paper exactly, except that Dropout2d() is added to the feature extractor; experiments show that whether Dropout2d() is added has a large effect on the final performance. The reproduced result ends up higher than the paper's because of this extra trick, which is worth investigating further. (A rough sketch of this kind of model is given after this list.)
  • SVHNmodel() uses a custom structure, since the structure proposed in the paper is hard to interpret. The reproduced result is excellent.
  • MNIST-MNISTM: python mnist_mnistm.py
  • SVHN-MNIST: python svhn_mnist.py
  • Amazon-Webcam: python office.py (cannot be reproduced because of issues with the pretrained network)
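
For orientation, here is a rough sketch of the kind of model these notes describe: a shared convolutional feature extractor (including the Dropout2d() mentioned above), a label classifier, and a domain classifier fed through the gradient reversal layer from the sketch above. The layer sizes are illustrative placeholders for 28x28 RGB inputs, not the exact MNISTmodel() or SVHNmodel() definitions in this repository.

```python
import torch.nn as nn

class DANNSketch(nn.Module):
    """Illustrative DANN-style network: shared features, a label head, and a
    domain head that sees the features through the gradient reversal layer."""

    def __init__(self, n_classes=10):
        super().__init__()
        self.feature = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5), nn.BatchNorm2d(32),
            nn.MaxPool2d(2), nn.ReLU(True),
            nn.Conv2d(32, 48, kernel_size=5), nn.BatchNorm2d(48),
            nn.Dropout2d(),                      # the dropout discussed in the notes above
            nn.MaxPool2d(2), nn.ReLU(True),
        )
        self.classifier = nn.Sequential(
            nn.Linear(48 * 4 * 4, 100), nn.ReLU(True),
            nn.Linear(100, n_classes),
        )
        self.domain_classifier = nn.Sequential(
            nn.Linear(48 * 4 * 4, 100), nn.ReLU(True),
            nn.Linear(100, 2),                   # source vs. target
        )

    def forward(self, x, lambda_=1.0):
        f = self.feature(x)                      # 28x28 RGB input -> 48 x 4 x 4 feature map
        f = f.view(x.size(0), -1)
        class_logits = self.classifier(f)
        # The domain head receives reversed gradients, pushing the shared
        # features toward domain confusion.
        domain_logits = self.domain_classifier(GradientReversal.apply(f, lambda_))
        return class_logits, domain_logits
```

During training, the label loss is computed on labeled source batches only, while the domain loss uses both source and target batches; the paper anneals the reversal strength (lambda_ here) from 0 to 1 over the course of training.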

Result

|                        | MNIST-MNISTM | SVHN-MNIST | Amazon-Webcam | Amazon-Webcam10 |
|------------------------|--------------|------------|---------------|-----------------|
| Source Only            | 0.5225       | 0.5490     | 0.6420        | 0.              |
| DANN (paper)           | 0.7666       | 0.7385     | 0.7300        | 0.              |
| This Repo, Source Only | -            | -          | -             | 0.              |
| This Repo              | 0.8400       | 0.7339     | 0.6528        | 0.              |

Credit