2019-10-06 21:28:59    356    0    0

On a whim I bought the Mi Band 4 NFC edition.

It turned out NFC wouldn't work, and it asked me to register a new account??? But my Xiaomi account region is set to China.

After searching for ages, I finally found the solution in some corner of the internet:

The Mi Band is made by Huami, and the two databases are not shared, so the account that needs registering is a Huami account. Go to the page below, deregister your Huami account, and register it again there.

https://user.huami.com/hm_account/2.0.0/index.html?loginPlatform=web&platform_app=com.xiaomi.hm.health&v=3.7.8#/

2019-09-30 14:43:45    485    0    0

Background

  • Solves low-resource domain adaptation.
  • Assumes only a few labels are available for the target domain.

Structure

  • The ultimate goal of the approach is to use the mapped Source → Target samples (XST) to augment the limited data of the target domain (XT).

  • Let MS and MT be the task-specific models trained on domains PS(X,Y) and PT(X,Y).

Relaxed Cycle Consistency

In the supervised case, the reconstruction term of the cycle-consistency loss is relaxed: instead of a pixel-wise distance between a sample and its cycled version, the cycled sample is scored by the pretrained task model against the true label.

Similarly, there is a symmetric loss for the reverse mapping direction.

In the unsupervised case, the same relaxation is applied without target labels.
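A minimal sketch of this relaxation as I read it (toy scalar "images" and placeholder functions, not the paper's code):

```python
# Sketch of relaxed cycle consistency: instead of ||x_cycle - x_s||,
# the cycled sample is fed to the pretrained task model M_S and its
# task loss against the true label y_s is used.
def relaxed_cycle_loss(x_s, y_s, G_st, G_ts, M_s, task_loss):
    x_cycle = G_ts(G_st(x_s))            # Source -> Target -> Source
    return task_loss(M_s(x_cycle), y_s)  # task loss replaces pixel distance

# toy check: perfect inverse mappings give zero loss
loss = relaxed_cycle_loss(
    x_s=2.0, y_s=2.0,
    G_st=lambda x: x + 1.0,   # map into "target" style
    G_ts=lambda x: x - 1.0,   # map back
    M_s=lambda x: x,          # stand-in pretrained task model
    task_loss=lambda p, y: abs(p - y),
)
print(loss)  # → 0.0
```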

Performance

The MNIST domain is limited to only 10 samples per class, denoted MNIST(10).

2019-09-27 16:19:50    545    0    0

A hardcore machine learning course.

First, an introduction to Compressed Sensing.

This part is based on the Zhihu question 如何理解压缩感知(compressive sensing)?

  • In the traditional signal-processing pipeline, a signal is sampled, compressed, and then transmitted; the receiver decompresses the data to recover the original signal. Sampling must obey the Nyquist theorem: the sampling rate cannot be lower than twice the signal's highest frequency, which guarantees that the original signal can be fully reconstructed from the samples. Compressed sensing instead recovers the original signal at the receiver with a suitable reconstruction algorithm, avoiding the data and resource waste of the traditional pipeline.
  • When compressed sensing was first proposed, it targeted sparse signals x: given the observation model y = Φx, what kind of Φ, and what recovery method, allow x to be recovered from y? (PS: a sparse signal is a signal x in which the number of nonzero elements is far smaller than the number of zero elements.)
  • Tao and his collaborators also derived the Restricted Isometry Property (RIP) (intuitively, the sampling map roughly preserves the length of sparse vectors, with very little "stretching") and a series of related results, proving the conditions the sampling matrix and the signal's sparsity must satisfy, and how they relate, for perfect recovery.
  • However, many signals, such as images, are not sparse themselves. In that case an orthogonal transform Ψ can project the signal into another space in which a = Ψx (the analysis model) becomes sparse. We can then recover the original signal x from the model y = Φa, i.e. y = ΦΨx. (PS: orthogonal transform: for a space Rn
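The y = Φx recovery story above can be tried on a toy example. The orthogonal matching pursuit routine below is one standard greedy solver, used here purely as an illustration (it is not from the course):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 128, 5  # signal length, number of measurements, sparsity

# a k-sparse signal x and a random Gaussian sensing matrix Phi
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x  # compressed measurements, m << n

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then least-squares re-fit on the
    chosen support."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print(np.allclose(x_hat, x, atol=1e-6))
```

With m well above the sparsity level, the greedy solver recovers the exact support and the least-squares re-fit makes the reconstruction essentially exact.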
2019-09-26 19:16:15    465    0    0

Background

Title is clear: "Few-Shot Adversarial Domain Adaptation".

  • They need labels for the new domain, but only a few.

Structure

  • Definition of pairs: samples are paired into four groups: G1 (two source samples of the same class), G2 (a source and a target sample of the same class), G3 (two source samples of different classes), G4 (a source and a target sample of different classes).

  • Training steps

DCD means domain-class discriminator: it tells whether a pair belongs to G1, G2, G3, or G4.

  • loss1: classification loss
  • loss3: adversarial loss for the discriminator (DCD)
  • loss4: adversarial loss for the generator (makes the DCD mix up (G1, G2) and (G3, G4))
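The pair construction can be sketched as below; `build_pairs` is a hypothetical helper written for illustration, not code from the paper:

```python
import random

def build_pairs(source, target, n_pairs, seed=0):
    """Sample pairs for a domain-class discriminator (DCD).
    source/target are lists of (x, label).
    Group 0 (G1): source+source, same class    Group 1 (G2): source+target, same class
    Group 2 (G3): source+source, diff classes  Group 3 (G4): source+target, diff classes
    Returns a list of ((xa, xb), group)."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        g = rng.randrange(4)
        xa, ya = rng.choice(source)
        pool = source if g in (0, 2) else target  # G1/G3 stay inside the source
        same = g in (0, 1)                        # G1/G2 share the class label
        xb, _ = rng.choice([s for s in pool if (s[1] == ya) == same])
        pairs.append(((xa, xb), g))
    return pairs
```

The DCD is then trained with a 4-way loss on these group labels (loss3), while the feature extractor is updated adversarially so that G2 pairs look like G1 and G4 pairs look like G3 (loss4).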

Performance

*FADA-n stands for our method when we use n labeled target samples per category in training*

Inspiration

A model trained on larger and noisier data has more ability to adapt to other domains.

2019-09-25 18:45:10    412    0    0

Introduction

A multi-domain image-to-image translation model.

Model Structure

The loss consists of an Adversarial Loss (D tries to distinguish whether an image x from domain c is real or fake, while G tries to fool it), a Domain Classification Loss (D tries to tell which domain an image xb comes from, while Gb→c tries to make D classify the fake image as coming from c instead of b), and a Reconstruction Loss for the generator.

The structure is adapted from CycleGAN.

Training

When training on one domain, the labels for the other domains are set to 0.
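A tiny illustration of that domain-label handling (a toy sketch, not the paper's code):

```python
import numpy as np

def domain_label(active_idx, n_domains):
    # Target-domain vector fed to the generator:
    # the domain being trained is 1, all other domains are set to 0.
    c = np.zeros(n_domains, dtype=np.float32)
    c[active_idx] = 1.0
    return c

print(domain_label(1, 4))  # → [0. 1. 0. 0.]
```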

Performance

2019-09-24 17:55:56    356    0    0

Background

Doing the same Domain Adaptation job as CYCADA, translate images between GTA5 and Cityscapes.

This is published after CYCADA.

Method

The main difference is that there's an embedding space in the middle.

The loss consists of 6 parts:

  • Normal Classification (P1)

  • Reconstruction (P2)

  • Discriminator z (P3)

  • Discriminator x, y (P4)

  • Cycle loss (P5)

  • Complex Classification (P6)

In total there are 3 discriminators (Dz, Dx, Dy), 2 encoders (fx, fy), 2 decoders (gx, gy), and a classifier h.
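The wiring of these modules can be sketched with toy stand-ins (identity maps here, purely for illustration; the real modules are neural networks):

```python
import numpy as np

# Toy stand-ins for the modules, just to show how a sample flows.
fx = fy = lambda v: v                # encoders into the shared embedding z
gx = gy = lambda z: z                # decoders back to image space
h = lambda z: int(np.argmax(z))      # classifier on the embedding

x = np.array([0.1, 0.7, 0.2])
z = fx(x)          # encode a source image (Dz in P3 judges embeddings)
y_fake = gy(z)     # decode in the other domain's style (Dy in P4 judges this)
z2 = fy(y_fake)    # re-encode the translation
x_cycle = gx(z2)   # cycle back; P5 compares x_cycle with x
pred = h(z)        # class prediction (P1/P6); P2 would compare gx(z) with x
```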

Performance

State-of-the-art performance at the time the paper was published.

In the task above, the encoder is LeNet.

Something interesting: "Switching

2019-09-23 18:59:21    340    0    0

Aims

To match the joint distribution P(Ys,Xs) with P(Yt,Xt).

Solution

Find a matrix A such that P(A^T Xs) and P(A^T Xt) are as close as possible, and likewise P(Ys|A^T Xs) and P(Yt|A^T Xt).

For the first pair, a method called TCA is used to minimize

|| (1/ns) Σ_{i=1..ns} A^T x_i − (1/nt) Σ_{j=ns+1..ns+nt} A^T x_j ||²

which is equivalent to

tr(A^T X M0 X^T A)

where M0 is

(M0)_{ij} = 1/(ns·ns) if x_i, x_j ∈ Ds; 1/(nt·nt) if x_i, x_j ∈ Dt; −1/(ns·nt) otherwise.
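A quick numerical check that the trace form matches the distance between projected means (a small sketch, not code from the paper):

```python
import numpy as np

def mmd_matrix(ns, nt):
    # M0 from TCA: tr(A^T X M0 X^T A) equals the squared distance
    # between the projected source and target means
    e = np.concatenate([np.full(ns, 1 / ns), np.full(nt, -1 / nt)])
    return np.outer(e, e)

rng = np.random.default_rng(0)
Xs, Xt = rng.standard_normal((5, 3)), rng.standard_normal((4, 3))
X = np.vstack([Xs, Xt]).T      # features as columns, shape (d, ns+nt)
A = np.eye(3)                  # identity projection for the check
M0 = mmd_matrix(5, 4)

lhs = np.linalg.norm(A.T @ Xs.mean(0) - A.T @ Xt.mean(0)) ** 2
rhs = np.trace(A.T @ X @ M0 @ X.T @ A)
print(np.isclose(lhs, rhs))  # → True
```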

For the next part, because we don't have Yt, so we train a classfier C to learn

2019-09-23 17:30:59    355    0    0

Background

We have (Xs, Ys) and Xt, but Xs and Xt are not from the same distribution, i.e. P(Ys|Xs) ≠ P(Yt|Xt). To solve this, we use the following method.

Assumption

There exists a transformation ϕ such that P(Ys|ϕ(Xs)) ≈ P(Yt|ϕ(Xt)).

Solution

To find ϕ, we minimize the following distance, called MMD (maximum mean discrepancy):

dist = || (1/n1) Σ_{i=1..n1} ϕ(x_i) − (1/n2) Σ_{j=1..n2} ϕ(x_j) ||_H
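With the identity feature map ϕ(x) = x, the empirical MMD reduces to the distance between sample means; a toy sketch:

```python
import numpy as np

def mmd(Xa, Xb):
    # empirical MMD with the identity feature map phi(x) = x:
    # the distance between the two empirical means
    return np.linalg.norm(Xa.mean(axis=0) - Xb.mean(axis=0))

rng = np.random.default_rng(1)
a = rng.standard_normal((500, 2))
b = rng.standard_normal((500, 2)) + 3.0  # shifted "target" distribution

# two halves of the same distribution are much closer than the shifted pair
print(mmd(a, b) > mmd(a[:250], a[250:]))  # → True
```

Richer feature maps (e.g. kernels) detect differences beyond the mean, which is why MMD is usually computed in an RKHS H.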

To solve

2019-09-20 21:24:53    348    0    0

Want to compete on Kaggle but don't have a powerful server on hand?

Google Colab! The obvious choice.

Option 1

  • Upload the data to Google Drive, then link Colab to Google Drive.

  • Download the data from Kaggle (there goes 1.3 GB of campus-network quota, TAT)

  • Unzip the Kaggle data (......wait 10 minutes)
  • Upload the data to Google Drive (....................................about 11 hours remaining)
  • Give up

Option 2

  • Use the Kaggle API in Colab and download + unzip the data directly on Google's servers.

  • Go to Kaggle -> Account -> Create New API Token

  • Put the following into Colab (note: the directory the token is written to must exist first, hence the mkdir -p)

    !mkdir -p ~/.kaggle /content/.kaggle
    import json
    token = {"username":"username","key":"yourtoken"}
    with open('/content/.kaggle/kaggle.json', 'w') as file:
        json.dump(token, file)
    !cp /content/.kaggle/kaggle.json ~/.kaggle/kaggle.json
    !chmod 600 /root/.kaggle/kaggle.json
    !kaggle config set -n path -v /content

  • Then you can start downloading and unzipping the dataset

    !kaggle competitions download -c severstal-steel-defect-detection -p /content
    !ls
    !unzip -d ./test test_images.zip >/dev/null 2>&1
    !unzip -d ./train train_images.zip >/dev/null 2>&1
    !ls

  • Download (went off to play a game while waiting... wait, what?! 133 MB/s?! It's already done???!!!)
  • Unzipping took less than 10 s (server SSDs are something else; the TLC drive in my laptop is true garbage)
2019-09-20 18:54:09    355    0    0

Introduction

CVPR 2017. They want to do Adversarial Discriminative Domain Adaptation.

Structure

As usual, there's a Source Domain (the old one) S and a Target Domain (the new one) T, a Classifier C on the source domain, representation mappings Ms and Mt, and a Discriminator D. The loss is straightforward.

Training method

  1. Train Ms and C on S.
  2. Keep Ms, initialize