1) Image-to-Image Translation with Conditional Adversarial Nets #5
2017-11-10 Extended application: generating a video from a single static image
2017-11-11 The architecture adopted is C3D: by comparing against other methods, C3D is argued to have advantages in action recognition, scene and object recognition, and computation speed
2017-11-11 Architecture:
2017-11-13 pix2pix discriminator code (TensorFlow) link: C3D code (Keras): videogan generator and discriminator code (Torch): mocogan generator and discriminator code (PyTorch):
2017-11-10
1) Image-to-Image Translation with Conditional Adversarial Nets
Reading summary:
Training input: an image plus random noise (provided in the form of dropout). Training output: a realistic image corresponding to the input image.
Test input: an image plus random noise (provided in the form of dropout). Test output: a realistic image corresponding to the input image.
G: U-Net is an encoder-decoder with skip connections
(encoder:
C64-C128-C256-C512-C512-C512-C512-C512
U-Net decoder:
CD512-CD1024-CD1024-C1024-C1024-C512-C256-C128
After the last layer in the decoder, a convolution is applied to map to the number of output channels (3 in general, except in colorization, where it is 2), followed by a Tanh function. As an exception to the above notation, BatchNorm is not applied to the first C64 layer in the encoder. All ReLUs in the encoder are leaky, with slope 0.2, while ReLUs in the decoder are not leaky.)
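A minimal PyTorch sketch of the U-Net generator described above (not the authors' reference implementation): 4x4 stride-2 convolutions, LeakyReLU(0.2) in the encoder, plain ReLU in the decoder, dropout on the first three decoder layers, and a Tanh output. The class name `UNetGenerator`, the helpers `down`/`up`, and the exact channel bookkeeping around the skip concatenations are my own simplifications; the decoder filter counts below are one reading of the CD512-... listing (which appears to count channels after the skip concatenation), so treat them as an approximation.

```python
import torch
import torch.nn as nn

def down(in_ch, out_ch, norm=True):
    # Ck encoder block: 4x4 conv, stride 2, optional BatchNorm, LeakyReLU(0.2)
    layers = [nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1)]
    if norm:
        layers.append(nn.BatchNorm2d(out_ch))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return nn.Sequential(*layers)

def up(in_ch, out_ch, dropout=False):
    # Ck / CDk decoder block: 4x4 transposed conv, stride 2, BatchNorm, (Dropout 0.5), ReLU
    layers = [nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
              nn.BatchNorm2d(out_ch),
              nn.ReLU(inplace=True)]
    if dropout:
        layers.insert(2, nn.Dropout(0.5))  # dropout doubles as the only noise source
    return nn.Sequential(*layers)

class UNetGenerator(nn.Module):
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        # encoder: C64-C128-C256-C512-C512-C512-C512-C512
        enc_ch = [64, 128, 256, 512, 512, 512, 512, 512]
        self.encoders = nn.ModuleList()
        prev = in_ch
        for i, ch in enumerate(enc_ch):
            # no BatchNorm on the first C64 layer (per the notes) or on the
            # 1x1 bottleneck (my choice, so the sketch also runs with batch size 1)
            self.encoders.append(down(prev, ch, norm=(0 < i < len(enc_ch) - 1)))
            prev = ch
        # decoder filter counts per layer (dropout on the first three blocks)
        dec_ch = [512, 512, 512, 512, 256, 128, 64]
        self.decoders = nn.ModuleList()
        prev = enc_ch[-1]
        for i, ch in enumerate(dec_ch):
            self.decoders.append(up(prev, ch, dropout=(i < 3)))
            prev = ch * 2  # the skip concatenation doubles the channels fed onward
        # final layer maps to the output channels, followed by Tanh
        self.final = nn.Sequential(nn.ConvTranspose2d(prev, out_ch, 4, stride=2, padding=1),
                                   nn.Tanh())

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
        skips = skips[:-1][::-1]  # the innermost activation is not reused as a skip
        for dec, skip in zip(self.decoders, skips):
            x = torch.cat([dec(x), skip], dim=1)
        return self.final(x)
```

For a 256x256 input, e.g. `UNetGenerator()(torch.randn(1, 3, 256, 256))`, the eight downsampling steps reduce the activation to 1x1 at the bottleneck before the decoder upsamples it back to 256x256.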
D: the paper uses a 70x70 PatchGAN
(C64-C128-C256-C512. After the last layer, a convolution is applied to map to a 1-dimensional output, followed by a Sigmoid function. As an exception to the above notation, BatchNorm is not applied to the first C64 layer. All ReLUs are leaky, with slope 0.2.)
Advantage: it can be applied to arbitrarily large images (the PatchGAN is run convolutionally across the image).
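A hedged PyTorch sketch of the 70x70 PatchGAN (C64-C128-C256-C512) described above. The conditional discriminator concatenates the input image and the real or generated output along the channel axis (hence `in_ch=6` for RGB pairs); the stride-1 setting of the last two convolutions is the common implementation choice that yields a 70x70 receptive field, and the class name `PatchGAN` is mine.

```python
import torch
import torch.nn as nn

class PatchGAN(nn.Module):
    def __init__(self, in_ch=6):  # 3-channel input image + 3-channel target, concatenated
        super().__init__()

        def block(cin, cout, stride, norm=True):
            layers = [nn.Conv2d(cin, cout, 4, stride=stride, padding=1)]
            if norm:
                layers.append(nn.BatchNorm2d(cout))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.model = nn.Sequential(
            *block(in_ch, 64, 2, norm=False),   # C64 (no BatchNorm on the first layer)
            *block(64, 128, 2),                 # C128
            *block(128, 256, 2),                # C256
            *block(256, 512, 1),                # C512
            nn.Conv2d(512, 1, 4, stride=1, padding=1),  # map to a 1-channel patch map
            nn.Sigmoid(),
        )

    def forward(self, x, y):
        # one real/fake score per 70x70 patch of the (input, output) pair
        return self.model(torch.cat([x, y], dim=1))
```

Because the network is fully convolutional, the same discriminator can be run on arbitrarily large images and the per-patch scores averaged, which is the advantage noted above.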
Loss function: G* = arg min_G max_D L_cGAN(G, D) + λ · L_L1(G)
(Adding both terms together, with λ = 100, reduces these artifacts.)
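A short sketch of how this objective might be written in PyTorch, assuming the Sigmoid-output PatchGAN above: binary cross-entropy for the cGAN term plus λ · L1 with λ = 100. The function names, argument names, and the 0.5 factor on the discriminator loss are illustrative choices, not taken from the notes.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()   # matches the Sigmoid output of the PatchGAN sketch above
l1 = nn.L1Loss()
lam = 100.0          # lambda = 100 as stated above

def generator_loss(discriminator, generator, x, y):
    # cGAN term (fool D on every patch) + lambda * L1 term against the ground truth y
    fake = generator(x)
    pred_fake = discriminator(x, fake)
    adv = bce(pred_fake, torch.ones_like(pred_fake))
    return adv + lam * l1(fake, y), fake

def discriminator_loss(discriminator, x, y, fake):
    # D maximizes the cGAN term: real pairs -> 1, generated pairs -> 0
    pred_real = discriminator(x, y)
    pred_fake = discriminator(x, fake.detach())
    return 0.5 * (bce(pred_real, torch.ones_like(pred_real)) +
                  bce(pred_fake, torch.zeros_like(pred_fake)))
```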
Training details:
1) Weights were initialized from a Gaussian distribution with mean 0 and standard deviation 0.02.
2) Batch normalization is applied; batch size 1 is used for certain experiments and 4 for others, with little difference observed.
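A small sketch of item 1): drawing convolution weights from N(0, 0.02²). Applying it via `.apply()` and the BatchNorm initialization shown are common pix2pix-style conventions rather than something stated in the notes.

```python
import torch.nn as nn

def init_weights(m):
    # Gaussian init, mean 0, std 0.02, for conv / transposed-conv weights
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    elif isinstance(m, nn.BatchNorm2d):
        # common convention in pix2pix implementations (not from the notes)
        nn.init.normal_(m.weight, mean=1.0, std=0.02)
        nn.init.zeros_(m.bias)

# e.g. UNetGenerator().apply(init_weights); PatchGAN().apply(init_weights)
```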