Reposted from: 爱可可-爱生活

There was a really cute paper at the GAN workshop this year, Generating Text via Adversarial Training
by Zhang, Gan, and Carin. In particular, they make a couple of unusual
choices that appear important. (Warning: if you are not familiar with
GANs, this post will not make a lot of sense.)

They use a convolutional neural network (CNN) as a
discriminator, rather than an RNN. In retrospect this seems like a good
choice, e.g. Tong Zhang has been crushing it
in text classification with CNNs. CNNs are a bit easier to train than
RNNs, so the net result is a powerful discriminator with a relatively
easy optimization problem associated with it.

They use a smooth approximation to the LSTM output in their
generator, but actually this kind of trick appears everywhere so isn't
…
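The "smooth approximation" idea can be made concrete: instead of embedding the single argmax token from the LSTM's output logits (a non-differentiable step that blocks gradients from the discriminator), the generator feeds the discriminator a probability-weighted average of token embeddings, with a temperature controlling how close the softmax is to a one-hot vector. Here is a minimal NumPy sketch of that trick; the shapes, names, and temperatures are illustrative assumptions, not values from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
vocab_size, embed_dim = 10, 4
embeddings = rng.normal(size=(vocab_size, embed_dim))  # token embedding matrix
logits = rng.normal(size=vocab_size)                   # one step of LSTM output

# Hard (non-differentiable) choice: embed the argmax token.
hard_input = embeddings[np.argmax(logits)]

# Smooth approximation: probability-weighted average of all embeddings.
# As the temperature goes to 0, the softmax approaches a one-hot vector,
# so the soft input approaches the hard one -- but gradients flow through it.
for temperature in (1.0, 0.1, 0.01):
    soft_input = softmax(logits / temperature) @ embeddings
    print(temperature, np.abs(soft_input - hard_input).max())
```

The printed gap shrinks as the temperature drops, which is exactly the trade-off this family of tricks exploits: high temperature gives smooth gradients, low temperature gives outputs close to discrete tokens.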
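The CNN discriminator described above follows the standard text-CNN recipe: embed tokens, convolve filters over time, max-pool each filter's responses, and classify. A self-contained NumPy sketch of that pipeline (the dimensions, single filter width, and weight initialization are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_size, embed_dim, num_filters, width = 20, 8, 16, 3

embeddings = rng.normal(size=(vocab_size, embed_dim))   # token embeddings
filters = rng.normal(size=(num_filters, width, embed_dim))  # conv filters
w_out = rng.normal(size=num_filters)                    # classifier weights

def discriminator_score(token_ids):
    """Probability that a token sequence is real, via a 1-D text CNN."""
    x = embeddings[token_ids]                     # (seq_len, embed_dim)
    seq_len = len(token_ids)
    # Slide each filter over time (valid positions only).
    feats = np.array([
        [np.sum(x[t:t + width] * f) for t in range(seq_len - width + 1)]
        for f in filters
    ])                                            # (num_filters, positions)
    pooled = np.maximum(feats, 0).max(axis=1)     # ReLU + max-over-time pooling
    return 1.0 / (1.0 + np.exp(-pooled @ w_out))  # sigmoid probability

print(discriminator_score(np.array([3, 1, 4, 1, 5, 9])))
```

Max-over-time pooling is what makes this work on variable-length sequences: however long the input, each filter contributes one feature, so the classifier on top sees a fixed-size vector.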