The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.
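The abstract only names the mechanism; for concreteness, below is a minimal NumPy sketch of the scaled dot-product attention the Transformer is built on, following the formulation Attention(Q, K, V) = softmax(QKᵀ/√d_k)V given in the body of the paper. The function name, shapes, and toy inputs are illustrative choices, not taken from the abstract, and this sketch omits the multi-head projections and masking used in the full model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v).
    Returns an array of shape (n_queries, d_v).
    """
    d_k = Q.shape[-1]
    # Dot-product similarity of every query with every key,
    # scaled by sqrt(d_k) to keep the softmax well-behaved
    # when the key dimension is large.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights
    # (subtracting the row max for numerical stability).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V

# Toy usage: 4 queries attending over 6 key/value pairs.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 64))
K = rng.normal(size=(6, 64))
V = rng.normal(size=(6, 32))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 32)
```

Because every query attends to every key in a single matrix product, with no recurrence over positions, the whole computation parallelizes across the sequence, which is the property the abstract credits for the reduced training time.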
"Quien fueras Burgues para oprimirte las masas" -Karl Marx