Good question. It depends on how the mod interfaces with the game's API, what language the embedded scripting environment and engine pipeline are written in, the game's code structure, your system specs, and other factors. You mostly have to measure it case by case, unless you use a newer, better-optimized language and predictable, well-tested coding practices informed by how processors waste time shuffling excess high-level data between memory and caches. Keep in mind that CPUs can execute billions of complex instructions per second, and GPUs even more. Most operations can be simplified dramatically if you put some thought into them, as AI workloads have been showing us, with speedups of thousands of times, without even resorting to neuromorphic, non-von-Neumann computing.
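As a toy illustration of that last point (not from any particular game or mod API, just a sketch): the same result computed two ways, where one version does work proportional to the input size and the other does constant work.

```python
# Toy example: "most operations can be simplified dramatically".
# The naive loop touches n values; the closed-form version does
# constant work no matter how large n gets.

def sum_naive(n):
    # O(n): add every integer from 1 to n
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    # O(1): Gauss's formula, 1 + 2 + ... + n = n(n+1)/2
    return n * (n + 1) // 2

# Same answer, roughly n times less work for large n
assert sum_naive(1000) == sum_closed_form(1000) == 500500
```

The same principle applies at a lower level: keeping data compact and access patterns predictable lets the CPU spend its cycles computing instead of waiting on memory.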
Now if only developers could keep up...