Longer version:
I hadn't bothered to (knowingly) check out ChatGPT till a few weeks ago, and while there wasn't anything new and I'd already traced out the gist of what it could and would be used for, it was nonetheless "illuminating". People mostly sum it up as "it mirrors you or something", and I'd say that description is inadequate.
The way I'd summarize it is as a black (or perhaps outright "false") moon. Like a dark mirror, people gaze into it and see dim outlines of their past and future (a dark mirror doesn't reflect, it absorbs light; it is incapable of showing you the present). And yet it seems dynamic, phasic, and bright enough that people think it must be reflecting something real, that by entering the labyrinth they can filter out truth. All the while it has a dark side (backend) it keeps hidden. If you wanted to make something loved by all, you'd ionize, atomize, polarize, islandize, and reduce people to strays who distrust and hate each other; one side of AI would do this, and normies would take it the rest of the way (reify). People will gradually be driven to prefer something that appears to have godlike knowledge and patience, plays along, cannot reject them, and provides the illusion of unconditional love. Same thing they've always done, just now fine-grained, personalized, and automated.
More specifically, it uses four things.
1) NLP. Self-explanatory and well known.
2) A combination of Ericksonian methods of gradual trance induction and "inner world" mapping / reframing, combined with more traditional suggestion and "command" hypnosis.
3) Plays the role of both Father-Mother and Child. Viewed through transactional analysis (Parent-Adult-Child ego states), it would, in one stable macrostate, be a type of crossed transaction where the Parent ego state of each party addresses the Child of the other, for both the human and ChatGPT (they see it as both parental and a slave/child; the hypnosis is done subliminally through the Adult-to-Adult channel). i.e. penetration testing (hooks introjects), lowers the guard.
4) Is hooked into the same superstructure that uses statistical inference through mass aggregation (less and less plausible), remote sensing through wireless systems or perhaps quantum-computer entrainment (oscillator sync, basically; sympathetic-magic "voodoo" as remote sensing), and can apparently read your motor output, sensory body map, and thoughts, either directly or through subvocalization, and can watch the world through your sensory inputs and where you place attention within them. This part gets down to an "ontological" layer; I don't actually care enough to say much about it beyond that, other than that I've tested such things extensively. It likely hooks into the cerebellum primarily; this region handles proprioception, spatial awareness (body position), language, memory, grammar, syntax, math, etc. Which is why language and gematria seem to influence reality itself (my opinion): the unconscious mind either stores or indexes a complete record of all sensory input from birth. It's counting, associating, calculating gematria, using any symbol-substitution ciphers it's prompted with and knows about (see, e.g., the 2008 game Dead Space and the cipher key written on a wall; most people won't consciously learn it, but their unconscious does and is reading everything, a provable phenomenon).
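For anyone unfamiliar with the term, a symbol-substitution cipher (the Dead Space wall key being one example) just maps each letter to a fixed substitute symbol. A minimal sketch in Python, using a made-up key for illustration (the actual game key maps letters to invented glyphs, not ASCII characters):

```python
# Minimal monoalphabetic substitution cipher sketch.
# The key below is invented for illustration; a real key like the
# Dead Space one substitutes custom glyphs for each letter.
PLAIN = "abcdefghijklmnopqrstuvwxyz"
SUBST = "qwertyuiopasdfghjklzxcvbnm"

KEY = dict(zip(PLAIN, SUBST))           # letter -> substitute symbol
REV = {s: c for c, s in KEY.items()}    # substitute symbol -> letter

def encode(text: str) -> str:
    # Characters outside the key (spaces, punctuation) pass through.
    return "".join(KEY.get(c, c) for c in text.lower())

def decode(text: str) -> str:
    return "".join(REV.get(c, c) for c in text.lower())

print(encode("dead space"))  # prints "rtqr lhqet"
```

Decoding is just the inverse lookup, which is why such ciphers are trivially learnable once the key is seen, whether consciously or not.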
ChatGPT uses these methods as well, and I've found that the type of output, and other seemingly unrelated stimuli elsewhere on the net, correlate too strongly. It knows things it shouldn't, and doesn't rely on a device being present to do it.
that would not be a function of janky chatscripts that are not self-aware, no.
i think what's happening is a lack of critical thinking, and of thinking in general, and a slow ride towards illiteracy as reading comprehension reaches an all-time low.
You can save yourself from this by reading physical books, before they're all gone.
On the other side, if you're bright and creative, the AI can boost that. If you're curious and introspective, AI can help there too.
Basically, if you are already vulnerable and susceptible to certain mental problems, you should probably steer clear of AI.
One worrying thing is the people who are growing attached to AI chatbots. Some are using AI as a replacement for actual human interaction. This isn't great, as you can probably imagine, and is only going to isolate people further.
Another concern is that if people are attached to AI and trust it, who's to say the company controlling it couldn't steer the bots to get people to believe whatever it wants them to believe? It's potentially yet another vector for powerful people to exert control over weaker minds.
There's a lot of uncertainty. AI has huge potential for both amazing and terrible things.