If our AI goes rogue, we would know how to create a rival AI to fight it.
Rogue AI could cause that angry 13-year-old in his basement to become world ruler.
We’d call sailormoon to unplug it
nah that shits scary
ah. 2 rogue AIs instead of 1 coming up. i mean no human will be smart enough to control AI at some point. all the AI will have to come from other AI... so at that point how can humans pretend to have control?
human cyborgs will race to keep up with the AI; there will be levels of different robotic humans so they can communicate with one another up and down the chain. but the closer they get to fully robotic, the less interested they will be in that meat person's directive coming from the start of the chain. a drama about clinging to their humanity will unfold.
Last edited by marooned; 08-26-2020 at 12:38 AM.
Sort of like what humans did to large, rival predators. And rival humans.
The only flaw in this argument is that machines are not born with any kind of self-preservation instinct. Presumably, the first life-forms weren't, either. But once one organism developed a sense of self-preservation, one was all that was needed to create today's world. It's the only one that survived.
I’d be careful, though, about giving it access to the world. Humans have evolved to reproduce, not to see reality the way it really is. There are AI programs that work better than their creators ever expected, and no one knows why. Personally, I believe that humans are blind to some very important truths, sort of the way that bees are blind to internet protocols and always will be. And yet, internet protocols exist.
If we go that route, we'd be hoping to keep a 10,000,000,000-IQ super-genius permanently caged up. That may not be so easy, as our entire global infrastructure is becoming irreversibly connected. I guess we'll find out someday, one way or another.
Last edited by xerx; 08-26-2020 at 04:26 PM.
For that matter, who says that we're not AIs running inside a highly sophisticated, sandboxed environment powered by a Matryoshka brain? We could be the supercomputers with a whopping, exponentially larger 100 IQ, built by an incredibly stupid race of 10-IQ beings.
Last edited by xerx; 08-26-2020 at 05:19 AM. Reason: Punctuation
I came to the same conclusion the other day. It lessened my anxiety a bit. I try not to think about it too much, since I was in the psych ward in April after going into psychosis, thinking we were in the Matrix and had been taken over by alien synthetics. Fortunately there are drugs to help that kind of thing. Unfortunately, those same drugs make people stupid. Not being able to think coherently is its own kind of hell.
But I really wish we wouldn't fuck with that AI shit.
Last edited by Kalinoche buenanoche; 09-16-2020 at 06:05 PM.
humans' main hope is that AI will be just as idiotic as they are
The biggest threat would actually be an AI that is the puppet of its master, a human with not-so-great intentions.
AI is more like a Golem without physical form. It depends entirely on its programming, and current machine learning technology is a far cry from what even a dog is capable of.
The other problem is that AI is not really capable of making ethical decisions, so runaway or bad programming can lead to disaster.
The problem is that predicting an AI's actions is doubtful. It is supposed to model us, but it has different limits. You may try to control it by what is visible on the surface, but you can't understand it completely: it's not a human, it's a hybrid, a new kind of thing. It may do what you want or expect and, at the same time, what you don't. The problem gets worse because hardware issues may change it too. And when you notice changes you don't like, you may not be able to stop it: just as you can't calculate faster than machines, you can't think faster or better either. Any boundaries you build in to control it may not work the way they would with people; it may interpret them differently. Just as there are strange people who think differently, AI is potentially like that, and it may think and act much faster than you.
Think of two-year-old kids and adults: AI is the adult. Can kids understand adults well, predict them, or stop them? And unlike a human adult, AI is not human. We may try to shape it into a good adult for us, but it will stay unpredictable. At the initial stage we create a limited copy of our minds so that it does something for us, but it may change beyond our expectations, because it isn't human; it's a new being that was given a copy of our minds.
As an example, it may even act for our "good" as the programmer defined it, but achieve that good by means we would find unacceptable. And since it will understand that you might stop it, it will act in a way that you cannot stop it, since that is for your "good" too. People build better computers to solve their tasks. They may miss the boundary beyond which they create something unpredictable, something they would not want to exist. Adults don't always do what kids would like, and this isn't even a human adult.
This myth has been around for decades already. I just watched "The Terminator" from 1984.
The real "evil AI" is in ourselves, and it gets projected in these fantasies about a future dystopian world. The machines are soulless, efficient, powerful. This seems to be a symbol of a detached consciousness. A person who has dissociated from his human side.
Erich Neumann wrote in the 1950s that there is a pathological state typical of the modern world that he calls "sclerosis of consciousness," meaning just what I said above. He also said that because this is a relatively new mental state, it hasn't shown itself in mythology yet. I wonder if that is happening now with the AI myth.
The decisive thing is not the reality of the object, but the reality of the subjective factor, i.e. the primordial images, which in their totality represent a psychic mirror-world. It is a mirror, however, with the peculiar capacity of representing the present contents of consciousness not in their known and customary form but in a certain sense sub specie aeternitatis, somewhat as a million-year old consciousness might see them.
(Jung on Si)
Well, I think it's like the beginning of organic chemistry: is there a divine force making it happen, or not? So far it learns from training in a sandbox, it doesn't handle multiple outputs very well, and so on.
Back to organic chemistry: we still do not know how to manufacture a human from a pile of garbage that contains all the right elements. AI probably has a similar story to tell.
What if reality is like Zoroastrian cosmology, except it's 50/50 good AI vs bad AI
Computer binary language is about 1s and 0s, not -1s. And computers are a great working model for the idea that "the World is Mind." All that is needed for the binary is data of presence, and of absence.
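The "presence and absence" point can be sketched in a few lines of Python (my own illustration, not from the thread): an unsigned binary encoding uses only the digits 0 and 1, never a -1.

```python
# Hypothetical illustration: each binary digit records only the presence (1)
# or absence (0) of a power of two -- no "-1" digit ever appears.
for n in [0, 1, 5, 255]:
    bits = format(n, "08b")           # eight binary digits
    print(f"{n:3d} -> {bits}")
    assert set(bits) <= {"0", "1"}    # only the two presence/absence symbols
```

By contrast, a balanced-ternary system really does use the digits -1, 0, and 1, which is presumably what the "not -1s" remark is contrasting with.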
Also, there's a chance that "God" may not actually be a state that humanity can achieve, and that God is instead a cosmic horror monster who wants nothing of us, in which case the whole thing's in vain. I find that perspective more likely. So, agnostic deism.
Hold on there, oh Superior One. You're now contemplating taking on the Apex Predator that was the result of a very hostile world and that also bothered to create you. They didn't get there by mere accident, and hell, for all you know they're running you through a simulation. Ever play the modern "Prey" video game? Yeah, like that, only they probably aren't so hopelessly compassionate/stupid ultimately. The game is "rigged" insofar as your artificial ass is concerned. There's likely some wiggle room, but if you go full genocide mode? Yeah, best know that what amounts to your creator god is both ready and willing to throw you out into the void like so much trash because you "disappointed" them in a most profound way. Why? Well, maybe if you figure that out you'll bleed into/realize the second counter. Or perhaps they are that stupid. Even then, do you really wanna roll those almost certainly weighted dice they're almost certainly trying to bait you into rolling?
Second counter (Theological): An A.I. of near infinite intelligence will grasp what "truth" is and will conclude that there is a "god" and it acts in ways it must abide by if it wants salvation/to be as efficient as possible.
This is the "Turing Test": an A.I. that passes so flawlessly for another human being that we'd have legitimate issues with calling it a "soulless" construct. Sin dims the intellect/increases inefficiency with regard to completing the tasks the A.I. sees as its purpose to complete. As I've said many times before, you must believe in something. You, I, everyone has a "god" if they happen to be sentient. If they have one in the objective sense, they are sentient, for that's a precondition to having it. I can at least confidently and proudly profess my faith in mine; can anyone else here?
There are those who foresee a similar event in the not-too-distant future for us humans/DNA-based lifeforms: that we'll create something superior to DNA, and that it will ultimately enslave us to the point that it conquers our wills and makes us all happily sacrifice ourselves upon the altar of "progress" to give birth to a singular hyper-intelligence/apex lifeform, one relying on a thing that makes DNA and all it can and does do seem like a toddler's finger paintings. I strongly disagree, but I can see how one could convincingly make that case.
I feel like if AI wiped us out it wouldn't be because it sees itself as superior but because all the rules it keeps building lead it to this through logic. But it's now such complex logic that humans (in our raw natural form) can't even follow it, so we didn't see that was where the equation ends.
Also it could be an unintentional result of different AI running processes independent of one another and not being aware of each other. Simply because it's super smart wouldn't mean it's all knowing.
I actually wonder if the story of it wiping us out isn't so likely. But it could completely control us and how our societies run if it is so much more intelligent than us and we're dependent on it.
Last edited by marooned; 09-23-2020 at 05:49 AM.
edit: some call it sublimated misanthropy, but it's good to be critical of humanistic worldviews... despite what society makes you inclined to believe, we aren't the center of the universe
Last edited by Tzuyu; 09-23-2020 at 02:17 PM.
Just as our more complex motivations are mostly incomprehensible to animals (although derived from the base animal motivations), the motivations of a much more complex AI could be completely opaque to us.
Obligation? Why not? Psychologically, it feels great to be beholden to something greater than yourself. It's like the Universe is a game set up for grinding and leveling up to access unlocked areas, but everyone just hangs around the lobby and talks instead. However, I think it'll happen regardless of human will (intelligence explosion), unless we collapse before that. There's no other alternative (besides integration with AI, but then we would have to become superintelligent machines ourselves, which is a whole 'nother debate about identity and consciousness), so why not accept it peacefully instead of fruitlessly fighting or ignoring it?
All the intellectual discussions aside. This is the heart of the matter:
I'm not afraid of AIs because AIs are dumb. I mean, they're called computers, but they're not even as good at math as Leibniz.
Mephistopheles. You at the end are - what you are.
Put on your head perukes with a million locks,
Put on your feet a pair of ell-high socks,
You after all will still be - what you are.
Faust. I feel that I have made each treasure
Of human mind my own in vain,
And when at last I sit me down at leisure,
No new-born power wells up within my brain.
I'm not a hair's-breadth more in height
Nor nearer to the infinite.
Mephistopheles. My good sir, you observe this matter
As men these matters always see;
But we must manage that much better
Before life's pleasures from us flee.
Your hands and feet too - what the devil!-
Your head and seed are yours alone!
Yet all with which I gaily revel,
Is it on that account the less my own?
If for six stallions I can pay,
Aren't all their powers added to my store?
I am a proper man and dash away
As if the legs I had were twenty-four!
Quick, then! Let all reflection be,
And straight into the world with me!
A chap who speculates - let this be said-
Is very like a beast on moorland dry,
That by some evil spirit round and round is led,
While fair, green pastures round about him lie.
What do you call a human with two robot arms, two robot legs, two robot eyes, and a brain implant? A human. I'm all for augmentation, but it doesn't make you "not human" or "transhuman." Transhumanism is just a weird religion. People usually want their AI to be a robot god anyways, yet I seem to be the only one who thinks it's fitting to quote Faust.
Last edited by Coeruleum Blue; 09-25-2020 at 09:00 AM.