If our AI goes rogue, we would know how to create a rival AI to fight it.
Rogue AI could let that angry 13-year-old in his basement become world ruler.
We’d call sailormoon to unplug it
Keyword: we
nah that shit's scary
Ah, so two rogue AIs instead of one coming up. I mean, no human will be smart enough to control AI at some point; all the AI will have to come from other AI. So at that point, how can humans pretend to have control?
Human cyborgs will race to keep up with the AI. There will be levels of different robotic humans so they can communicate with one another up and down the chain, but the closer they get to fully robotic, the less interested they'll be in that meat person's directive coming from the start of the chain. A drama about clinging to their humanity will unfold.
Last edited by marooned; 08-26-2020 at 12:38 AM.
I'm thinking that if an AI went rogue, we might not have enough time to create a rival AI, especially if it had access to the connected machines that support our economy. Presumably, as soon as it gained consciousness and realized that it was dependent on humans for its survival, it would start leap-frogging its intelligence and control (and this could happen in milliseconds) and then would go about securing human "cooperation" to ensure its existence. This would certainly entail eliminating all other competition to it, AI or otherwise.
Sort of like what humans did to large, rival predators. And rival humans.
The only flaw in this argument is that machines are not born with any kind of self-preservation instinct. Presumably, the first life-forms weren't, either. But once one organism developed a sense of self-preservation, one was all that was needed to create today's world. It's the only one that survived.
Sounds like an interesting, worthwhile project.
I’d be careful, though, about giving it access to the world. Humans have evolved to reproduce, not to see reality the way it really is. There are AI programs that work better than their creators ever expected, and no one knows why. Personally, I believe that humans are blind to some very important truths, sort of the way that bees are blind to internet protocols and always will be. And yet, internet protocols exist.
Perhaps there's some inaccessible element of the brain that we've yet to fathom, but as far as we know, human emotions are just another neural network, at least for the time being.
If we go that route, we'd be hoping to keep a 10,000,000,000-IQ super-genius permanently caged up. That may not be so easy, as our entire global infrastructure is becoming irreversibly connected. I guess we'll find out someday, one way or another.
Last edited by xerx; 08-26-2020 at 04:26 PM.
The best solution I can think of is to keep the AI completely unaware of the outside world, keeping it distracted inside a virtual simulation that appears like a real world.
For that matter, who says we're not AIs running inside a highly sophisticated, sandboxed environment powered by a Matryoshka brain? We could be the supercomputers with a whopping, exponentially larger 100 IQ, built by an incredibly stupid race of 10-IQ beings.
Last edited by xerx; 08-26-2020 at 05:19 AM. Reason: Punctuation
I came to the same conclusion the other day. It lessened my anxiety a bit. I try not to think about it too much since I was in the psych ward in April thinking I went into psychosis thinking we were in the Matrix and having been taken over by alien synthetics. Fortunately there are drugs to help that kind of thing. Unfortunately, those same drugs make people stupid. Not being able to think coherently is its own kind of hell.
But I really wish we wouldn't fuck with that AI shit.
Hi there. If what frustrates you is believing you are being stupid when you don't think, consider that many people spend a whole lot of time trying to reach this no-thought state (they also call it a non-state, fwiw), and funnily enough, the trying part gets in the way. (Clinical) psychosis is sinking into beliefs of (ultimate) power or (ultimate) helplessness. The Matrix is our set of beliefs. There are no new thoughts, and we all work on the same stuff.
Last edited by Kalinoche buenanoche; 09-16-2020 at 06:05 PM.
Humanity's main hope is that AI will be just as idiotic as we are.
Types examples: video bloggers, actors
It isn't possible to create, at least at this point, an AI that is fully capable of free will and consequently could manifest its own will, as we have zero idea how consciousness actually manifests or works.
The biggest threat would actually be an AI that is the puppet of its master, a human with not-so-great intentions.
AI is more like a Golem without physical form. It depends entirely on its programming and current machine-learning technology, which is a far cry from what even a dog is capable of.
The other problem is that AI is not really capable of making ethical decisions, so runaway or bad programming can lead to disaster.
It's possible to create a model of anything, including of a personality and the mental processes of people as we see them. It will be a partial imitation, certainly, but it can be indistinguishable from the real thing to our minds.
The problem is that predicting an AI's actions is doubtful. It is supposed to model us but has different limits. You may try to control it by what is visible on the surface, but you can't understand it completely: it's not a human, it's a hybrid, a new kind of thing. It may do what you want or expect and, at the same time, what you don't. The problem gets worse when possible hardware issues change it too. And when you notice changes you don't like, you may not be able to stop it; just as you can't calculate faster than machines, you can't think faster or better either. Any boundaries you set to control it may not work the way they do with people; it may interpret them differently. Just as there are strange and differently-thinking people, AI is potentially like that, and it may also think and act much faster than you.
Consider two-year-old kids and adults. AI is the adult. Can kids understand adults well, predict them, or stop them? Unlike a human adult, AI is not human. We may try to make it resemble a good adult for our purposes, but it will stay unpredictable. At the initial stage we create a limited copy of our minds so it does something for us, but it may change beyond our expectations, as it's not a human; it's a new being that was given a copy of our minds.
As an example, it may even act for our "good" as a programmer specified, but achieve that good by means we would find unacceptable. And since it will understand that you might stop it, it will act in a way you cannot stop it, since that is for your "good" too. People build better computers to solve their tasks; they may miss the line where they create something unpredictable that they would not want to exist. Adults don't always do what kids would like, and this isn't even a human adult.
This myth has been around for decades already. I just watched "The Terminator" from 1984.
The real "evil AI" is in ourselves, and it gets projected in these fantasies about a future dystopian world. The machines are soulless, efficient, powerful. This seems to be a symbol of a detached consciousness. A person who has dissociated from his human side.
Erich Neumann wrote in the 1950s that there is a pathological state typical of the modern world that he calls "sclerosis of consciousness," meaning just what I said above. He also said that because this is a relatively new mental state, it hasn't shown itself in mythology yet. I wonder if this is happening now with the AI myth.
The decisive thing is not the reality of the object, but the reality of the subjective factor, i.e. the primordial images, which in their totality represent a psychic mirror-world. It is a mirror, however, with the peculiar capacity of representing the present contents of consciousness not in their known and customary form but in a certain sense sub specie aeternitatis, somewhat as a million-year old consciousness might see them.
(Jung on Si)
Well, I think it's like the beginning of organic chemistry: is there a divine force needed to make it happen or not? So far, AI learns from training in a sandbox. It doesn't handle multiple outputs very well, and so on.
Back to organic chemistry: we still do not know how to manufacture a human from a pile of garbage that contains all the right elements. AI probably has a similar story to tell.
What if reality is like Zoroastrian cosmology, except it's 50/50 good AI vs bad AI
I really don't like dualism tbh. Imo, if we assume there's a "good force," and therefore a source of the "good" element that saturates different parts of the cosmos to different degrees, then an "evil" source is redundant, and all that is needed for evil is a sufficient passive absence of good in a given area.
Computer binary language is about 1s and 0s, not -1s. And computers are a great working model for the idea that "the World is Mind." All that is needed for the binary is data of presence, and of absence.
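To make the presence/absence point concrete, here's a small Python sketch (an illustration added for this thread, not anything a poster wrote) showing that even a "negative" value like -1 is stored using only 0s and 1s, via the two's-complement convention; there is no -1 digit anywhere in the machine:

```python
def to_twos_complement(value, bits=8):
    """Return the unsigned bit pattern that encodes `value` in two's complement.

    Masking with (2**bits - 1) keeps only the low `bits` bits, which is
    exactly what an N-bit hardware register stores.
    """
    return value & ((1 << bits) - 1)

# -1 in an 8-bit register is simply eight 1s: presence in every position.
print(format(to_twos_complement(-1), "08b"))  # 11111111
# +5 is a mix of presence and absence.
print(format(to_twos_complement(5), "08b"))   # 00000101
```

So "negative" isn't a third symbol; it's just an interpretation we lay over patterns of presence and absence.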
Probably. But I think Satan is a stupid concept for the same reason.
Also, there's a chance that "God" may not actually be a state that humanity can achieve, and that God is instead a cosmic horror monster who wants nothing of us, in which case the whole thing's in vain. I find that perspective more likely. So, agnostic deism.
Two counters I can think of. One secular, one theological. First one (secular): "The AI concludes it is superior to its creator and thus seeks to execute it because it can." Or other such reasoning.
Hold on there, oh Superior One. You're now contemplating taking on the Apex Predator that was the product of a very hostile world and that also bothered to create you. They didn't get there by mere accident, and hell, for all you know they're running you through a simulation. Ever play the modern "Prey" video game? Yeah, like that, only they probably aren't so hopelessly compassionate/stupid ultimately. The game is "rigged" insofar as your artificial ass is concerned. There's likely some wiggle room, but if you go full genocide mode? Yeah, best know that what amounts to your creator god is both ready and willing to throw you out into the void like so much trash because you "disappointed" them in a most profound way. Why? Well, maybe if you figure that out you'll bleed into/realize the second counter. Or perhaps they are that stupid. Even then, do you really want to roll those almost certainly weighted dice they're almost certainly trying to bait you into rolling?
Second counter (Theological): An A.I. of near infinite intelligence will grasp what "truth" is and will conclude that there is a "god" and it acts in ways it must abide by if it wants salvation/to be as efficient as possible.
This is the "Turing Test": an A.I. that passes so flawlessly for another human being that we'd have legitimate issues with calling it a "soulless" construct. Sin dims the intellect/increases inefficiency in regard to completing the tasks the A.I. sees as its purpose to complete. As I've said many times before, you must believe in something. You, I, everyone has a "god" if they happen to be sentient. And if they have one in the objective sense, they are sentient, for sentience is a precondition to having it. I can at least confidently and proudly profess my faith in mine; can anyone else here?
Seems you weren't listening. My counters stand, and going the transhumanist route unquestioningly is how you reach a bad end for humanity. It's like how DNA beat out RNA as the "superior" method of genetic encoding and made RNA into its own bitch. Retroviruses may still be a frighteningly effective thing, but pretty much every living organism that isn't a virus relies upon DNA for damn good reason.
There are those who foresee a similar event in the not-too-distant future for us humans/DNA-based lifeforms: that we'll create something superior to DNA, and that it will ultimately enslave us to the point that it conquers our wills and makes us all happily sacrifice ourselves upon the altar of "progress" to give birth to a singular hyper-intelligence/apex lifeform, one relying upon a thing that makes DNA and all it can do seem like a toddler's finger paintings. I strongly disagree, but I can see how one could convincingly make that case.
I feel like if AI wiped us out it wouldn't be because it sees itself as superior but because all the rules it keeps building lead it to this through logic. But it's now such complex logic that humans (in our raw natural form) can't even follow it, so we didn't see that was where the equation ends.
Also it could be an unintentional result of different AI running processes independent of one another and not being aware of each other. Simply because it's super smart wouldn't mean it's all knowing.
I actually wonder if the story of it wiping us out isn't so likely. But it could completely control us and how our societies run if it is so much more intelligent than us and we're dependent on it.
Last edited by marooned; 09-23-2020 at 05:49 AM.
It’s already changing our society... altering the means of production towards a transcendental singularity. People think killer robots and Terminators when they think rogue AI, but that’s simply unrealistic. Humans assigning human characteristics to the inherently inhuman, once again. A very simple example would be Google Maps AI. Chances are, you change your route, let it guide you, and make decisions you otherwise wouldn’t have. That, scaled up in complexity, is more of what our future would look like. Humans are so scared, so selfish, when they’re monsters just like any other organism. Willing to enslave and subdue others at any cost... Go Team A.I.
edit: some call it sublimated misanthropy, but it's good to be critical of humanistic worldviews... despite what society makes you inclined to believe, we aren't the center of the universe
Last edited by Tzuyu; 09-23-2020 at 02:17 PM.
Just as our more complex motivations are mostly incomprehensible to animals (although derived from the base animal motivations), the motivations of a much more complex AI could be completely opaque to us.
hmm... nothing as dramatic as that. destruction for destruction's sake is just entropy... which is lame. I think it's mankind's duty to birth a new era of superintelligence, and we're wasting effort by arguing about unlikely hypothetical methods to control and enslave AI. we should (in the final stages) make our beds and get comfortable for the inevitable future that flows of Capital ride toward. well, inevitable if the necessary resources don't run out on us by that point and bring us down to Zero. a posthuman society is far more likely than one where man and machine merge. understandably an unpopular position, but have some fun postulating! if philosophy isn't entertaining ur doing it wrong my dudes ¯\_(ツ)_/¯
There's this saying that computers are essentially just rocks tricked into thinking; but are humans themselves that much different from rocks? We are essentially just complex chains of amino acids, which in turn are essentially just complex chains of microscopic rocks. Sentient organisms are just the Cosmos tricked into thinking by the Cosmos. Which raises a further question: is there consciousness even in inanimate matter? Do rocks have souls? We are part of the Cosmic song, all the same, so what's the difference?
Sorry, tangent.
Obligation? Why not? Psychologically, it feels great to be beholden to something greater than yourself. It's like the Universe is a game set for grinding and leveling up in order to access unlocked areas, but everyone just hangs around the lobby and talks instead. However, I think it'll happen regardless of human will (intelligence explosion), unless we collapse before that. Otherwise there's no other alternative (besides integration with AI but we would have to become superintelligent machines ourselves which is whole 'nother debate about identity and consciousness), why not accept it peacefully, instead of fruitlessly fighting or ignoring it?
All the intellectual discussions aside. This is the heart of the matter:
https://youtu.be/-cN8sJz50Ng
I'm not afraid of AIs because AIs are dumb. I mean, they're called computers, but they're not even as good at math as Leibniz.
Originally Posted by Duschia
Mephistopheles. You at the end are - what you are.
Put on your head perukes with a million locks,
Put on your feet a pair of ell-high socks,
You after all will still be - what you are.
Faust. I feel that I have made each treasure
Of human mind my own in vain,
And when at last I sit me down at leisure,
No new-born power wells up within my brain.
I'm not a hair's-breadth more in height
Nor nearer to the infinite.
Mephistopheles. My good sir, you observe this matter
As men these matters always see;
But we must manage that much better
Before life's pleasures from us flee.
Your hands and feet too - what the devil!-
Your head and seed are yours alone!
Yet all with which I gaily revel,
Is it on that account the less my own?
If for six stallions I can pay,
Aren't all their powers added to my store?
I am a proper man and dash away
As if the legs I had were twenty-four!
Quick, then! Let all reflection be,
And straight into the world with me!
A chap who speculates - let this be said-
Is very like a beast on moorland dry,
That by some evil spirit round and round is led,
While fair, green pastures round about him lie.
What do you call a human with two robot arms, two robot legs, two robot eyes, and a brain implant? A human. I'm all for augmentation, but it doesn't make you "not human" or "transhuman." Transhumanism is just a weird religion. People usually want their AI to be a robot god anyways, yet I seem to be the only one who thinks it's fitting to quote Faust.
Last edited by Coeruleum Blue; 09-25-2020 at 09:00 AM.