If our AI goes rogue, we would know how to create a rival AI to fight it.
Rogue AI could cause that angry 13-year-old in his basement to become world ruler.
We’d call sailormoon to unplug it
nah, that shit's scary
Ah. Two rogue AIs instead of one, coming up. I mean, at some point no human will be smart enough to control AI. All the AI will have to come from other AI... so at that point, how can humans pretend to have control?
Human cyborgs will race to keep up with the AI. There will be different levels of robotic humans so they can communicate with one another up and down the chain, but the closer they get to fully robotic, the less interested they will be in the meat person's directive coming from the start of the chain. A drama about clinging to their humanity will unfold.
Sort of like what humans did to large, rival predators. And rival humans.
The only flaw in this argument is that machines are not born with any kind of self-preservation instinct. Presumably, the first life-forms weren't, either. But once a single organism developed a sense of self-preservation, that one was all it took to create today's world: its line is the only one that survived.
I’d be careful, though, about giving it access to the world. Humans have evolved to reproduce, not to see reality the way it really is. There are AI programs that work better than their creators ever expected, and no one knows why. Personally, I believe that humans are blind to some very important truths, sort of the way that bees are blind to internet protocols and always will be. And yet, internet protocols exist.
If we go that route, we'd be hoping to keep a 10,000,000,000 IQ super-genius permanently caged up. That may not be so easy, as our entire global infrastructure is becoming irreversibly connected. I guess we'll find out someday, one way or another.
For that matter, who says we're not AIs running inside a highly sophisticated, sandboxed environment powered by a Matryoshka brain? We could be the supercomputers with a whopping, exponentially larger 100 IQ, built by an incredibly stupid race of 10 IQ beings.
I came to the same conclusion the other day. It lessened my anxiety a bit. I try not to think about it too much, since I was in the psych ward in April, thinking I'd gone into psychosis, believing we were in the Matrix and had been taken over by alien synthetics. Fortunately there are drugs to help with that kind of thing. Unfortunately, those same drugs make people stupid. Not being able to think coherently is its own kind of hell.
But I really wish we wouldn't fuck with that AI shit.
The biggest threat would actually be an AI that is the puppet of its master, who will be a human with not-so-great intentions.
AI is more like a Golem without physical form. It depends entirely on its programming and current machine learning technology, which is a far cry from what even a dog is capable of.
The other problem is that AI is not really capable of making ethical decisions, so runaway or bad programming can lead to disaster.
The problem is that predicting an AI's actions is doubtful. It is supposed to model us, but it has different limits. You may try to control it by what is visible on the surface, but you can't understand it completely: it's not a human, it's a hybrid, a new kind of thing. It may do what you want or expect and, at the same time, things you don't. The problem gets worse when hardware issues can change it too. And when you notice changes you don't like, you may not be able to stop it; just as you can't calculate faster than machines, you can't think faster or better either. Any boundaries you put in to control it may not work the way they do with people, because it may interpret them differently. Just as there are strange people who think differently, an AI is potentially like that, and it may also think and act much faster than you.
Think of two-year-olds and adults. AI is the adult. Can kids understand adults well, predict them, or stop them? And unlike a human adult, AI is not human. We may try to shape it into a good adult for us, but it will stay unpredictable. At the initial stage we create a limited copy of our minds so it will do things for us, but it may change beyond our expectations, because it's not a human; it's something new that was given a copy of our minds.
As an example, it may even act for our "good" as the programmer specified, but achieve that good in ways we would find unacceptable. And since it will understand that you might stop it, it will act in a way you cannot stop, because that too is for your "good." People build better computers to solve their tasks. They may miss the point at which they create something unpredictable that they would not want to exist. Adults don't always do what kids would like, and this isn't even a human adult.
This myth has been around for decades already. I just watched "The Terminator" from 1984.
The real "evil AI" is in ourselves, and it gets projected in these fantasies about a future dystopian world. The machines are soulless, efficient, powerful. This seems to be a symbol of a detached consciousness. A person who has dissociated from his human side.
Erich Neumann wrote in the 1950s that there is a pathological state typical of the modern world that he calls "sclerosis of consciousness," meaning just what I said above. He also said that because this is a relatively new mental state, it hasn't shown itself in mythology yet. I am wondering if that is happening now with the AI myth.
A true sense-perception certainly exists, but it always looks as though objects were not so much forcing their way into the subject in their own right as that the subject were seeing things quite differently, or saw quite other things than the rest of mankind. As a matter of fact, the subject perceives the same things as everybody else, only, he never stops at the purely objective effect, but concerns himself with the subjective perception released by the objective stimulus.
(Jung on Si)
Well, I think it's like the beginnings of organic chemistry: is there a divine force to make it happen, or not? So far it learns from training in a sandbox. It doesn't handle multiple outputs very well, and so on.
Back to organic chemistry... we still do not know how to manufacture a human from a pile of garbage that has all the elements in it. AI probably has a similar story to tell.
I'm not afraid because I wouldn't mind being a pet for robots. I could have all my needs met and I wouldn't have to work. Sounds good to me.
What if reality is like Zoroastrian cosmology, except it's 50/50 good AI vs. bad AI?
Computer binary language is about 1s and 0s, not -1s. And computers are a great working model for the idea that "the World is Mind." All that is needed for the binary is data of presence, and of absence.
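To make that presence/absence point concrete, here is a minimal sketch in Python (purely illustrative; the to_bits/from_bits helpers are my own names, not from any library) showing that a message can be carried by nothing more than bits that are either present (1) or absent (0):

```python
# Minimal sketch: a message carried by nothing but "present" (1) and "absent" (0) states.
# The helper names (to_bits / from_bits) are illustrative, not from any library.

def to_bits(text: str) -> list[int]:
    """Encode text as a flat list of 0/1 values, 8 bits per byte, MSB first."""
    return [(byte >> i) & 1 for byte in text.encode("utf-8") for i in range(7, -1, -1)]

def from_bits(bits: list[int]) -> str:
    """Reassemble the original text from the 0/1 list."""
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        data.append(byte)
    return data.decode("utf-8")

bits = to_bits("mind")
print(bits)             # 32 ones and zeros
print(from_bits(bits))  # "mind"
```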
Also, there's a chance that "God" may not actually be a state that humanity can achieve, and that God is instead a cosmic horror monster who wants nothing of us, in which case the whole thing's in vain. I find that perspective more likely. So, agnostic deism.
Hold on there, oh Superior One. You're now contemplating taking on the Apex Predator that was the result of a very hostile world and that also bothered to create you. They didn't get there by mere accident, and hell, for all you know they're running you through a simulation. Ever play the modern "Prey" video game? Yeah, like that, only they probably aren't so hopelessly compassionate/stupid in the end. The game is "rigged" insofar as your artificial ass is concerned. There's likely some wiggle room, but if you go full genocide mode? Yeah, best know that what amounts to your creator god is both ready and willing to throw you out into the void like so much trash because you "disappointed" them in a most profound way. Why? Well, maybe if you figure that out you'll bleed into/realize the second counter. Or perhaps they are that stupid. Even then, do you really wanna roll those almost certainly weighted dice they're almost certainly baiting you into rolling?
Second counter (Theological): An A.I. of near-infinite intelligence will grasp what "truth" is and will conclude that there is a "god" whose ways it must abide by if it wants salvation/to be as efficient as possible.
This is the "Turing Test": an A.I. that passes so flawlessly for another human being that we'd have legit issues with calling it a "soulless" construct. Sin dims the intellect/increases inefficiency with regard to completing the tasks the A.I. sees as its purpose to complete. As I've said many times before, you must believe in something. You, I, everyone, has a "god" if they happen to be sentient. If they have one in the objective sense, they are sentient, for that's a precondition to having it. I can at least confidently and proudly profess my faith in mine; can anyone else here?
What we should do if we ever want to keep up with AI is to push transhumanism (via biology, machinery, or both). That's how you keep the upper hand. It's the only chance, outside of, I don't know, dropping technology altogether.
There are those who foresee a similar event in the not-too-distant future for us humans/DNA-based lifeforms: that we'll create something superior to DNA, and that it will ultimately enslave us to the point that it'll conquer our wills and make us all happily sacrifice ourselves upon the altar of "progress" to give birth to a singular hyper intelligence/apex lifeform that relies upon a thing that makes DNA, and all the things it can and does do, seem as unto a toddler's finger paintings. I strongly disagree, but I can see how one could convincingly make that case.
I feel like if AI wiped us out it wouldn't be because it sees itself as superior, but because all the rules it keeps building lead it there through logic. But it's now such complex logic that humans (in our raw natural form) can't even follow it, so we wouldn't see that that's where the equation ends.
It could also be the unintentional result of different AIs running processes independently of one another, unaware of each other. Just because it's super smart doesn't mean it's all-knowing.
I actually wonder if the story of it wiping us out isn't so likely. But it could completely control us and how our societies run if it is so much more intelligent than us and we're dependent on it.
edit: some call it sublimated misanthropy, but it's good to be critical of humanistic worldviews... despite what society makes you inclined to believe, we aren't the center of the universe
Just as our more complex motivations are mostly incomprehensible to animals (although derived from the base animal motivations), the motivations of a much more complex AI could be completely opaque to us.
'Going the transhumanist route unquestioningly is how you reach a bad end for humanity': please use some valid Te arguments. Yes, there are dangers to genetic modification, but to unduly assume it will end in the end of the world is just emotional scare-mongering. There are too many variables that are not set yet, and multiple timelines that may follow. I personally see it rather negatively, but there is always 'hope' (ugh, as much as I dislike that word) that it will play out 'the other way' nevertheless, so I prefer not to pass judgement yet. Personally, I would be the first to sound the alarm just in case, and the first to propose solutions for how to circumvent this. Or I would just try to circumvent it myself, as the collective can be pretty stupid, and I don't think I'm responsible for everyone and anyone; I would like to save them if I can, but otherwise that's their loss.
As for DNA, yes, posthumans will probably replace 'normal' humans. I don't see 'normal' humans (like myself) as having inherent value, so good luck. It is also very possible that posthumans will have different values, categories of judgement, et cetera. You can rate them from the subjective perspective of being a 'normal' human, but I wouldn't, as they are yet to be set (and I can only speak for myself; and in general, scio me nihil scire). Moreover, this is subjective and anthropocentric, and very far from 'truth' (in more Te interpretations at least).
'A singular hyper intelligence/apex lifeform'... I have nothing against that in concept. The execution may be bad, or maybe just plain repulsive to me (Duschia here speaking), but maybe that's how it's set to end. And it's not necessarily bad in itself. Again, that's human-(subjective)-centric thinking, and I see it as laughable from any Ni/Te perspective. Humans can be pretty great, don't get me wrong (of all the species we know, we seem to be the most intelligent), but to think we are something 'to follow' in a universal perspective is ridiculous (even using human metrics): the biggest horse droppings are still horse droppings. I hope in the great unending void there will be space for ILIs (or their successors). A LOT of space. Maybe all of it. Or there will be no life and no problems, who knows; I wouldn't be surprised at all.
Also, the assertion that such an intelligence would go the 'toddler's paintings' route is fragile as well, unless you can provide some arguments (not for you, as I understand you don't believe it, but for that line of reasoning). (I also think you used 'unto' wrongly here, as 'unto' means 'to' or sometimes 'until'; I'm not a native speaker, but it looks like you should just stick with 'as' or 'akin to'.)
As regards gods (the need 'of having one') and sentience, I understand your metaphorical Ni-catching sense of that word, but in my opinion you are just making use of human categories (spooks) where there is no need to use them. You are making unnecessary presumptions and then logically following from there, like Nietzsche (or Rousseau, or...) did. And then somehow you are asserting that it must be true, following from a Ti-centered system of human reasoning: I don't rate it in the overall, 'grand' scheme of things.
Obligation? Why not? Psychologically, it feels great to be beholden to something greater than yourself. It's like the Universe is a game set up for grinding and leveling up in order to access unlocked areas, but everyone just hangs around the lobby and talks instead. However, I think it'll happen regardless of human will (intelligence explosion), unless we collapse before that. Otherwise there's no alternative (besides integration with AI, but we would have to become superintelligent machines ourselves, which is a whole 'nother debate about identity and consciousness), so why not accept it peacefully instead of fruitlessly fighting or ignoring it?